# Superadditivity of anticanonical Iitaka dimension for contractions with \(F\)-split fibres

Marta Benozzo, Iacopo Brivio, Chi-Kang Chang

arXiv:2309.16580v2 (2023-09-28), http://arxiv.org/abs/2309.16580v2
###### Abstract.
In this paper, we study a version of the Iitaka conjectures for anticanonical divisors over perfect fields of positive characteristic. That is, we prove the inequality \(\kappa(X,-K_{X})\leq\kappa(X_{y},-K_{X_{y}})+\kappa(Y,-K_{Y})\), for a contraction \(f\colon X\to Y\) with general fibre \(X_{y}\) having good arithmetic properties.
###### Contents
* 1 Introduction
* 2 Conventions and notation
* 3 Preliminaries
* 4 Stein-separable morphisms and canonical bundle formulae
* 5 F-complements
* 6 Injectivity Theorem
* 7 Proof
## 1. Introduction
A projective algebraic variety \(X\) is classified according to the positivity properties of its canonical divisor \(K_{X}\). The most basic measure of such positivity is its _Iitaka dimension_ \(\kappa(X,K_{X})\), an invariant which measures the rate of growth of the spaces \(H^{0}(X,mK_{X})\) as a function of \(m\). For varieties over the complex numbers, Iitaka proposed the following conjecture.
**Conjecture 1.1** (Iitaka's Conjecture, \(C_{n,m}\)).: _Let \(f\colon X\to Y\) be a contraction of smooth projective complex varieties, of dimensions \(n\) and \(m\) respectively, and let \(y\in Y\) be a general point. Then_
\[\kappa(X,K_{X})\geq\kappa(X_{y},K_{X_{y}})+\kappa(Y,K_{Y}).\]
Although still open in general, over fields of characteristic \(0\) this conjecture is proven for many important classes of contractions ([14, 15, 16, 17, 18, 19, 20, 21]). In particular, \(C_{n,m}\) holds when \(\dim(Y)\leq 2\), and when \(X_{y}\) admits a good minimal model.
More recently it was shown in [14] that a similar inequality holds for the anticanonical divisors.
**Theorem 1.2** ([10, Theorem 1.1], \(C_{n,m}^{-}\)).: _Let \(f\colon X\to Y\) be a contraction of smooth complex projective varieties, of dimensions \(n\) and \(m\) respectively, such that the stable base locus \(\mathbb{B}(-K_{X})\) does not dominate \(Y\), and let \(y\in Y\) be a general point. Then_
\[\kappa(X,-K_{X})\leq\kappa(X_{y},-K_{X_{y}})+\kappa(Y,-K_{Y}).\]
The condition on the stable base locus is necessary, as shown by [10, Example 1.7].
Broadly speaking, for both Conjecture 1.1 and Theorem 1.2 the main tools are _semipositivity_ results for the sheaves \(f_{*}\omega_{X/Y}^{m}\). In particular, [10] makes use of the canonical bundle formula for klt-trivial fibrations ([12]), the proof of which relies on Hodge-theoretic methods. Given that these techniques are not available in characteristic \(p>0\) ([13]), it is natural to ask whether the above statements still hold in this case. As it turns out, both \(C_{n,m}\) and \(C_{n,m}^{-}\) can fail in general ([11, 12]). However, Conjecture 1.1 has been proven for generically smooth contractions of relative dimension one, or when \(\dim(X)\leq 3\) and \(p>5\) ([11], [12], [13], [14] and [15]). Due to the lack of generic smoothness results over fields of positive characteristic, the general fibre of a contraction may be singular and even non-reduced. In this case, Patakfalvi showed Conjecture 1.1 when \(Y\) is of general type and \(X_{y}\) has non-nilpotent Hasse-Witt matrix ([16]). This suggests that we may expect \(C_{n,m}\) to hold for contractions whose fibres have "arithmetically nice" singularities. Similarly, it was proven in [12] that, when we have enough control on the singularities of the general fibre, \(C_{n,1}^{-}\) and \(C_{3,m}^{-}\) hold (the latter provided that \(p\geq 5\)), as well as \(C_{n,n-1}^{-}\) if \(\kappa(Y,-K_{Y})=0\).
In this paper we extend Theorem 1.2 to certain contractions in positive characteristic in any dimension.
**Theorem 1.3** (see Theorem 7.1).: _Let \(f\colon X\to Y\) be a contraction of smooth projective varieties over an algebraically closed field \(k\) of characteristic \(p>0\), such that a general fibre \(X_{y}\) is \(K\)-globally \(F\)-regular (see Definition 4.2). Assume there is \(m\in\mathbb{N}\setminus p\mathbb{N}\) such that \(-mK_{X}\) is Cartier and a linear subsystem \(|V|\subseteq|-mK_{X}|\) such that \(|V|_{X_{y}}\) induces a morphism with Stein degree not divisible by \(p\). Then_
\[\kappa(X,-K_{X})\leq\kappa(X_{y},-K_{X_{y}})+\kappa(Y,-K_{Y}).\]
### Proof outline
The key statement is [10, Proposition 4.2], a result stating that positivity of \(-K_{X}\) descends along contractions in a controlled way. More precisely: if \(|-K_{X}-f^{*}E|_{\mathbb{Q}}\neq\emptyset\) then \(|-K_{Y}-\epsilon E|_{\mathbb{Q}}\neq\emptyset\) for small \(\epsilon>0\). This can be used to prove an injectivity theorem ([10, Theorem 4.3]) which implies Theorem 1.2 when \(\kappa(Y,-K_{Y})=0\). The proof of [10, Proposition 4.2] is a consequence of the following facts.
1. As \(\mathbb{B}(-K_{X})\) does not dominate \(Y\) we can take \(\Delta\in|-K_{X}|_{\mathbb{Q}}\) such that \((X_{y},\Delta_{X_{y}})\) has "nice" (i.e. klt) singularities;
2. \((X,\Delta)\xrightarrow{f}Y\) is now a log Calabi-Yau contraction, hence we have a canonical bundle formula \(K_{X}+\Delta\sim_{\mathbb{Q}}f^{*}(K_{Y}+B_{Y}+M_{Y})\).
3. If \(\Gamma\in|-K_{X}-f^{*}E|_{\mathbb{Q}}\), then a small perturbation \((X,\epsilon\Gamma)\) will still have "nice" singularities over the generic point of \(Y\) and \(\mathbb{B}(-K_{X}-\epsilon\Gamma)\) does not dominate \(Y\), hence we can apply (1) and (2) to the pair \((X,\epsilon\Gamma)\).
There are several obstacles to generalising this approach to characteristic \(p>0\). First of all, the canonical bundle formula is known to be false in general ([14, Example 3.5]). On the positive side, a weak form of it is known to hold for contractions whose generic fibre is globally \(F\)-split ([13, 15]), so we chose to restrict ourselves to this setup. Even so, there are still issues with the first and last point. If \(X_{y}\) is globally \(F\)-split, it is known that we can find a \(\Delta_{X_{y}}\in|-K_{X_{y}}|_{\mathbb{Q}}\) such that \((X_{y},\Delta_{X_{y}})\) is a globally \(F\)-split Calabi-Yau variety. However, one must show that such divisor lifts to an element of \(|-K_{X}|_{\mathbb{Q}}\), and we are able to show that this is the case when, for some \(m\geq 1\) not divisible by \(p\), the rational map \(\phi_{|-mK_{X}|}\) restricts to a morphism on \(X_{y}\) with Stein degree not divisible by \(p\). Lastly, globally \(F\)-split singularities behave poorly under perturbations, so this makes point (3) problematic. Our solution is to introduce \(K\)_-globally \(F\)-regular_ varieties, a notion that interpolates between globally \(F\)-split and globally \(F\)-regular varieties. Roughly speaking, \(X_{y}\) is \(K\)-globally \(F\)-regular if it is globally \(F\)-split and the Iitaka fibration of \(-K_{X_{y}}\) maps \(X_{y}\) to a variety which is globally \(F\)-regular. The advantage is that this class of varieties is stable under small perturbations by members of the anticanonical \(\mathbb{Q}\)-linear system. Under these assumptions we are able to prove the injectivity result Theorem 6.1.
In [10], the author concludes by reducing to the \(\kappa(Y,-K_{Y})=0\) case by considering \(g\colon Y\to Z\), the Iitaka fibration of \(-K_{Y}\). Over fields of positive characteristic, this contraction may have highly singular fibres. However, the singularities of the general fibre manifest themselves on the total space after a base-change by a sufficiently high power of the Frobenius. This allows us to circumvent the issue by considering the induced contractions \(f_{e}\), \(g_{e}\) and \(h_{e}\) (the corresponding base-change diagram is not reproduced here), where \(X_{e}\) (resp. \(Y_{e}\)) is the normalization of the reduction of \(X\times_{Z}Z^{e}\) (resp. of \(Y\times_{Z}Z^{e}\)). The resulting contractions \(f_{e},g_{e}\) and \(h_{e}\) will be (universally) homeomorphic to the original ones, but their fibres will now be normal, thus the arguments of [10] apply in this case. We can then conclude by comparing the canonical divisors of \(X,Y\) and \(Z\) with those of \(X_{e},Y_{e}\) and \(Z^{e}\) using the correspondence between purely inseparable morphisms and foliations ([20]).
### Acknowledgements
We would like to thank our advisors Paolo Cascini and Jungkai Chen for their guidance and helpful suggestions throughout the project. We thank Karl Schwede, Yoshinori Gongyo, and Hiromu Tanaka for answering our questions and pointing out useful papers. The first author was supported by the Engineering and Physical Sciences Research Council [EP/S021590/1],
the EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), University College London. The second author is supported by the National Center for Theoretical Sciences and a grant from the Ministry of Science and Technology, grant number MOST-110-2123-M-002-005. The third author is supported by the PhD student's scholarship of National Taiwan University, and by the scholarship of National Science and Technology Council for PhD students to study abroad, which he used to visit the University of Tokyo. We would also like to thank the National Centre for Theoretical Sciences Mathematics division for their support that allowed the first two authors to meet in person in Taipei.
## 2. Conventions and notation
* All of our schemes \(X\) will be irreducible, separated, essentially of finite type over a field \(k\) of characteristic \(p>0\), and \(F\)-finite. Note that for any such scheme the dualising complex is well-defined ([10, Section 2.4]).
* A _k-variety_\(X\) is a separated integral \(k\)-scheme of \(k\)-finite type. We denote by \(\operatorname{Sing}(X)\) the locus of singular points of \(X\).
* When \(X\) is an integral \(k\)-scheme we denote by \(k(X)\) its field of rational functions.
* A _\(\mathbb{K}\)-divisor_\(D\) on a scheme \(X\) is a formal finite linear combination \(D=\sum_{i}a_{i}D_{i}\), where \(D_{i}\) are irreducible closed codimension-one subsets of \(X\) and \(a_{i}\in\mathbb{K}\). We will take \(\mathbb{K}\in\{\mathbb{Z},\mathbb{Z}_{(p)},\mathbb{Q}\}\). If \(\mathbb{K}=\mathbb{Z}\) we refer to \(D\) as an _integral divisor_ or simply a _divisor_. We define the _positive part_ (resp. _negative part_) of \(D\) to be \(D^{+}\coloneqq\sum_{a_{i}>0}a_{i}D_{i}\) (resp. \(D^{-}\coloneqq\sum_{a_{i}<0}(-a_{i})D_{i}\)).
* Given a divisor \(D\) on a normal variety \(X\), we denote by \(|D|\) the complete linear system it defines. If \(V\subseteq H^{0}(X,D)\) is a subspace, we denote by \(|V|\subseteq|D|\) the corresponding linear subsystem. If \(Y\subseteq X\) is a closed subvariety, we denote by \(|V|_{Y}\) the linear subsystem given by the restriction of \(V\) to \(Y\). The corresponding rational maps will be denoted by \(\phi_{|D|},\phi_{|V|}\) and \(\phi_{|V|_{Y}}\) respectively.
* A \(\mathbb{Q}\)-divisor \(D\) is _\(\mathbb{Q}\)-Cartier_ if \(mD\) is Cartier for some \(m\). If there exists such \(m\) with \(p\nmid m\), then we say \(D\) is a _\(\mathbb{Z}_{(p)}\)-Cartier_\(\mathbb{Z}_{(p)}\)-divisor.
* If \(D_{1},D_{2}\) are \(\mathbb{Q}\)-divisors on a scheme \(X\) such that \(mD_{i}\) is integral for \(i=1,2\) and \(mD_{1}\sim mD_{2}\) for some positive integer \(m\), then we say \(D_{1}\) and \(D_{2}\) are _\(\mathbb{Q}\)-linearly equivalent_ \(\mathbb{Q}\)-divisors, denoted by \(D_{1}\sim_{\mathbb{Q}}D_{2}\). If \(p\nmid m\) then we say \(D_{1}\) and \(D_{2}\) are _\(\mathbb{Z}_{(p)}\)-linearly equivalent_ \(\mathbb{Z}_{(p)}\)-divisors, denoted \(D_{1}\sim_{\mathbb{Z}_{(p)}}D_{2}\).
* Let \(f\colon X\to Y\) be a morphism of schemes and let \(D\) be a divisor on \(X\): we write \(D\sim_{Y}0\) if \(D\sim f^{*}M\) where \(M\) is a Cartier divisor on \(Y\). If \(D\) is a \(\mathbb{Q}\)-divisor (resp. a \(\mathbb{Z}_{(p)}\)-divisor) we write \(D\sim_{\mathbb{Q},Y}0\) (resp. \(D\sim_{\mathbb{Z}_{(p)},Y}0\)) if for some \(m\geq 1\) (resp. for some \(m\geq 1\) such that \(p\nmid m\)) we have that \(mD\) is integral and \(mD\sim_{Y}0\). In particular, we have that \(D\) is Cartier (resp. \(\mathbb{Q}\) or \(\mathbb{Z}_{(p)}\)-Cartier).
* Let \(D\) be a \(\mathbb{Q}\)-divisor on a scheme \(X\): we say \(D\) is _effective_ (\(D\geq 0\)) if all of its coefficients are non-negative. We say \(D\) is _\(\mathbb{Q}\)-effective_ (resp. _\(\mathbb{Z}_{(p)}\)-effective_) if, for some \(m\geq 1\) (resp. for some \(m\geq 1\) such that \(p\nmid m\)) \(mD\) is integral and \(H^{0}(X,mD)\neq 0\). Given \(\mathbb{Q}\)-divisors \(D_{1},D_{2}\)
we write \(D_{1}\geq D_{2}\) if \(D_{1}-D_{2}\) is effective, and we write \(D_{1}\geq_{\mathbb{Q}}D_{2}\) (resp. \(D_{1}\geq_{\mathbb{Z}_{(p)}}D_{2}\)) if \(D_{1}-D_{2}\) is \(\mathbb{Q}\)-effective (resp. \(\mathbb{Z}_{(p)}\)-effective).
* A _sub-couple_\((X,B)\) consists of an integral normal scheme \(X\) and a \(\mathbb{Q}\)-divisor \(B\). If \(B\geq 0\) we say \((X,B)\) is a _couple_. A _sub-pair_ is a sub-couple \((X,B)\) such that \(K_{X}+B\) is \(\mathbb{Q}\)-Cartier. If \(B\geq 0\) we say \((X,B)\) is a _pair_.
* Let \(f\colon X\to Y\) be a morphism of \(k\)-schemes, where \(k\) is algebraically closed. A _general fibre of \(f\)_ is \(X_{y}\coloneqq f^{-1}(y)\) where \(y\) is a \(k\)-point belonging to a dense open subset of \(Y\). We say \(X_{y}\) is a _very general fibre_ if \(y\) is a \(k\)-point belonging to a countable intersection of dense open subsets of \(Y\).
* A _contraction_ is a projective morphism of schemes \(f\colon X\to Y\) such that \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{Y}\).
* Let \(f\colon X\to Y\) be a contraction with general fibres that are normal, \(D\) a divisor on \(X\) and \(X_{y}\) a general fibre. Then \(D\) is \(\mathbb{Q}\)-Cartier along any codimension \(1\) point of \(X_{y}\), hence we can define its restriction to \(X_{y}\), \(D_{X_{y}}:=D|_{X_{y}}\).
* Given a contraction \(f\colon X\to Y\) and \(D\) a divisor on \(X\), if \(\eta\) is the generic point of \(Y\), we denote by \(D_{\eta}\coloneqq D|_{X_{\eta}}\). As above, this restriction is well-defined. When the geometric generic fibre is normal, we use an analogous notation \(D_{\overline{\eta}}\coloneqq D|_{X_{\overline{\eta}}}\).
* Let \(f\colon X\to Y\) be a surjective projective morphism of schemes, and let \(f\colon X\xrightarrow{g}Z\xrightarrow{h}Y\) be its Stein factorisation. The degree of \(h\) is called the _Stein degree of \(f\)_ and it is denoted by \(\operatorname{St.deg}(f)\). Note that, when \(Y\) is normal and \(p\nmid\operatorname{St.deg}(f)\), then \(f\) is a split morphism, that is the natural map \(\mathcal{O}_{Y}\to f_{*}\mathcal{O}_{X}\) is a split inclusion, via the trace map \(\operatorname{Tr}_{X/Y}\colon f_{*}\mathcal{O}_{X}\to\mathcal{O}_{Y}\).
* Let \((X,B)\) be a sub-couple over \(\mathbb{C}\). A _model_ of \((X,B)\) is a normal, integral, separated, projective \(A\)-scheme of finite type \(\mathcal{X}\to\operatorname{Spec}(A)\), where \(A\) is a finitely generated \(\mathbb{Z}\)-algebra, together with a \(\mathbb{Q}\)-divisor \(\mathcal{B}\) such that \((X,B)=(\mathcal{X},\mathcal{B})\times_{\operatorname{Spec}(A)}\operatorname{ Spec}(\mathbb{C})\).
We refer the reader to [10] for the definitions of the various classes of singularities appearing in the Minimal Model Program.
_Remark 2.1_.: In the remainder, we will mostly deal with normal varieties, thus we will tacitly use their \(S2\) property. More precisely, when we use reflexive sheaves, we work locally over the regular locus, then extend the result to all of \(X\). This is needed, for example, when applying Grothendieck duality to the Frobenius morphism.
_Remark 2.2_.: In the sequel, sometimes we will pull-back divisors under equidimensional morphisms \(f\colon X\to Y\) of normal varieties, without requiring any Cartier assumption. This operation is well-defined since, given a prime divisor \(D\) on \(Y\), its preimage under \(f\) is a divisor on \(X\).
## 3. Preliminaries
### Iitaka dimension
In this section, after recalling its definition, we collect some results on the behaviour of the Iitaka dimension. We refer to [11] for more details.
**Definition-Proposition 3.1** ([11, 2.1.A]).: _Let \(X\) be a normal projective variety and \(L\) a \(\mathbb{Q}\)-divisor on it. For every \(m>0\) such that \(mL\) is integral and the linear system \(|mL|\) is not empty,
\(|mL|\) defines a rational map \(\phi_{|mL|}\colon X\dashrightarrow\mathbb{P}^{N_{m}}\). For \(m\gg 0\) and sufficiently divisible, \(\dim(\phi_{|mL|}(X))\) stabilises. The Iitaka dimension of \(L\) is defined as:_
\[\kappa(X,L)\coloneqq\begin{cases}-\infty\quad\text{if}\,|mL|=\emptyset\text{ for all }m\geq 1;\\ \max_{m\geq 1}\dim(\phi_{|mL|}(X))\quad\text{otherwise}.\end{cases}\]
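To illustrate the definition with standard examples: if \(L\) is ample then \(\phi_{|mL|}\) is a closed embedding for \(m\gg 0\), so \(\kappa(X,L)=\dim(X)\); if some positive multiple of \(L\) is linearly trivial then \(\kappa(X,L)=0\); and \(\kappa(X,L)=-\infty\) exactly when no positive multiple of \(L\) is linearly equivalent to an effective divisor. For the divisors considered in this paper, on \(X=\mathbb{P}^{n}\) one has \(K_{X}\sim-(n+1)H\) for a hyperplane \(H\), hence

\[\kappa(\mathbb{P}^{n},K_{\mathbb{P}^{n}})=-\infty\quad\text{and}\quad\kappa(\mathbb{P}^{n},-K_{\mathbb{P}^{n}})=n.\]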
_Remark 3.2_.: Sometimes it is useful to work with different characterisations of the Iitaka dimension. We recall one here. Let \(X\) be a normal projective variety over a field \(k\) and \(L\) a Cartier divisor on it. Define the _section ring of \(L\)_ as \(R(X,L)\coloneqq\bigoplus_{m=0}^{\infty}H^{0}(X,mL)\). If \(R(X,L)\neq 0\), it is an integral domain. Denote by \(Q(X,L)\) its fraction field. If \(\kappa(X,L)\geq 0\), then \(\kappa(X,L)=\operatorname{tr.deg}_{k}Q(X,L)-1\). For more details, we refer to [10, 1.3].
**Lemma 3.3**.: _Let \(\varphi\colon X^{\prime}\to X\) be a surjective morphism between normal projective varieties and \(L\) a Cartier divisor on \(X\). Then \(\kappa(X^{\prime},\varphi^{*}L)=\kappa(X,L)\)._
Proof.: Note that, as \(\varphi\) is surjective, \(\kappa(X,L)\leq\kappa(X^{\prime},\varphi^{*}L)\). If \(\varphi\) is a contraction, the result follows from the projection formula. By considering the Stein factorisation of \(\varphi\), we can thus reduce to the case where \(\varphi\) is finite.
Now suppose \(\varphi=\operatorname{F}^{e}\) for some \(e>0\), then the statement is trivial since \(\operatorname{F}^{e*}L=p^{e}L\). If \(\varphi\) is purely inseparable, there exists \(\psi\colon X\to X^{\prime}\) such that \(\varphi\circ\psi=\operatorname{F}^{e}\), for some \(e\geq 0\). Then:
\[\kappa(X,L)\leq\kappa(X^{\prime},\varphi^{*}L)\leq\kappa(X,\psi^{*}\varphi^{*}L)=\kappa(X,L).\]
If, instead, \(\varphi\) is a Galois cover, the result is proven in [10, Proposition 1.5]. Note that they prove it over fields of characteristic \(0\), but, assuming \(\varphi\) is Galois, the same proof works also over fields of positive characteristic. If \(\varphi\) is separable, there exists \(\psi\colon X^{\prime\prime}\to X^{\prime}\) such that \(\varphi\circ\psi\) is Galois. Thus:
\[\kappa(X,L)\leq\kappa(X^{\prime},\varphi^{*}L)\leq\kappa(X^{\prime\prime},\psi^{*}\varphi^{*}L)=\kappa(X,L).\]
In general, we can factor \(\varphi\) in its separable and purely inseparable part and conclude by the above discussion.
**Lemma 3.4**.: _Let \(f\colon X\to Y\) be a projective morphism between varieties. Assume that a very general fibre \(X_{y}\) is reduced and normal. Let \(\eta\) be the generic point of \(Y\) and \(\overline{\eta}\) its geometric generic point. Let \(L\) be a Cartier divisor on \(X\). Then,_
\[\kappa(X_{\overline{\eta}},L_{\overline{\eta}})=\kappa(X_{\eta},L_{\eta})= \kappa(X_{y},L_{X_{y}}).\]
_Moreover, if \(\kappa(X_{\eta},L_{\eta})\geq 0\), the above equalities hold also for the general fibre \(X_{y}\)._
Proof.: The first equality is a consequence of the flat base change theorem. As for the second, note that we can assume \(f\) is flat without loss of generality, hence we conclude by [11, Theorem III.12.8].
Now, suppose \(\kappa(X_{\eta},L_{\eta})\geq 0\). Let \(H\) be an ample enough Cartier divisor on \(Y\) such that \(L+f^{*}H\) is \(\mathbb{Q}\)-effective. By the easy additivity Theorem, [12, Proposition 1] and [1, Lemma 2.20], for a general fibre \(X_{y}\) we have:
\[\kappa(X,L+f^{*}H)=\kappa(X_{y},L_{X_{y}})+\dim(Y)\quad\text{and}\quad\kappa( X,L+f^{*}H)=\kappa(X_{\eta},L_{\eta})+\dim(Y).\]
Thus, \(\kappa(X_{y},L_{X_{y}})=\kappa(X_{\eta},L_{\eta})\).
_Remark 3.5_.: In the above Lemma 3.4, if \(\kappa(X_{\eta},L_{\eta})=-\infty\), it may be false that the general fibre \(X_{y}\) satisfies \(\kappa(X_{y},L_{X_{y}})=-\infty\). Let \(\hat{Y}\) be an abelian variety and \(Y\) its dual. The Poincaré bundle \(L\) on \(X\coloneqq\hat{Y}\times Y\to Y\) gives a counterexample: \(\kappa(X_{\eta},L_{\eta})=-\infty\), but for every torsion point \(y\in Y\) the restriction \(L_{X_{y}}\) is a torsion line bundle, so \(\kappa(X_{y},L_{X_{y}})=0\) there; since torsion points are dense in \(Y\), no dense open subset of \(Y\) avoids them.
### Frobenius
In this section, we define the different Frobenius morphisms we will use and outline their relations.
**Definition 3.6**.: Let \(X\) be an \(F\)-finite scheme over a field of characteristic \(p>0\). The _Frobenius_ morphism on \(X\), \(\operatorname{F}^{e}_{X}\colon X^{e}\to X\), for \(e\in\mathbb{N}\), is defined to be the identity on points and the \((p^{e})^{\text{th}}\)-power on the structure sheaf. When the underlying scheme is clear from the context we will just write \(\operatorname{F}^{e}\). Note that \(X^{e}\) and \(X\) are the same scheme, the index is just used to differentiate between the target and the source of \(\operatorname{F}^{e}\).
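For instance, if \(X=\mathbb{A}^{1}_{k}=\operatorname{Spec}k[x]\), then \(\operatorname{F}^{e}\colon X^{e}\to X\) is the identity on the underlying topological space and acts on functions by \(g\mapsto g^{p^{e}}\), so \(x\mapsto x^{p^{e}}\) and \(\lambda\mapsto\lambda^{p^{e}}\) for \(\lambda\in k\). For a Cartier divisor \(D\) one has \(\operatorname{F}^{e*}D=p^{e}D\), the fact already used in the proof of Lemma 3.3.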
Given a morphism of \(k\)-schemes \(\pi\colon X\to V\), we have the following commutative diagram:
(3.1) [commutative diagram, not reproduced: the absolute Frobenius \(\operatorname{F}^{e}_{X}\) factors as \(X^{e}\xrightarrow{\operatorname{F}^{e}_{X^{e}/V^{e}}}X_{V^{e}}\xrightarrow{(\operatorname{F}^{e}_{V})_{X}}X\), compatibly with \(\pi\) and \(\operatorname{F}^{e}_{V}\colon V^{e}\to V\)]
where \(X_{V^{e}}\) is the fibre product \(X\times_{V}V^{e}\) and \((\operatorname{F}^{e}_{V})_{X}\) is the induced map. The morphism \(\operatorname{F}^{e}_{X^{e}/V^{e}}\) denotes the \(e^{th}\) _relative Frobenius of X over V_. When \(V=\operatorname{Spec}(k)\) the relative Frobenius is also called the \(k\)_-linear Frobenius_.
_Remark 3.7_.: Note that \(\operatorname{F}^{e}\colon X^{e}\to X\) is not \(k\)-linear. However, if \(k\) is perfect, it differs from the \(k\)-linear Frobenius only by an automorphism of \(\operatorname{Spec}(k)\). On the other hand, if \(\operatorname{Spec}(k^{e})\to V^{e}\) is a \(k^{e}\)-point, then the base-change
\[\operatorname{F}^{e}_{X^{e}/V^{e}}\otimes_{V^{e}}k^{e}\colon X^{e}_{k^{e}} \to X_{k^{e}}\]
coincides with the \(k\)-linear Frobenius of \(X_{k}\coloneqq X\times_{V}\operatorname{Spec}(k)\).
### Frobenius base change
In positive characteristic, it is hard to control the singularities of the general fibre of a contraction due to the failure of generic smoothness theorems. However, after a base change with a high power of the Frobenius morphism, all the singularities of the general fibre appear on the total space.
**Lemma 3.8** ([11, Lemma 2.4]).: _Let \(f:X\to Y\) be a contraction between normal projective varieties over a perfect field of characteristic \(p>0\) and let \(\overline{\eta}\) be the geometric generic point of \(Y\). Consider the base change with a power of the Frobenius morphism, \(X_{Y^{e}}\coloneqq X\times_{Y}Y^{e}\)._
_Then, for \(e\gg 0\), \(((X_{Y^{e},\operatorname{red}})^{\nu})_{\overline{\eta}^{e}}=(X_{\overline{ \eta},\operatorname{red}})^{\nu}\)._
In positive characteristic, there is a correspondence between height one purely inseparable morphisms and foliations. Thanks to this, we are able to study the behaviour of the canonical divisors under purely inseparable base changes.
**Definition 3.9**.: A purely inseparable morphism of schemes \(a\colon X^{\prime}\to X\) is called of _height one_ if there exists \(\alpha\colon X\to X^{\prime}\) such that \(a\circ\alpha=\operatorname{F}\).
**Definition 3.10**.: Let \(X\) be a normal variety over a perfect field of characteristic \(p>0\). A _foliation_ on \(X\) is a subsheaf of the tangent sheaf, \(\mathcal{F}\subseteq T_{X}\), which is saturated and closed under \(p\)-powers.
_Remark 3.11_.: One normally requires that \(\mathcal{F}\) is _involutive_, that is closed under Lie brackets. However, this follows from closure under \(p\)-powers by [10].
**Proposition 3.12** ([11, Proposition 2.9]).: _Let \(X^{\prime}\) be a normal variety over a perfect field of characteristic \(p>0\). There is a \(1\)-to-\(1\) correspondence_
\[\left\{\text{foliations }\mathcal{F}\subseteq T_{X^{\prime}}\right\}\longleftrightarrow\left\{\begin{array}{c}\text{normal varieties }X\text{ with a finite purely inseparable}\\ \text{morphism }X^{\prime}\to X\text{ of height one}\end{array}\right\}\]
_given by:_
* \(X:=\operatorname{Spec}_{X^{\prime}}\mathcal{O}_{X^{\prime}}^{\mathcal{F}}\)_, where_ \(\mathcal{O}_{X^{\prime}}^{\mathcal{F}}\subseteq\mathcal{O}_{X^{\prime}}\) _is the subsheaf of_ \(\mathcal{O}_{X^{\prime}}\) _that is taken to zero by all the sections of_ \(\mathcal{F}\)_;_
* \(\mathcal{F}\coloneqq\{\partial\in T_{X^{\prime}}\text{ s.t. }\partial\mathcal{O}_{X}=0\}\)_._
_Moreover, morphisms of degree \(p^{r}\) correspond to foliations of rank \(r\)._
**Proposition 3.13** ([11, Proposition 2.10]).: _Let \(X^{\prime}\to X\) be a purely inseparable morphism of height one between normal varieties over a perfect field of characteristic \(p>0\) and let \(\mathcal{F}\) be the corresponding foliation. Then_
\[\omega_{X^{\prime}/X}\simeq(\det\mathcal{F})^{[p-1]}.\]
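As a sanity check in the simplest case: let \(X^{\prime}=\mathbb{A}^{2}=\operatorname{Spec}k[x,y]\) and let \(\mathcal{F}=\mathcal{O}_{X^{\prime}}\cdot\partial_{x}\subseteq T_{X^{\prime}}\), a rank-one foliation (it is saturated and \(\partial_{x}^{p}=0\)). Then \(\mathcal{O}_{X^{\prime}}^{\mathcal{F}}=k[x^{p},y]\), so the corresponding morphism \(X^{\prime}\to X=\operatorname{Spec}k[x^{p},y]\) is purely inseparable of degree \(p\) and of height one, and both \(\omega_{X^{\prime}/X}\) and \((\det\mathcal{F})^{[p-1]}\) are trivial, consistently with Propositions 3.12 and 3.13.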
As a consequence of the flattening lemma [10, Théorème 5.2.2] we have the following.
**Lemma 3.14** ([11, Lemma 2.19]).: _Let \(f\colon X\to Y\) be a projective dominant morphism of normal varieties. Then, there is an open subset \(U\subseteq Y\) with \(\operatorname{codim}(Y\setminus U)\geq 2\) such that \(X_{U}\coloneqq f^{-1}(U)\) is flat over \(U\)._
**Theorem 3.15** ([11, Theorem 3.1]).: _Let \(X\) be a normal variety over a perfect field \(k\) of characteristic \(p>0\) and let \(f\colon X\to Y\) be a morphism to a normal variety over \(k\). Let \(a\colon Y^{\prime}\to Y\) be a
finite purely inseparable \(k\)-morphism of height one from a normal variety, let \(X^{\prime}\) be the normalisation of the reduction of \(X\times_{Y}Y^{\prime}\) and \(f^{\prime}\colon X^{\prime}\to Y^{\prime}\) the induced morphism. Set \(\mathcal{A}\) to be the foliation induced by \(a\). Then:_
(i) \(K_{X^{\prime}/X}\sim(p-1)D\) _for some Weil divisor_ \(D\) _on_ \(X^{\prime}\)_;_
(ii) _there is a non-empty open subset_ \(U\subseteq Y^{\prime}\) _and an effective divisor_ \(C\) _on_ \(f^{\prime-1}(U)\) _such that_ \(C\sim-D|_{f^{\prime-1}(U)}\)_._
_Moreover, assume \(X_{\overline{\eta}}\) is reduced, where \(\overline{\eta}\) is the geometric generic point of \(Y\), and \(f\) is equidimensional. Then:_
(iii) \(f^{\prime*}(\det\mathcal{A})-D\sim C\) _for some effective divisor_ \(C\) _on_ \(X^{\prime}\)_._
Proof.: Points (i) and (ii) are just points (a) and (b) of [15, Theorem 3.1]. As for point (iii), first, assume \(X\times_{Y}Y^{\prime}\) is reduced. Note that \(f^{\prime}\) is equidimensional since so is \(f\). In particular, as \(f\) and \(f^{\prime}\) are both equidimensional, we can freely replace \(Y\) by \(Y_{0}\coloneqq Y\setminus(\operatorname{Sing}(Y)\cup a(\operatorname{Sing}(Y^ {\prime})))\), \(Y^{\prime}\) by \(Y^{\prime}_{0}\coloneqq a^{-1}(Y_{0})\), \(X\) by \(f^{-1}(Y_{0})\) and \(X^{\prime}\) by \(f^{\prime-1}(Y^{\prime}_{0})\). Then point (iii) follows from point (d) of [15, Theorem 3.1]. If \(X\times_{Y}Y^{\prime}\) is not reduced, by Lemma 3.14, there exists \(U\subseteq Y\) with \(\operatorname{codim}(Y\setminus U)\geq 2\) such that \(f|_{X_{U}}\colon X_{U}\to U\) is flat, where \(X_{U}\coloneqq f^{-1}(U)\). Let \(U^{\prime}\coloneqq a^{-1}(U)\). By [14, Remark 2.5], the fibre product \(X_{U}\times_{U}U^{\prime}\) is reduced since \(f|_{X_{U}}\) is flat and \(X_{\overline{\eta}}\) is reduced. Let \(X^{\prime}_{U^{\prime}}\subseteq X^{\prime}\) be the normalisation of \(X_{U}\times_{U}U^{\prime}\). By the above discussion, we conclude \((f^{\prime*}(\det\mathcal{A})-D)|_{X^{\prime}_{U^{\prime}}}\sim C|_{X^{\prime }_{U^{\prime}}}\) for some divisor \(C\) on \(X^{\prime}\) such that \(C|_{X^{\prime}_{U^{\prime}}}\geq 0\). Since \(f^{\prime}\) is equidimensional, \(\operatorname{codim}(X^{\prime}\setminus X^{\prime}_{U^{\prime}})\geq 2\), therefore, by normality of \(X^{\prime}\), we can extend the above equation on all of \(X^{\prime}\).
In the sequel, we will need to consider base changes with purely inseparable maps that are not necessarily of height one. The previous results extend to this situation by induction on the height.
**Corollary 3.16**.: _Let \(f\colon X\to Y\) be an equidimensional contraction between normal projective varieties and \(g\colon Y\to Z\) a morphism between normal varieties. Let \(Y_{e}\) be the normalisation of the reduction of \(Y\times_{Z}Z^{e}\), \(X_{e}\) the normalisation of the reduction of \(X\times_{Y}Y_{e}\). Assume that \(X_{\overline{\eta}}\) is reduced, where \(\overline{\eta}\) is the geometric generic point of \(Y\). Let \(f_{e}\colon X_{e}\to Y_{e}\) and \(g_{e}\colon Y_{e}\to Z^{e}\) be the induced morphisms. Then:_
(i) \(K_{X_{e}/X}-f_{e}^{*}K_{Y_{e}/Y}\sim-C\) _for some effective Weil divisor_ \(C\) _on_ \(X_{e}\)_;_
(ii) \(K_{Y_{e}/Y}\sim D\) _for some Weil divisor_ \(D\) _on_ \(Y_{e}\) _and there is a non-empty open subset_ \(U\subseteq Z^{e}\) _such that_ \(-D|_{g_{e}^{-1}(U)}\) _is effective._
Proof.: We proceed by induction. When \(e=1\), let \(\mathcal{A}\) be the foliation on \(Y_{1}\) corresponding to \(Y_{1}\to Y\). By Proposition 3.13, \(K_{Y_{1}/Y}\simeq(\det\mathcal{A})^{[p-1]}\) and, by Theorem 3.15,
\[K_{X_{1}/X}-(p-1)f_{1}^{*}(\det\mathcal{A})\sim-C\]
for some effective divisor \(C\) on \(X_{1}\), giving point (i). Point (ii) corresponds to points (i) and (ii) of the above Theorem 3.15. If \(e>1\), consider the diagram:
[commutative diagram, not reproduced, relating \(X_{e}\), \(X_{e-1}\), \(X\), \(Y_{e}\), \(Y_{e-1}\), \(Y\), \(Z^{e}\), \(Z^{e-1}\) and \(Z\)]
where \(\pi_{1}\), \(\pi_{2}\), \(p_{1}\) and \(p_{2}\) are the induced maps. By the inductive assumptions,
* \(K_{X_{e-1}/X}-f_{e-1}^{*}K_{Y_{e-1}/Y}\sim-C_{1}\) and \(C_{1}\geq 0\);
* \(K_{X_{e}/X_{e-1}}-f_{e}^{*}K_{Y_{e}/Y_{e-1}}\sim-C_{2}\) and \(C_{2}\geq 0\);
* \(K_{Y_{e-1}/Y}\sim D_{1}\) and there exists an open \(U_{1}\subseteq Z^{e-1}\) such that \(-D_{1}|_{g_{e-1}^{-1}(U_{1})}\geq 0\);
* \(K_{Y_{e}/Y_{e-1}}\sim D_{2}\) and there exists an open \(U_{2}\subseteq Z^{e}\) such that \(-D_{2}|_{g_{e}^{-1}(U_{2})}\geq 0\).
Setting \(C\coloneqq\pi_{2}^{*}C_{1}+C_{2}\), \(D\coloneqq p_{2}^{*}D_{1}+D_{2}\), and \(U\coloneqq U_{1}\cap U_{2}\) we get the claim.
### \(F\)-singularities
Throughout this subsection we will denote by \((X,B)\) a sub-couple such that \(K_{X}+B\) is a \(\mathbb{Z}_{(p)}\)-divisor. If \((1-p^{e})(K_{X}+B)\) is integral for some \(e\geq 1\), we will denote by \(\mathcal{L}^{(e)}_{X,B}\) or \(\mathcal{L}^{(e)}_{B}\) the divisorial sheaf \(\mathcal{O}_{X}((1-p^{e})(K_{X}+B))\). In particular, \(\mathcal{L}^{(e)}=\mathcal{L}^{(e)}_{X}=\mathcal{O}_{X}((1-p^{e})K_{X})\).
**Definition 3.17**.: Suppose \(B\geq 0\), let \(e\geq 1\) such that \((p^{e}-1)(K_{X}+B)\) is integral, and let \(L\) be a divisor on \(X\). We have a natural map of \(\mathcal{O}_{X}\)-modules induced by Grothendieck duality
\[T^{e}_{B}\colon\mathrm{F}^{e}_{*}\mathcal{L}^{(e)}_{B}\subseteq\mathrm{F}^{e }_{*}\mathcal{L}^{(e)}\to\mathcal{O}_{X},\]
which in turn induces
\[T^{e}_{B}(L)\colon\mathrm{F}^{e}_{*}\mathcal{L}^{(e)}_{B}\otimes_{\mathcal{O} _{X}}\mathcal{O}_{X}(L)\to\mathcal{O}_{X}(L).\]
The space of _Frobenius stable sections of \(\mathcal{O}_{X}(L)\)_ is defined as
\[S^{0}(X,B;L)\coloneqq\bigcap_{e>0:\,(1-p^{e})(K_{X}+B)\text{ is Weil}}\mathrm{ Image}(H^{0}(X,T^{e}_{B}(L)))\subseteq H^{0}(X,\mathcal{O}_{X}(L)).\]
Note that \(S^{0}(X,B;L)=\mathrm{Image}(H^{0}(X,T^{e}_{B}(L)))\) for some \(e\gg 0\).
**Proposition 3.18** ([16, Section 2]).: _We have correspondences_
\[\left\{\begin{array}{c}\mathbb{Q}\text{-divisors }\Delta\geq 0\text{ such that}\\ (1-p^{e})(K_{X}+\Delta)\text{ is integral}\end{array}\right\}\longleftrightarrow\left\{\begin{array}{c}\text{divisorial sheaves }\mathcal{L}\text{ and}\\ \mathcal{O}_{X}\text{-linear maps}\\ \phi\colon\mathrm{F}_{*}^{e}\mathcal{L}\xrightarrow{\neq 0}\mathcal{O}_{X}\end{array}\right\}\Big{/}\sim\]
\[\left\{\begin{array}{c}\mathbb{Q}\text{-divisors }\Delta\text{ such that}\\ (1-p^{e})(K_{X}+\Delta)\text{ is integral}\end{array}\right\}\longleftrightarrow\left\{\begin{array}{c}\text{divisorial sheaves }\mathcal{L}\text{ and}\\ \mathcal{O}_{X}\text{-linear maps}\\ \phi\colon\mathrm{F}_{*}^{e}\mathcal{L}\xrightarrow{\neq 0}k(X)\end{array}\right\}\Big{/}\sim,\]
_where the horizontal arrows are bijections, and the equivalence relations on the right identify two maps which agree up to multiplication by a unit of \(H^{0}(X,\mathcal{O}_{X})\)._
Sketch of proof.: We outline the main ideas for the sake of completeness. When \(\Delta\geq 0\) we set \(\mathcal{L}\coloneqq\mathcal{O}_{X}((1-p^{e})(K_{X}+\Delta))\) and \(\phi\coloneqq T_{\Delta}^{e}\). Conversely, given \(\phi\), by Grothendieck duality we have
\[\phi\in\mathrm{Hom}_{\mathcal{O}_{X}}(\mathrm{F}_{*}^{e}\mathcal{ L},\mathcal{O}_{X}) \simeq\mathrm{Hom}_{\mathcal{O}_{X}}(\mathrm{F}_{*}^{e}(\mathcal{ L}(p^{e}K_{X})),\mathcal{O}_{X}(K_{X}))\] \[\simeq\mathrm{F}_{*}^{e}\mathrm{Hom}_{\mathcal{O}_{X}}(\mathcal{ L}(p^{e}K_{X}),\mathcal{O}_{X}(K_{X}))\] \[\simeq\mathrm{F}_{*}^{e}H^{0}(X,\mathcal{L}^{-1}((1-p^{e})K_{X})).\]
Hence we can identify \(\phi\) with an element \(D_{\phi}\in H^{0}(X,\mathcal{L}^{-1}((1-p^{e})K_{X}))\), and we can set \(\Delta\coloneqq D_{\phi}/(p^{e}-1)\). Note that if we change \(\phi\) by multiplication by a unit, we obtain the same \(\Delta\). It is easy to check that the two constructions are inverse to each other.
When \(\Delta=\Delta^{+}-\Delta^{-}\) is not effective one can see, by arguing locally, that there exists a non-zero \(\mathcal{O}_{X}\)-linear map \(\phi\) fitting in the diagram
for some effective Weil divisor \(E\geq(p^{e}-1)\Delta^{-}\). Indeed, working locally and letting \(s\in\mathcal{O}_{X}((1-p^{e})(K_{X}+\Delta^{+}))\) and \(s/f\in\mathcal{O}_{X}((1-p^{e})(K_{X}+\Delta))\), for \(f\) a regular function, we have
\[\phi\left(\mathrm{F}_{*}^{e}\left(\frac{s}{f}\right)\right)=\frac{T_{\Delta^ {+}}^{e}(\mathrm{F}_{*}^{e}(f^{p^{e}-1}s))}{f}.\]
Conversely, given \(\phi\) as in the hypothesis, a similar local computation shows that \(\mathrm{Image}(\phi)\subseteq\mathcal{O}_{X}(E)\), for some effective Weil divisor \(E\). Then the same argument as in the effective case yields the required divisor \(\Delta\) (see [17, Subsection 2.1 and Lemma 2.3]).
**Definition 3.19**.: We say \((X,B)\) is _globally sub-F-split_ (GsFS) if for all \(e\geq 1\) sufficiently divisible, letting \(\phi_{e}\colon\mathrm{F}_{*}^{e}\mathcal{L}_{B}^{(e)}\to k(X)\) be the map associated to \(B\) by Proposition 3.18, there exists a map of \(\mathcal{O}_{X}\)-modules \(\sigma_{e}\colon\mathcal{O}_{X}\to\mathrm{F}_{*}^{e}\mathcal{L}_{B}^{(e)}\) such that \(\phi_{e}\circ\sigma_{e}=\mathrm{id}_{\mathcal{O}_{X}}\). If \(B\geq 0\), we say \((X,B)\) is _globally F-split_ (GFS).
**Definition 3.20**.: We say \((X,B)\) is _globally F-regular_ (GFR) if \(B\geq 0\) and for every effective Weil divisor \(E\) and all \(e\geq 1\) sufficiently divisible, the \(\mathcal{O}_{X}\)-linear map
\[\mathrm{F}_{*}^{e}\mathcal{L}_{B+\frac{E}{p^{e}-1}}^{(e)}\hookrightarrow\mathrm{F}_{*}^{e}\mathcal{L}_{B}^{(e)}\xrightarrow{T_{B}^{e}}\mathcal{O}_{X}\]
admits a splitting
\[\mathrm{id}_{\mathcal{O}_{X}}\colon\mathcal{O}_{X}\xrightarrow{\sigma_{E,e}}\mathrm{F}_{*}^{e}\mathcal{L}_{B+\frac{E}{p^{e}-1}}^{(e)}\to\mathcal{O}_{X}.\]
Globally \(F\)-split and globally \(F\)-regular pairs should be thought of as pairs of log Calabi-Yau type, resp. log Fano type, with arithmetically well-behaved Frobenius. This is made somewhat more precise by the next statements.
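Two standard examples may help to keep this heuristic in mind. Projective toric varieties, such as \(\mathbb{P}^{n}\), are globally \(F\)-regular, and a projective toric variety together with its reduced toric boundary is a globally \(F\)-split pair. On the other hand, an elliptic curve is globally \(F\)-split if and only if it is ordinary, and it is never globally \(F\)-regular, in accordance with the Fano-type heuristic, since its anticanonical divisor is trivial and not big.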
**Theorem 3.21** ([10, Theorem 5.1]).: _Let \((X,B)\) be a klt pair over \(\mathbb{C}\) such that \(-K_{X}-B\) is ample. Then \((X,B)\) has open GFR type, that is for every model \((\mathcal{X},\mathcal{B})\to Spec(A)\) of \((X,B)\) the set of primes \(\mathfrak{p}\subset A\) such that \((\mathcal{X}_{\mathfrak{p}},\mathcal{B}_{\mathfrak{p}})\) is GFR is open and dense in \(\mathrm{Spec}(A)\)._
A similar statement is expected to hold for Calabi-Yau pairs.
**Conjecture 3.22** ([10, Problem 5.1.2]).: _Let \((X,B)\) be a log canonical pair over \(\mathbb{C}\) such that \(-K_{X}-B\sim_{\mathbb{Q}}0\). Then \((X,B)\) has dense GFS type, that is for every model \((\mathcal{X},\mathcal{B})\to Spec(A)\) of \((X,B)\) the set of primes \(\mathfrak{p}\subset A\) such that \((\mathcal{X}_{\mathfrak{p}},\mathcal{B}_{\mathfrak{p}})\) is GFS is dense in \(\mathrm{Spec}(A)\)._
The next Lemma shows that the class of GFR varieties is stable under small perturbations of the boundary.
**Lemma 3.23** ([10, Corollary 3.10, Remark 3.11]).: _Let \((X,B)\) be a GFR projective couple where \(B\) is a \(\mathbb{Z}_{(p)}\)-divisor and let \(D\geq 0\) be a \(\mathbb{Q}\)-divisor. Then \((X,B+\epsilon D)\) is GFR for all \(0\leq\epsilon\ll 1\) such that \(B+\epsilon D\) is a \(\mathbb{Z}_{(p)}\)-divisor._
**Lemma 3.24** ([11, Example 3.4]).: _If \(B\geq 0\), then \((X,B)\) is GFS iff \(S^{0}(X,B;\mathcal{O}_{X})=H^{0}(X,\mathcal{O}_{X})\)._
**Lemma 3.25** ([10, Proposition 3.8.(i)]).: _Let \((X,B)\) be a couple such that \(B\) is a \(\mathbb{Z}_{(p)}\)-divisor. Then \((X,B)\) is GFR if and only if for all \(\mathbb{Q}\)-divisors \(D\geq 0\) the couple \((X,B+D/(p^{e}-1))\) is GFS for all \(e\gg 1\)._
### Calabi-Yau deformations of globally \(F\)-split varieties
In this subsection we will study how GFS singularities behave in family.
_Remark 3.26_.: Let \((X,B)\) be a sub-couple over a perfect field \(k\), such that \(K_{X}+B\) is a \(\mathbb{Z}_{(p)}\)-divisor. Then all the different classes of \(F\)-singularities can be given by replacing the absolute Frobenius \(\mathrm{F}_{X}^{e}\) by the \(k\)-linear Frobenius \(\mathrm{F}_{X^{e}/k^{e}}^{e}\), as the two differ by the automorphism \(\mathrm{F}_{k}^{e}\).
**Lemma 3.27**.: _Let \(k\) be a perfect field, and let \(R\) be a smooth local \(k\)-algebra, essentially of finite type, with fraction field \(K\) and residue field \(k\). Let \(\pi\colon\,(X,B=\sum_{i}a_{i}B_{i})\to\operatorname{Spec}R\) be a pair over \(R\), where \(\pi\) is a flat contraction with geometrically normal fibres and \(\operatorname{Supp}(B_{i})\to\operatorname{Spec}R\) is flat for all \(i\). Assume \((1-p^{e})(K_{X}+B)\sim 0\) and that \((X_{\overline{k}},B_{\overline{k}})\) is GFS. Then \((X_{\overline{K}},B_{\overline{K}})\) is GFS too._
Proof.: Let \(g^{e}\colon\operatorname{Spec}\overline{k}^{e}\to\operatorname{Spec}R^{e}\), and consider the corresponding base-change of the leftmost triangle in (3.1).
In what follows we will freely and implicitly replace \(X\) with the complement of a closed subset having codimension and relative codimension \(\geq 2\), since all the sheaves we will be dealing with are reflexive. In particular, as \(\pi\) has geometrically normal fibres, we may actually assume that \(\pi\) is smooth. By [10, Lemma 2.18] the trace map of the relative Frobenius is compatible with base-change, that is we have a natural commutative diagram
where the horizontal arrows are isomorphisms. Note that we have
\[(\operatorname{F}^{e}_{X^{e}/R^{e}})^{*}\omega_{X_{R^{e}}/R^{e}}=\omega^{p^{e} }_{X^{e}/R^{e}}\]
by [10, Lemma 3.1]. After twisting the above commutative square by \(\omega^{-1}_{X_{R^{e}}/R^{e}}\) and composing with the inclusion \(\mathcal{L}^{(e)}_{X^{e},B^{e}}\subseteq\mathcal{L}^{(e)}_{X^{e}}\), we obtain
[Commutative square, not reproduced: it compares the twisted trace map \(\operatorname{F}^{e}_{X^{e}/R^{e},*}\mathcal{L}^{(e)}_{X^{e},B^{e}}\to\mathcal{O}_{X^{e}}\) with its base change to the fibre over \(\overline{k}^{e}\) via the projection \(q_{e}\).]

Hence, by taking global sections we obtain the corresponding commutative square of global sections (diagram not reproduced).
Now, as \((1-p^{e})(K_{X^{e}/R^{e}}+B^{e})\sim 0\) we have a unique (up to \(R^{*}\)) nonzero section \(s\in H^{0}(X_{R^{e}},\mathcal{O}_{X^{e}}((1-p^{e})(K_{X^{e}/R^{e}}+B^{e})))\). As \((X_{\overline{k}},B_{\overline{k}})\) is GFS, by Lemma 3.24 we have that \(s_{\overline{k}^{e}}:=q_{e}^{*}(s)\) gets mapped to a unit in \(\overline{k}^{e}\) by the (twisted) trace map of \(\mathrm{F}^{e}_{X_{\overline{k}^{e}}/\overline{k}^{e}}\). Hence \(s\) gets mapped to a unit in \(R^{e}\) by the (twisted) trace map of \(\mathrm{F}^{e}_{X^{e}/R^{e}}\). As the trace map is compatible with base-change, by restricting to the geometric generic fibre the same argument yields that \(s_{\overline{K}^{e}}\) gets mapped to a unit in \(\overline{K}^{e}\) by the (twisted) trace map of \(\mathrm{F}^{e}_{X_{\overline{k}^{e}}/\overline{K}^{e}}\). Thus, Lemma 3.24 gives the conclusion.
## 4. Stein-separable morphisms and canonical bundle formulae
By combining the results of [1, 2] with [14] one can show that morphisms with \(p\)-prime Stein degree and GFS general fibre satisfy a canonical bundle formula.
**Definition-Proposition 4.1**.: _Let \(g\colon X\to Z\) be a contraction of normal schemes and let \(B\geq 0\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \((1-p^{e})(K_{X}+B)\sim_{Z}0\) for some \(e\geq 1\) and \((X_{\overline{\zeta}},B_{\overline{\zeta}})\) is globally \(F\)-split, where \(\zeta\in Z\) is the generic point. Then there exists a canonically defined effective \(\mathbb{Q}\)-divisor \(B^{Z}\) on \(Z\) such that_
(a) \((1-p^{e})(K_{X}+B)\sim g^{*}((1-p^{e})(K_{Z}+B^{Z}))\)_;_
(b) \((X,B)\) _is GFS if and only if_ \((Z,B^{Z})\) _is GFS;_
(c) _if_ \(\Lambda\geq 0\) _is a_ \(\mathbb{Q}\)_-Cartier_ \(\mathbb{Q}\)_-divisor on_ \(Z\) _such that_ \(K_{Z}+B^{Z}+\Lambda\) _is_ \(\mathbb{Z}_{(p)}\)_-Cartier, then_ \((B+g^{*}\Lambda)^{Z}=B^{Z}+\Lambda\)_._
Proof.: Point (a) follows by [1, Theorem 3.17] (see also [1, Theorem 5.2]). This boils down to the following: write \((1-p^{e})(K_{X}+B)\sim g^{*}M\) for some Cartier divisor \(M\) on \(Z\), so that we have \(T_{B}^{e}\colon\mathrm{F}^{e}_{*}\mathcal{O}_{X}(g^{*}M)\to\mathcal{O}_{X}\). By pushing forward via \(g\) and using the projection formula we obtain \(\mathrm{F}^{e}_{*}\mathcal{O}_{Z}(M)\xrightarrow{\phi}\mathcal{O}_{Z}\). As \((X_{\overline{\zeta}},B_{\overline{\zeta}})\) is GFS we have \(\phi\neq 0\) ([1, Observation 3.19]), and Proposition 3.18 yields a canonically defined \(\mathbb{Q}\)-divisor \(B^{Z}\geq 0\) such that \(M\sim(1-p^{e})(K_{Z}+B^{Z})\) and \(\phi=T_{B^{Z}}^{e}\).
As for point (b), Lemma 3.24 implies it is enough to show
\[S^{0}(X,B;\mathcal{O}_{X})=S^{0}(Z,B^{Z};\mathcal{O}_{Z}).\]
By the construction in point (a) we have a commutative diagram
(4.1)
where the vertical arrows are isomorphisms. Taking global sections for \(e\geq 1\) large enough yields \(S^{0}(X,B;\mathcal{O}_{X})=S^{0}(Z,B^{Z};\mathcal{O}_{Z})\), thus we conclude by Lemma 3.24.
Point (c) follows by the same argument as in (a), noting that (4.1) can be completed to
Observe that, since we are twisting by a divisor on \(Z\), we still have \(T^{e}_{B+g^{*}\Lambda,\overline{\zeta}}\neq 0\neq T^{e}_{B^{Z}+\Lambda, \overline{\zeta}}\).
We are now ready to define KGFR pairs.
**Definition 4.2**.: A projective couple \((X,B)\) is said to be \(K\)_-globally F-regular_ (KGFR) if
* \(-K_{X}-B\) is semiample with induced contraction \(f\colon X\to Y\);
* the geometric generic fibre of \(f\), \((X_{\overline{\eta}},B_{\overline{\eta}})\), is GFS;
* \(K_{X}+B\sim_{\mathbb{Z}_{(p)},Y}0\);
* the pair \((Y,B^{Y})\) induced by Proposition 4.1 is GFR.
Note that \((X,B)\) is GFS _a posteriori_ thanks to Proposition 4.1(b).
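Two extreme cases, which follow directly from the definition, may help to orient the reader. If \(-K_{X}-B\sim_{\mathbb{Z}_{(p)}}0\), then the induced contraction is \(X\to\operatorname{Spec}(k)\) and the base is trivially GFR, so \((X,B)\) is KGFR as soon as \((X_{\overline{k}},B_{\overline{k}})\) is GFS. If instead \(-K_{X}-B\) is ample and \(K_{X}+B\) is \(\mathbb{Z}_{(p)}\)-Cartier, the induced contraction is an isomorphism and \(B^{Y}=B\), so \((X,B)\) is KGFR exactly when it is GFR. In this sense the notion interpolates between globally \(F\)-split log Calabi-Yau pairs and globally \(F\)-regular pairs, as indicated in the introduction.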
**Definition-Proposition 4.3**.: _Let \(h\colon Z\to Y\) be a separable finite morphism of integral normal \(k\)-schemes such that \(p\nmid\deg(h)\), and let \(B\geq 0\) be a \(\mathbb{Q}\)-divisor on \(Z\) such that \((Z,B)\) is globally \(F\)-split and \((1-p^{e})(K_{Z}+B)\sim_{Y}0\) for some \(e\geq 1\). Then there exists a canonically defined effective \(\mathbb{Q}\)-divisor \(B^{Y}\) on \(Y\) such that_
(a) \((1-p^{e})(K_{Z}+B)\sim h^{*}((1-p^{e})(K_{Y}+B^{Y}))\)_;_
(b) \((Y,B^{Y})\) _is GFS;_
(c) _if_ \(\Lambda\geq 0\) _is a_ \(\mathbb{Q}\)_-Cartier_ \(\mathbb{Q}\)_-divisor on_ \(Y\) _such that_ \(K_{Y}+B^{Y}+\Lambda\) _is_ \(\mathbb{Z}_{(p)}\)_-Cartier and_ \((Y,B^{Y}+\Lambda)\) _is GFS, then_ \((Z,B+h^{*}\Lambda)\) _is GFS too, and_ \((B+h^{*}\Lambda)^{Y}=B^{Y}+\Lambda\)_;_
(d) _If_ \((Z,B)\) _is GFR then_ \((Y,B^{Y})\) _is GFR._
Proof.: Let \(M\) be a Cartier divisor such that \((1-p^{e})(K_{Z}+B)\sim h^{*}(M)\), let \(\mathcal{L}\coloneqq\mathcal{O}_{Y}(M)\) and \(d\coloneqq\deg(h)\). By [13, Corollary 4.2] we have the following commutative diagram
(4.2)
where \(\phi_{Z}\) is defined by \(B\) using the correspondence in Proposition 3.18 and \(\phi_{Y}\) is defined by composition with the natural maps. Note that, for each column, going up and then down yields the identity. By taking global sections and using GFS-ness of \((Z,B)\) we obtain \(1\in\operatorname{Image}(H^{0}(Y,\phi_{Y}))\). In particular \(\phi_{Y}\neq 0\), thus by Proposition 3.18 we obtain an effective \(\mathbb{Q}\)-divisor \(B^{Y}\) such that \((Y,B^{Y})\) is GFS too and \((1-p^{e})(K_{Z}+B)\sim h^{*}((1-p^{e})(K_{Y}+B^{Y}))\).
To show point (c), denote by \(\lambda\colon\mathcal{L}((1-p^{e})\Lambda)\to\mathcal{L}\) the natural map, and observe that (4.2) can be completed to
As \((Y,B^{Y}+\Lambda)\) is GFS, we have \(1\in\operatorname{Image}(H^{0}(Y,\psi_{Y}))\), hence \(1\in\operatorname{Image}(H^{0}(Z,\psi_{Z}))\). By Proposition 3.18 we see that the maps
\[\operatorname{F}_{*}^{e}h^{*}\mathcal{L}((1-p^{e})\Lambda)\xrightarrow{ \operatorname{F}_{*}^{e}h^{*}\lambda}\operatorname{F}_{*}^{e}h^{*}\mathcal{L} \xrightarrow{\phi_{Z}}\mathcal{O}_{Z}\ \text{ and }\ \operatorname{F}_{*}^{e}\mathcal{L}((1-p^{e}) \Lambda)\xrightarrow{\operatorname{F}_{*}^{e}\lambda}\operatorname{F}_{*}^{e }\mathcal{L}\xrightarrow{\phi_{Y}}\mathcal{O}_{Y}\]
correspond to the divisors \(B+h^{*}\Lambda\) and \(B^{Y}+\Lambda\), respectively.
To show point (d), let \(D\geq 0\) be a divisor on \(Y\). By Lemma 3.25 it is enough to show that \((Y,B^{Y}+D/(p^{e}-1))\) is GFS whenever \(e\) is large enough. As \((Z,B)\) is GFR, we have that \((Z,B+h^{*}\left(D/(p^{e}-1)\right))\) is GFS provided \(e\gg 0\), again by Lemma 3.25. Arguing as in point (a) we can push the splitting forward to \(Y\) via the trace map of
\[\begin{CD}h_{*}\mathcal{O}_{Z}@>{}>{}>h_{*}\mathrm{F}_{*}^{e^{\prime}} \mathcal{L}_{B+h^{*}(D/(p^{e}-1))}^{(e^{\prime})}@>{h_{*}T_{B+h^{*}(D/(p^{e}-1) )}}>{}>h_{*}\mathcal{O}_{Z}\\ h^{\sharp}\Bigg{\{}\Bigg{\}}^{\frac{\mathrm{Tr}_{Z/Y}}{d}}\\ \mathcal{O}_{Y}@>{}>{}>\mathrm{F}_{*}^{e^{\prime}}\mathcal{L}_{B^{Y}+D/(p^{e} -1)}^{(e^{\prime})}@>{T_{B^{Y}+D/(p^{e}-1)}}>{}>\mathcal{O}_{Y},\end{CD}\]
hence \((Y,B^{Y}+D/(p^{e}-1))\) is GFS.
_Remark 4.4_.: It might be tempting to try and extend this result to the case of a split finite morphism \(h\) (i.e. such that the natural map \(\mathcal{O}_{Y}\to h_{*}\mathcal{O}_{Z}\) is split). However, this can easily be seen to fail by taking \(Z=Y=\mathbb{P}^{1}\) and \(h=\mathrm{F}\). As explained in [1] a necessary and sufficient condition for \(\psi_{Z}\) to descend to a nonzero \(\psi_{Y}\) is that the splitting of \(\mathcal{O}_{Y}\to h_{*}\mathcal{O}_{Z}\) is given by the trace map. In this case it then follows that \(B=h^{*}B^{Y}+\mathrm{Ram}(h)\), where \(\mathrm{Ram}(h)\) denotes the ramification divisor of \(h\).
**Definition-Proposition 4.5**.: _Let \(f\colon X\to Y\) be a morphism of integral normal schemes such that \(p\nmid\mathrm{St.deg}(f)\), and let \(B\) be an effective \(\mathbb{Q}\)-divisor on \(X\) such that \((X,B)\) is globally \(F\)-split and \((1-p^{e})(K_{X}+B)\sim_{Y}0\). Then there exists a canonically determined effective \(\mathbb{Q}\)-divisor \(B^{Y}\) on \(Y\) such that_
1. \((1-p^{e})(K_{X}+B)\sim f^{*}((1-p^{e})(K_{Y}+B^{Y}))\)_;_
2. \((Y,B^{Y})\) _is GFS;_
3. _if_ \(\Lambda\geq 0\) _is a_ \(\mathbb{Q}\)_-Cartier_ \(\mathbb{Q}\)_-divisor on_ \(Y\) _such that_ \(K_{Y}+B^{Y}+\Lambda\) _is_ \(\mathbb{Z}_{(p)}\)_-Cartier, then_ \((B+f^{*}\Lambda)^{Y}=B^{Y}+\Lambda\)_;_
4. \((X,B+f^{*}\Lambda)\) _is GFS if and only if_ \((Y,B^{Y}+\Lambda)\) _is GFS._
5. \((Y,B^{Y})\) _is GFR whenever_ \((X,B)\) _is KGFR._
Proof.: Follows immediately from applying Proposition 4.1 and Proposition 4.3 to the Stein factorisation \(f\colon X\xrightarrow{g}Z\xrightarrow{h}Y\).
## 5. F-complements
Complements were first introduced by Shokurov in [11]: given a pair \((X,B)\) a _complement_ is a \(\mathbb{Q}\)-divisor \(\Gamma\geq 0\) such that \((X,B+\Gamma)\) is lc and \(K_{X}+B+\Gamma\sim_{\mathbb{Q}}0\). In this subsection we introduce an analogous notion for \(F\)-singularities in the relative setting.
**Definition 5.1**.: Let \(f\colon(X,B)\to Y\) be a contraction of normal quasi-projective varieties, where \(B=B^{+}-B^{-}\) is a \(\mathbb{Q}\)-divisor such that \(\mathrm{Supp}(B^{-})\) does not dominate \(Y\), and whose geometric generic fibre \((X_{\overline{\eta}},B_{\overline{\eta}})\) is globally \(F\)-split. Let \(L\) be a \(\mathbb{Q}\)-effective \(\mathbb{Q}\)-divisor on \(X\). We say \(L\)_admits an \(F\)-complement for \((X/Y,B)\)_ if there exists \(\Lambda\in|L|_{\mathbb{Q}}\) such that \((1-p^{e})(K_{X}+B+\Lambda)\sim_{Y}0\) for some \(e\geq 1\), and \((X_{\overline{\eta}},B_{\overline{\eta}}+\Lambda_{\overline{\eta}})\) is globally \(F\)-split. In this case we say \(\Lambda\) is an _F-complement for \((X/Y,B)\)_. When \(X\) is a \(k\)-variety and \(Y=\mathrm{Spec}(k)\) we just refer to \(\Lambda\) as an \(F\)-complement for \((X,B)\).
By results of Schwede and Smith we have that projective GFS couples admit \(F\)-complements.
**Theorem 5.2** ([10, Theorem 4.3.(ii)]).: _Let \(k\) be an \(F\)-finite field and let \((X,B)\) be a globally F-split quasi-projective normal variety over \(k\). Then there exists an F-complement \(\Gamma\) for \((X,B)\)._
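For a concrete instance of Theorem 5.2 and Definition 5.1, take \(X=\mathbb{P}^{1}\) over an \(F\)-finite field \(k\), \(B=0\) and \(L=-K_{X}\): the couple \((\mathbb{P}^{1},0)\) is globally \(F\)-split, and \(\Gamma=\{0\}+\{\infty\}\in|-K_{\mathbb{P}^{1}}|\) is an \(F\)-complement, since \(K_{\mathbb{P}^{1}}+\Gamma\sim 0\) and \((\mathbb{P}^{1},\{0\}+\{\infty\})\) is globally \(F\)-split, being a toric pair with reduced toric boundary.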
We give a sufficient condition for the existence of \(F\)-complements on contractions with KGFR fibres.
**Theorem 5.3**.: _Let \(f\colon X\to Y\) be a contraction of normal quasi-projective varieties with general fibre \(X_{y}\), and let \(B\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(\operatorname{Supp}(B^{-})\) does not dominate \(Y\) and \((X_{y},B_{X_{y}})\) is KGFR. Let \(D\) be a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor on \(Y\), set \(L\coloneqq-K_{X}-B-f^{*}D\), and assume there is \(m\in\mathbb{N}\setminus p\mathbb{N}\) such that \(mL\) is Cartier and \(|V|\subseteq|mL|\) such that \(\phi_{|V|_{X_{y}}}\) is a morphism and \(p\nmid\operatorname{St.deg}(\phi_{|V|_{X_{y}}})\). Then \(L\) admits an \(F\)-complement for \((X/Y,B)\)._
Before we give a proof, we need the following result.
**Proposition 5.4**.: _Let \(G\) be a non-normal projective \(k\)-variety, let \(A\) be an ample \(\mathbb{Q}\)-divisor on \(G\), let \(\pi\colon\overline{G}\to G\) be the normalisation morphism and let \(\overline{\Delta}\geq 0\) be a \(\mathbb{Q}\)-divisor on \(\overline{G}\) such that_
* \((\overline{G},\overline{\Delta})\) _is a GFR pair;_
* \(-K_{\overline{G}}-\overline{\Delta}\sim_{\mathbb{Q}}\pi^{*}A\)_._
_Then there exists an \(F\)-complement \(\overline{\Gamma}\) for \((\overline{G},\overline{\Delta})\) such that \(\overline{\Gamma}=\pi^{*}\Gamma\) for some \(\Gamma\in|A|_{\mathbb{Q}}\)._
Proof.: Let \(m\geq 1\) divisible enough, and let \(L\coloneqq mA\) and \(\overline{L}\coloneqq m(-K_{\overline{G}}-\overline{\Delta})\sim\pi^{*}L\). Denote by \(\overline{\mathcal{C}}\subset\mathcal{O}_{\overline{G}}\) and \(\mathcal{C}\subset\mathcal{O}_{G}\) the conductor ideals, and by \(\overline{C}\subset\overline{G}\) and \(C\subset G\) the respective closed subschemes, so that we have an induced finite morphism \(\pi|_{\overline{C}}\colon\overline{C}\to C\). We then have the following morphism of short exact sequences ([11, 2.1]) for all \(l\geq 1\):
(5.1)
Let now \(R\) be an effective Cartier divisor on \(\overline{G}\) such that \(\mathcal{O}_{\overline{G}}(-R)\subseteq\overline{\mathcal{C}}\). For \(l\gg 0\) we have that \((\overline{G},\overline{\Delta}+R/l)\) is still GFR by Lemma 3.23, thus we have an \(F\)-complement \(\overline{\Xi}\) for \((\overline{G},\overline{\Delta}+R/l)\) by Theorem 5.2. In particular, \(\overline{\Gamma}\coloneqq\overline{\Xi}+R/l\) is an \(F\)-complement for \((\overline{G},\overline{\Delta})\). We now need to show that \(\overline{\Gamma}\) descends to \(G\). After multiplying by \(lmn\) for some \(n\geq 1\) to clear denominators, we obtain that \(lmn\overline{\Gamma}\) is the divisor of a section \(\overline{\gamma}\in H^{0}(\overline{G},\mathcal{O}_{\overline{G}}(ln\overline {L})\otimes_{\mathcal{O}_{\overline{G}}}\overline{\mathcal{C}})\), and (5.1) shows that \(\overline{\gamma}=\pi^{*}\gamma\) for some \(\gamma\in H^{0}(G,\mathcal{O}_{G}(lnL)\otimes_{\mathcal{O}_{G}}\mathcal{C})\). We conclude by letting \(\Gamma\coloneqq(\gamma=0)/lmn\).
Proof of Theorem 5.3.: Up to replacing \(m\) with a multiple (cf. [10, Exercise 3.5]) we may assume \(m=p^{e}-1\) for some \(e\geq 1\): indeed, as \(\gcd(m,p)=1\), the integer \(m\) divides \(p^{e}-1\) whenever \(e\) is a multiple of the multiplicative order of \(p\) modulo \(m\). Throughout the rest of the proof we will freely and implicitly replace \(m\) with a suitable multiple (equivalently, replace \(e\) with a suitable multiple). Consider the following diagram
(5.2)
where notation is as follows: \(\Phi\coloneqq\phi_{|V|}\), \(\varphi\coloneqq\phi_{|V|_{X_{y}}}\), \(\pi\) is the normalisation morphism, and \(\overline{\varphi}\) is the naturally induced morphism. Let \(A\coloneqq\mathcal{O}_{W}(1)\) and \(A_{G}\coloneqq\mathcal{O}_{G}(1)\). Then Proposition 4.5 yields an effective \(\mathbb{Q}\)-divisor \((B_{X_{y}})^{\overline{G}}\) on \(\overline{G}\), such that \((\overline{G},(B_{X_{y}})^{\overline{G}})\) is GFR and \(-m(K_{X_{y}}+B_{X_{y}})\sim\overline{\varphi}^{*}(-m(K_{\overline{G}}+(B_{X_{y}})^{\overline{G}}))\). By Proposition 5.4 there is an \(F\)-complement \(\Lambda_{\overline{G}}\in|-K_{\overline{G}}-(B_{X_{y}})^{\overline{G}}|_{\mathbb{Q}}\) for \((\overline{G},(B_{X_{y}})^{\overline{G}})\) such that \(\Lambda_{\overline{G}}=\pi^{*}\Lambda_{G}\) for some \(\Lambda_{G}\in|A_{G}/m|_{\mathbb{Q}}\). Letting \(\Lambda_{X_{y}}\coloneqq\varphi^{*}\Lambda_{G}\) we then have that \((X_{y},B_{X_{y}}+\Lambda_{X_{y}})\) is GFS and \(K_{X_{y}}+B_{X_{y}}+\Lambda_{X_{y}}\sim_{\mathbb{Z}_{(p)}}0\), by Proposition 4.5. The above diagram (5.2) induces the following on global sections for all \(l\geq 0\):
As the two rightmost maps are surjective for all \(l\gg 0\) by Serre Vanishing, we conclude we can lift \(\Lambda_{X_{y}}\) to \(\Lambda\in|L|_{\mathbb{Q}}\). As \((X_{y},B_{X_{y}}+\Lambda_{X_{y}})\) is GFS, we have that \((X_{\overline{\eta}},B_{\overline{\eta}}+\Lambda_{\overline{\eta}})\) is also GFS by Lemma 3.27.
Thanks to the KGFR condition on the fibres, we can find \(F\)-complements even after a small perturbation of the boundary.
**Corollary 5.5**.: _Let \(f\colon X\to Y\) be an equidimensional contraction of normal quasi-projective varieties with general fibre \(X_{y}\), and let \(B\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(\operatorname{Supp}(B^{-})\) does not dominate \(Y\) and \((X_{y},B_{X_{y}})\) is KGFR. Let \(D\) be a \(\mathbb{Q}\)-divisor on \(Y\), set \(L\coloneqq-K_{X}-B-f^{*}D\), and assume
_there is \(m\in\mathbb{N}\setminus p\mathbb{N}\) such that \(mL\) is Cartier and \(|V|\subseteq|mL|\) such that \(\phi_{|V|_{X_{y}}}\) is a morphism and \(p\nmid\operatorname{St.deg}(\phi_{|V|_{X_{y}}})\). Let \(E\) be a \(\mathbb{Q}\)-divisor on \(Y\) and suppose there exists a \(\mathbb{Q}\)-divisor \(0\leq\Gamma\sim_{\mathbb{Q}}L-f^{*}E\). Then \(L_{\epsilon}\coloneqq(1-\epsilon)L\) admits an \(F\)-complement for \((X/Y,B+\epsilon\Gamma)\) for all \(\epsilon\in\mathbb{Z}_{(p),>0}\) small enough._
Proof.: Let
\[B_{\epsilon}\coloneqq B+\epsilon\Gamma,\ \ D_{\epsilon}\coloneqq D+\epsilon E,\ \ L_{ \epsilon}\coloneqq-K_{X}-B_{\epsilon}-f^{*}D_{\epsilon}\]
so that \(L_{\epsilon}\sim_{\mathbb{Q}}(1-\epsilon)L\). The corollary will follow from Theorem 5.3 as soon as we verify that
1. \((X_{y},B_{\epsilon,X_{y}})\) is KGFR, and
2. there is \(n\in\mathbb{N}\setminus p\mathbb{N}\) s.t. \(nL_{\epsilon}\) is Cartier and there exists \(|V_{\epsilon}|\subseteq|nL_{\epsilon}|\) s.t. \(|V_{\epsilon}|_{X_{y}}\) induces a morphism with Stein degree not divisible by \(p\).
Let \(\psi\colon X_{y}\to H\) be the semiample contraction of \(-K_{X_{y}}-B_{X_{y}}\). Then Proposition 4.1 yields an effective \(\mathbb{Q}\)-divisor \((B_{X_{y}})^{H}\) such that \((H,(B_{X_{y}})^{H})\) is GFR. In particular we can write \(\Gamma_{X_{y}}=\psi^{*}\Gamma_{H}\) for some \(\Gamma_{H}\in|-K_{H}-(B_{X_{y}})^{H}|_{\mathbb{Q}}\). As \((H,(B_{X_{y}})^{H})\) is GFR, Lemma 3.23 shows that \((H,(B_{X_{y}})^{H}+\epsilon\Gamma_{H})\) is GFR for every \(\epsilon\) small enough such that \((B_{X_{y}})^{H}+\epsilon\Gamma_{H}\) is a \(\mathbb{Z}_{(p)}\)-divisor; moreover \(K_{H}+(B_{X_{y}})^{H}+\epsilon\Gamma_{H}\) is \(\mathbb{Z}_{(p)}\)-Cartier, hence \((X_{y},B_{X_{y}}+\epsilon\Gamma_{X_{y}})\) is also KGFR by Proposition 4.1 (b) and (c). This shows (1). To show (2), let \(l\) be a positive integer prime to \(p\) such that \((1-\epsilon)l\) is an integer. Then we conclude by letting \(V_{\epsilon}\coloneqq V^{\otimes(1-\epsilon)l}\subseteq H^{0}(X,mlL_{\epsilon})\).
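To make the last step of the proof concrete (with numbers chosen purely for illustration, and leaving aside the separate requirement that \(\epsilon\) be small enough for (1)): if \(p=5\), \(\epsilon=1/6\in\mathbb{Z}_{(5),>0}\) and \(l=6\), then

\[(1-\epsilon)l=5,\qquad V_{\epsilon}=V^{\otimes 5}\subseteq H^{0}(X,5mL)=H^{0}(X,6m\,L_{\epsilon}),\qquad p\nmid 6m,\]

so \(n=6m\) works in (2).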
## 6. Injectivity Theorem
The main result of this section is the following injectivity theorem (see [1, Theorem 4.3]).
**Theorem 6.1** (Injectivity Theorem).: _Let \(f\colon X\to Y\) be an equidimensional contraction between normal quasi-projective varieties over a perfect field. Assume the general fibre \(X_{y}\) is normal. Let \(B=B^{+}-B^{-}\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(\operatorname{Supp}(B^{-})\) does not dominate \(Y\) and \((X_{y},B_{X_{y}})\) is KGFR. Let \(D\) be a \(\mathbb{Q}\)-divisor on \(Y\), set \(L\coloneqq-K_{X}-B-f^{*}D\) and assume_
1. _there is_ \(m\in\mathbb{N}\setminus p\mathbb{N}\) _such that_ \(mL\) _is Cartier and_ \(|V|\subseteq|mL|\) _such that_ \(\phi_{|V|_{X_{y}}}\) _is a morphism with_ \(p\nmid\operatorname{St.deg}(\phi_{|V|_{X_{y}}})\)_;_
2. _there exists a_ \(\mathbb{Q}\)_-Cartier_ \(\mathbb{Q}\)_-divisor_ \(P\geq B^{-}\) _such that_ \(\kappa(X,f^{*}(-K_{Y}-D)+P)=0\)_._
_Then, the natural map_
\[H^{0}(X,mL)\to H^{0}(X_{y},mL_{X_{y}})\]
_is injective for all \(m\geq 0\). In particular we have \(\kappa(X,L)\leq\kappa(X_{y},L_{X_{y}})\)._
Provided that our contraction admits \(F\)-complements, we can follow the same proof as in [1, Theorem 3.8 and Proposition 4.2].
**Proposition 6.2**.: _Let \(f\colon X\to Y\) be a contraction of normal quasi-projective varieties, and let \(B=B^{+}-B^{-}\) be a \(\mathbb{Q}\)-divisor such that \(\operatorname{Supp}(B^{-})\) does not dominate \(Y\). Let \(D\) be a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor on \(Y\), let \(L\coloneqq-K_{X}-B-f^{*}D\) and suppose \(L\) admits an \(F\)-complement for \((X/Y,B)\). Lastly assume_
1. \(f\) _is equidimensional, or_
2. \(B\geq 0\) _and_ \(Y\) _is_ \(\mathbb{Q}\)_-Gorenstein._
_Then, \(f^{*}(-K_{Y}-D)+B^{-}\) is \(\mathbb{Q}\)-effective._
Proof.: Let \(0\leq\Lambda\sim_{\mathbb{Q}}L\) be an \(F\)-complement for \((X/Y,B)\) and consider \(\Delta\coloneqq B+\Lambda\), so that \((X_{\overline{\eta}},\Delta_{\overline{\eta}})\) is globally \(F\)-split and \(K_{X}+\Delta\sim_{\mathbb{Z}_{(p)},Y}0\). Then [1, Theorem 3.17] yields a canonically defined \(\mathbb{Q}\)-divisor \(\Delta_{Y}\) such that
\[K_{X}+\Delta\sim_{\mathbb{Z}_{(p)}}f^{*}(K_{Y}+\Delta_{Y})\sim_{\mathbb{Q}}f^{* }(-D).\]
Hence it is enough to show that \(f^{*}(\Delta_{Y})+B^{-}\) is \(\mathbb{Q}\)-effective. If \(B\geq 0\) then \(\Delta_{Y}=\Delta^{Y}\) ([1, Theorem 3.17], Proposition 4.1), in particular, it is effective. Suppose now \(f\) is equidimensional: then every component \(P\) of \(\operatorname{Supp}(\Delta^{v})\) is mapped to a prime divisor \(Q\subset Y\), hence \(f^{*}(\Delta_{Y})\geq\Delta^{v}\). Indeed, [1, Proposition 5.7] yields \(\operatorname{coeff}_{Q}(\Delta_{Y})=1-d_{Q}\), where
\[d_{Q}\coloneqq\sup\{t\text{ s.t. }(X,\Delta+f^{*}(tQ))\text{ is globally sub $F$-split over }\eta_{Q}\}.\]
In particular, as globally sub-\(F\)-split sub-couples are sub-log canonical in codimension one ([1, Lemma 2.14]), we have:
\[\operatorname{coeff}_{P}(\Delta^{v})\leq 1-d_{Q}\operatorname{coeff}_{P}(f^{*}(Q))\leq\operatorname{coeff}_{P}(f^{*}(Q))(1-d_{Q})=\operatorname{coeff}_{P}(f^{*}(\Delta_{Y})),\]
where the second inequality holds because \(\operatorname{coeff}_{P}(f^{*}(Q))\geq 1\).
As \(\Delta^{v}+B^{-}\geq 0\), we conclude that
\[f^{*}(\Delta_{Y})+B^{-}\geq 0.\]
**Corollary 6.3**.: _Let \(f\colon X\to Y\) be a contraction of normal quasi-projective varieties, with \(Y\)\(\mathbb{Q}\)-Gorenstein, and let \(B=B^{+}-B^{-}\) be a \(\mathbb{Q}\)-divisor on \(X\). If \(B^{-}\neq 0\), assume moreover that \(f\) is equidimensional and that \(\operatorname{Supp}(B^{-})\) does not dominate \(Y\). Let \(D,E\) be \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisors on \(Y\), set \(L\coloneqq-K_{X}-B-f^{*}D\), and assume that_
1. _there exists_ \(0\leq\Gamma\sim_{\mathbb{Q}}L-f^{*}E\)_;_
2. \(L_{\epsilon}\coloneqq(1-\epsilon)L\) _admits an_ \(F\)_-complement for_ \((X/Y,B+\epsilon\Gamma)\)_, for_ \(\epsilon\in[0,1)\cap\mathbb{Q}\)_._
_Then, \(f^{*}(-K_{Y}-D-\epsilon E)+B^{-}\) is \(\mathbb{Q}\)-effective._
Proof.: Let
\[B_{\epsilon}\coloneqq B+\epsilon\Gamma,\ \ D_{\epsilon}\coloneqq D+\epsilon E,\]
so that \(L_{\epsilon}=-K_{X}-B_{\epsilon}-f^{*}D_{\epsilon}\) and all the hypotheses of Proposition 6.2 are satisfied with respect to \(f\colon(X,B_{\epsilon})\to Y\), \(L_{\epsilon}\), and \(D_{\epsilon}\). Then Proposition 6.2 yields \(f^{*}(-K_{Y}-D_{\epsilon})+B_{\epsilon}^{-}=f^{*}(-K_{Y}-D-\epsilon E)+B^{-}\) is \(\mathbb{Q}\)-effective.
We are now ready to prove our injectivity theorem.
Proof of Theorem 6.1.: Let \(X_{y}\coloneqq f^{-1}(y)\) for a general \(y\in Y\) and note that we may assume \(f\) is flat over a neighborhood of \(y\). Let \(U\subseteq Y\) be the regular locus of \(Y\); since \(Y\) is normal, its complement has codimension \(\geq 2\). We may assume \(y\in U\). Since \(f\) is equidimensional, the complement of \(X_{U}\coloneqq f^{-1}(U)\) has codimension \(\geq 2\). In particular, it is enough to prove that \(H^{0}(X_{U},mL|_{X_{U}})\to H^{0}(X_{y},mL_{X_{y}})\) is injective. By substituting \(Y\) with \(U\) and \(X\) with \(X_{U}\), we can assume \(Y\) is smooth. By contradiction, suppose
the map \(H^{0}(X,mL)\to H^{0}(X_{y},mL_{X_{y}})\) is not injective. Then there exists a \(\mathbb{Q}\)-divisor \(0\leq N\sim_{\mathbb{Q}}L\) such that \(X_{y}\subseteq\operatorname{Supp}(N)\). Note that, by hypothesis, there exists a unique effective \(\mathbb{Q}\)-divisor \(M\sim_{\mathbb{Q}}f^{*}(-K_{Y}-D)+P\), hence we may assume \(X_{y}\not\subseteq\operatorname{Supp}(M)\). Consider now the diagram
where notation is as follows:
* \(Y^{\prime}\) is the blowup of \(Y\) at \(y\) with exceptional divisor \(E\), so that \(\mu^{*}K_{Y}+aE=K_{Y^{\prime}}\), with \(a=\dim Y-1\);
* \(X^{\prime}\) is the fibre product (hence it is also the blowup of \(X\) at \(X_{y}\) with exceptional divisor \(G\), since blowup and flat base-change commute);
* \(f^{\prime}\) is the induced morphism (note that since \(f\) is equidimensional, so is \(f^{\prime}\));
* \(B^{\prime}\) is the strict transform of \(B\), so that \(\pi^{*}(K_{X}+B)+bG=K_{X^{\prime}}+B^{\prime}\), with \(b\leq a\).
Let also \(D^{\prime}\coloneqq\mu^{*}D-aE\) and \(L^{\prime}\coloneqq-K_{X^{\prime}}-B^{\prime}-f^{\prime*}D^{\prime}\sim_{ \mathbb{Q}}\pi^{*}L+(a-b)G\geq\pi^{*}L\). As \(\operatorname{Supp}(N)\supseteq X_{y}\) we have \(\operatorname{Supp}(\pi^{*}N)\supseteq G\). Thus, letting \(N^{\prime}\coloneqq\pi^{*}N+(a-b)G\), \(\operatorname{Supp}(N^{\prime})\supseteq G\) too. In particular, for \(0<\delta\ll 1\), we have an effective divisor
\[0\leq\Gamma^{\prime}\coloneqq N^{\prime}-\delta G\sim_{\mathbb{Q}}L^{\prime}- f^{\prime*}E^{\prime},\ \ E^{\prime}\coloneqq\delta E.\]
Note that the data \(f^{\prime}\colon(X^{\prime},B^{\prime})\to Y^{\prime},D^{\prime},E^{\prime},\Gamma^{\prime}\) satisfy the hypotheses of Corollary 5.5, so \(L^{\prime}_{\epsilon}\coloneqq(1-\epsilon)L^{\prime}\) admits an \(F\)-complement for \((X^{\prime}/Y^{\prime},B^{\prime}+\epsilon\Gamma^{\prime})\) for all small enough \(\epsilon\in\mathbb{Z}_{(p),>0}\). Applying Corollary 6.3 to \(f^{\prime}\colon(X^{\prime},B^{\prime})\to Y^{\prime}\), \(L^{\prime},D^{\prime},E^{\prime},\Gamma^{\prime}\) yields the existence of an effective \(\mathbb{Q}\)-divisor \(\overline{\Gamma}\) such that
\[0\leq\overline{\Gamma}\sim_{\mathbb{Q}}f^{\prime*}(-K_{Y^{\prime }}-D^{\prime}-\epsilon E^{\prime})+(B^{\prime})^{-} =f^{\prime*}(\mu^{*}(-K_{Y}-D)-\epsilon E^{\prime})+(B^{\prime})^ {-}\] \[\leq f^{\prime*}(\mu^{*}(-K_{Y}-D)-\epsilon E^{\prime})+\pi^{*}P\] \[=\pi^{*}(f^{*}(-K_{Y}-D)+P)-\epsilon\delta G\] \[\sim_{\mathbb{Q}}\pi^{*}M-\epsilon\delta G,\]
contradicting the assumption \(X_{y}\not\subseteq\operatorname{Supp}(M)\).
## 7. Proof
Now, we have all the ingredients to prove the main theorem.
**Theorem 7.1**.: _Let \(f\colon X\to Y\) be a contraction of normal quasi-projective varieties over a perfect field \(k\) of positive characteristic. Assume the general fibre \(X_{y}\) is normal and let \(B\) be an effective \(\mathbb{Q}\)-divisor on \(X\) such that \((X_{y},B_{X_{y}})\) is \(K\)-globally \(F\)-regular. Assume \(Y\) is \(\mathbb{Q}\)-Gorenstein and \(D\) is a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor on \(Y\). Set \(L\coloneqq-K_{X}-B-f^{*}D\) and suppose there is \(m\in\mathbb{N}\setminus p\mathbb{N}\) such that_
\(mL\) is Cartier and \(|V|\subseteq|mL|\) such that \(\phi_{|V|_{X_{y}}}\) is a morphism with \(p\nmid\operatorname{St.deg}(\phi_{|V|_{X_{y}}})\). Then, for a general fibre \(X_{y}\),_
\[\kappa(X,L)\leq\kappa(X_{y},L_{X_{y}})+\kappa(Y,-K_{Y}-D).\]
Proof.: By Theorem 5.3 and Proposition 6.2, the divisor \(-K_{Y}-D\) is \(\mathbb{Q}\)-effective. We consider the contraction \(g\) induced by a sufficiently divisible multiple of \(-K_{Y}-D\) and we resolve it:
In the above diagram, \(X^{\prime}\) is the normalisation of the main component of \(X\times_{Y}Y^{\prime}\) and \(f^{\prime}\), \(\mu\), \(\pi\) are the induced maps. We can choose \(g^{\prime}\) such that \(f^{\prime}\) is a sufficiently high equidimensional birational model of \(f\) and \(\operatorname{Exc}(\pi)\subseteq(f^{\prime})^{-1}(\operatorname{Exc}(\mu))\). Specifically, \(f^{\prime}\) is constructed by the following diagram
where \(\mu_{2}\) is the normalised blowup of the base ideal of \(|m(-K_{Y}-D)|\), \(\pi_{2}\) is the normalisation of \((X\times_{Y}Y_{1})_{\text{main}}\) and \(f^{\prime}\) is an equidimensional model of \(f_{1}\) such that every \(\mu\)-exceptional divisor is \(\mathbb{Q}\)-Cartier. Note that the condition on the exceptional loci is satisfied by \(f_{1}\) by construction, and by \(f^{\prime}\) too, since \(f_{1}\) is flat over codimension one points of \(Y_{1}\) ([1, Proposition 9.7]).
Let \(h^{\prime}:=g^{\prime}\circ f^{\prime}\). Since the fibres of \(g^{\prime}\) and of \(h^{\prime}\) may be very singular, we consider the base change with a high enough power of the Frobenius morphism on \(Z\):
where
1. \(X_{e}\) is the normalisation of the reduction of \(X^{\prime}_{Z^{e}}\);
2. \(Y_{e}\) is the normalisation of the reduction of \(Y^{\prime}_{Z^{e}}\);
3. \(f_{e}\) and \(g_{e}\) are the induced morphisms;
4. \(a\) and \(b\) are the induced morphisms;
5. \(h_{e}\coloneqq g_{e}\circ f_{e}\).
Choose \(e\gg 0\) such that, if \(\overline{\zeta}\) is the geometric generic point of \(Z\), \(X_{e,\overline{\zeta}}=(X^{\prime}_{\overline{\zeta},\text{red}})^{\nu}\) and \(Y_{e,\overline{\zeta}}=(Y^{\prime}_{\overline{\zeta},\text{red}})^{\nu}\). Such an \(e\) exists by Lemma 3.8. In particular, the general fibres of \(f_{e}\), \(g_{e}\) and \(h_{e}\) are normal and reduced by [25, Proposition 2.1].
By easy additivity applied to \(h_{e}\) we obtain
\[\kappa(X_{e},(\pi\circ b)^{*}L) \leq\kappa(X_{e,z},((\pi\circ b)^{*}L)|_{X_{e,z}})+\dim(Z)\] \[=\kappa(X_{e,z},((\pi\circ b)^{*}L)|_{X_{e,z}})+\kappa(Y,-K_{Y}-D),\]
where \(X_{e,z}\) is a general fibre of \(h_{e}\). As Lemma 3.3 implies \(\kappa(X_{e},(\pi\circ b)^{*}L)=\kappa(X,L)\), we just need to show
\[\kappa(X_{e,z},((\pi\circ b)^{*}L)|_{X_{e,z}})\leq\kappa(X_{y},(\pi^{*}L)|_{X_ {y}}).\]
Note that, since \(f\) is separable and its general fibre is normal by assumption, the general fibre of \(f_{e}\) is isomorphic to \(X_{y}\) and the same holds for the geometric generic fibres. In particular, \(X^{\prime}_{\overline{\eta}}\) is reduced, where \(\overline{\eta}\) is the geometric generic point of \(Y\). Note that \(X_{e}\) can also be described as the normalisation of the reduction of \(X^{\prime}\times_{Y^{\prime}}Y_{e}\).
Since \(f^{\prime}\) is equidimensional, we can apply Corollary 3.16: there exist \(\mathbb{Q}\)-divisors \(C_{X}\) on \(X_{e}\), \(C_{Y}\) on \(Y_{e}\), and \(C\) on \(X_{e}\), such that
1. \(K_{X_{e}}+C_{X}\sim b^{*}(K_{X^{\prime}})\);
2. \(K_{Y_{e}}+C_{Y}\sim a^{*}(K_{Y^{\prime}})\);
3. \(K_{X_{e}/X^{\prime}}-f_{e}^{*}K_{Y_{e}/Y^{\prime}}\sim-C\);
4. there exists \(U\subseteq Z\) such that \(C_{X}|_{X_{e,z}}\geq 0\), \(C_{Y}|_{Y_{e,z}}\geq 0\) and \(C|_{X_{e,z}}\geq 0\), where \(X_{e,z}\) is a fibre of \(h_{e}\) over any point \(z\in U\) and \(Y_{e,z}\) is a fibre of \(g_{e}\) over any point \(z\in U\).
In particular, for \(X_{e,z}\) general fibre of \(h_{e}\),
\[(C_{X}-f_{e}^{*}C_{Y})|_{X_{e,z}}\sim C|_{X_{e,z}}\geq 0.\]
The goal now is to apply the injectivity Theorem 6.1 to the contraction induced on the general fibres:
where \(z\) is a general closed point of \(Z\) and \(y\) is a general closed point of \(Y_{e,z}\).
In order to do so, we need to carefully define our divisors. Define \(K_{X^{\prime}}+B^{\prime}\coloneqq\pi^{*}(K_{X}+B)\); note that \(B^{\prime}\) is not necessarily effective. However, we can find an effective \(\mu\)-exceptional divisor \(\Theta^{\prime}\) on \(Y^{\prime}\) such that \(f^{\prime*}(\Theta^{\prime})\geq(B^{\prime})^{-}\). Define:
\[D^{\prime}\coloneqq\mu^{*}(D)-\Theta^{\prime}\ \ \text{and}\ \ L^{\prime} \coloneqq-K_{X^{\prime}}-(B^{\prime})^{+}-f^{\prime*}(D^{\prime})\] \[=\pi^{*}L+(f^{\prime*}(\Theta^{\prime})-(B^{\prime})^{-}),\]
so that \(L^{\prime}\geq\pi^{*}L\). Note that \(L^{\prime}\) is \(\mathbb{Q}\)-Cartier since it differs from \(\pi^{*}L\) only by exceptional divisors. Moreover, we can choose \(\Theta^{\prime}\) so that \(mL^{\prime}\) is Cartier as well. Now, let
\[B_{e}\coloneqq b^{*}(B^{\prime})^{+}+C_{X}-f_{e}^{*}(C_{Y});\ \ D_{e} \coloneqq a^{*}(D^{\prime})+C_{Y}\ \ \text{and}\ \ L_{e}\coloneqq b^{*}(L^{\prime}),\]
so that:
* \(a^{*}(K_{Y^{\prime}}+D^{\prime})=K_{Y_{e}}+D_{e}\);
* \(L_{e}=-K_{X_{e}}-B_{e}-f_{e}^{*}(D_{e})\).
Note that \(B_{e}|_{X_{e,z}}\geq 0\) by bullet (4) in the above discussion.
Condition (a) in Theorem 6.1 holds on \((X_{y},B_{e,X_{y}})\). Indeed, \(\pi\) is an isomorphism over an open subset of \(Y\) and the general fibres of \(f^{\prime}\) are isomorphic to the ones of \(f_{e}\). In particular, if \(\overline{\eta}\) is the geometric generic point of \(Y_{e}\) (or of \(Y^{\prime}\)), \(X_{e,\overline{\eta}}\simeq X_{\overline{\eta}}^{\prime}\) and
\[K_{X_{e,\overline{\eta}}}=(b^{*}(K_{X^{\prime}})-C_{X})|_{X_{e,\overline{\eta }}}=K_{X_{\overline{\eta}}^{\prime}}-C_{X}|_{X_{e,\overline{\eta}}},\]
whence \(C_{X}|_{X_{e,\overline{\eta}}}=0\) and \(B_{e,X_{y}}=B_{X_{y}}^{\prime}=B_{X_{y}}\).
As for condition (b), we need to find \(P\geq(B_{e})_{X_{e,z}}^{-}=0\) such that
\[\kappa(X_{e,z},(f_{X_{e,z}})^{*}(-K_{Y_{e,z}}-D_{e,Y_{e,z}})+P)=0.\]
We claim \(P=0\) does the job.
If \(\zeta\) is the generic point of \(Z\), \(\kappa(Y_{\zeta}^{\prime},\mu^{*}(-K_{Y}-D)|_{Y_{\zeta}^{\prime}})=0\). Indeed, if \(k\) is uncountable, by construction of \(g^{\prime}\), \(\kappa(Y_{z}^{\prime},\mu^{*}(-K_{Y}-D)|_{Y_{z}^{\prime}})=0\) for the very general fibre \(Y_{z}^{\prime}\) of \(g^{\prime}\). By Lemma 3.4, this implies \(\kappa(Y_{\zeta}^{\prime},\mu^{*}(-K_{Y}-D)|_{Y_{\zeta}^{\prime}})=0\). If \(k\) is countable, let \(\overline{k}\supset k\) be a perfect uncountable extension of \(k\) and \(\overline{g^{\prime}}\colon\overline{Y^{\prime}}\to\overline{Z}\) the base change of \(g^{\prime}\) with \(\overline{k}\). If \(\mathcal{D}\) is a \(\mathbb{Q}\)-divisor on \(Y^{\prime}\), denote by \(\overline{\mathcal{D}}\) its base change with \(\overline{k}\). By flat base change, \(H^{0}(Y^{\prime},\mathcal{D})\otimes_{k}\overline{k}=H^{0}(\overline{Y^{\prime}},\overline{\mathcal{D}})\). By the above discussion, we can thus conclude that \(\kappa(\overline{Y^{\prime}_{\zeta}},\overline{\mu^{*}(-K_{Y}-D)}|_{\overline{Y^{\prime}_{\zeta}}})=0=\kappa(Y^{\prime}_{\zeta},\mu^{*}(-K_{Y}-D)|_{Y^{\prime}_{\zeta}})\).
Moreover,
\[\mu^{*}(-K_{Y}-D) =-K_{Y^{\prime}}-\Xi^{\prime}-\mu^{*}D\] \[=-K_{Y^{\prime}}-D^{\prime}-(\Xi^{\prime}+\Theta^{\prime}),\]
where \(\Xi^{\prime}\) is a \(\mu\)-exceptional divisor, not necessarily effective. Note that, after possibly enlarging \(\Theta^{\prime}\) while keeping it \(\mu\)-exceptional, we can also assume \(\Xi^{\prime}+\Theta^{\prime}\geq 0\) and \(\mu\)-exceptional. Then the projection formula on \(\mu\) yields that \(\mu^{*}(-K_{Y}-D)\) and \(-K_{Y^{\prime}}-D^{\prime}\) have the same sections, thus, by Lemma 3.3:
\[0=\kappa(Y^{\prime}_{\zeta},(\mu^{*}(-K_{Y}-D))|_{Y^{\prime}_{ \zeta}})=\kappa(Y^{\prime}_{\zeta},(-K_{Y^{\prime}}-D^{\prime})|_{Y^{\prime}_{ \zeta}})=\] \[\kappa(Y_{e,\zeta},(a^{*}(-K_{Y^{\prime}}-D^{\prime}))|_{Y_{e, \zeta}})=\kappa(Y_{e,\zeta},(-K_{Y_{e}}-D_{e})|_{Y_{e,\zeta}}).\]
Since the general fibre of \(g_{e}\) is normal and reduced, by Lemma 3.4, for the general fibre \(Y_{e,z}\), we have:
\[0=\kappa(Y_{e,\zeta},(-K_{Y_{e}}-D_{e})|_{Y_{e,\zeta}})=\kappa(Y_{e,z},(-K_{Y _{e}}-D_{e})|_{Y_{e,z}})=\kappa(Y_{e,z},-K_{Y_{e,z}}-D_{e,Y_{e,z}}).\]
Thus, we can apply Theorem 6.1, which yields the inequality:
\[\kappa(X_{e,z},L_{e,X_{e,z}})\leq\kappa(X_{y},L_{e,X_{y}}).\]
Since \(\pi\) is an isomorphism over the generic point of \(Y\) and \(b|_{X_{y}}\) is the identity morphism on the general fibre \(X_{y}\) of \(f_{e}\), we have \(\kappa(X_{y},L_{e,X_{y}})=\kappa(X_{y},L^{\prime}_{X_{y}})=\kappa(X_{y},L_{X_ {y}})\). As \(b^{*}L^{\prime}\geq b^{*}\pi^{*}L\) we conclude
\[\kappa(X_{e,z},((\pi\circ b)^{*}L)_{X_{e,z}})\leq\kappa(X_{y},L_{X_{y}}).\]
|
2309.08966 | FF-LOGO: Cross-Modality Point Cloud Registration with Feature Filtering
and Local to Global Optimization | Cross-modality point cloud registration is confronted with significant
challenges due to inherent differences in modalities between different sensors.
We propose a cross-modality point cloud registration framework FF-LOGO: a
cross-modality point cloud registration method with feature filtering and
local-global optimization. The cross-modality feature correlation filtering
module extracts geometric transformation-invariant features from cross-modality
point clouds and achieves point selection by feature matching. We also
introduce a cross-modality optimization process, including a local adaptive key
region aggregation module and a global modality consistency fusion optimization
module. Experimental results demonstrate that our two-stage optimization
significantly improves the registration accuracy of the feature association and
selection module. Our method achieves a substantial increase in recall rate
compared to the current state-of-the-art methods on the 3DCSR dataset,
improving from 40.59% to 75.74%. Our code will be available at
https://github.com/wangmohan17/FFLOGO. | Nan Ma, Mohan Wang, Yiheng Han, Yong-Jin Liu | 2023-09-16T11:42:41Z | http://arxiv.org/abs/2309.08966v2 | FF-LOGO: Cross-Modality Point Cloud Registration with Feature Filtering and Local to Global Optimization
###### Abstract
Cross-modality point cloud registration is confronted with significant challenges due to inherent differences in modalities between different sensors. We propose a cross-modality point cloud registration framework FF-LOGO: a cross-modality point cloud registration method with feature filtering and local-global optimization. The cross-modality feature correlation filtering module extracts geometric transformation-invariant features from cross-modality point clouds and achieves point selection by feature matching. We also introduce a cross-modality optimization process, including a local adaptive key region aggregation module and a global modality consistency fusion optimization module. Experimental results demonstrate that our two-stage optimization significantly improves the registration accuracy of the feature association and selection module. Our method achieves a substantial increase in recall rate compared to the current state-of-the-art methods on the 3DCSR dataset, improving from 40.59% to 75.74%. Our code will be available at [https://github.com/wangmohan17/FFLOGO](https://github.com/wangmohan17/FFLOGO).
## I Introduction
Point cloud registration, the task of finding rigid transformations to align two input point clouds, is a pivotal technique in robotics and computer vision [1]. It finds vital applications in domains such as autonomous driving [2], augmented or virtual reality systems [3], Simultaneous Localization and Mapping (SLAM) [4], and robotics [5]. Most prior studies have focused on point cloud data of the same modality obtained from identical sensor types, with little attention paid to other modalities. In reality, with the advancement of 3D data acquisition technologies, the cost of obtaining point clouds has been reduced, and the methods of acquisition have diversified, including Kinect depth cameras, LiDAR, or Multi-View Stereo (MVS).
Each existing sensor has its specific advantages and limitations when capturing 3D scenes. For instance, LiDAR gauges the distance to obstacles using laser pulses to generate point clouds. Benefiting from the high energy of laser pulses, LiDAR generates high-precision point clouds over extended ranges; however, these point clouds are often sparse. Depth cameras estimate depth using infrared or stereo vision techniques, which can produce dense point clouds, but usually within a limited range and with moderate precision. RGB cameras can generate dense point clouds with texture information using 3D reconstruction techniques, but their accuracy typically lags behind that of depth cameras and LiDAR sensors. Cross-modality point cloud fusion or registration combines the strengths of multiple sensors, overcoming the limitations of any single sensor and ensuring that point cloud acquisition is accurate, efficient, and detailed, which results in higher-quality visualizations, comparisons, or localizations. For example, in the robotics domain, users can align low-cost, easily accessible sensor data with high-quality data from robotic service providers using cross-modality point cloud registration, facilitating low-cost robot environment reconstruction and localization service deployment [6]. In the architectural design domain, cross-modality comparisons between onsite point cloud scanning data and 3D CAD design models provide faster and more precise evaluations of building quality [7].
Homogeneous point cloud registration methods face numerous challenges when dealing with cross-modality data. Firstly, due to the diverse mechanisms through which different sensors generate point clouds, the point density (resolution) distribution varies across modalities. Secondly, in cross-modality situations, sensor accuracies differ, and outliers can arise from both perceived objects and sensor noise. Additionally, point clouds captured from different kinds of sensors seldom guarantee entirely identical poses and fields of view, making the problem of partial overlap more pronounced in cross-modality point clouds than in homogeneous ones. Existing methods, especially traditional optimization-based ones, can achieve precise results in scenarios with low noise and outliers and are computationally efficient. However, in cross-modality registration, the presence of numerous inaccurate point-to-point correspondences makes finding the optimal solution challenging. Deep learning-based methods, which leverage deep neural networks to extract point cloud features and either base the transformation estimation on these feature correspondences or regress directly from the features to the transformation matrix, offer some robustness. Still, noise and density variations in cross-modality registration can impact feature extraction, resulting in transformation estimation that is often less than satisfactory.
To address these challenges, in this paper, we combine the robustness of deep learning-based cross-modality point cloud feature extraction with the fine-tuning precision advantages of traditional optimization algorithms. We introduce a cross-modality point cloud registration framework FF-LOGO based on feature filtering and local-global optimization. Our method extracts geometric features from cross-modality point clouds with transformation invariance and filters the high feature-coupled uniform point sets and initial pose estimation through a cross-modality feature association filtering module. Subsequent local adaptability key area aggregation modules and global modality-consistent fusion optimization modules perform local-global joint optimization. We observed that the local-global joint optimization process can significantly enhance the pose estimation accuracy of the cross-modality feature association filtering module.
Our main contribution is threefold:
* We have devised a cross-modality point cloud registration framework anchored in feature filtering and local-global optimization to tackle the challenging task of cross-modality point cloud registration.
* We introduced a local-to-global optimization process for cross-modality registration, significantly enhancing the preliminary accuracy derived from feature filtering.
* We fully leverage the advantages of deep learning and traditional optimization for cross-modality registration and achieve state-of-the-art results, improving the recall rate from 40.59% to 75.74%.
## II Related Works
### _Conventional Optimization Methods_
The Iterative Closest Point (ICP) [8] is a classical method in point cloud registration. Its fundamental premise revolves around iteratively determining the best rigid transformation between two point clouds, thereby minimizing their point-to-point distances. While ICP is computationally efficient and concise, its reliance on initial pose estimates and its sensitivity to outliers pose challenges in cross-modality registration. Variants of the ICP algorithm attempt to enhance its performance from multiple perspectives. For example, TriCP [9] introduces a trimming rate to effectively select corresponding portions from two datasets for ICP registration, addressing the issue of partial overlaps. CICP [10] aims to register point clouds of differing densities produced by varied sensors by matching local surface representations in both source and target point clouds. GO-ICP [11] adopts a global search strategy rooted in branch-and-bound to achieve global optimization, avoiding potential local minima. Building on 4PCS [12], Super4PCS [13] considerably reduces computational demands, offering an efficient solution for global point cloud registration. However, despite these advancements, achieving ideal results in cross-modality point cloud registration remains elusive.
Probabilistic point cloud registration methodologies offer an alternative research direction, seeking to probabilistically model the deterministic correspondences found in ICP. These methods typically employ Gaussian Mixture Models (GMM) to characterize the distribution of point clouds, recasting point cloud registration as an optimization problem of probability density functions. GMM [14] stands out as a representative approach, exhibiting robustness against significant noise and outliers, albeit with relatively higher computational complexity. FilterReg [15], on the other hand, transforms the correspondence problem in point set registration into a filtering problem using Gaussian filtering, thereby achieving efficient registration. Huang et al. [16] introduced a GMM-based method for cross-modality point cloud registration. However, its intricate preprocessing steps are not readily adaptable to large-scale point clouds with partial overlaps.
Beyond strategies based on GMM, several studies have approached the registration challenge by treating it as a graph matching problem to address cross-modality discrepancies. CSGM [17] recasts the registration issue as a graph matching problem. Yet, its limitation lies in the necessity to segment the point cloud and rely solely on pairwise point matching for graph node alignment. To address these constraints, GCTR [18] introduced a cross-modality point cloud registration technique that considers an extended set of neighboring constraints, reformulating it into a higher-order graph matching problem. Owing to its inclusion of more constraints, this method exhibited more robust performance in experiments compared to previous graph-matching approaches. Nevertheless, the segmentation process of this approach adds additional computational time, and its performance is intricately tied to hyperparameters.
### _Deep neural network methods_
Recently, the achievements of deep neural networks in three-dimensional geometry, such as PointNet [19] and DGCNN [20], have propelled advancements in deep point cloud registration. At the heart of these methods lies the concept of leveraging deep neural networks to extract features from cross-source point clouds, either basing registrations on these feature correspondences or directly regressing transformation matrices from the features themselves. Among feature learning techniques, SpinNet [21] aims to extract robust point descriptors through tailored neural network designs. However, its reliance on a voxelization preprocessing step poses challenges in the context of cross-modality point clouds. D3Feat [22] necessitates the construction of features based on k-nearest neighbors, but this descriptor tends to falter when faced with significant density disparities. Beyond these point descriptor-focused methodologies, several strategies emphasize feature matching. Deep Global Registration (DGR) [23] devises a UNet architecture for discerning whether a point pair corresponds. This process reinterprets the feature matching dilemma as a binary classification task. Transformation learning approaches, as an alternative line of investigation, directly estimate transformations via neural networks. FMR [24] introduces a feature metric registration technique, aligning two point clouds by minimizing their feature metric projection error. Specifically, FMR first extracts global features from both point clouds, followed by computing their feature metric projection error. Subsequently, the Lucas-Kanade (LK) [25] algorithm is deployed to estimate transformation increments, leading to the determination of the final transformation.
## III Methodology
The comprehensive architecture of our proposed method is illustrated in Figure 1, encompassing three integral modules. The Cross-Modality Feature Correlation Filtering Module
operates on the original point clouds, \(\mathcal{K}\) and \(\mathcal{L}\). Voxel downsampling is performed to mitigate density discrepancies and prevent adverse effects on subsequent feature extraction; it produces isopycnic points, denoted \(\hat{\mathcal{K}}\) and \(\hat{\mathcal{L}}\). Building upon this foundation, the module isolates isopycnic points with high feature confidence, yielding the filtered points \(\tilde{\mathcal{K}}\) and \(\tilde{\mathcal{L}}\). Simultaneously, a coarse-grained pose transformation estimate \(\mathcal{T}_{c}\) is derived via feature matching. The Local Adaptive Key Region Aggregation Module identifies prominent keypoints, \(\hat{\mathcal{K}}_{key}\), from \(\hat{\mathcal{K}}\), extracting a subset of the point cloud, \(\hat{\mathcal{K}}_{patch}\), which encapsulates key local regions with a pronounced adaptive nature. The Global Modality Consistency Fusion Optimization Module fuses the filtered point set \(\tilde{\mathcal{L}}\) from the first module with the adaptive key local region sub-point cloud \(\hat{\mathcal{K}}_{patch}\) from the second module. By deploying a modality-consistent optimization strategy that transitions from local to global, the coarse-grained pose estimate \(\mathcal{T}_{c}\) is refined into the final registration result \(\mathcal{T}_{f}\). The details of each module are provided in the following subsections.
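A minimal sketch of the voxel-downsampling step that produces the isopycnic points might look as follows (an illustration only: it assumes the Open3D library, hypothetical input paths, and the 0.05 m voxel size reported later in the implementation details):

```
import open3d as o3d

VOXEL_SIZE = 0.05  # metres; value taken from the implementation details below

def make_isopycnic(path_k, path_l):
    # Voxel-downsample both clouds so that their densities become comparable.
    cloud_k = o3d.io.read_point_cloud(path_k)   # original point cloud K
    cloud_l = o3d.io.read_point_cloud(path_l)   # original point cloud L
    iso_k = cloud_k.voxel_down_sample(voxel_size=VOXEL_SIZE)
    iso_l = cloud_l.voxel_down_sample(voxel_size=VOXEL_SIZE)
    return iso_k, iso_l
```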
### _The Cross-Modality Feature Correlation Filtering Module_
Our Cross-Modality Feature Correlation Filtering Module is mainly inspired by the GeoTransformer [26]. Operating under identical physical environments, the geometric position encoding self-attention module of GeoTransformer effectively extracts geometry transformation-invariant features from point clouds. This capability robustly handles challenges in cross-modality point cloud matching, such as outliers, anomalies, and partial overlaps. To elucidate the methodology, the module encompasses three distinct components: the construction of Geometric Self-attention, the extraction of features through the Geometric Transformer, and the subsequent process of feature matching and selection.
Initially, we use Geometric Self-attention to learn the features and global correlations of the isopycnic points in geometric space. The geometric structure embedding, at its core, aims to utilize the consistent angular and distance relationships present in cross-modality point clouds of the same scene. For an isopycnic point set (\(\hat{\mathcal{K}}\) or \(\hat{\mathcal{L}}\)), the geometric structure embedding consists of a pair-wise distance embedding and a triplet-wise angular embedding:
\[\mathbf{e}_{i,j}=\mathbf{e}_{i,j}^{D}\mathbf{W}^{D}+\max_{x}\left\{\mathbf{e}_ {i,j,x}^{A}\mathbf{W}^{A}\right\} \tag{1}\]
\(\mathbf{W}^{D}\) and \(\mathbf{W}^{A}\) are projection matrices for distance and angular embeddings respectively, \(\mathbf{e}_{i,j}\) is the geometric structure embedding, \(\mathbf{e}_{i,j}^{D}\) is the pair-wise distance embedding, and \(\mathbf{e}_{i,j}^{A}\) is the triplet-wise angular embedding.
The pair-wise distance embedding, \(\mathbf{e}_{i,j}^{D}\), is calculated as:
\[\begin{split} e_{i,j,2k}^{D}&=\sin\left(\frac{d_{ i,j}/\sigma_{d}}{10000^{2k/d_{t}}}\right)\\ e_{i,j,2k+1}^{D}&=\cos\left(\frac{d_{i,j}/\sigma_ {d}}{10000^{2k/d_{t}}}\right)\end{split} \tag{2}\]
\(d_{i,j}\) represents the Euclidean distance between isopycnic points \(\hat{\mathbf{p}}_{i}\) and \(\hat{\mathbf{p}}_{j}\), \(\sigma_{d}\) is a hyperparameter to adjust distance variations, and \(d_{t}\) is the dimensionality of the data. The triplet-wise angular embedding, \(\mathbf{e}_{i,j}^{A}\), is given by:
Fig. 1: **Overview of the proposed pipeline. The Cross-Modality Feature Correlation Filtering Module extracts and filters feature-correlated points, obtaining an initial pose estimation. Key regions identified by The Local Adaptive Key Region Aggregation Module are then optimized with the Global Modality Consistency Fusion Optimization Module to achieve the final optimised registration.**
\[\begin{split} e^{A}_{i,j,x,2l}&=\sin\left(\frac{ \alpha_{i,j}^{x}/\sigma_{a}}{10000^{2l/d_{t}}}\right)\\ e^{A}_{i,j,x,2l+1}&=\cos\left(\frac{\alpha_{i,j}^{x} /\sigma_{a}}{10000^{2l/d_{t}}}\right)\end{split} \tag{3}\]
\(k\) neighboring isopycnic points are initially chosen for \(\hat{\mathbf{p}}_{i}\), forming the point set \(\mathcal{X}_{i}\). For \(\hat{\mathbf{p}}_{x}\) within \(\mathcal{X}_{i}\), the angle \(\alpha_{i,j}^{x}\) is calculated as \(\alpha_{i,j}^{x}=\angle(\Delta_{x,i},\Delta_{j,i})\), with \(\Delta_{i,j}\) defined as \(\hat{\mathbf{p}}_{i}-\hat{\mathbf{p}}_{j}\).
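The sinusoidal distance embedding of Eq. (2) can be sketched in a few lines of NumPy (an illustrative version; the values of \(\sigma_{d}\) and \(d_{t}\) and the function name are placeholders, not settings taken from this work):

```
import numpy as np

def pairwise_distance_embedding(points, sigma_d=0.2, d_t=256):
    # points: (N, 3) array of isopycnic point coordinates.
    diff = points[:, None, :] - points[None, :, :]     # pairwise differences
    dist = np.linalg.norm(diff, axis=-1)                # d_{i,j}, shape (N, N)
    k = np.arange(d_t // 2)
    denom = 10000.0 ** (2.0 * k / d_t)                  # 10000^{2k/d_t}
    arg = (dist[..., None] / sigma_d) / denom           # shape (N, N, d_t // 2)
    emb = np.empty(dist.shape + (d_t,))
    emb[..., 0::2] = np.sin(arg)                        # even channels of Eq. (2)
    emb[..., 1::2] = np.cos(arg)                        # odd channels of Eq. (2)
    return emb
```

The triplet-wise angular embedding of Eq. (3) is built analogously, with \(\alpha_{i,j}^{x}/\sigma_{a}\) in place of \(d_{i,j}/\sigma_{d}\).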
In the subsequent process, the GeoTransformer [26] network is utilized to compute self-attention and cross-attention based on the geometric structure embedding. This yields features \(\hat{\mathcal{H}}^{\mathcal{K}}\) and \(\hat{\mathcal{H}}^{\mathcal{L}}\) for the isopycnic point sets \(\hat{\mathcal{K}}\) and \(\hat{\mathcal{L}}\), respectively.
The final step involves point cloud selection based on their features. First, \(\mathcal{\hat{H}}^{\mathcal{K}}\) and \(\mathcal{\hat{H}}^{\mathcal{L}}\) are normalized onto a unit hypersphere. The Gaussian correlation matrix \(S\) is then computed, with entries \(s_{i,j}\in S\) defined as:
\[s_{i,j}=\exp\left(-\left\|\hat{\mathbf{h}}_{i}^{\mathcal{K}}-\hat{\mathbf{h}} _{j}^{\mathcal{L}}\right\|_{2}^{2}\right) \tag{4}\]
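The normalisation, Gaussian correlation, and point selection described above can be sketched as follows (a minimal version of our own; the mutual top-\(k\) selection mentioned in the implementation details is only approximated here):

```
import numpy as np

def gaussian_correlation(feat_k, feat_l):
    # Normalise both feature sets onto the unit hypersphere.
    hk = feat_k / np.linalg.norm(feat_k, axis=1, keepdims=True)
    hl = feat_l / np.linalg.norm(feat_l, axis=1, keepdims=True)
    # s_{i,j} = exp(-||hk_i - hl_j||^2), Eq. (4).
    d2 = np.sum((hk[:, None, :] - hl[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2)

def mutual_topk_pairs(S, k=1):
    # Keep pairs (i, j) that rank within each other's top-k correlation scores.
    topk_j = np.argsort(-S, axis=1)[:, :k]    # best columns for every row
    topk_i = np.argsort(-S, axis=0)[:k, :]    # best rows for every column
    return [(i, j) for i in range(S.shape[0]) for j in topk_j[i]
            if i in topk_i[:, j]]
```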
### _The Local Adaptive Key Region Aggregation Module_
The role of the Local Adaptive Key Region Aggregation module is to aggregate several locally representative point cloud patches from the isopycnic points to prepare for the global modality consistency fusion optimization. The extracted local point cloud patches should be representative, dispersed throughout the point cloud and containing ample local features.
Initially, from the isopycnic point set \(\hat{\mathcal{K}}\), we select dispersed and representative key points within the point cloud. Following the approach in PVNET [27], we employ the Farthest Point Sampling (FPS) algorithm to select \(n\) key points. Assume our point cloud consists of \(N\) points, denoted as the set \(P\), where each point is represented as a vector, and let \(S\) be the set of selected points. To initialize the set of key points, we calculate the geometric centroid of the point cloud. For every remaining point \(p\in P\), we compute its minimum distance to all points in set \(S\):
\[d(p,S)=\min_{s\in S}\|p-s\|_{2} \tag{5}\]
We then select the point with the maximum distance:
\[p_{\text{next}}=\arg\max_{p\in P\setminus S}d(p,S) \tag{6}\]
The chosen point \(p_{\text{next}}\) is added to the set \(S\) and removed from set \(P\), continuing this process until the size of the set reaches \(n\). This results in the key point set \(\mathcal{\hat{K}}_{key}\). For accuracy and efficiency, we adopt \(n=8\). After extracting the key points, we employ the k-nearest neighbors (knn) algorithm to aggregate neighboring isopycnic points around the key points from \(\mathcal{\hat{K}}\) to form the Local Adaptive Key Region point set \(\mathcal{\hat{K}}_{patch}\), which supplements the local information of the point cloud.
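The key-point selection of Eqs. (5)–(6) and the neighbourhood aggregation can be sketched as below (illustrative NumPy only: the first key point is seeded at the point closest to the geometric centroid, \(n=8\) follows the text, while the neighbourhood size \(k\) is a placeholder of ours):

```
import numpy as np

def farthest_point_sampling(points, n=8):
    # Select n dispersed key points, starting near the geometric centroid.
    centroid = points.mean(axis=0)
    first = int(np.argmin(np.linalg.norm(points - centroid, axis=1)))
    selected = [first]
    # d(p, S): running minimum distance of every point to the selected set, Eq. (5).
    dist = np.linalg.norm(points - points[first], axis=1)
    while len(selected) < n:
        nxt = int(np.argmax(dist))            # farthest remaining point, Eq. (6)
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)

def aggregate_patches(points, key_idx, k=64):
    # Gather the k nearest isopycnic points around each key point.
    patches = []
    for i in key_idx:
        d = np.linalg.norm(points - points[i], axis=1)
        patches.append(points[np.argsort(d)[:k]])
    return patches
```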
### _The Global Modality Consistency Fusion Optimization Module_
Due to discrepancies between training and validation sets, as well as losses in local information arising from feature extraction results, registration results from feature-matching methods frequently suffer from reduced accuracy or even outright failures. It is crucial to impose additional constraints on the registration results through optimization techniques following cross-modality feature alignment, in order to enhance both accuracy and stability. In consideration of this, we have developed an optimization method that transitions from local adaptive key region matching to global modality consistency fusion.
The isopycnic point set \(\hat{\mathcal{L}}\), processed through Module A, yields the cross-modality feature-coupled point set \(\tilde{\mathcal{L}}\), representing the point cloud after feature selection. The isopycnic point set \(\hat{\mathcal{K}}\), processed through Module B, gives the local adaptive key region point set \(\hat{\mathcal{K}}_{patch}\), symbolizing the key point cloud regions containing representative local information in the other modality. The set \(\tilde{\mathcal{L}}\) is then matched with each local adaptive key region in \(\hat{\mathcal{K}}_{patch}\) to compute the point-to-plane residuals and iteratively obtain the optimal transformation pose. Specifically, for every point \(a_{i}\) in the nth set of \(\hat{\mathcal{K}}_{patch}\), locate the corresponding plane in the point cloud \(\tilde{\mathcal{L}}\) with the shortest distance, denoted as \(b_{j1},b_{j2},b_{j3}\):
\[j^{*}(a_{i})=\arg\min_{j}|(a_{i}-b_{j1})\cdot\vec{n}_{j}| \tag{7}\]
where \(\vec{n}_{j}\) is the normal vector of \(b_{j1},b_{j2},b_{j3}\). For each point \(a_{i}\) and its corresponding plane \(j\), compute the point-to-plane distance residual:
\[r_{i}=(T(a_{i})-b_{j1})\cdot\vec{n}_{j} \tag{8}\]
Minimize the distance residuals for all points to their corresponding planes through solving the least squares problem:
\[T_{key,n}=\arg\min_{T}\sum_{i}r_{i}(T)^{2} \tag{9}\]
This provides the optimal modality-consistent transformation \(T_{key,n}\) between the nth set of \(\hat{\mathcal{K}}_{patch}\) and the cross-modality feature-coupled point set \(\tilde{\mathcal{L}}\).
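One way to realise the local refinement of Eqs. (7)–(9) is to parameterise the rigid transform by a rotation vector and a translation and minimise the point-to-plane residuals numerically, as sketched below (our own choice of parameterisation and solver; for brevity the nearest plane is approximated by the nearest target point together with a precomputed normal):

```
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def refine_patch(patch, target_pts, target_normals):
    # Estimate T_{key,n} for one key region by minimising Eq. (9).
    tree = cKDTree(target_pts)

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()   # rotation part of T
        t = x[3:]                                     # translation part of T
        moved = patch @ R.T + t                       # T(a_i) for every point a_i
        _, idx = tree.query(moved)                    # closest target point b_j
        # r_i = (T(a_i) - b_j) . n_j, Eq. (8)
        return np.einsum("ij,ij->i", moved - target_pts[idx], target_normals[idx])

    sol = least_squares(residuals, np.zeros(6))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```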
After obtaining the local optimal transformations, a global key point least squares optimization is necessary to integrate local modality-consistent adjustments. Transform the corresponding key points from the local adaptive key region point set according to the local transformation estimate \(T_{key}\) to get the transformed point set \(\mathcal{\hat{K}}^{\prime}_{key}\):
\[\mathcal{\hat{K}}^{\prime}_{key}=\{T_{key,i}\cdot a_{i}\mid a_{i}\in\mathcal{ \hat{K}}_{key}\} \tag{10}\]
For each corresponding point pair in \(\mathcal{\hat{K}}_{key}\) and \(\mathcal{\hat{K}}^{\prime}_{key}\), solve the following least squares problem to minimize the squared distance error for all points:
\[\mathcal{T}_{f}=\arg\min_{T_{key}}\sum_{i=1}^{n}\|T_{key}\cdot a_{i}-b_{i}\|_{2} ^{2} \tag{11}\]
This results in the final optimized transformation \(\mathcal{T}_{f}\) through the local-to-global modality-consistent transformation estimation. We summarize the proposed method of The Global Modality Consistency Fusion Optimization in Algorithm 1.
```
Input     : Isopycnic point sets \(\hat{\mathcal{K}}\), \(\hat{\mathcal{L}}\)
Output    : Optimal transformation \(\mathcal{T}_{f}\)
Initialize: Transformed point set \(\hat{\mathcal{K}}^{\prime}_{key}\), point-to-plane residuals \(r_{i}\), transformation estimates \(T_{key,n}\)

 1  \(\tilde{\mathcal{L}}\leftarrow\) process \(\hat{\mathcal{L}}\) through the Cross-Modality Feature Correlation Filtering Module;
 2  \(\hat{\mathcal{K}}_{patch}\leftarrow\) process \(\hat{\mathcal{K}}\) through the Local Adaptive Key Region Aggregation Module;
 3  foreach point set \(\hat{\mathcal{K}}_{patch,n}\) in \(\hat{\mathcal{K}}_{patch}\) do
 4      foreach point \(a_{i}\) in \(\hat{\mathcal{K}}_{patch,n}\) do
 5          find the plane \(b_{j1},b_{j2},b_{j3}\) in \(\tilde{\mathcal{L}}\) with minimum distance to \(a_{i}\);
 6          compute the point-to-plane distance residual \(r_{i}\);
 7      end
 8      solve for \(T_{key,n}\) to minimize all \(r_{i}\);
 9  end
10  transform \(\hat{\mathcal{K}}_{key}\) using \(T_{key}\) to get \(\hat{\mathcal{K}}^{\prime}_{key}\);
11  foreach point pair in \(\hat{\mathcal{K}}_{key}\) and \(\hat{\mathcal{K}}^{\prime}_{key}\) do
12      solve the least squares problem to minimize the squared distance error;
13  end
14  compute the final transformation \(\mathcal{T}_{f}\) using local-to-global modality-consistent estimation;
```
**Algorithm 1** The Global Modality Consistency Fusion Optimization
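The closing global step, Eqs. (10)–(11), is a rigid least-squares alignment of the key-point pairs; one standard closed-form solution is the SVD-based construction sketched below (an illustrative solver of ours; the text above does not commit to a particular one):

```
import numpy as np

def rigid_least_squares(src, dst):
    # Solve min over rigid T of sum ||T(a_i) - b_i||^2 (Eq. 11) in closed form.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Applied to the pairs formed by \(\hat{\mathcal{K}}_{key}\) and \(\hat{\mathcal{K}}^{\prime}_{key}\) of Eq. (10), this yields the final transformation \(\mathcal{T}_{f}\).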
## IV Experiment
### _Implementation Details_
#### Iv-A1 Parameters
In the cross-modality feature correlation selection module, the voxel size for voxel-based subsampling during dense point selection is set to \(0.05m\). We utilized the Adam optimizer [28] to train our network over a span of 40 epochs, employing the 3DMatch [29] dataset. The training parameters included a batch size of 1, a weight decay set at \(10^{-6}\), and an initial learning rate of \(10^{-4}\), which undergoes an exponential decay at a rate of 0.05 with every passing epoch. Regarding the mutual top-k selection in the point correspondence filtering, we set the hyper-parameter \(k\) as follows: \(k=1\) for 250, 500 and 1000 matches, \(k=2\) for 2500 matches, and \(k=3\) for 2500 matches. This configuration helps in regulating the number of point correspondences. In the local adaptability key region aggregation module, the number of key points selected for the set \(\hat{\mathcal{K}}_{key}\) is \(n=8\). The aggregation radius for gathering neighboring dense points around the key points from the set \(\hat{\mathcal{K}}\) is \(d(p,S)_{max}\times 1.5\). All experiments are conducted on a machine equipped with a single RTX 4090 graphics card and an Intel Core i5-13600KF CPU.
#### Iv-A2 Dataset
The proposed algorithm and baseline methods are evaluated on the 3DCSR dataset [30]. Point clouds in this dataset originate from three distinct modalities: LiDAR, Kinect, and camera sensors. Point clouds produced by the LiDAR equipment are relatively sparse, while the Kinect point clouds, generated by the Kinect depth camera, are dense and uniform. The third modality data is constructed from a series of indoor 2D RGB images using the Structure from Motion (SfM) approach. This dataset provides ground truth transformations for aligning either LiDAR or SfM geometry with dense Kinect geometry. It encompasses the most common objects or scenes found in indoor working environments. In total, the dataset comprises 202 point cloud pairs, with 37 scenes captured by Kinect and RGB cameras, and 165 scenes acquired by LiDAR and Kinect sensors.
### _Evaluation Metrics_
For rigorous evaluation, we adopt two key metrics to quantify the quality of registration:
\[\mathrm{RE}(\hat{\mathbf{R}},\mathbf{R}) =\frac{180}{\pi}\arccos\left(\frac{1}{2}\left\{\mathrm{Tr}\left( \hat{\mathbf{R}}\mathbf{R}^{T}\right)-1\right\}\right),\] \[\mathrm{TE}(\hat{\mathbf{t}},\mathbf{t}) =\|\hat{\mathbf{t}}-\mathbf{t}\|_{2},\]
where \(\mathrm{RE}\) represents the geodesic distance within \(SO(3)\) and \(\mathrm{TE}\) signifies the Euclidean distance in \(\mathbb{R}^{3}\). These metrics effectively evaluate the discrepancies in rotation and translation between the estimated outcome \((\hat{R},\hat{t})\) and the established ground truth \((R,t)\).
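Both metrics can be computed directly from the estimated and ground-truth transforms, e.g. (a small sketch of our own):

```
import numpy as np

def rotation_error_deg(R_est, R_gt):
    # Geodesic distance on SO(3), in degrees; clipping guards against round-off.
    cos_angle = 0.5 * (np.trace(R_est @ R_gt.T) - 1.0)
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def translation_error(t_est, t_gt):
    # Euclidean distance between estimated and ground-truth translations.
    return float(np.linalg.norm(t_est - t_gt))
```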
### _Performance_
For all methods, we conducted 10 independent experiments to assess their average performance and recall rates. The registration recall rate is calculated as the ratio of the number of point cloud pairs with Rotation Error (RE) less than 15\({}^{\circ}\) and a Translation Error (TE) less than 0.3m to the total number of pairs. Consistent with some literature, such as [31], when evaluating average performance, we only consider point cloud pairs that were successfully recalled. This is due to the fact that pairs that fail to be recalled deviate significantly from the baseline data, rendering their performance metrics potentially unreliable.
The quantitative results are shown in Table I. In addition to the latest cross-source registration methods, performance tests of many other homomodal point cloud registration benchmarks are also reported. Our method significantly outperforms the current state-of-the-art method, GCC, in recall rate, increasing from 40.59% to 75.74%. TE and RE are slightly higher than the current lowest method because they are calculated only based on successfully recalled samples.
It can be observed that among conventional optimization methods, RANSAC [32], ICP [8], and its variants like TriICP [9], CICP [10], PICP [33], and Super4PCS [13], do not show significant advantages in recall rate and estimation error. The graph matching-based GCTR [18] has an even lower recall rate, while methods based on Gaussian Mixture Model (GMM), such as FilterReg [15], demonstrate relatively better
recall rates and moderate estimation errors. Among deep neural network-based methods, apart from IDM [34], DGR [23] and FMR [24] both exhibit higher-than-average recall rates and relatively lower estimation errors compared to conventional optimization methods. It is noteworthy that the state-of-the-art cross-modality point cloud registration technique GCC [31] surpasses the DGR method, which harnesses deep neural network feature mapping, in recall, setting a new benchmark. Nonetheless, its sub-50% recall rate underscores the inherent challenges of cross-modality point cloud registration and reflects the immense difficulty of generalization for this problem. Existing methods still face significant hurdles in this respect. Our method offers a substantial improvement in generalization, achieving a recall rate of 75.74%. Furthermore, while maintaining a high recall rate, our method also retains a relatively low level of estimation error compared to current methods, indicating its advantages in both generalization and accuracy.
### _Ablations_
To analyze the efficacy of the proposed local-global optimization process, we present an ablation study in Table II. Specifically, Feature Filtering (FF) represents the cross-modality feature association filtering module, Local-Global Optimization (LOGO) embodies the combined optimization workflow of the local adaptability key region aggregation module and the global modality consistency fusion optimization module, while Global Optimization (GO) denotes a module that encompasses solely global optimization, serving as a replacement for LOGO. The results indicate that both GO and LOGO further enhance the relatively high-precision initial pose estimation derived from the FF module, and the improvement of our LOGO is much more pronounced. Moreover, in comparison to all existing approaches, FF with deep learning methods exhibits significant potential for enhancing the processing capability of cross-modality point clouds. However, it remains a coarse registration method with regard to the specific challenge of cross-modality point cloud alignment, and experiments show that our Local-Global optimization method can accomplish fine registration on this basis and yield further performance improvements. These findings validate the effectiveness of the local-global optimization procedure in the context of cross-modality point cloud registration.
### _Application_
To demonstrate the practical application value of FF-LOGO in cross-modality point cloud registration, we deployed the algorithm on a bipedal wheeled robot for cross-modality localization tests. As illustrated in Figure 2, we initially constructed a high-precision point cloud map using a LiDAR scanner. Subsequently, point clouds generated in real-time from the stereo camera mounted on the wheeled robot were registered with the high-quality LiDAR point clouds to find the accurate pose of our robot. Experimental results validate the robustness and high accuracy of our approach, with very few instances of registration failure and a localization error of less than 10mm. FF-LOGO holds significant potential for many robotic tasks such as localization, point cloud completion, and registration for cross-modality scenarios.
## V Conclusion
In this paper, we introduced a novel framework combining feature filtering and local-global optimization, resulting in robust and accurate registration. Our method fully leverages the advantages of deep learning and traditional optimization, achieving a significant improvement on registration accuracy, as evidenced by a substantial increase in the recall rate compared to state-of-the-art methods on the 3DCSR dataset. In future, we will explore the potential of our method for general point cloud registration.
Fig. 2: **An application of FF-LOGO** |
2309.06793 | Electricity Demand Forecasting through Natural Language Processing with
Long Short-Term Memory Networks | Electricity demand forecasting is a well established research field. Usually
this task is performed considering historical loads, weather forecasts,
calendar information and known major events. Recently attention has been given
on the possible use of new sources of information from textual news in order to
improve the performance of these predictions. This paper proposes a Long and
Short-Term Memory (LSTM) network incorporating textual news features that
successfully predicts the deterministic and probabilistic tasks of the UK
national electricity demand. The study finds that public sentiment and word
vector representations related to transport and geopolitics have
time-continuity effects on electricity demand. The experimental results show
that the LSTM with textual features improves by more than 3% compared to the
pure LSTM benchmark and by close to 10% over the official benchmark.
Furthermore, the proposed model effectively reduces forecasting uncertainty by
narrowing the confidence interval and bringing the forecast distribution closer
to the truth. | Yun Bai, Simon Camal, Andrea Michiorri | 2023-09-13T08:28:16Z | http://arxiv.org/abs/2309.06793v1 | Electricity Demand Forecasting through Natural Language Processing with Long Short-Term Memory Networks
###### Abstract
Electricity demand forecasting is a well established research field. Usually this task is performed considering historical loads, weather forecasts, calendar information and known major events. Recently attention has been given on the possible use of new sources of information from textual news in order to improve the performance of these predictions. This paper proposes a Long and Short-Term Memory (LSTM) network incorporating textual news features that successfully predicts the deterministic and probabilistic tasks of the UK national electricity demand. The study finds that public sentiment and word vector representations related to transport and geopolitics have time-continuity effects on electricity demand. The experimental results show that the LSTM with textual features improves by more than 3% compared to the pure LSTM benchmark and by close to 10% over the official benchmark. Furthermore, the proposed model effectively reduces forecasting uncertainty by narrowing the confidence interval and bringing the forecast distribution closer to the truth.
Electricity demand forecasting, LSTM network, natural language processing, smart grids, probabilistic forecasting
## I Introduction
Electricity demand forecasting, as a decision process in the electricity market, is an essential step in network operation [1]. Precise forecasting of electricity demand not only assists operators in allocating resources efficiently but also benefits the safety assessment of energy systems [2]. The past few decades have witnessed the development of research on electricity demand forecasting. Early techniques, such as linear and non-parametric regression, perform well to some extent while facing difficulty dealing with changes in meteorology, society, economy, and policy. Especially with the integration of smart grids into electricity systems, and the penetration of renewable resources, demand forecasting has become more complex than before. Artificial intelligence methods take into account the elements that influence electricity demand and capture the non-linear relationships between historical data and external variables. Researchers aim to build intelligent, adaptive, and optimal energy systems through AI. Examples of widely used AI methods for demand forecasting are support vector machines, artificial neural networks, self-organizing maps, extreme learning machines, and so on [1].
Besides the efforts devoted to developing new models or restructuring model architectures, researchers are still investigating information sources that benefit the forecasting model. One typical approach is to select several historical lags via the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC), as in [3]. This feature selection method may sometimes suit machine learning models but retains little of the time-sequence structure. Another source of information comes from statistical data sharing. In hierarchical forecasting, [4] designed four estimators for forecasting the electricity demand in different regions of Sweden. The auto-covariance matrices, auto-correlation and variance, paired cross-correlation matrices, and reliable estimation of inverse matrices within the aggregated demand levels were considered to improve information sharing. Besides, one should pay attention to the value of meteorological and calendar data in demand forecasting. Evidence shows that electricity demand is a time series with strong seasonal signals. Benchmark models nowadays often include meteorological and calendar features [5, 6, 7].
There is a growing trend of information fusion to improve traditional models, such as text-based forecasting. With the development of the internet, people can quickly post their comments and views online. The internet is full of noisy, unstructured, and sparse knowledge, whose distribution is difficult to organise [8]. However, Natural Language Processing (NLP) offers the chance to analyze such information and connect text and events through statistical models. In state-of-the-art research, news reports, online search traffic, social media, knowledge forums, books, and policies usually serve as the corpus sources [9, 10, 11, 12, 13, 14]. Various forms of text features can be used as external variables for forecasting models, such as the quantitative sentiments by TextBlob, topic distributions through Latent Dirichlet Allocation (LDA) [15], the deep-processed variables by Convolutional Neural Network (CNN) [10], the word embeddings given by pre-trained large language models, and the knowledge graph built by extraction of items and relationships [16, 17, 18]. Text-based forecasting is now mature in the fields of crude oil price [15], financial risk [16], health insurance [19], and movie revenues [20].
In electricity demand forecasting, text-based forecasting has emerged as a possible alternative in recent years. In [21], the authors considered the use of weather report texts as a supplement in the absence of weather data. They quantified the effect of word frequency on forecasting and found that the word embedding vectors had geographical properties. This research is a beneficial attempt at forecasting with text information, although the external information brought only marginal improvements. In the following research, the authors added a data source from Twitter that was expected to offer more elements about society and the economy [22]. They treated the number of tweets containing 'teletravail' (French for 'teleworking') and its variants as features to correct the residuals generated by a Generalized Additive Model (GAM). The results showed a statistical improvement over the benchmark model and depicted the demand change after the lockdown. The research in [13] illustrates another example of incorporating text information into demand forecasting. The authors explored improving the forecasting of Chinese monthly electricity consumption with word embedding vectors extracted by a CNN module.
The above studies demonstrate the potential of text information as an essential supplement. Although they made progress in electricity demand forecasting, there is still room for improvement. A previous work by the authors tried both to explore alternative methods to word frequencies and to explain the mechanisms linking news and electricity demand highlighted by the improved model [23]. In that work, the authors explored how five types of text features (count, word frequency, sentiment, topic distribution, word embedding) from news titles, descriptions, and text bodies influence forecasting. The improvement brought by text was also explained from the viewpoints of global and local correlations and of causality effects. The conclusion was that keywords related to major social events, the minimum subjectivity of sentiments, and the word embedding dimensions related to international conflicts benefit the forecasting, and that this effect is not due to coincidence.
This paper extends and broadens the scenario for the study of [23]. The main objective of the work is to verify the generalization and transferability of text information under another forecasting paradigm. Concretely, we considered the sustained influence of news text instead of forecasting with the news from the previous day.
The contributions of this paper are as follows:
* To verify the influence of persistent trends in textual-based features
* To explore the performance of such method in probabilistic forecasting
This is done thanks to the following steps:
* We built Long Short-Term Memory (LSTM) networks to keep the news memories for at least one week and identified the text features with sustained influence.
* We developed a dimension reduction network through CNN autoencoder for hundreds of features.
* We proved by experiments that text information enhanced both deterministic and probabilistic forecasting.
In this paper, Section I introduces the research background and a short literature review. Section II illustrates the methods used, including the forecasting framework, the datasets used and the evaluation metrics. The experimental setup and results are shown in Section III, with an analysis of the improvement. Section IV concludes the paper.
## II Methods
### _Research framework_
The research framework has been summarised in Figure 1, mainly including the data acquisition and feature pre-processing (left) and forecasting model (right).
This paper preserves the research case and datasets from [23]. Electricity demand for the UK region is collected from the ENTSO-E transparency platform [24], as in block A. We take the temperature observations (block B) and historical bank holidays (stored as dummy variables in block C) from London. The reason for using bank holidays is to model the relationship between human activities and load demand. Block C also contains date indicators, which are important for the seasonality in electricity demand. These are encoded by the sine and cosine of the position of a day within the week and within the year. The textual data come from the British Broadcasting Corporation (BBC) [25], which serves as the corpus in block D. All the datasets cover the same period (01/06/2016 - 31/05/2021), and we treat the first four years as a training set and the data of the last year as a test set.
In block D, the textual features contain five groups: count features, word frequencies, sentiments, topic distributions, and word embeddings, whose definitions and construction can be found in [23]. This paper uses two treatments of the features. Firstly, a Granger test is used in order to reduce the number of features. Secondly, a CNN autoencoder is built as a further dimension-compression tool to extract representations from high-dimensional textual features [26]. In detail, the CNN autoencoder compresses and decompresses the data through its encoder and decoder architectures. We minimize the loss function between the input and the output, and retain the representation in the hidden layer as the compressed feature. Compared to traditional dimension reduction methods, such as Principal Component Analysis (PCA), the CNN autoencoder can capture non-linear and more complex patterns from the inputs. Besides, as an unsupervised learning method, it can extract a global representation without data labeling.
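A minimal sketch of such a 1D-convolutional autoencoder is given below, assuming PyTorch; the layer sizes, kernel widths, and class name are illustrative assumptions for a 100-dimensional word-embedding group and do not reproduce the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class TextFeatureAutoencoder(nn.Module):
    """Compresses a group of textual features (e.g. 100-d word embeddings)
    into a 1-d channel and reconstructs the input from it."""
    def __init__(self, n_features: int = 100):
        super().__init__()
        # Encoder: n_features channels -> 1 channel (the compressed feature).
        self.encoder = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=3, padding=1),
        )
        # Decoder: 1 channel -> n_features channels (the reconstruction).
        self.decoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, n_features, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # x: (batch, n_features, time)
        z = self.encoder(x)       # compressed representation, (batch, 1, time)
        x_hat = self.decoder(z)   # reconstruction, (batch, n_features, time)
        return x_hat, z

# Training minimises the MSE between input and reconstruction; the hidden
# representation z is then kept as the compressed textual feature.
model = TextFeatureAutoencoder(100)
x = torch.randn(4, 100, 336)          # dummy batch of embedding sequences
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)
```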
The demand data and the corresponding features are concatenated into tensors that are fed to the forecasting model in block E. In the forecasting module, the inputs are first fed into the LSTM architecture with several cells [27]. The input gate of the LSTM learns to recognize the critical information and store it in the long-term state. The forget gate learns which parts of the long-term state to keep or discard. The output gate controls the information from the long-term state and outputs the short-term state at the current time step. Finally, in our module, the LSTM output is mapped through a fully-connected layer to the shape of the forecasting horizon to produce the final predictions. In our case, we set the number of neurons to be smaller than the forecasting horizon to reduce the computational complexity and better extract the long-term dependencies in the inputs.
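For concreteness, a minimal PyTorch sketch of such a module is shown below; the hidden size of 24 and the 48-step horizon follow the configuration reported later in Section III, while the number of input features is an assumption made only for illustration.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """One-layer LSTM followed by a fully-connected map to the forecast horizon."""
    def __init__(self, n_inputs: int, hidden_size: int = 24, horizon: int = 48):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_inputs, hidden_size=hidden_size,
                            num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, x):
        # x: (batch, lags, n_inputs) -- demand lags concatenated with
        # temperature, calendar, and textual features at each time step.
        out, _ = self.lstm(x)
        # Use the short-term state at the last time step to predict the horizon.
        return self.head(out[:, -1, :])

model = LSTMForecaster(n_inputs=10)
x = torch.randn(4, 336, 10)      # batch of one-week input windows
y_hat = model(x)                 # (4, 48) day-ahead forecasts
```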
### _Evaluation_
The performance of the forecasting model with textual features is evaluated through deterministic and probabilistic metrics. In deterministic evaluation, Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Symmetric Mean Absolute Percentage Error (SMAPE) are used as metrics. Formulae for these common metrics are here omitted for space reasons.
For the evaluation of probabilistic forecasts the following metrics are considered: Pinball loss, as in (1), Winkler score, as in (2), and Continuous Ranked Probability Score (CRPS) as in (3) [28].
\[\mathrm{P}_{\rho}=\frac{1}{N}\sum_{i=1}^{N}\max\big(\rho\cdot(y_{i}-\hat{y_{i}}),\,(\rho-1)\cdot(y_{i}-\hat{y_{i}})\big). \tag{1}\]
\[\mathrm{W}_{\alpha,i}=\begin{cases}(u_{\alpha,i}-l_{\alpha,i})+\frac{2}{ \alpha}(l_{\alpha,i}-\hat{y_{i}}),&\hat{y_{i}}<l_{\alpha,i}\\ (u_{\alpha,i}-l_{\alpha,i}),&l_{\alpha,i}\leq\hat{y_{i}}\leq u_{\alpha,i}\\ (u_{\alpha,i}-l_{\alpha,i})+\frac{2}{\alpha}(\hat{y_{i}}-u_{\alpha,i}),&\hat{ y_{i}}>u_{\alpha,i},\end{cases} \tag{2}\]
where \(N\) is the number of samples, \(y_{i}\) and \(\hat{y_{i}}\) are truth and prediction, \([l_{\alpha,i},u_{\alpha,i}]\) is the \(100(1-\alpha)\%\) prediction interval, and \(W_{\alpha,i}\) is the length of the interval and a penalty value if \(\hat{y_{i}}\) falls out of \([l_{\alpha,i},u_{\alpha,i}]\). We use the average of the \(W_{\alpha,i}\) to measure the performance on a sample, that is \(W_{\alpha}=\frac{1}{N}\sum_{i=1}^{N}W_{\alpha,i}\).
\[\mathrm{CRPS_{i}}(\hat{y_{i}},y_{i})=\int_{-\infty}^{\infty}(\mathrm{CDF}(x| \hat{y_{i}},\sigma)-\mathrm{H}(x-y_{i}))^{2}dx. \tag{3}\]
(3) computes the CRPS value for a single forecast-truth pair by comparing the Probability Distribution Function (PDF) of the forecast to the truth. \(\mathrm{CDF}(x|\hat{y_{i}},\sigma)\) is the Cumulative Distribution Function (CDF) of the forecasts, where we suppose the CDF is a normal distribution with the mean of \(\hat{y_{i}}\) and standard deviation of \(\sigma\). \(\mathrm{H}(x-y_{i})\) represents the Heaviside step function. \(\mathrm{H}(x-y_{i})=0\) when \(x<y_{i}\), and \(\mathrm{H}(x-y_{i})=1\) otherwise. We use the \(\mathrm{CRPS}=\frac{1}{N}\sum_{i=1}^{N}\mathrm{CRPS_{i}}\) to evaluate the performance on a sample.
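The three probabilistic metrics can be computed as in the NumPy sketch below, following (1)-(3); the CRPS integral is approximated on a finite grid, and all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pinball_loss(y, y_hat, rho):
    """Average pinball loss at quantile level rho, following (1)."""
    diff = y - y_hat
    return np.mean(np.maximum(rho * diff, (rho - 1.0) * diff))

def winkler_score(value, lower, upper, alpha=0.1):
    """Average Winkler score of a 100(1-alpha)% interval, following (2):
    the interval width, plus a penalty when the scored value falls outside it."""
    width = upper - lower
    below = (lower - value) * (value < lower)
    above = (value - upper) * (value > upper)
    return np.mean(width + (2.0 / alpha) * (below + above))

def crps_gaussian(y, y_hat, sigma, grid=None):
    """Average CRPS for Gaussian forecast distributions, following (3)."""
    if grid is None:
        grid = np.linspace(y.min() - 5 * sigma, y.max() + 5 * sigma, 2000)
    scores = []
    for yi, mi in zip(y, y_hat):
        cdf = norm.cdf(grid, loc=mi, scale=sigma)          # forecast CDF
        heaviside = (grid >= yi).astype(float)             # H(x - y_i)
        scores.append(np.trapz((cdf - heaviside) ** 2, grid))
    return np.mean(scores)
```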
## III Results
### _Model configurations_
We still focus on the day-ahead forecasting of electricity demand data. On day \(D\), we use the demand lags from the past week with \(D\) excluded to forecast the demand leads on the day \(D+1\). The resolution of the demand data is half an hour; thus, the length of lags equals 336, and the forecasting horizons are from 48 to 96 time steps ahead.
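A small sketch of how such input/output windows can be assembled from the half-hourly series is given below; stepping the windows by one day is an assumption for illustration, and the names are not from the original implementation.

```python
import numpy as np

def make_windows(demand, n_lags=336, gap=48, horizon=48):
    """Build (input, target) pairs: the inputs are the n_lags half-hours up to
    the end of day D-1, day D (the gap) is skipped, and the targets are the
    48 half-hours of day D+1."""
    X, Y = [], []
    for start in range(0, len(demand) - n_lags - gap - horizon + 1, 48):
        lags = demand[start : start + n_lags]
        leads = demand[start + n_lags + gap : start + n_lags + gap + horizon]
        X.append(lags)
        Y.append(leads)
    return np.array(X), np.array(Y)
```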
For the CNN autoencoder, we set the input and output channels equal to the number of textual features in a specific group, and compress the hidden state to a single dimension. For example, we build a three-layer autoencoder for the 100 features in the group 'Word embedding'. The encoder first compresses the 100-dimensional word vectors into 1d, then the decoder recovers the hidden 1d representation back into 100d. The LSTM architecture contains one layer with 24 neurons, and the fully-connected layer maps the 24d output into 48d. The loss function of both the CNN autoencoder and the LSTM is the Mean Squared Error (MSE), and we change the loss function of the LSTM into the Pinball loss when we turn to quantile forecasting. The batch size is set to 4, the learning rate is 1e-4, the optimizer is Adam, and an early stopping mechanism controls the training process.

Fig. 1: The forecasting framework of this research.
### _Deterministic forecasting results_
In this section, we first build a benchmark LSTM model, including the external features of temperatures, date indicators, and holidays. Afterward, we add the five groups of textual features, as shown in Figure 1. Moreover, we continue to improve the model by integrating the compressed features and finally evaluate the text-based model from the deterministic view with daily averaged metrics. The results are illustrated in Table I.
In Table I, the 'ENTSO-E' is the official day-ahead forecasting benchmark. The 'ExtraTree' is the benchmark model from our previous work. For forecasting with textual features, the 'ExtraTree-Text' is the best-performing model from [23], which includes the textual features of words frequencies from news titles, the sentiments and GloVe word embeddings from news text bodies.
The 'ENTSO-E', 'ExtraTree', and 'ExtraTree-Text' are cited here as the baselines of this research. In this study, we carry out ablation experiments by adding the textual features to the LSTM benchmark. We only include three sets of results in Table I due to space limitations. We denote the word frequencies, sentiments, topics, and GloVe word embeddings as **W**, **S**, **T**, and **G**, and the compressed embeddings obtained by the CNN autoencoder as **CG**. The results reveal that the inclusion of word frequencies and topics did not yield significant improvements, whereas the compressed word embeddings led to a notable enhancement.
Additionally, we group the forecasting errors of different hours within a day and split a day into several segments: midnight (1h-6h), morning (7h-12h), afternoon (13h-18h), and evening (19h-24h). The comparisons between the LSTM with and without textual features are shown in Figure 2. The textual features enhance the LSTM benchmark especially in the morning, with improvements of more than 5%, 6%, and 7% in RMSE, MAE, and SMAPE, respectively.
### _Probabilistic forecasting results_
In the scenario of probabilistic forecasting, we evaluate the model performance by considering the quantile, the prediction interval (90%), and the forecast distribution, to portray forecast uncertainty from the local to the global view. The comparisons between the benchmark LSTM (blue solid lines) and the model with textual features (red dashed lines) are presented in Figure 3, which contains the errors of Pinball loss, Winkler score, and CRPS.
In Figure 2(a), the x-axis is the nine quantiles, and there is a slight improvement of LSTM with textual features on the lower quantiles. The x-axes of Figure 2(b) and Figure 2(c) are the day segments. The prediction intervals in the whole day, except midnight, are narrowed by adding textual features. Furthermore, the forecast distribution is also closer to the true distribution, and thus the LSTM with text information is more skillful and accurate in the morning.
## IV Conclusions
As an extension of our previous work [23], this paper explores the potential value of unstructured text information in electricity demand forecasting. This work verifies again that textual features benefit the LSTM forecasting paradigm. In deterministic forecasting, the textual features bring an improvement of 3% over the LSTM benchmark, while the gain is 4% in the ExtraTrees model. Both models with text are superior to the official ENTSO-E benchmark.
Further analysis finds that the useful text features differ from those in [23] because the memory units in the LSTM model keep part of the history information when forecasting, whereas the previous ExtraTrees-based regression model only takes the features from one day back. In the LSTM, the word frequency and topic distribution can no longer improve the forecasting; in particular, the Covid-19-related news that is useful in the ExtraTrees model offers only limited assistance in the LSTM. Instead, public sentiment and higher-dimensional word representations in the news have a sustained influence, and the word representations involve information about transportation and geopolitical conflicts. In addition, we use the CNN autoencoder to obtain a denser representation of the word embeddings, improving the forecasting skill of the model.
To the best of our knowledge, few papers examine the impact of textual information on probabilistic forecasting. We pioneer, in the field of electricity demand forecasting, the exploration of whether textual information can reduce the uncertainty of forecasts. The news reflects human social activity, and the experimental results reveal that effective textual features can narrow interval forecasts and bring the probabilistic forecast distribution closer to the true distribution. This phenomenon is more pronounced in the morning hours, when human activity most influences electricity demand.

TABLE I: Deterministic results comparison

| Models | RMSE | MAE | SMAPE(%) |
| --- | --- | --- | --- |
| ENTSO-E | 2800.50 | 2544.86 | 7.65 |
| ExtraTree | 2800.77 | 2374.07 | 7.29 |
| ExtraTree-Text | **2684.62** | 2263.86 | 6.92 |
| LSTM | 2775.99 | 2333.20 | 7.10 |
| LSTM-W-S-T-G | 4853.01 | 4094.80 | 12.39 |
| LSTM-S-G | 2732.44 | 2299.20 | 6.99 |
| LSTM-S-G-CG | 2692.33 | **2248.55** | **6.83** |

Fig. 2: The evaluations of the LSTM model with and without textual features in different day segments. The dashed lines represent the forecasting with only the LSTM model, and the solid lines are the forecasting with textual features.
In conclusion, the study not only provides an example of combining the two fields of time series forecasting and natural language processing, but can also support future interdisciplinary collaborations between power systems and sociology. This study explores integrating knowledge about human activities into power systems to improve the effectiveness and social adaptability of smart grid applications.
|
2306.00201 | Generalized Implicit Follow-The-Regularized-Leader | We propose a new class of online learning algorithms, generalized implicit
Follow-The-Regularized-Leader (FTRL), that expands the scope of the FTRL framework.
Generalized implicit FTRL can recover known algorithms, such as FTRL with linearized
losses and implicit FTRL, and it allows the design of new update rules, such as
extensions of aProx and Mirror-Prox to FTRL. Our theory is constructive in the
sense that it provides a simple unifying framework to design updates that
directly improve the worst-case upper bound on the regret. The key idea is
substituting the linearization of the losses with a Fenchel-Young inequality.
We show the flexibility of the framework by proving that some known algorithms,
like the Mirror-Prox updates, are instantiations of the generalized implicit
FTRL. Finally, the new framework allows us to recover the temporal variation
bound of implicit OMD, with the same computational complexity. | Keyi Chen, Francesco Orabona | 2023-05-31T21:39:52Z | http://arxiv.org/abs/2306.00201v1 | # Generalized Implicit Follow-The-Regularized-Leader
###### Abstract
We propose a new class of online learning algorithms, generalized implicit Follow-The-Regularized-Leader (FTRL), that expands the scope of the FTRL framework. Generalized implicit FTRL can recover known algorithms, such as FTRL with linearized losses and implicit FTRL, and it allows the design of new update rules, such as extensions of aProx and Mirror-Prox to FTRL. Our theory is constructive in the sense that it provides a simple unifying framework to design updates that directly improve the worst-case upper bound on the regret. The key idea is substituting the linearization of the losses with a Fenchel-Young inequality. We show the flexibility of the framework by proving that some known algorithms, like the Mirror-Prox updates, are instantiations of the generalized implicit FTRL. Finally, the new framework allows us to recover the temporal variation bound of implicit OMD, with the same computational complexity.
## 1 Introduction
Online learning is a setting where the learner receives an arbitrary sequence of loss functions, selects points before knowing the loss functions, and is evaluated on the values of the loss functions on the points it selects (Cesa-Bianchi and Lugosi, 2006; Orabona, 2019; Cesa-Bianchi and Orabona, 2021). More in detail, at round \(t\) the learner outputs a point \(\mathbf{x}_{t}\) in a feasible set \(V\subseteq\mathbb{R}^{d}\). Then, it receives a loss function \(\ell_{t}:V\rightarrow\mathbb{R}\) and it pays the value \(\ell_{t}(\mathbf{x}_{t})\). Given the arbitrary nature of the losses, the learner cannot guarantee to have a small cumulative loss, \(\sum_{t=1}^{T}\ell_{t}(\mathbf{x}_{t})\). On the other hand, it is possible to minimize the _regret_, that is the difference between the cumulative loss of the algorithm and the one of any arbitrary comparator \(\mathbf{u}\in V\):
\[\mathrm{Regret}_{T}(\mathbf{u})\triangleq\sum_{t=1}^{T}\ell_{t}(\mathbf{x}_{t})-\sum_ {t=1}^{T}\ell_{t}(\mathbf{u})\;.\]
In particular, a successful online learning algorithm must guarantee a regret that grows sublinearly in time for any \(\mathbf{u}\in V\). In this way, its average performance approaches the one of the best comparator in hindsight.
There are two families of online learning algorithms: Online Mirror Descent (OMD) (Nemirovskij and Yudin, 1983; Warmuth and Jagota, 1997) and Follow-the-Regularized-Leader (FTRL) (Shalev-Shwartz, 2007; Abernethy et al., 2008; Hazan and Kale, 2008). They stem from two similar but complementary approaches: the update of OMD aims at minimizing a linearization of the current loss without going too far from its previous prediction \(\mathbf{x}_{t}\), while FTRL minimizes the sum of all the losses (or their linear approximation) plus a regularization term. In contrast to the first approaches in online learning that focused on specific algorithms (e.g., the Winnow algorithm (Littlestone, 1988)), the theory of these two frameworks is particularly interesting because it allows _both the design and the analysis of generic online learning algorithms_.
While FTRL and OMD provide similar bounds in most situations, they are not completely equivalent. For example, FTRL has an advantage over OMD in unbounded domains, where it allows to use time-varying regularizers. In fact, OMD allows the use of time-varying stepsizes only in domains where its associated Bregman divergence is bounded.
On the other hand, in the cases where we can use time-varying stepsizes, OMD can achieve a superior adaptation to the gradients (see, e.g., Theorem 2 in Streeter and McMahan (2010) versus Theorem 2 in Orabona and Pal (2015)). In this view, these two frameworks are complementary.1 Moreover, there exists another orthogonal axis, namely the use of the actual loss functions or a linear surrogate, for both frameworks. We summarize all the variants of OMD and FTRL in Table 1.
Footnote 1: See also the blog post on this topic by Tim van Erven at [https://www.timvanerven.nl/blog/ftrl-vs-omd/](https://www.timvanerven.nl/blog/ftrl-vs-omd/).
Our motivation stems from the fact that in practical cases, all the variants that use full losses offer a big advantage in terms of empirical performance at the cost of a higher computational complexity. On the theoretical side, the situation is not so clear given that in the worst case using the full losses can be equivalent to their linearized version, as
it should be clear considering linear losses. In particular, the standard theoretical framework for FTRL does not allow a clear analysis of the implicit case. Moreover, while for implicit OMD it has been proven that one can achieve lower regret if the temporal variation of the losses is small, it is unclear if the same guarantee can be achieved for FTRL without the computational cost of using full losses.
In this paper, we aim at bridging this gap by proposing a _generalized_ version of implicit FTRL. We go beyond implicit and linearized updates: _we directly construct the update rule in a way that minimizes an upper bound on the regret_. Our framework effectively expands the scope of the FTRL framework, fully retaining its coupling between design and analysis. Also, our updates come with a worst-case guarantee of never being worse than the standard linearized ones.
We show the flexibility of our framework recovering known update schemes, like the Mirror-Prox update (Nemirovski, 2004), or extending updates specifically designed for OMD to the FTRL case, like the aProx one (Asi & Duchi, 2019). Moreover, for the first time, we show an implicit version of FTRL that recovers the temporal variation bound of implicit OMD (Campolongo & Orabona, 2020), but with the same computational complexity of implicit OMD.
**Related Work.** While there are many works on implicit mirror descent in both the online and offline setting (see, e.g., Moreau, 1965; Martinet, 1970; Rockafellar, 1976; Kivinen & Warmuth, 1997; Parikh & Boyd, 2014; Campolongo & Orabona, 2020; Shtoff, 2022), the number of works that deal with implicit updates for FTRL is quite limited. We are only aware of McMahan (2010), which quantifies a gain only for specific regularizers. However, the framework in McMahan (2010) is non-constructive in the sense that it is difficult to see how to generalize implicit updates. Joulani et al. (2017) extends this last result, but it does not provide a link with the maximization of the dual function that governs the regret upper bound.
The closest approach to our framework is the one of Shalev-Shwartz & Singer (2007a;b), which develop a theory of FTRL updates as maximization of a dual function. However, their framework is limited to a specific shape of regularizers and it does not deal with implicit updates.
For implicit OMD, Campolongo & Orabona (2020) showed that implicit updates give rise to regret guarantees that depend on the temporal variability of the losses, so that constant regret is achievable if the variability of the losses is zero. They suggest that FTRL with full losses can achieve the same guarantee, but they also point out that given its computational complexity it would be "not worth pursuing." Here, we show how to achieve the same bound of implicit OMD with our generalized implicit FTRL, while retaining the same computational complexity of implicit OMD.
Proximal updates on truncated linear models were introduced in Asi & Duchi (2019) for the OMD algorithm. Chen et al. (2022b) used gradient flow on the same truncated linear models with a coin-betting algorithm (Orabona & Pal, 2016), but their approach does not seem to satisfy a regret guarantee. Chen et al. (2022a) have used truncated linear models in an FTRL-based parameter-free algorithm (Orabona & Pal, 2021) with a novel decomposition of the regret. However, their approach is ad hoc and it seems difficult to generalize it.
## 2 Definitions and Basic Tools
We define here some basic concepts and tools of convex analysis; we refer the reader to, e.g., Rockafellar (1970); Bauschke & Combettes (2011) for a complete introduction to this topic. We will consider extended-valued functions that can assume infinite values too. A function \(f\) is _proper_ if it is nowhere \(-\infty\) and finite somewhere. A function \(f:V\subseteq\mathbb{R}^{d}\rightarrow[-\infty,+\infty]\) is _closed_ if \(\{\mathbf{x}:f(\mathbf{x})\leq\alpha\}\) is closed for every \(\alpha\in\mathbb{R}\). For a proper function \(f:\mathbb{R}^{d}\rightarrow(-\infty,+\infty]\), we define a _subgradient_ of \(f\) in \(\mathbf{x}\in\mathbb{R}^{d}\) as a vector \(\mathbf{g}\in\mathbb{R}^{d}\) that satisfies \(f(\mathbf{y})\geq f(\mathbf{x})+\langle\mathbf{g},\mathbf{y}-\mathbf{x}\rangle\), \(\forall\mathbf{y}\in\mathbb{R}^{d}\). We denote the set of subgradients of \(f\) in \(\mathbf{x}\) by \(\partial f(\mathbf{x})\). The _indicator function of the set_ \(V\), \(i_{V}:\mathbb{R}^{d}\rightarrow(-\infty,+\infty]\), has value \(0\) for \(\mathbf{x}\in V\) and \(+\infty\) otherwise. We denote the _dual norm_ of a norm \(\|\cdot\|\) by \(\|\cdot\|_{\star}\). A proper function \(f:\mathbb{R}^{d}\rightarrow(-\infty,+\infty]\) is \(\mu\)-_strongly convex_ over a convex set \(V\subseteq\operatorname{int}\operatorname{dom}f\) w.r.t. \(\|\cdot\|\) if \(\forall\mathbf{x},\mathbf{y}\in V\) and \(\forall\mathbf{g}\in\partial f(\mathbf{x})\)
we have \(f(\mathbf{y})\geq f(\mathbf{x})+\langle\mathbf{g},\mathbf{y}-\mathbf{x}\rangle+\frac{\mu}{2}\|\mathbf{x}-\mathbf{y}\|^{2}\). A function \(f:V\to\mathbb{R}\), differentiable in an open set containing \(V\), is \(L\)-_smooth_ w.r.t. \(\|\cdot\|\) if \(f(\mathbf{y})\leq f(\mathbf{x})+\langle\nabla f(\mathbf{x}),\mathbf{y}-\mathbf{x}\rangle+\frac{L}{2}\|\mathbf{x}-\mathbf{y}\|^{2}\) for all \(\mathbf{x},\mathbf{y}\in V\). For a function \(f:\mathbb{R}^{d}\to[-\infty,\infty]\), we define the _Fenchel conjugate_ \(f^{\star}:\mathbb{R}^{d}\to[-\infty,\infty]\) as \(f^{\star}(\mathbf{\theta})=\sup_{\mathbf{x}\in\mathbb{R}^{d}}\left\langle\mathbf{\theta},\mathbf{x}\right\rangle-f(\mathbf{x})\). From this definition, we immediately have the Fenchel-Young inequality: \(f(\mathbf{x})+f^{\star}(\mathbf{\theta})\geq\left\langle\mathbf{\theta},\mathbf{x}\right\rangle,\;\forall\mathbf{x},\mathbf{\theta}\).
We will also make use of the following properties of Fenchel conjugates.
**Theorem 2.1** ((Orabona, 2019, Theorem 5.7)).: _Let \(f:\mathbb{R}^{d}\to(-\infty,+\infty]\) be proper. Then, the following conditions are equivalent:_
1. \(\mathbf{\theta}\in\partial f(\mathbf{x})\)_._
2. \(\left\langle\mathbf{\theta},\mathbf{y}\right\rangle-f(\mathbf{y})\) _achieves its supremum in_ \(\mathbf{y}\) _at_ \(\mathbf{y}=\mathbf{x}\)_._
3. \(f(\mathbf{x})+f^{\star}(\mathbf{\theta})=\left\langle\mathbf{\theta},\mathbf{x}\right\rangle\)_._
_Moreover, if \(f\) is also convex and closed, we have an additional equivalent condition_
4. \(\mathbf{x}\in\partial f^{\star}(\mathbf{\theta})\)_._
**Theorem 2.2** ((Orabona, 2019, Theorem 6.11)).: _Let \(\psi:\mathbb{R}^{d}\to(-\infty,+\infty]\) be a proper, closed, convex function, and \(\operatorname{dom}\partial\psi\) be non-empty. Then, \(\psi\) is \(\lambda>0\) strongly convex w.r.t. \(\|\cdot\|\) iff \(\psi^{\star}\) is \(\frac{1}{\lambda}\)-smooth w.r.t. \(\|\cdot\|_{\star}\) on \(\mathbb{R}^{d}\)._
## 3 Generalized Implicit FTRL
In this section, we introduce our novel generalized formulation of the implicit FTRL algorithm. The main idea is to depart from the implicit or linearized updates, and directly design updates that improve the upper bound on the regret. More in detail, the basic analysis of most of the online learning algorithms is based on the definition of subgradients:
\[\ell_{t}(\mathbf{x}_{t})-\ell_{t}(\mathbf{u})\leq\langle\mathbf{g}_{t},\mathbf{x}_{t}-\mathbf{u} \rangle,\;\forall\mathbf{g}_{t}\in\partial\ell_{t}(\mathbf{x}_{t})\;. \tag{1}\]
This allows to study the regret on the linearized losses as a proxy for the regret on the losses \(\ell_{t}\). However, we can do better. We introduce a new fundamental and more general strategy: using the Fenchel-Young inequality, we have
\[\ell_{t}(\mathbf{x}_{t})-\ell_{t}(\mathbf{u})\leq\ell_{t}(\mathbf{x}_{t})-\langle\mathbf{z}_{ t},\mathbf{u}\rangle+\ell_{t}^{\star}(\mathbf{z}_{t}),\;\forall\mathbf{z}_{t}\;.\]
In particular, the algorithm will choose \(\mathbf{z}_{t}\) to make a certain upper bound involving this quantity tighter. This is a better inequality than (1) because when we select \(\mathbf{z}_{t}=\mathbf{g}_{t}\in\partial\ell_{t}(\mathbf{x}_{t})\), using Theorem 2.1, we recover (1). So, this inequality subsumes the standard one for subgradients, but, using \(\mathbf{z}_{t}\in\partial\ell_{t}(\mathbf{x}_{t+1})\), it also subsumes the similar inequality used in the implicit case, as we show in Section 3.1. Moreover, we will see in Section 6 that it covers cases where \(\mathbf{z}_{t}\) is _not_ a subgradient of \(\ell_{t}\).
The analysis shows that the optimal setting of \(\mathbf{z}_{t}\) is the one that minimizes the function
\[H_{t}(\mathbf{z})\triangleq\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z})+\ell_{t}^{ \star}(\mathbf{z}) \tag{2}\]
or
\[H_{t}^{\prime}(\mathbf{z})\triangleq\psi_{t,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z})+\ell_{t}^{\star}(\mathbf{z}), \tag{3}\]
where \(\psi_{t,V}\) is the restriction of the regularizer used at time \(t\) to the feasible set \(V\), i.e., \(\psi_{t,V}\triangleq\psi_{t}+i_{V}\). However, we can show that any setting of \(\mathbf{z}_{t}\) that guarantees \(H_{t}(\mathbf{z}_{t})<H_{t}(\mathbf{g}_{t})\) (or \(H_{t}^{\prime}(\mathbf{z}_{t})<H_{t}^{\prime}(\mathbf{g}_{t})\)) yields a strict improvement in the worst-case regret w.r.t. using the linearized losses.
One might wonder why the need for two different updates using \(H_{t}\) or \(H_{t}^{\prime}\). The reason is that when using time-varying regularizers that depend on the data, like in the FTRL version of AdaGrad (McMahan & Streeter, 2010; Duchi et al., 2011), if \(\lambda_{t+1}\) depends on \(\mathbf{z}_{t}\) it might make the calculation of the update particularly difficult. This can be avoided using the update involving \(H_{t}^{\prime}\).
Once we have the \(\mathbf{z}_{t}\), we treat them as the subgradients of surrogate linear losses. So, putting it all together, Algorithm 1 shows the final algorithm. We now show a regret guarantee for this algorithm. First, we state a general Lemma and then instantiate it in a few interesting cases.
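For concreteness, the following is a minimal sketch of the procedure just described, specialized to the unconstrained case \(V=\mathbb{R}^{d}\) with the quadratic regularizer \(\psi_{t}(\mathbf{x})=\frac{\lambda_{t}}{2}\|\mathbf{x}\|_{2}^{2}\), for which the FTRL prediction is \(\mathbf{x}_{t}=\mathbf{\theta}_{t}/\lambda_{t}\) with \(\mathbf{\theta}_{t}=-\sum_{i<t}\mathbf{z}_{i}\); the rule used to pick \(\mathbf{z}_{t}\) is left abstract, and all names are illustrative.

```python
import numpy as np

def generalized_implicit_ftrl(choose_z, lams, d, T):
    """Sketch of generalized implicit FTRL on V = R^d with psi_t(x) = lams[t]/2 * ||x||_2^2.

    choose_z(theta, x, t) returns the vector z_t used as surrogate linear loss:
    z_t = g_t recovers linearized FTRL, while any z_t with H_t(z_t) <= H_t(g_t)
    yields a regret bound at least as good (Theorem 3.1).
    """
    theta = np.zeros(d)              # theta_t = -sum_{i<t} z_i
    xs = []
    for t in range(T):
        x_t = theta / lams[t]        # prediction of round t
        xs.append(x_t)
        z_t = choose_z(theta, x_t, t)
        theta = theta - z_t          # theta_{t+1} = theta_t - z_t
    return xs
```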
**Theorem 3.1**.: _Let \(V\subseteq\mathbb{R}^{d}\) be closed and non-empty and \(\psi_{t}:V\to\mathbb{R}\). With the notation in Algorithm 1, define by \(F_{t}(\mathbf{x})=\psi_{t}(\mathbf{x})+\sum_{i=1}^{t-1}\langle\mathbf{z}_{i},\mathbf{x}\rangle\), so that \(\mathbf{x}_{t}\in\operatorname*{argmin}_{\mathbf{x}\in V}\;F_{t}(\mathbf{x})\). Finally, assume that \(\operatorname*{argmin}_{\mathbf{x}\in V}\;F_{t}(\mathbf{x})\) and \(\partial\ell_{t}(\mathbf{x}_{t})\) are not empty for all \(t\)._
* _For any_ \(\mathbf{z}_{t}\in\mathbb{R}^{d}\) _and any_ \(\mathbf{u}\in\mathbb{R}^{d}\)_, we have_ \[\operatorname*{Regret}_{T}(\mathbf{u})\leq\psi_{T+1}(\mathbf{u})-\min_{\mathbf{x}\in V }\;\psi_{1}(\mathbf{x})\] \[\quad+\sum_{t=1}^{T}[\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{ t})-\psi_{t,V}^{\star}(\mathbf{\theta}_{t})+\langle\mathbf{x}_{t},\mathbf{g}_{t}\rangle- \delta_{t}]\] \[\quad+F_{T+1}(\mathbf{x}_{T+1})-F_{T+1}(\mathbf{u}),\] _where_ \(\delta_{t}\triangleq H_{t}(\mathbf{g}_{t})-H_{t}(\mathbf{z}_{t})\)_._
* _If_ \(\psi_{t+1}(\mathbf{x})\geq\psi_{t}(\mathbf{x})\) _for any_ \(\mathbf{x}\in V\)_, then, for any_ \(\mathbf{z}_{t}\in\mathbb{R}^{d}\)_, we have_
\[\operatorname{Regret}_{T}(\mathbf{u})\leq\psi_{T+1}(\mathbf{u})-\min_{\mathbf{x}\in V}\;\psi_{1}(\mathbf{x})\] \[\quad+\sum_{t=1}^{T}[\psi_{t,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})-\psi_{t,V}^{\star}(\mathbf{\theta}_{t})+\langle\mathbf{x}_{t},\mathbf{g}_{t}\rangle-\delta_{t}^{\prime}]\] \[\quad+F_{T+1}(\mathbf{x}_{T+1})-F_{T+1}(\mathbf{u}),\] \[\quad\text{where }\delta_{t}^{\prime}\triangleq H_{t}^{\prime}(\mathbf{g}_{t})-H_{t}^{\prime}(\mathbf{z}_{t}).\]
Proof.: The proof is composed of simple but not obvious steps. The first important observation is that the definition of \(\mathbf{x}_{t}\) in the algorithm corresponds exactly to the one of FTRL on the linear losses \(\langle\mathbf{z}_{t},\cdot\rangle\). Hence, we can use the FTRL equality in Orabona (2019, Lemma 7.1):
\[-\sum_{t=1}^{T}\langle\mathbf{z}_{t},\mathbf{u}\rangle=\sum_{t=1}^{T}[F_{t}(\mathbf{x}_{t})-F_{t+1}(\mathbf{x}_{t+1})]+\psi_{T+1}(\mathbf{u})-\min_{\mathbf{x}\in V}\;\psi_{1}(\mathbf{x})+F_{T+1}(\mathbf{x}_{T+1})-F_{T+1}(\mathbf{u}),\]
where we have simplified the terms \(\langle\mathbf{z}_{t},\mathbf{x}_{t}\rangle\) on both sides.
Now, use the Fenchel-Young inequality to obtain \(\langle\mathbf{z}_{t},\mathbf{u}\rangle\leq\ell_{t}(\mathbf{u})+\ell_{t}^{\star}(\mathbf{z}_{t})\). Hence, we have
\[-\sum_{t=1}^{T}\ell_{t}(\mathbf{u}) \leq\sum_{t=1}^{T}[F_{t}(\mathbf{x}_{t})-F_{t+1}(\mathbf{x}_{t+1})+\ell_{ t}^{\star}(\mathbf{z}_{t})]\] \[\quad+\psi_{T+1}(\mathbf{u})-\min_{\mathbf{x}\in V}\;\psi_{1}(\mathbf{x})\] \[\quad+F_{T+1}(\mathbf{x}_{T+1})-F_{T+1}(\mathbf{u})\;.\]
Observe that
\[F_{t}(\mathbf{x}_{t}) =\min_{\mathbf{x}\in V}\;\psi_{t}(\mathbf{x})+\sum_{i=1}^{t-1}\langle\mathbf{ z}_{i},\mathbf{x}\rangle\] \[=-\max_{\mathbf{x}\in V}\left\langle\mathbf{\theta}_{t},\mathbf{x}\right\rangle -\psi_{t}(\mathbf{x})=-\psi_{t,V}^{\star}(\mathbf{\theta}_{t})\;.\]
In the same way, we have \(-F_{t+1}(\mathbf{x}_{t+1})=\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t+1})\). Also, for any \(\mathbf{g}_{t}\in\partial\ell_{t}(\mathbf{x}_{t})\), by Theorem 2.1 we have \(\ell_{t}^{\star}(\mathbf{g}_{t})=\langle\mathbf{x}_{t},\mathbf{g}_{t}\rangle-\ell_{t}(\mathbf{ x}_{t})\). Hence, each term in the sum can be written as
\[F_{t} (\mathbf{x}_{t})-F_{t+1}(\mathbf{x}_{t+1})+\ell_{t}^{\star}(\mathbf{z}_{t})\] \[=\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t+1})-\psi_{t,V}^{\star}(\mathbf{ \theta}_{t})+\ell_{t}^{\star}(\mathbf{z}_{t})\] \[=H_{t}(\mathbf{z}_{t})-\psi_{t,V}^{\star}(\mathbf{\theta}_{t})\;.\]
Now, we just add and subtract \(H_{t}(\mathbf{g}_{t})=\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})+\langle\mathbf{ g}_{t},\mathbf{x}_{t}\rangle-\ell_{t}(\mathbf{x}_{t})\) to obtain the stated bound.
The second case is similar. We just have to observe that if \(\psi_{t+1,V}\geq\psi_{t,V}\), then \(\psi_{t+1,V}^{\star}\leq\psi_{t,V}^{\star}\). Hence, each term in the sum can be upper bounded as
\[F_{t} (\mathbf{x}_{t})-F_{t+1}(\mathbf{x}_{t+1})+\ell_{t}^{\star}(\mathbf{z}_{t})\] \[\leq\psi_{t,V}^{\star}(\mathbf{\theta}_{t+1})-\psi_{t,V}^{\star}(\bm {\theta}_{t})+\ell_{t}^{\star}(\mathbf{z}_{t})\] \[=H_{t}^{\prime}(\mathbf{z}_{t})-\psi_{t,V}^{\star}(\mathbf{\theta}_{t})\;.\]
As before, adding and subtracting \(H_{t}^{\prime}(\mathbf{g}_{t})=\psi_{t,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})+ \langle\mathbf{x}_{t},\mathbf{g}_{t}\rangle-\ell_{t}(\mathbf{x}_{t})\) gives the stated bound.
The Theorem is stated under very weak assumptions to show its generality, but it is immediate to obtain concrete regret guarantees by assuming, for example, strongly convex regularizers and convex and Lipschitz losses, and using well-known tools such as Orabona (2019, Lemma 7.8).
However, we can already understand why this is an interesting guarantee. Let's first consider the case that \(\mathbf{z}_{t}=\mathbf{g}_{t}\). In this case, we exactly recover the linearized FTRL algorithm. Even the guarantee in the Theorem exactly recovers the best known one (Orabona, 2019, Corollary 7.9), with \(\delta_{t}=0\) and \(\delta_{t}^{\prime}=0\). Now, if we set \(\mathbf{z}_{t}\) such that \(H_{t}(\mathbf{z}_{t})<H_{t}(\mathbf{g}_{t})\) or \(H_{t}^{\prime}(\mathbf{z}_{t})<H_{t}^{\prime}(\mathbf{g}_{t})\) we will have that \(\delta_{t}>0\) or \(\delta_{t}^{\prime}>0\). Hence, in each single term of the sum we have a negative factor that makes the regret bound smaller. While it might be difficult to give a lower bound to \(\delta_{t}\) and \(\delta_{t}^{\prime}\) without additional assumptions, the main value of this analysis is in giving a _unifying way to design generalized implicit updates for FTRL_. In fact, in the next sections we will show a number of possibilities that this framework enables.
Next, we will gain more understanding on the updates in Algorithm 1, comparing them to implicit OMD.
### Comparison with Implicit Online Mirror Descent
In this section, we show that when \(\mathbf{z}_{t}\) is set to minimize \(H_{t}(\mathbf{z})\) or \(H_{t}^{\prime}(\mathbf{z})\), we recover different variants of implicit updates.
Assume that the \(\ell_{t}\) are closed and convex. Also, assume that \(\psi_{t,V}^{\star}\) is differentiable, that is true, for example, when \(\psi_{t}\) is strongly convex by Theorem 2.2. Then, observe that by the first-order optimality condition and Theorem 2.1, we have
\[\mathbf{z}_{t} =\operatorname*{argmin}_{\mathbf{z}}\;H_{t}(\mathbf{z})\] \[\Leftrightarrow\nabla\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z}_{ t})\in\partial\ell_{t}^{\star}(\mathbf{z}_{t})\] \[\Leftrightarrow\mathbf{z}_{t}\in\partial\ell_{t}(\nabla\psi_{t+1,V}^{ \star}(\mathbf{\theta}_{t}-\mathbf{z}_{t}))=\partial\ell_{t}(\mathbf{x}_{t+1})\;. \tag{4}\]
Hence, in this case, we have that the optimal \(\mathbf{z}_{t}\) is the gradient at the _next_ point \(\mathbf{x}_{t+1}\). This is exactly what happens in the implicit updates.
Under the same assumptions, we also have
\[\mathbf{z}_{t}=\operatorname*{argmin}_{\mathbf{z}}\ H^{\prime}_{t}(\mathbf{z}) \Leftrightarrow\nabla\psi^{\star}_{t,V}(\mathbf{\theta}_{t}-\mathbf{z}_{t}) \in\partial\ell^{\star}_{t}(\mathbf{z}_{t})\] \[\Leftrightarrow\mathbf{z}_{t}\in\partial\ell_{t}(\nabla\psi^{\star}_{t,V} (\mathbf{\theta}_{t+1})). \tag{5}\]
In this other case, the update also has an implicit flavor but the subgradient is queried on a point different from the next point, where the difference depends on how much \(\nabla\psi^{\star}_{t,V}\) differs from \(\nabla\psi^{\star}_{t+1,V}\).
Let's see this connection even more precisely, considering _proximal updates_. Hence, for simplicity, let's consider the case that \(V=\mathbb{R}^{d}\); similar considerations hold in the constrained case. Consider the case that \(\psi_{t}(\mathbf{x})=\frac{\lambda_{t}}{2}\|\mathbf{x}\|_{2}^{2}\). In this case, the update can be written with the _proximal operator_ of the loss functions. In particular, the proximal operator of \(\eta f\) is defined as
\[\operatorname{Prox}_{\eta f}(\mathbf{y})\triangleq\operatorname*{ argmin}_{\mathbf{x}\in\mathbb{R}^{d}}\ \frac{1}{2}\|\mathbf{x}-\mathbf{y}\|_{2}^{2}+\eta f(\mathbf{x})\.\]
If the function \(f\) is differentiable we have that \(\operatorname{Prox}_{\eta f}(\mathbf{y})=\mathbf{y}-\eta\nabla f(\operatorname{Prox}_{\eta f}(\mathbf{y}))\). In words, the proximal update moves by a quantity that depends on the gradient at the updated point. The implicit nature of these updates justifies the name "implicit updates" used in the online learning literature. More generally, we have that \(\operatorname{Prox}_{\eta f}(\mathbf{y})\in\mathbf{y}-\eta\partial f(\operatorname{Prox}_{\eta f}(\mathbf{y}))\). We list some common proximal operators in Appendix A.
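As a quick numerical illustration of the fixed-point property above, the sketch below uses the loss \(f(\mathbf{x})=\frac{1}{2}\|\mathbf{x}-\mathbf{a}\|_{2}^{2}\), whose proximal operator has the closed form \(\operatorname{Prox}_{\eta f}(\mathbf{y})=(\mathbf{y}+\eta\mathbf{a})/(1+\eta)\), and checks \(\operatorname{Prox}_{\eta f}(\mathbf{y})=\mathbf{y}-\eta\nabla f(\operatorname{Prox}_{\eta f}(\mathbf{y}))\); this particular loss is only an example chosen here, not necessarily one of the operators in Appendix A.

```python
import numpy as np

def prox_sq_loss(y, a, eta):
    """Proximal operator of f(x) = 0.5 * ||x - a||^2: (y + eta*a) / (1 + eta)."""
    return (y + eta * a) / (1.0 + eta)

rng = np.random.default_rng(0)
y, a, eta = rng.normal(size=3), rng.normal(size=3), 0.7
p = prox_sq_loss(y, a, eta)
grad_at_p = p - a                              # gradient of f at the proximal point
assert np.allclose(p, y - eta * grad_at_p)     # the implicit fixed-point relation
```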
Assuming \(\lambda_{t+1}\) does not depend on \(\mathbf{z}_{t}\), using the proximal operator we can rewrite the update in (4) as
\[\mathbf{x}_{t+1} =\frac{\mathbf{\theta}_{t+1}}{\lambda_{t+1}}=\operatorname{Prox}_{ \frac{\ell_{t}}{\lambda_{t+1}}}\left(\frac{\mathbf{\theta}_{t}}{\lambda_{t+1}}\right)\] \[=\operatorname{Prox}_{\frac{\ell_{t}}{\lambda_{t+1}}}\left(\frac{ \lambda_{t}\mathbf{x}_{t}}{\lambda_{t+1}}\right). \tag{6}\]
Similarly, we can rewrite the update in (5) as
\[\frac{\mathbf{\theta}_{t+1}}{\lambda_{t}} =\frac{\mathbf{\theta}_{t}}{\lambda_{t}}-\frac{\mathbf{z}_{t}}{\lambda_{t }}=\mathbf{x}_{t}-\frac{\mathbf{z}_{t}}{\lambda_{t}}\in\mathbf{x}_{t}-\frac{1}{\lambda_{t} }\partial\ell_{t}(\nabla\psi^{\star}_{t,V}(\mathbf{\theta}_{t+1}))\] \[=\mathbf{x}_{t}-\frac{1}{\lambda_{t}}\partial\ell_{t}\left(\frac{\bm {\theta}_{t+1}}{\lambda_{t}}\right)\.\]
Hence, we have that \(\frac{\mathbf{\theta}_{t+1}}{\lambda_{t}}=\operatorname{Prox}_{\frac{\ell_{t}}{ \lambda_{t}}}(\mathbf{x}_{t})\) and we get
\[\mathbf{x}_{t+1}=\frac{\mathbf{\theta}_{t+1}}{\lambda_{t+1}}=\frac{\lambda_{t}}{ \lambda_{t+1}}\operatorname{Prox}_{\frac{\ell_{t}}{\lambda_{t}}}(\mathbf{x}_{t}). \tag{7}\]
It is instructive to compare both updates with the one of Implicit Online Mirror Descent using \(\psi(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|_{2}^{2}\) as distance generating function and stepsizes \(\frac{1}{\lambda_{t}}\). In this case, we would update with
\[\mathbf{x}_{t+1} =\operatorname*{argmin}_{\mathbf{x}}\frac{1}{2}\|\mathbf{x}_{t}-\mathbf{x}\|_ {2}^{2}+\frac{1}{\lambda_{t}}\ell_{t}(\mathbf{x})\] \[=\operatorname{Prox}_{\frac{\ell_{t}}{\lambda_{t}}}(\mathbf{x}_{t}). \tag{8}\]
Comparing (6) and (7) to (8), we see that, when \(\lambda_{t}\leq\lambda_{t+1}\) as is usual, the two updates above shrink a bit towards the zero vector, that is, the initial point \(\mathbf{x}_{1}\), before or after the proximal operator. This shrinking is given by the FTRL update and it is the key difference with the Implicit OMD update. The different update also corresponds to a different guarantee: the regret bound of the generalized implicit FTRL holds for unbounded domains too, while Implicit OMD with time-varying stepsizes can have linear regret on unbounded domains (Orabona & Pal, 2018). Interestingly, a similar shrinking has been proposed in Fang et al. (2020) to fix the unboundedness issue in OMD. Clearly, the updates (6) and (7) become equivalent to (8) for \(\lambda_{t}\) constant in \(t\), which is exactly the only case when implicit/proximal online mirror descent works for unbounded domains.
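The difference between (6), (7), and (8) is easy to see numerically. The sketch below transcribes one step of the three updates for the squared loss of the previous example, whose proximal operator is available in closed form; the numerical values are arbitrary and only for illustration.

```python
import numpy as np

def prox_sq_loss(y, a, eta):
    # proximal operator of f(x) = 0.5 * ||x - a||^2
    return (y + eta * a) / (1.0 + eta)

x_t = np.array([1.0, -2.0])
a_t = np.array([0.5, 0.5])          # the loss of round t is 0.5 * ||x - a_t||^2
lam_t, lam_next = 4.0, 5.0          # increasing regularization weights

# (6): shrink x_t by lam_t/lam_next, then apply the prox with weight 1/lam_next
x_ftrl_6 = prox_sq_loss(lam_t / lam_next * x_t, a_t, 1.0 / lam_next)
# (7): apply the prox with weight 1/lam_t, then shrink by lam_t/lam_next
x_ftrl_7 = lam_t / lam_next * prox_sq_loss(x_t, a_t, 1.0 / lam_t)
# (8): implicit OMD, prox with stepsize 1/lam_t and no shrinking
x_omd = prox_sq_loss(x_t, a_t, 1.0 / lam_t)
```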
## 4 Temporal Variability Bound
In this section, we quantify the advantage of the generalized implicit FTRL updates in the case of slow temporal variability of the loss functions.
It was observed in Campolongo & Orabona (2020) that implicit OMD satisfies regret guarantees that depend on the temporal variability \(V_{T}\):
\[V_{T}\triangleq\sum_{t=2}^{T}\max_{x\in V}\ \ell_{t}(\mathbf{x})-\ell_{t-1}(\mathbf{x})\.\]
In Campolongo & Orabona (2020, Appendix E) they also show that FTRL with full losses achieves a similar guarantee, but at a much higher computational price. Indeed, FTRL with full losses requires solving a finite sum optimization problem at each step, whose size increases with the number of iterations. Such a computational burden induced Campolongo & Orabona (2020) to say that such an approach is "not worth pursuing."
Here, we show that Algorithm 1 can satisfy the same guarantee as implicit OMD, with the same computational complexity too. First, we show the following Lemma.
**Lemma 4.1**.: _Under the assumptions of Theorem 3.1, further assume \(V\) to be convex, \(\psi_{t}:V\rightarrow\mathbb{R}\) closed, \(\lambda_{t}\)-strongly convex w.r.t. \(\|\cdot\|\), and subdifferentiable in \(V\), \(\ell_{t}\) closed, convex, and subdifferentiable in \(V\), and \(\lambda_{t+1}\geq\lambda_{t}\). Set
\(\mathbf{z}_{t}\in\operatorname*{argmin}_{\mathbf{z}}\ H_{t}(\mathbf{z})\). Then, we have_
\[\operatorname*{Regret}_{T}(\mathbf{u})\leq\psi_{T+1}(\mathbf{u})-\min_{\mathbf{x}\in V}\ \psi_{1}(\mathbf{x})\] \[+\sum_{t=1}^{T}\left(\ell_{t}(\mathbf{x}_{t})-\ell_{t}(\mathbf{x}_{t+1})-\frac{\lambda_{t}}{2}\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}\right),\forall\mathbf{u}\in V.\]
Proof.: First of all, the existence and unicity of \(\mathbf{x}_{t}\) is guaranteed by \(\psi_{t}\) being closed and strongly convex (see, e.g., Orabona, 2019, Theorem 6.8).
From Theorem 2.1, for any \(\mathbf{g}_{t}^{\prime}\in\partial\ell_{t}(\mathbf{x}_{t+1})\), we have \(\ell_{t}^{*}(\mathbf{g}_{t}^{\prime})=\langle\mathbf{x}_{t+1},\mathbf{g}_{t}^{\prime} \rangle-\ell_{t}(\mathbf{x}_{t+1})\). Hence, from (4), we have
\[\psi_{t+1,V}^{*}(\mathbf{\theta}_{t+1})-\psi_{t,V}^{*}(\mathbf{\theta}_{t})+\ell_{t}^{*}(\mathbf{z}_{t})\] \[=\psi_{t+1,V}^{*}(\mathbf{\theta}_{t}-\mathbf{z}_{t})-\psi_{t,V}^{*}(\mathbf{\theta}_{t})+\langle\mathbf{x}_{t+1},\mathbf{z}_{t}\rangle-\ell_{t}(\mathbf{x}_{t+1})\;.\]
Using this identity, we have
\[\psi_{t+1,V}^{*}(\mathbf{\theta}_{t}-\mathbf{z}_{t})-\psi_{t,V}^{*}(\mathbf{ \theta}_{t})+\langle\mathbf{x}_{t+1},\mathbf{z}_{t}\rangle\] \[\quad=\langle\mathbf{\theta}_{t}-\mathbf{z}_{t},\mathbf{x}_{t+1}\rangle-\psi _{t+1}(\mathbf{x}_{t+1})-\langle\mathbf{\theta}_{t},\mathbf{x}_{t}\rangle+\psi_{t}(\mathbf{x} _{t})\] \[\quad\quad\quad+\langle\mathbf{x}_{t+1},\mathbf{z}_{t}\rangle\] \[\quad\leq\psi_{t}(\mathbf{x}_{t})-\psi_{t}(\mathbf{x}_{t+1})+\langle\mathbf{ \theta}_{t},\mathbf{x}_{t+1}-\mathbf{x}_{t}\rangle\.\]
From the first-order optimality condition of \(\mathbf{x}_{t}\), we have that \(\mathbf{\theta}_{t}\in\partial\psi_{t}(\mathbf{x}_{t})+\partial i_{V}(\mathbf{x}_{t})\). Moreover, for all \(\mathbf{g}_{t}^{\prime\prime}\in\partial i_{V}(\mathbf{x}_{t})\), by definition we have \(\langle\mathbf{g}_{t}^{\prime\prime},\mathbf{y}-\mathbf{x}_{t}\rangle\leq 0\) for all \(\mathbf{y}\in V\). Hence, for \(\mathbf{g}_{t}^{\prime}\in\partial\psi_{t}(\mathbf{x}_{t})\) and \(\mathbf{g}_{t}^{\prime\prime}\in\partial i_{V}(\mathbf{x}_{t})\) such that \(\mathbf{\theta}_{t}=\mathbf{g}_{t}^{\prime}+\mathbf{g}_{t}^{\prime\prime}\), we have
\[\psi_{t}(\mathbf{x}_{t})-\psi_{t}(\mathbf{x}_{t+1})+\langle\mathbf{\theta}_{ t},\mathbf{x}_{t+1}-\mathbf{x}_{t}\rangle\] \[\quad=\psi_{t}(\mathbf{x}_{t})-\psi_{t}(\mathbf{x}_{t+1})+\langle\mathbf{g}_{ t}^{\prime}+\mathbf{g}_{t}^{\prime\prime},\mathbf{x}_{t+1}-\mathbf{x}_{t}\rangle\] \[\quad\leq-\frac{\lambda_{t}}{2}\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2},\]
where in the inequality we also used the strong convexity of \(\psi_{t}\). Using this inequality in Theorem 3.1 and summing over time, we have
\[\sum_{t=1}^{T}\ell_{t}(\mathbf{x}_{t+1})-\sum_{t=1}^{T}\ell_{t}(\mathbf{ u})\] \[\quad\leq\psi_{T+1}(\mathbf{u})-\min_{\mathbf{x}\in V}\ \psi_{1}(\mathbf{x})-\sum_{t=1}^{T}\frac{\lambda_{t}}{2}\|\mathbf{x}_{t+1}-\mathbf{x}_{t} \|^{2}\.\]
By adding and subtracting \(\sum_{t=1}^{T}\ell_{t}(\mathbf{x}_{t})\) to both sides and reordering the terms, we have the stated bound.
This Lemma mirrors Theorem 5.2 in Campolongo & Orabona (2020), with the important difference that here we do not need the Bregman divergence to be bounded on the feasible set \(V\), thanks to the use of FTRL instead of OMD. We can now state the immediate corollary on a regret bound that depends on the temporal variation.
**Corollary 4.2**.: _Under the assumptions of Lemma 4.1, for any \(\mathbf{u}\in V\), we have_
\[\operatorname*{Regret}_{T}(\mathbf{u}) \leq\psi_{T+1}(\mathbf{u})-\min_{\mathbf{x}\in V}\ \psi_{1}(\mathbf{x})\] \[\quad+\ell_{1}(\mathbf{x}_{1})-\ell_{T}(\mathbf{x}_{T+1})+V_{T}\.\]
From this result, following (Campolongo & Orabona, 2020), it is relatively easy to obtain the following adaptive regret guarantee. The only difficulty is the fact that we need \(\psi_{t+1}\) to be independent of \(\mathbf{z}_{t}\) to have a simpler update rule. We solve this problem using an increasing regularizer that is "behind by two steps". In this way, we have that \(\lambda_{t+1}\) depends on quantities that are all known at the beginning of round \(t\). The proof is in Appendix B.
**Corollary 4.3**.: _Under the assumptions of Lemma 4.1, further assume \(\|\mathbf{g}_{t}\|_{*}\leq G\) for all \(t\). Define \(\gamma_{t}=\ell_{t}(\mathbf{x}_{t})-\ell_{t}(\mathbf{x}_{t+1})-\frac{\lambda_{t}}{2}\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|^{2}\) and \(\lambda_{t}=\frac{1}{\beta^{2}}\left(G\beta+\sum_{i=1}^{t-2}\gamma_{i}\right)\). Assume that \(\psi\) is closed and \(1\)-strongly convex w.r.t. \(\|\cdot\|\) and set \(\psi_{t}=\lambda_{t}\psi\). Then, for any \(\mathbf{u}\in V\), we have_
\[\operatorname*{Regret}_{T}(\mathbf{u}) \leq\min\left(\frac{1}{\beta}(\ell_{1}(\mathbf{x}_{1})-\ell_{T}(\mathbf{x} _{T+1})+V_{T}),\right.\] \[\quad\quad\quad\left.G+\sqrt{\frac{5}{4}\sum_{t=1}^{T}\|\mathbf{g}_{ t}\|_{*}^{2}}\right)\left(\frac{\psi(\mathbf{u})}{\beta}+\beta\right)\.\]
## 5 Two-step Updates
The choice of \(\mathbf{z}_{t}\) that minimizes the regret upper bound requires solving the optimization problem \(\min_{\mathbf{z}}\ H_{t}(\mathbf{z})\) or \(\min_{\mathbf{z}}\ H_{t}^{\prime}(\mathbf{z})\). We have seen in Section 3.1 that this corresponds to (some variant of) an implicit/proximal update and, depending on \(\ell_{t}\), it can be difficult to compute. However, as we said, any choice better than \(\mathbf{g}_{t}\) will cause a provable gain. Hence, a viable solution is to _approximately_ solve for the optimal \(\mathbf{z}_{t}\).
Here, we propose a simple approximation: set \(\mathbf{z}_{t}\) as
\[\mathbf{z}_{t}\in\partial\ell_{t}(\nabla\psi_{t+1,V}^{*}(\mathbf{\theta}_{t}-\mathbf{g}_{t})) \tag{9}\]
or as
\[\mathbf{z}_{t}\in\partial\ell_{t}(\nabla\psi_{t,V}^{*}(\mathbf{\theta}_{t}-\mathbf{g}_{t})). \tag{10}\]
In words, we set \(\mathbf{z}_{t}\) to be a subgradient after one fake update. This is exactly the approach used in the Mirror-Prox algorithm (Nemirovski, 2004), an offline optimization algorithm. In the next theorem, when the loss functions \(\ell_{t}\) are smooth and the regularizer is chosen appropriately, we show that this choice can be used in the generalized implicit FTRL too and it cannot be worse than using \(\mathbf{g}_{t}\).
**Theorem 5.1**.: _Assume \(\psi_{t}(\mathbf{x})\) proper, closed, and \(\lambda_{t}\)-strongly convex with respect to \(\|\cdot\|\). Assume \(\ell_{t}(\mathbf{x})\) closed and \(L_{t}\)-smooth w.r.t. \(\|\cdot\|\) for all \(t\). Then, using (9) and
assuming \(\lambda_{t+1}\geq L_{t}\), we have \(H_{t}(\mathbf{z}_{t})\leq H_{t}(\mathbf{g}_{t})\). On the other hand, when using (10) and assuming \(\lambda_{t}\geq L_{t}\) we have \(H_{t}^{\prime}(\mathbf{z}_{t})\leq H_{t}^{\prime}(\mathbf{g}_{t})\)._
Proof.: We only prove the statement for (9); the other one is similar. We would like to prove that
\[H_{t}(\mathbf{z}_{t}) =\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z}_{t})+\ell_{t}^{ \star}(\mathbf{z}_{t})\] \[\leq\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})+\ell_{t}^{ \star}(\mathbf{g}_{t})=H_{t}(\mathbf{g}_{t})\;.\]
This is equivalent to prove
\[\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z}_{t})-\psi_{t+1,V}^{\star}(\mathbf{ \theta}_{t}-\mathbf{g}_{t})\leq\ell_{t}^{\star}(\mathbf{g}_{t})-\ell_{t}^{\star}(\mathbf{ z}_{t})\;.\]
Given that \(\psi_{t+1}\) is \(\lambda_{t+1}\)-strongly convex, by Theorem 2.2, we have that \(\psi_{t+1}^{\star}\) is \(1/\lambda_{t+1}\)-smooth with respect to \(\|\cdot\|_{\star}\). By the definition of smoothness, we have
\[\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z}_{t})-\psi_{t+1,V}^{ \star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})\] \[\quad\leq\langle\nabla\psi_{t+1}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_ {t}),\mathbf{g}_{t}-\mathbf{z}_{t}\rangle+\frac{1}{2\lambda_{t+1}}\|\mathbf{g}_{t}-\mathbf{z}_ {t}\|_{\star}^{2}\;.\]
Given that \(\ell_{t}\) is \(L_{t}\)-smooth w.r.t. \(\|\cdot\|\), by Theorem 2.2, \(\ell_{t}^{\star}\) is \(1/L_{t}\)-strongly convex w.r.t. \(\|\cdot\|_{\star}\). So, by the definition of strong convexity, we have
\[\ell_{t}^{\star}(\mathbf{g}_{t})-\ell_{t}^{\star}(\mathbf{z}_{t})\geq\langle\mathbf{q}_{t },\mathbf{g}_{t}-\mathbf{z}_{t}\rangle+\frac{1}{2L_{t}}\|\mathbf{g}_{t}-\mathbf{z}_{t}\|_{ \star}^{2},\]
for all \(\mathbf{q}_{t}\in\partial\ell_{t}^{\star}(\mathbf{z}_{t})\). Defining \(\mathbf{x}_{t+1}^{\prime}\triangleq\nabla\psi_{t+1}^{\star}(\mathbf{\theta}_{t}-\mathbf{g }_{t})\), by Theorem 2.1, we have \(\mathbf{x}_{t+1}^{\prime}\in\partial\ell_{t}^{\star}(\mathbf{z}_{t})\). Hence, we can select \(\mathbf{q}_{t}\) such that \(\mathbf{x}_{t+1}^{\prime}=\mathbf{q}_{t}\). Finally, using the assumption on \(\lambda_{t+1}\geq L_{t}\), we have the stated bound.
## 6 Going Beyond Subgradients with aProx
Until now, in all the updates we have considered, \(\mathbf{z}_{t}\) was set to be a subgradient of \(\ell_{t}\) at a specific point. In this section, we show that we can go beyond this idea.
Asi & Duchi (2019) introduced aProx updates, that is proximal updates on surrogate loss functions. In particular, they used truncated linear lower bounds to the loss functions as surrogate functions. These simple surrogates are motivated by the fact that they are strictly better than linear approximation and at the same time they allow writing the proximal update in a closed form. Moreover, they showed empirically that in certain situations the performance of the algorithms becomes much more resistant to the tuning of the stepsizes.
One might just use the same truncated lower bounds in implicit FTRL, but it would not be clear why this should give any advantage in the theoretical bound. Indeed, even in Asi & Duchi (2019) it is not completely clear what part of the theory tells us that we should expect a better performance from these updates.
Here, we show how the _updates in the generalized implicit FTRL are actually a generalization of the aProx ones_. In particular, we generalize the aProx updates to arbitrary regularizers and show that all of them satisfy \(H_{t}(\mathbf{z}_{t})\leq H_{t}(\mathbf{g}_{t})\) and \(H_{t}^{\prime}(\mathbf{z}_{t})\leq H_{t}^{\prime}(\mathbf{g}_{t})\). In words, the aProx updates are guaranteed to be at least as good as the subgradient \(\mathbf{g}_{t}\) in minimizing the worst-case regret.
In order to consider truncated linear lower bounds to the functions \(\ell_{t}\), in this section we will assume that the loss functions \(\ell_{t}\) are lower bounded. Given that the regret is invariant to additive constants in the losses, without loss of generality we can assume the lower bound to be 0 for all the loss functions. Hence, define the truncated linear model \(\hat{\ell}_{t}:V\to\mathbb{R}\) around \(\mathbf{x}_{t}\) to be
\[\hat{\ell}_{t}(\mathbf{x})\triangleq\max(\ell_{t}(\mathbf{x}_{t})+\langle\mathbf{g}_{t}, \mathbf{x}-\mathbf{x}_{t}\rangle,0),\]
where \(\mathbf{g}_{t}\in\partial\ell_{t}(\mathbf{x}_{t})\). For brevity, our notation does not stress the fact that the truncated linear model depends on \(\mathbf{x}_{t}\) and on the specific subgradient \(\mathbf{g}_{t}\).
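A direct transcription of this definition is the following sketch; the choice of the subgradient \(\mathbf{g}_{t}\) is left to the caller.

```python
import numpy as np

def truncated_linear(loss_xt, g_t, x_t):
    """Build the truncated linear model hat-ell_t from the value
    loss_xt = ell_t(x_t) and a subgradient g_t of ell_t at x_t.
    The losses are assumed to be lower bounded by 0, as in the text."""
    def hat_ell(x):
        return max(loss_xt + np.dot(g_t, np.asarray(x) - x_t), 0.0)
    return hat_ell
```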
The idea to extend aProx to the case of generalized implicit FTRL is to use the truncated linear lower bound in the update of \(\mathbf{z}_{t}\). So, we define
\[\mathbf{z}_{t}=\operatorname*{argmin}_{\mathbf{z}}\;\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z})+\hat{\ell}_{t}^{\star}(\mathbf{z}) \tag{11}\]
or
\[\mathbf{z}_{t}=\operatorname*{argmin}_{\mathbf{z}}\;\psi_{t,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z})+\hat{\ell}_{t}^{\star}(\mathbf{z})\;. \tag{12}\]
**Theorem 6.1**.: _Assume the loss functions \(\ell_{t}:V\to\mathbb{R}\) to be convex, closed, and subdifferentiable in \(V\) for all \(t\). Set \(\mathbf{z}_{t}\) using (11) or (12). Then, we have that \(H_{t}(\mathbf{z}_{t})\leq H_{t}(\mathbf{g}_{t})\) or \(H_{t}^{\prime}(\mathbf{z}_{t})\leq H_{t}^{\prime}(\mathbf{g}_{t})\) respectively._
Proof.: We consider the update (11), the other case is very similar and we omit it.
First, we derive some inequalities on the quantities of interest. From Theorem 2.1, given that \(\mathbf{g}_{t}\in\partial\hat{\ell}_{t}(\mathbf{x}_{t})\) and \(\mathbf{g}_{t}\in\partial\ell_{t}(\mathbf{x}_{t})\), we have both \(\ell_{t}(\mathbf{x}_{t})+\ell_{t}^{\star}(\mathbf{g}_{t})=\langle\mathbf{g}_{t},\mathbf{x}_{t}\rangle\) and \(\hat{\ell}_{t}(\mathbf{x}_{t})+\hat{\ell}_{t}^{\star}(\mathbf{g}_{t})=\langle\mathbf{g}_{t},\mathbf{x}_{t}\rangle\). Moreover, given that \(\hat{\ell}_{t}(\mathbf{x})\leq\ell_{t}(\mathbf{x})\) for any \(\mathbf{x}\), we have \(\hat{\ell}_{t}^{\star}(\mathbf{z})\geq\ell_{t}^{\star}(\mathbf{z})\) for any \(\mathbf{z}\). Finally, by the definition of the truncated linear lower bound, we have \(\ell_{t}(\mathbf{x}_{t})=\hat{\ell}_{t}(\mathbf{x}_{t})\).
Hence, we have
\[H_{t}(\mathbf{z}_{t}) =\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z}_{t})+\ell_{t}^{ \star}(\mathbf{z}_{t})\] \[\quad\leq\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z}_{t})+\hat{ \ell}_{t}^{\star}(\mathbf{z}_{t})\] \[\quad=\min_{\mathbf{z}}\;\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z})+ \hat{\ell}_{t}^{\star}(\mathbf{z})\] \[\quad\leq\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})+\hat{ \ell}_{t}^{\star}(\mathbf{g}_{t})\] \[\quad=\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})+\langle\mathbf{g}_{t },\mathbf{x}_{t}\rangle-\hat{\ell}_{t}(\mathbf{x}_{t})\] \[\quad=\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})+\langle\mathbf{g}_{t },\mathbf{x}_{t}\rangle-\ell_{t}(\mathbf{x}_{t})\] \[\quad=\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})+\ell_{t}^{ \star}(\mathbf{g}_{t})=H_{t}(\mathbf{g}_{t})\;.\qed\]
We can also immediately write closed-form updates for generalized implicit FTRL with regularizer \(\psi_{t}(\mathbf{x})=\frac{\lambda_{t}}{2}\|\mathbf{x}\|_{2}^{2}\), which mirror those of aProx. The proof is in Section C.
**Corollary 6.2**.: _Set \(\psi_{t}(\mathbf{x})=\frac{\lambda_{t}}{2}\|\mathbf{x}\|_{2}^{2}\) and \(\mathbf{g}_{t}\in\partial\ell_{t}(\mathbf{x}_{t})\). Setting \(\mathbf{z}_{t}\) as in (11), we have that the update of generalized implicit FTRL is_
\[\mathbf{x}_{t+1}=\frac{\lambda_{t}}{\lambda_{t+1}}\mathbf{x}_{t}-\min\left(\frac{1}{ \lambda_{t+1}},\frac{\ell_{t}(\mathbf{x}_{t})}{\|\mathbf{g}_{t}\|^{2}}\right)\mathbf{g}_{t}\;.\]
_On the other hand, setting \(\mathbf{z}_{t}\) as in (12), the update is_
\[\mathbf{x}_{t+1}=\frac{\lambda_{t}}{\lambda_{t+1}}\mathbf{x}_{t}-\min\left(\frac{1}{ \lambda_{t+1}},\frac{\lambda_{t}}{\lambda_{t+1}}\frac{\ell_{t}(\mathbf{x}_{t})}{ \|\mathbf{g}_{t}\|^{2}}\right)\mathbf{g}_{t}\;.\]
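The two closed-form updates above translate directly into code. The sketch below follows the formulas of Corollary 6.2 verbatim; the handling of \(\mathbf{g}_{t}=\mathbf{0}\) is an added assumption (in that case the truncated model is constant and only the shrinkage of \(\mathbf{x}_{t}\) remains), as this edge case is not spelled out in the corollary.

```python
import numpy as np

def aprox_ftrl_step(x_t, loss_xt, g_t, lam_t, lam_next, variant=11):
    """One generalized implicit FTRL step with psi_t(x) = lam_t/2 * ||x||_2^2
    and the truncated linear surrogate, following Corollary 6.2.
    `variant` selects between the updates obtained from (11) and (12)."""
    sq_norm = float(np.dot(g_t, g_t))
    if sq_norm == 0.0:
        return (lam_t / lam_next) * x_t          # g_t = 0: pure shrinkage
    if variant == 11:
        step = min(1.0 / lam_next, loss_xt / sq_norm)
    else:
        step = min(1.0 / lam_next, (lam_t / lam_next) * loss_xt / sq_norm)
    return (lam_t / lam_next) * x_t - step * g_t
```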
## 7 Empirical Evaluation
As we said, in the worst-case scenario no kind of implicit update can give any advantage over the usual updates. However, in practice it is well known that things are vastly different. Hence, in this section, we compare the performance of different choices of \(\mathbf{z}_{t}\) in Algorithm 1 when \(\psi_{t}(\mathbf{x})=\frac{\lambda_{t}}{2}\|\mathbf{x}\|_{2}^{2}\). In particular, we consider:
* FTRL with linearized losses (Linear): \(\mathbf{z}_{t}=\mathbf{g}_{t}\);
* Implicit FTRL with aProx updates (Trunc): \(\mathbf{z}_{t}=\min\left\{1,\frac{\lambda_{t}\ell_{t}(\mathbf{x}_{t})}{\|\mathbf{g}_{t}\|^ {2}}\right\}\mathbf{g}_{t}\);
* Implicit FTRL with two-step updates (Twostep): \(\mathbf{z}_{t}\in\partial\ell_{t}(\mathbf{x}_{t}-\mathbf{g}_{t}/\lambda_{t})\);
* Implicit FTRL with (6) when the proximal operator has a closed form (Proximal).
We adopt the choice of \(\lambda_{t}\) from Corollary 4.3.
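For concreteness, the following is a minimal sketch of the online loop behind these comparisons, instantiated for the hinge loss and covering the first three \(\mathbf{z}_{t}\) rules above (the Proximal variant is omitted since it has no closed form for general losses). It assumes \(\lambda_{t}=\beta\sqrt{t}\), which is only a stand-in for the choice of Corollary 4.3, and it omits the dataset normalization and bias term described in the next paragraph.

```python
import numpy as np

def run_ftrl(rule, X, y, beta):
    """Average hinge loss of generalized implicit FTRL with
    psi_t(x) = lam_t/2 * ||x||_2^2, for rule in {"linear","trunc","twostep"}.
    Assumes lam_t = beta*sqrt(t) as a stand-in for Corollary 4.3."""
    d = X.shape[1]
    x = np.zeros(d)
    theta = np.zeros(d)                 # minus the sum of the z_s played so far
    total = 0.0
    for t, (a, b) in enumerate(zip(X, y), start=1):
        lam_t, lam_next = beta * np.sqrt(t), beta * np.sqrt(t + 1)
        loss = max(0.0, 1.0 - b * a.dot(x))
        total += loss
        g = -b * a if loss > 0 else np.zeros(d)      # subgradient at x_t
        if rule == "linear":
            z = g
        elif rule == "trunc":           # z_t = min{1, lam_t*loss/||g||^2} g_t
            z = min(1.0, lam_t * loss / max(g.dot(g), 1e-12)) * g
        elif rule == "twostep":         # subgradient at x_t - g_t/lam_t
            x_fake = x - g / lam_t
            z = -b * a if 1.0 - b * a.dot(x_fake) > 0 else np.zeros(d)
        else:
            raise ValueError(rule)
        theta -= z
        x = theta / lam_next            # FTRL step with the quadratic regularizer
    return total / len(y)
```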
We conduct linear prediction experiments on datasets from LibSVM (Chang & Lin, 2011). We show here experiments on classification tasks using the hinge loss and the logistic loss, and on regression tasks with the absolute loss. We normalize the datasets and add a constant bias term to the features. Given that in the online learning setting we do not have training and validation data to tune \(\beta\), we plot the averaged loss, \(\frac{1}{t}\sum_{i=1}^{t}\ell_{i}(\mathbf{x}_{i})\), versus different choices of \(\beta\), which at the same time shows the algorithms' sensitivity to the hyperparameter \(\beta\) and their best achievable performance. We consider \(\beta\in[10^{-3},10^{3}]\) for the hinge loss and the logistic loss, and \(\beta\in[10^{-3},10^{5}]\) for the absolute loss. Each algorithm is run 15 times; we plot the average of the averaged losses and the \(95\%\) confidence interval. Note that the confidence intervals are so small as to be invisible, except for the larger values of \(\beta\) for the Linear updates.
Figure 1 and Figure 2 show the averaged loss versus different selections of the hyperparameter \(\beta\) for classification tasks with the hinge loss and the logistic loss respectively. Note that with the hinge loss the aProx updates and the proximal updates are completely equivalent. In all experiments, FTRL with linearized updates is more sensitive to the setting of \(\beta\), and its performance is almost uniformly worse than all the other generalized implicit updates. This is in line with previous results in Asi & Duchi (2019) in the offline setting. With the logistic loss, the proximal operator does not have a closed-form solution. In all the classification experiments, the performance of generalized implicit FTRL with two-step updates seems remarkable, and it is a possible viable alternative to aProx. The confidence intervals for all implicit updates have a width smaller than 0.01, making them too narrow to be visible in the figures. In contrast, when using the hinge loss, the performance of FTRL with linear models exhibits significant fluctuations across different repetitions when a large learning rate is used. This observation provides evidence supporting our assertion that the selection of the hyperparameter \(\beta\) greatly affects the performance of FTRL with linear models, while implicit updates demonstrate robustness.

Figure 1: Hinge loss, averaged loss vs. hyperparameter \(\beta\).
Figure 3 shows that FTRL with linearized updates is very sensitive to the choice of the hyperparameter \(\beta\), while the implicit FTRL updates are robust. Again, Implicit FTRL with two-step updates achieves essentially the best performance. The confidence intervals in the regression tasks lead to a similar conclusion as in the classification tasks.
## 8 Conclusion and Future Work
In this work, we propose a new framework: generalized implicit Follow-the-Regularized-Leader. We show that generalized implicit FTRL can not only recover known algorithms, e.g., implicit FTRL and FTRL with linearized losses, but it also provides a theoretical guideline to design new algorithms, such as the extensions of aProx and Mirror-Prox. Indeed, we believe that the main contribution of our work lies precisely in the fact that it provides a unifying framework that is general, flexible, and theoretically grounded.
In the future, we plan to further explore this framework by designing new \(\mathbf{z}_{t}\) with low computational complexity. This is a promising direction because the two-step update already seems to be a valid alternative to the aProx updates, even if it comes at the computational expense of querying an additional gradient in each round.
## Acknowledgements
We thank Alex Shtoff for discussion and feedback on a preliminary version of this paper. Francesco Orabona is supported by the National Science Foundation under the grants no. 2022446 "Foundations of Data Science Institute" and no. 2046096 "CAREER: Parameter-free Optimization Algorithms for Machine Learning".
|
2309.10697 | Foliation adjunction | We present an adjunction formula for foliations on varieties and we consider
applications of the adjunction formula to the cone theorem for rank one
foliations and the study of foliation singularities. | Paolo Cascini, Calum Spicer | 2023-09-19T15:36:17Z | http://arxiv.org/abs/2309.10697v2 | # Foliation adjunction
###### Abstract.
We present an adjunction formula for foliations on varieties and we consider applications of the adjunction formula to the cone theorem for rank one foliations and the study of foliation singularities.
2010 Mathematics Subject Classification: 14E30, 37F75
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Adjunction
* 4 Cone theorem for rank one foliated pairs
* 5 Family of leaves of an algebraically integrable foliation
## 1. Introduction
Our primary goal is to present an adjunction formula for foliations on varieties defined over a field of characteristic zero.
Various special cases of the adjunction formula have appeared in [1], [2], [3], [4], [5] and [1]. We will give a treatment of the adjunction formula which unifies these other cases, and which is in line with the treatment of the adjunction formula for varieties (see for instance [13]).
Recall that the adjunction formula for varieties relates the canonical divisor of a smooth variety \(X\) and the canonical divisor of a smooth codimension one subvariety \(D\subset X\) by the linear equivalence
\[(K_{X}+D)|_{D}\sim K_{D}.\]
Given a foliation \(\mathcal{F}\) of rank \(r\) on a smooth variety \(X\) and a smooth codimension one subvariety \(D\subset X\) we can define a restricted foliation \(\mathcal{F}_{D}\) on \(D\). Roughly speaking, the leaves of \(\mathcal{F}_{D}\) are the leaves of \(\mathcal{F}\) intersected with \(D\). Thus, \(\mathcal{F}_{D}\) is a foliation of rank \(r-\epsilon(D)\), where \(\epsilon(D)=0\) if \(D\) is \(\mathcal{F}\)-invariant and \(\epsilon(D)=1\) otherwise (see Section 2.4).

As a first application of the adjunction formula, we prove a cone theorem for log canonical foliated pairs \((\mathcal{F},\Delta)\), where \(\mathcal{F}\) is a rank one foliation, which generalises [13, Theorem 2.36] to any dimension (see also [1, Corollary IV.2.1] and [12]):
**Theorem 1.2** (= Theorem 4.5).: _Let \(X\) be a normal projective variety, let \(\mathcal{F}\) be a rank one foliation and let \(\Delta\geq 0\) so that \(K_{\mathcal{F}}\) and \(\Delta\) are \(\mathbb{Q}\)-Cartier and \((\mathcal{F},\Delta)\) is log canonical._
_Then there are \(\mathcal{F}\)-invariant rational curves \(C_{1},C_{2},\dots\) such that_
\[0<-(K_{\mathcal{F}}+\Delta)\cdot C_{i}\leq 2\dim X\]
_and_
\[\overline{\operatorname{NE}}(X)=\overline{\operatorname{NE}}(X)_{K_{ \mathcal{F}}+\Delta\geq 0}+\sum_{i}\mathbb{R}_{+}[C_{i}].\]
In [1, Theorem 1.2] a dynamical characterisation of ample line bundles on smooth surfaces was provided. As a consequence of the above theorem we are able to extend this to higher dimensions.
**Theorem 1.3** (= Corollary 4.7).: _Let \(X\) be a normal projective variety and let \(L\) be a \(\mathbb{Q}\)-Cartier divisor. Suppose that_
1. \(L^{\dim X}\neq 0\)_;_
2. _for some_ \(q>0\) _there exists a rank one foliation_ \(\mathcal{F}\) _with_ \(K_{\mathcal{F}}\equiv qL\)_; and_
3. \(\operatorname{Sing}\mathcal{F}\) _is isolated and_ \(\mathcal{F}\) _admits no invariant positive dimensional subvarieties._
_Then \(L\) is ample._
In fact, [1] proves a converse to the above theorem: if \(L\) is an ample \(\mathbb{Q}\)-Cartier divisor, \(n\gg 0\) is sufficiently divisible and \(\mathcal{F}\) is a general foliation with \(K_{\mathcal{F}}\equiv nL\), then \(\mathcal{F}\) admits no invariant positive dimensional subvarieties and has isolated singularities.
Furthermore, we consider the study of singularities of foliations with a non-trivial algebraic part:
**Theorem 1.4** (cf. Theorem 5.6).: _Let \(X\) be a \(\mathbb{Q}\)-factorial klt projective variety and let \(\mathcal{F}\) be a foliation with canonical singularities._
_Then the algebraic part of \(\mathcal{F}\) is induced by an almost holomorphic map._
We expect the Minimal Model Program to have interesting implications for the study of foliation singularities. Indeed, following [13, Theorem 1.6] and [13, Lemma 2.8], we believe that there is a close relation between the classes of singularities of the Minimal Model Program and the dicriticality properties of the foliation. In particular, we expect that canonical singularities satisfy some suitable non-dicritical condition. Theorem 5.6 is a partial confirmation of this in the case of
foliations with non-trivial algebraic part (see [13, Conjecture 4.2] in the case of algebraically integrable foliations).
Building off of work of [14] this theorem has implications for the study of foliations where \(-K_{\mathcal{F}}\) is nef.
**Corollary 1.5** (cf. Corollary 5.7).: _Let \(X\) be a smooth projective variety and let \(\mathcal{F}\) be a foliation with canonical singularities. Suppose that \(-K_{\mathcal{F}}\) is nef and is not numerically trivial._
_Then the algebraic part of \(\mathcal{F}\) is induced by an equidimensional fibration._
### Acknowledgements
We would like to thank Jihao Liu and James M\({}^{c}\)Kernan for many useful discussions.
Both the authors are partially funded by EPSRC.
## 2. Preliminaries
All our schemes are Noetherian, pure dimensional and separated over an algebraically closed field \(K\) of characteristic zero. The results here hold equally well for algebraic spaces and for complex analytic varieties.
### Line and divisorial sheaves
Let \(X\) be a not necessarily reduced \(S_{2}\) scheme. A **line sheaf**\(L\) on \(X\) is a rank one \(S_{2}\) sheaf such that there exists a subscheme \(Z\subset X\) of codimension at least two such that \(L|_{X\setminus Z}\) is locally free. Given an integer \(n\), we define the line sheaf \(L^{[n]}\) to be \(i_{*}(L|_{X\setminus Z}^{\otimes n})\) where \(i\colon X\setminus Z\to X\) is the inclusion. We denote by \(\operatorname{Lsh}(X)\) the group of such sheaves and we define \(\operatorname{Lsh}(X)_{\mathbb{Q}}\coloneqq\operatorname{Lsh}(X)\otimes \mathbb{Q}\) to be the group of \(\mathbb{Q}\)-line sheaves.
If \(X\) is reduced, a **divisorial sheaf** on \(X\) is the data of a line sheaf \(L\) together with a choice of an embedding \(L\to K(X)\) and we denote the group of such sheaves by \(\operatorname{WSh}(X)\). We likewise define \(\mathbb{Q}\)-divisorial sheaves \(\operatorname{WSh}(X)_{\mathbb{Q}}\).
Consider a scheme \(X\) and a \(\mathbb{Q}\)-line sheaf \(L\). Let \(Z\subset X\) be a codimension two subscheme and let \(n>0\) be an integer such that \(L^{[n]}|_{X\setminus Z}\) is locally free on \(X\setminus Z\). Let \(D\) be an \(S_{2}\) scheme and let \(f\colon D\to X\) be a morphism such that \(f^{-1}(Z)\subset D\) is of codimension at least two. We define the divisorial pullback \(f^{w}L\) to be \(\frac{1}{n}j_{*}(f^{*}(L^{[n]}|_{X\setminus Z}))\) where \(j\colon D\setminus f^{-1}(Z)\to D\) is the inclusion. Note that \(f^{w}L\) is a \(\mathbb{Q}\)-line sheaf on \(D\). We can likewise define the restriction (and pullback) of a divisorial sheaf. If \(L\) is the sheaf defined by a \(\mathbb{Q}\)-Cartier divisor, then these notions all agree with the restriction and pullback of \(\mathbb{Q}\)-Cartier divisors. Similarly, if \(G\) is a prime divisor on \(X\) such that \(mG\) is Cartier on \(X\setminus Z\) for some positive integer \(m\) and no component of \(f(D)\) is contained in the support of \(G\), then we define \(f^{w}G\) to be \(\frac{1}{m}G^{\prime}\) where \(G^{\prime}\) is the closure
of \(f^{*}(mG|_{X\setminus Z})\) in \(D\). By linearity, we can extend the definition to any \(\mathbb{Q}\)-divisor on \(X\) which is \(\mathbb{Q}\)-Cartier on \(X\setminus Z\).
### \(G\)-sheaves
We refer to [11, Definition A.1] for the definition of a \(G\)-sheaf on a scheme \(X\). Note that in [11], the underlying scheme \(X\) is assumed to be a normal variety, but most of their results hold for general schemes (see also [12]). If \(p\colon X^{\prime}\to X\) is a finite Galois morphism between schemes with Galois group \(G\) and \(L^{\prime}\) is a coherent \(G\)-sheaf on \(X^{\prime}\), then \(p_{*}L^{\prime}\) is a \(G\)-sheaf on \(X\) and we denote by \((p_{*}L^{\prime})^{G}\) its associated sheaf of invariants (cf. [11, Definition A.2]).
### Integrable distributions and foliations
Let \(X\) be a not necessarily reduced \(S_{2}\) scheme. A rank \(r\)**integrable distribution**\(\mathcal{F}\) on \(X\) is the data of
1. a line sheaf \(L\); and
2. a Pfaff field, i.e., a morphism \(\phi\colon\Omega^{r}_{X}\to L\), satisfying the following integrability condition: in some neighbourhood \(U\) of the generic point of each irreducible component of \(X\) there exists a rank \(r\) sheaf \(E\) and a surjective morphism \(q\colon\Omega^{1}_{U}\to E\) such that the \(r\)-th wedge power of this morphism agrees with \(\phi|_{U}\) and its dual \(E^{*}\hookrightarrow T_{U}\) is closed under Lie bracket.
We define the **canonical class** of the integrable distribution \(\mathcal{F}\) to be any Weil divisor \(K_{\mathcal{F}}\) on \(X\) such that \(\mathcal{O}_{X}(K_{\mathcal{F}})\cong L\). A rank \(r\)**foliation** on \(X\) is a rank \(r\) integrable distribution on \(X\) whose Pfaff field \(\phi\) is such that coker \(\phi\) is supported in codimension at least two. Given a rank \(r\) integrable distribution \(\mathcal{F}\) on a normal scheme \(X\) we define the **singular locus** of \(\mathcal{F}\), denoted \(\operatorname{Sing}\mathcal{F}\), to be the co-support of the ideal sheaf defined by the image of the induced map \((\Omega^{r}_{X}\otimes\mathcal{O}(-K_{\mathcal{F}}))^{**}\to\mathcal{O}_{X}\).
For foliations on normal varieties, this definition agrees with the usual definition, see Section 2.5 below. Elsewhere in the literature there are differing definitions for foliations on general schemes, we refer to Section 3.5.3 for a discussion of this point.
Let \(X\) be a not necessarily reduced \(S_{2}\) scheme. We define a \(\mathbb{Q}\)**-integrable distribution** of rank \(r\) on \(X\) to be the data of a line sheaf \(L\) on \(X\) and a non-zero morphism for some \(m>0\)
\[\phi\colon(\Omega^{r}_{X})^{\otimes m}\to L\]
such that any generic point of \(X\) admits a neighbourhood \(U\) and an integrable distribution on \(U\) defined by the Pfaff field \(\phi_{0}\colon\Omega^{r}_{U}\to\mathcal{O}_{U}(K_{\mathcal{F}})\)
and such that \(\phi|_{U}=\phi_{0}^{\otimes m}\). We say that \(m\) is the **index** of the \(\mathbb{Q}\)-integrable distribution. We will refer to \(\mathcal{F}\) as the **associated integrable distribution**.
We make note of the following:
**Lemma 2.1**.: _Let \(X\) be an \(S_{2}\) scheme, let \(E\) be a coherent sheaf on \(X\) and let \(L_{1},L_{2}\) be line sheaves on \(X\) with morphisms \(\psi_{i}\colon E\to L_{i}\) such that_
1. \(\psi_{1}=\psi_{2}\) _at the generic points of_ \(X\)_; and_
2. \(\operatorname{coker}\,\psi_{1}\) _is supported in codimension at least two._
_Then there exists a non-zero morphism \(L_{1}\to L_{2}\). In particular, if \(X\) is normal then there exists a uniquely defined effective Weil divisor \(B\) such that \(L_{2}=L_{1}\otimes\mathcal{O}_{X}(B)\)._
Proof.: We may freely remove subschemes of codimension at least two from \(X\) and so we may assume that \(L_{1}\) and \(L_{2}\) are locally free and that \(\psi_{1}\) is surjective. Let \(Q\) be the kernel of \(\psi_{1}\). By item (1) we have that \(\psi_{2}(Q)\subset L_{2}\) is a torsion subsheaf, and is therefore identically zero. Our result then follows.
**Lemma 2.2**.: _Let \(X\) be an \(S_{2}\) scheme, let \(\mathcal{F}^{\circ}\) be an integrable distribution on \(X\) and let \(U\subset X\) be a dense open subset._
1. _If_ \(X\setminus U\) _is of codimension at least two then_ \(\mathcal{F}^{\circ}\) _is uniquely determined by its restriction to_ \(U\)_._
2. _If_ \(\mathcal{F}^{\circ}\) _is a foliation, then it is uniquely determined by its restriction to_ \(U\)_._
3. _Suppose that_ \(X\) _is normal and that_ \(\mathcal{G}_{U}\) _is a foliation on_ \(U\)_. Then there exists a unique foliation_ \(\mathcal{G}\) _on_ \(X\) _whose restriction to_ \(U\) _is_ \(\mathcal{G}_{U}\)_._
4. _Suppose that_ \(X\) _is normal. Then there exists a unique foliation_ \(\mathcal{F}\) _on_ \(X\) _which agrees with_ \(\mathcal{F}^{\circ}\) _at the generic point of_ \(X\)_. In particular, there exists a canonically defined effective divisor_ \(B\) _such that_ \(K_{\mathcal{F}}+B\sim K_{\mathcal{F}^{\circ}}\)_._
Proof.: All the items are easy consequences of Lemma 2.1.
In Item (4), we will refer to \(\mathcal{F}\) as the foliation induced by the integrable distribution \(\mathcal{F}^{\circ}\).
### Invariant subschemes
Given an \(S_{2}\) scheme \(X\) and a rank \(r\) integrable distribution \(\mathcal{F}\) on \(X\), we say that an irreducible subscheme \(W\subset X\) is \(\mathcal{F}\)**-invariant** (or simply **invariant** if the integrable distribution is understood) if \(K_{\mathcal{F}}\) is Cartier at the generic point of \(W\) and in a neighbourhood of the generic point of \(W\) there is a factorisation

\[\Omega^{r}_{X}|_{W}\to\Omega^{r}_{W}\to\mathcal{O}_{W}(K_{\mathcal{F}})\]

of the restriction \(\Omega^{r}_{X}|_{W}\to\mathcal{O}_{W}(K_{\mathcal{F}})\) of the Pfaff field through the natural surjection \(\Omega^{r}_{X}|_{W}\to\Omega^{r}_{W}\).
More generally, we say that a subscheme \(W\subset X\) is \(\mathcal{F}\)-invariant if each irreducible component is invariant.
Given an \(S_{2}\) scheme \(X\), an integrable distribution \(\mathcal{F}\) and an irreducible divisor \(D\) on \(X\), we define \(\epsilon(\mathcal{F},D)\coloneqq 0\) if \(D\) is \(\mathcal{F}\) invariant, and \(\epsilon(\mathcal{F},D)\coloneqq 1\) otherwise. When \(\mathcal{F}\) is clear from context we will write \(\epsilon(D)\) in place of \(\epsilon(\mathcal{F},D)\).
Given any \(\mathbb{Q}\)-divisor \(D\) on \(X\), we denote \(D_{\mathrm{inv}}\) to be the part of \(D\) supported on invariant divisors and \(D_{\mathrm{n-inv}}\coloneqq D-D_{\mathrm{inv}}\).
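For instance, take \(X=\mathbb{A}^{2}\) with coordinates \(x,y\) and let \(\mathcal{F}\) be the rank one foliation with \(T_{\mathcal{F}}=\mathcal{O}_{X}\cdot\partial_{x}\), so that the Pfaff field is

\[\Omega^{1}_{X}\to\mathcal{O}_{X},\qquad dx\mapsto 1,\quad dy\mapsto 0.\]

Its restriction to \(D=\{y=0\}\) kills \(\ker(\Omega^{1}_{X}|_{D}\to\Omega^{1}_{D})=\mathcal{O}_{D}\cdot dy\) and hence factors through \(\Omega^{1}_{D}\), so \(D\) is \(\mathcal{F}\)-invariant and \(\epsilon(D)=0\); on the other hand, it does not kill \(dx\) along \(D^{\prime}=\{x=0\}\), so \(D^{\prime}\) is not invariant and \(\epsilon(D^{\prime})=1\).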
### Foliations on normal varieties
When \(X\) is a normal scheme, the above definition of foliation is equivalent to the usual definition of a foliation \(\mathcal{F}\) in terms of a saturated subsheaf \(T_{\mathcal{F}}\subset T_{X}\) closed under Lie bracket. In this case, we may take \(U\coloneqq X\setminus(\operatorname{Sing}X\cup\operatorname{Sing}\mathcal{F})\), \(E\coloneqq T_{\mathcal{F}}^{*}|_{U}\) and \(L\coloneqq\mathcal{O}_{X}(K_{\mathcal{F}})\). To verify this equivalence we first note that by Lemma 2.2 it suffices to verify this equivalence away from a subvariety of codimension at least two and, in particular, we may replace \(X\) by \(U\). Given the saturated subsheaf \(T_{\mathcal{F}}\subset T_{X}\) we get a Pfaff field by considering the morphism
\[\Omega_{X}^{r}\to(\bigwedge^{r}T_{\mathcal{F}})^{*}\cong\mathcal{O}_{X}(K_{ \mathcal{F}})\]
where \(r\) is the rank of \(\mathcal{F}\).
Conversely, consider a Pfaff field \(\phi\colon\Omega_{X}^{r}\to L\) satisfying our integrability condition. By assumption, over a dense open subset \(U\) of \(X\), we have a surjective morphism \(\Omega_{U}^{1}\to E\) whose dual defines a foliation on \(U\), which extends uniquely to a foliation on \(X\).
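For instance, for \(X=\mathbb{A}^{2}\) and the radial foliation \(T_{\mathcal{F}}=\mathcal{O}_{X}\cdot(x\partial_{x}+y\partial_{y})\subset T_{X}\), the corresponding Pfaff field is

\[\Omega^{1}_{X}\to\mathcal{O}_{X}\cong\mathcal{O}_{X}(K_{\mathcal{F}}),\qquad dx\mapsto x,\quad dy\mapsto y,\]

so \(K_{\mathcal{F}}\sim 0\), the image of the induced map \((\Omega^{1}_{X}\otimes\mathcal{O}_{X}(-K_{\mathcal{F}}))^{**}\to\mathcal{O}_{X}\) is the ideal \((x,y)\) and \(\operatorname{Sing}\mathcal{F}\) is the origin.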
**Lemma 2.3**.: _Let \(X\) be a normal scheme, let \(\mathcal{F}\) be a rank \(r\) foliation on \(X\) and let \(W\subset X\) be an irreducible subscheme such that \(T_{\mathcal{F}}\) is locally free in a neighbourhood of the generic point of \(W\). Let \(I_{W}\) be the ideal of \(W\)._
1. _If_ \(\partial(I_{W})\subset I_{W}\) _for all local sections_ \(\partial\in T_{\mathcal{F}}\) _then_ \(W\) _is_ \(\mathcal{F}\)_-invariant._
2. _If_ \(W\) _is_ \(\mathcal{F}\)_-invariant and not contained in the singular locus of_ \(\mathcal{F}\) _then_ \(\partial(I_{W})\subset I_{W}\) _for all local sections_ \(\partial\in T_{\mathcal{F}}\)_._
Proof.: Everything is local about the generic point of \(W\), so we may freely replace \(X\) by a neighbourhood of the generic point of \(W\) and
therefore we may assume that \(T_{\mathcal{F}}\) is locally free. Let \(\partial_{1},\dots,\partial_{r}\) be generators of \(T_{\mathcal{F}}\).
We first prove item (1). Suppose that \(\partial_{i}(I_{W})\subset I_{W}\) for all \(i\) then \((\partial_{1}\wedge\dots\wedge\partial_{r})(df\wedge\beta)\) vanishes along \(W\) for any \(f\in I_{W}\) and any \((r-1)\)-form \(\beta\). In particular, \((dI_{W}\wedge\Omega_{X}^{r-1})|_{W}=\ker(\Omega_{X}^{r}|_{W}\to\Omega_{W}^{r})\) is contained in the kernel of \(\Omega_{X}^{r}|_{W}\to\mathcal{O}_{W}(K_{\mathcal{F}})\). This implies that in a neighbourhood of the generic point of W, the morphism factors through \(\Omega_{X}^{r}|_{W}\to\Omega_{W}^{r}\) and so \(W\) is \(\mathcal{F}\)-invariant.
We now prove item (2). Suppose that \(W\) is \(\mathcal{F}\)-invariant and not contained in \(\operatorname{Sing}\mathcal{F}\). Suppose for sake of contradiction that for some \(i\in\{1,\dots,r\}\) and some \(f\in I_{W}\) we have \(\partial_{i}(f)\notin I_{W}\). Without loss of generality we may assume that \(i=1\) and that \(\partial_{1}(f)\) is a unit. For \(i\geq 2\), by replacing \(\partial_{i}\) by \(\partial_{i}-\frac{\partial_{i}(f)}{\partial_{1}(f)}\partial_{1}\), we may freely assume that \(\partial_{i}(f)=0\).
Since \(\mathcal{F}\) is non-singular, we see that the morphism \(\Omega_{X}^{1}\to\mathcal{O}_{X}\) given by contraction with \(\partial_{i}\) is generically surjective, and so for all \(i\) there exists a \(1\)-form \(\alpha_{i}\) such that \(\partial_{i}(\alpha_{i})=1\). The image of \(df\wedge\alpha_{2}\wedge\dots\wedge\alpha_{r}\) under the Pfaff field corresponding to \(\mathcal{F}\) is \((df\wedge\alpha_{2}\wedge\dots\wedge\alpha_{r})(\partial_{1}\wedge\dots\wedge \partial_{r})=\partial_{1}(f)\) which does not vanish along \(W\). In particular, the kernel of \(\Omega_{X}^{r}|_{W}\to\Omega_{W}^{r}\) is not contained in the kernel of \(\Omega_{X}^{r}|_{W}\to\mathcal{O}_{W}(K_{\mathcal{F}})\), which contradicts our assumption that \(W\) is \(\mathcal{F}\)-invariant.
### Singularities of foliations from the perspective of the MMP
We refer to [10] for general notions of singularities coming from the MMP. We refer to [10, §2.3] for a recollection on the definition of foliation singularities from the perspective of the MMP. We say that a variety \(X\) is **potentially klt** if there exists a \(\mathbb{Q}\)-divisor \(\Gamma\geq 0\) such that \((X,\Gamma)\) is klt.
If \(D\) is a \(\mathbb{Q}\)-divisor on a normal variety \(X\) and \(\Sigma\) is a prime divisor in \(X\), then we denote by \(m_{\Sigma}D\) the coefficient of \(D\) along \(\Sigma\).
**Lemma 2.4**.: _Let \(X\) be a normal variety, let \(\mathcal{F}\) be a foliation on \(X\) and let \(\Delta\geq 0\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \((\mathcal{F},\Delta)\) is log canonical._
_Then no component of \(\Delta\) is \(\mathcal{F}\)-invariant._
Proof.: Let \(D\) be a component of \(\Delta\). The statement may be checked (formally) locally about a general point of \(D\) and so we may assume that \(\mathcal{F}\) is induced by a fibration \(X\to Z\), in which case the claim can be easily verified.
### Algebraically integrable foliations
A dominant map \(\sigma\colon X\dasharrow Y\) between normal varieties is called **almost holomorphic** if there exist dense Zariski open subsets \(U\subset X\) and \(V\subset Y\) such that the induced map \(\sigma|_{U}\colon U\to V\) is a proper morphism.
Let \(\sigma\colon Y\dasharrow X\) be a dominant map between normal varieties and let \(\mathcal{F}\) be a foliation of rank \(r\) on \(X\). We denote by \(\sigma^{-1}\mathcal{F}\) the **induced foliation** on \(Y\) (e.g. see [11, Section 3.2]). If \(T_{\mathcal{F}}=0\), i.e. if \(\mathcal{F}\) is the foliation by points on \(X\), then we refer to \(\sigma^{-1}\mathcal{F}\) as the **foliation induced by \(\sigma\)** and we denote it by \(T_{Y/X}\). In this case, the foliation \(\sigma^{-1}\mathcal{F}\) is called **algebraically integrable**.
Let \(f\colon X\to Z\) be a morphism between normal varieties and let \(\mathcal{F}\) be the induced foliation on \(X\). If \(f\) is equidimensional, then we define the **ramification divisor**\(R(f)\) of \(f\) as
\[R(f)=\sum_{D}(f^{*}D-f^{-1}(D))\]
where the sum runs through all the prime divisors of \(Z\). Note that, since \(f\) is equidimensional, the pullback \(f^{*}D\) is well defined even though \(D\) might not be \(\mathbb{Q}\)-Cartier. In this case, we have
\[K_{\mathcal{F}}\sim K_{X/Z}-R(f)\]
(e.g. see [11, Notation 2.7 and §2.9]).
Let \(X\) be a normal variety and let \(\mathcal{F}\) be a foliation on \(X\). Then \(\mathcal{F}\) is called **purely transcendental** if there is no positive dimensional algebraic subvariety passing through the general point of \(X\), which is tangent to \(\mathcal{F}\). In general, by [11, Definition 2.3] it follows that for any foliation \(\mathcal{F}\) on \(X\) there exists a dominant map \(\sigma\colon X\dasharrow Y\) and a purely transcendental foliation \(\mathcal{G}\) on \(Y\) such that \(\mathcal{F}=\sigma^{-1}\mathcal{G}\). Note that \(Y\) and \(\mathcal{G}\) are unique up to birational equivalence. The foliation \(\mathcal{H}\) induced by \(\sigma\) is called the **algebraic part of \(\mathcal{F}\)**.
## 3. Adjunction
### Lifting derivations on the normalisation
The goal of this Subsection is to prove the following:
**Proposition 3.1**.: _Let \(X\) be a reduced scheme and let \(n\colon\tilde{X}\to X\) be its normalisation. Suppose that \(L\) is a locally free sheaf of rank one on \(X\) and that we have a morphism \(\phi\colon(\Omega^{r}_{X})^{\otimes m}\to L\) for some \(r,m\geq 0\)._
_Then there is a natural morphism \(\tilde{\phi}\colon(\Omega^{r}_{\tilde{X}})^{\otimes m}\to n^{*}L\) which agrees with \(\phi\) at the generic point of \(\tilde{X}\)._
We follow closely the proofs of [10, Theorem 2.1.1] and [1, Proposition 4.5]. Given an integral domain \(A\) we denote by \(K(A)\) the field of fractions of \(A\). Let \(M\) be an \(A\)-module and let \(r,m\geq 0\). Then an \(A\)-linear map
\[\phi\colon(\Omega^{r}_{A})^{\otimes m}\to M\]
induces a derivation
\[\partial\colon K(A)^{\oplus rm}\to M\otimes_{A}K(A)\quad\text{such that}\quad \partial(A^{\oplus rm})\subset M.\]
We begin with the following two lemmas:
**Lemma 3.2**.: _Let \(A\) be a Noetherian integral \(K\)-algebra, let \(B\subset A\) be a Noetherian subalgebra and let \(\partial\colon B\to A\) be a derivation. Let \(A^{\prime}\) (resp. \(B^{\prime}\)) be the integral closure of \(A\) in \(K(A)\) (resp. of \(B\) in \(K(B)\))._
_Then \(\partial\) lifts to a derivation \(\partial^{\prime}\colon B^{\prime}\to A^{\prime}\)._
Proof.: The proof of [10, Theorem, §3] works equally well here. Indeed, as noted in [10, Footnote 2], the only thing that is needed in the proof is that the differential operator \(E\coloneqq e^{t\partial}\) defines an injective map \(E\colon K(B)[[t]]\to K(A)[[t]]\), which is immediate since \(B\) is a subalgebra of \(A\).
**Lemma 3.3**.: _Let \(X\) be a normal scheme and let \(E\) be a coherent sheaf on \(X\). Let \(m\) be a positive integer and let \(s\colon E^{\otimes m}\to\mathcal{O}_{X}\) be a morphism such that at the generic point \(\eta\) of \(X\) there exists a morphism \(t\colon E_{\eta}\to\mathcal{O}_{X,\eta}\) such that \(t^{\otimes m}=s|_{\eta}\)._
_Then there exists a cyclic Galois cover \(\sigma\colon\overline{X}\to X\) and a morphism \(\bar{t}\colon\sigma^{*}E\to\mathcal{O}_{\overline{X}}\) such that \(\bar{t}^{\otimes m}=\sigma^{*}s\)._
Proof.: By assumption there exists a rational function \(\phi\in K(X)\) such that \(s=\phi t^{\otimes m}\). We may conclude by taking \(\overline{X}\) to be the normalisation of \(X\) in \(K(X)(\sqrt[m]{\phi})\).
We call the morphism \(\bar{t}\) constructed in Lemma 3.3 the \(m\)**-th root** of \(s\).
Proof of Proposition 3.1.: The claim is local on \(X\) so we may freely assume that \(X=\operatorname{Spec}\,A\) is affine and \(L\cong\mathcal{O}_{X}\). Let \(A^{\prime}\) be the integral closure of \(A\) in \(K(A)\) and define \(X^{\prime}\coloneqq\operatorname{Spec}\,A^{\prime}\) and let \(n\colon X^{\prime}\to X\) be the normalisation morphism. Using the same argument as in the proofs of [1, Lemma 4.3 and Proposition 4.5], we may assume that \(A\) and \(A^{\prime}\) are complete one dimensional local rings.
Let us first suppose that \(r=1\). The map \(\phi\colon(\Omega^{1}_{X})^{\otimes m}\to\mathcal{O}_{X}\) induces a map \(\phi^{\prime}\colon(n^{*}\Omega^{1}_{X})^{\otimes m}\to\mathcal{O}_{X^{\prime}}\). Let \(\sigma\colon\overline{X}=\operatorname{Spec}\,\overline{A}\to X^{\prime}\) be the cover associated to \(\phi^{\prime}\) with Galois group \(G\) guaranteed by Lemma 3.3 and let \(\psi\colon\sigma^{*}n^{*}\Omega^{1}_{X}\to\mathcal{O}_{\overline{X}}\) be the \(m\)-th root of \(\phi\).
Observe that \(\psi\) corresponds to a derivation \(\partial_{\psi}\colon A\to\overline{A}\). By Lemma 3.2 this lifts to a derivation \(\partial^{\prime}_{\psi}\colon A^{\prime}\to\overline{A}\). This in turn implies that \(\psi\) lifts to a morphism \(\rho\colon\sigma^{*}\Omega^{1}_{X^{\prime}}\to\mathcal{O}_{\overline{X}}\). Finally note that \(\rho^{\otimes m}\) is \(G\)-invariant and so descends to a morphism \((\Omega^{1}_{X^{\prime}})^{\otimes m}\to\mathcal{O}_{X^{\prime}}\), which is precisely our required lifting of \(\phi\).
To prove the claim, by following the proof of [1, Proposition 4.5], we may proceed by induction on \(r\) and reduce the proof of the statement for general \(r\geq 1\) to the case \(r=1\), proven above.
### Construction of the different
**Lemma 3.4**.: _Let \(X\) be a normal scheme, let \(\mathcal{F}\) be a foliation of rank \(r\) on \(X\) and let \(\iota\colon D\hookrightarrow X\) be a reduced subscheme of codimension one. Let \(n\colon S\to D\) be the normalisation and suppose there exist_
1. _a subscheme_ \(Z\) _of_ \(X\) _such that_ \(Z\cap D\) _is of codimension at least two in_ \(D\)_; and_
2. \(a\) \(\mathbb{Q}\)_-divisor_ \(\Delta\geq 0\) _on_ \(X\) _which does not contain any component of_ \(D\) _in its support and such that for some sufficiently divisible positive integer_ \(m\) _we have that_ \(m(K_{\mathcal{F}}+\Delta)\) _and_ \(\epsilon(D)D\) _are Cartier on_ \(X\setminus Z\)_._
_Then, there exists a canonically defined \(\mathbb{Q}\)-integrable distribution of rank \(r-\epsilon(D)\) and index \(m\) on \(S\), given by a morphism_
\[\psi_{S}\colon(\Omega_{S}^{r-\epsilon(D)})^{\otimes m}\to n^{w}\mathcal{O}_{X} (m(K_{\mathcal{F}}+\Delta+\epsilon(D)D)).\]
Proof.: It suffices to prove the Lemma away from a subvariety of codimension at least two in \(D\), and so we may freely assume that for some sufficiently divisible positive integer \(m\), we have that \(m(K_{\mathcal{F}}+\Delta)\) and \(\epsilon(D)D\) are Cartier.
Taking the \(m\)-th tensor power of the Pfaff field defining \(\mathcal{F}\) and composing with the inclusion \(\mathcal{O}_{X}(mK_{\mathcal{F}})\to\mathcal{O}_{X}(m(K_{\mathcal{F}}+\Delta))\) we get a morphism \(\phi\colon(\Omega_{X}^{r})^{\otimes m}\to\mathcal{O}_{X}(m(K_{\mathcal{F}}+ \Delta))\).
Suppose first that \(D\) is \(\mathcal{F}\)-invariant. Let \(N\coloneqq\ker\,((\Omega_{X}^{r})^{\otimes m}|_{D}\to(\Omega_{D}^{r})^{ \otimes m})\) and let \(\phi|_{D}\) be the restriction of \(\phi\) to \(D\). By definition \(\phi|_{D}(N)\) vanishes at the generic point of \(D\), and since \(m(K_{\mathcal{F}}+\Delta)\) is Cartier and \(D\) is \(S_{2}\), it follows that \(\phi|_{D}(N)=0\). Therefore we have a morphism \(\psi\colon(\Omega_{D}^{r})^{\otimes m}\to\mathcal{O}_{D}(m(K_{\mathcal{F}}+ \Delta))\).
By Lemma 2.3, we have a commutative diagram in a neighbourhood \(U\) of the generic point of \(D\)
and the integrability condition follows immediately.
Now suppose that \(D\) is not \(\mathcal{F}\)-invariant. We define a morphism
\[\psi^{\prime}\colon(\Omega_{D}^{r-1})^{\otimes m}\to\mathcal{O}_{D}(m(K_{ \mathcal{F}}+\Delta+D))\]
by
\[\psi^{\prime}(\alpha_{1}\otimes\cdots\otimes\alpha_{m})\coloneqq\frac{\phi(df \wedge\tilde{\alpha}_{1}\otimes\cdots\otimes df\wedge\tilde{\alpha}_{m})}{f^{m}} |_{D}\]
for any local sections \(\alpha_{1},\ldots,\alpha_{m}\) of \(\Omega_{D}^{r-1}\), where \(f\) is a local equation of \(D\), \(\tilde{\alpha}_{i}\) is any local lift of \(\alpha_{i}\) to \(\Omega_{X}^{r-1}\), and \(\frac{\phi(df\wedge\tilde{\alpha}_{1}\otimes\cdots\otimes df\wedge\tilde{ \alpha}_{m})}{f^{m}}\) is considered as a section of \(\mathcal{O}_{X}(m(K_{\mathcal{F}}+\Delta+D))\). We claim that this morphism is independent of our choice of \(f\). Indeed, if \(f^{\prime}\) is another local equation of \(D\) then \(f^{\prime}=uf\) where \(u\) is a unit. We compute that
\[\frac{\phi(df^{\prime}\wedge\tilde{\alpha}_{1}\otimes\cdots\otimes df^{\prime }\wedge\tilde{\alpha}_{m})}{(f^{\prime})^{m}}=\frac{\phi(df\wedge\tilde{ \alpha}_{1}\otimes\cdots\otimes df\wedge\tilde{\alpha}_{m})}{f^{m}}+\frac{ \phi(\omega_{0})}{f^{m-1}},\]
where \(\omega_{0}\) is a local section of \(\Omega_{X}^{r}\). Observe that \(\frac{\phi(\omega_{0})}{f^{m-1}}\) is a section of \(\mathcal{O}_{X}(m(K_{\mathcal{F}}+\Delta)+(m-1)D)\), and so it vanishes along \(D\) when considered as a section of \(\mathcal{O}_{X}(m(K_{\mathcal{F}}+\Delta+D))\). Thus,
\[\frac{\phi(df^{\prime}\wedge\tilde{\alpha}_{1}\otimes\cdots\otimes df^{\prime }\wedge\tilde{\alpha}_{m})}{(f^{\prime})^{m}}|_{D}=\frac{\phi(df\wedge\tilde{ \alpha}_{1}\otimes\cdots\otimes df\wedge\tilde{\alpha}_{m})}{f^{m}}|_{D},\]
as required. Likewise, it is easy to check that \(\psi^{\prime}\) is independent of the choice of \(\tilde{\alpha}_{i}\).
Since \(D\) is not \(\mathcal{F}\)-invariant, by Lemma 2.3 the composition
\[\mathcal{O}_{U\cap D}(-D)\to\Omega_{U}^{1}|_{U\cap D}\to T_{\mathcal{F}}^{*}| _{U\cap D}\]
is non-zero. Let \(E\) be its cokernel. Then, the induced map
\[\Omega_{U\cap D}^{1}\to E\]
satisfies the integrability condition.
In either case, by Proposition 3.1 we have a lift of \(\psi\), respectively \(\psi^{\prime}\), to
\[\tilde{\psi}\colon(\Omega_{S}^{r-\epsilon(D)})^{\otimes m}\to n^{*}\mathcal{O }_{X}(m(K_{\mathcal{F}}+\Delta+\epsilon(D)D))\]
which necessarily satisfies our integrability condition and we may conclude.
**Remark 3.5**.: _Set-up as in Lemma 3.4. If in addition we have that \(D\) is \(S_{2}\), then using the same proof as in the Lemma, it follows that there exists a canonically defined \(\mathbb{Q}\)-integrable distribution of rank \(r-\epsilon(D)\) and index \(m\) on \(D\), given by a morphism_
\[\psi_{D}\colon(\Omega_{D}^{r-\epsilon(D)})^{\otimes m}\to\iota^{w}\mathcal{O} _{X}(m(K_{\mathcal{F}}+\Delta+\epsilon(D)D)).\]
**Proposition-Definition 3.6**.: _Let \(X\) be a normal scheme, let \(\mathcal{F}\) be a foliation of rank \(r\) on \(X\), and let \(\iota\colon D\hookrightarrow X\) be an integral subscheme of codimension one. Let \(n\colon S\to D\) be the normalisation. Suppose that there exist a subscheme \(Z\) of \(X\) such that \(Z\cap D\) is of codimension at least two in \(D\) and a \(\mathbb{Q}\)-divisor \(\Delta\geq 0\) on \(X\) which does not contain
\(D\) in its support and such that \(K_{\mathcal{F}}+\Delta\) and \(\epsilon(D)D\) are \(\mathbb{Q}\)-Cartier on \(X\setminus Z\)._
_Then, there exists a canonically defined_ **restricted foliation**_\(\mathcal{F}_{S}\) on \(S\), and a canonically defined \(\mathbb{Q}\)-divisor \(\operatorname{Diff}(\mathcal{F},\Delta)\geq 0\), called the_ **different**_, such that_
\[n^{w}(K_{\mathcal{F}}+\Delta+\epsilon(D)D)\sim_{\mathbb{Q}}K_{\mathcal{F}_{S}} +\operatorname{Diff}(\mathcal{F},\Delta).\]
_If \(\Delta=0\) then we denote \(\operatorname{Diff}(\mathcal{F},0)\) simply as \(\operatorname{Diff}(\mathcal{F})\)._
Proof.: It suffices to prove the Proposition away from a subvariety of codimension at least two in \(D\), and so we may freely assume that \(K_{\mathcal{F}}\) and \(\epsilon(D)D\) are \(\mathbb{Q}\)-Cartier.
We first construct the restricted foliation on \(S\). By Lemma 2.2 it suffices to define the restricted foliation \(\mathcal{F}_{S}\) at the generic point of \(S\). We may therefore assume that \(K_{\mathcal{F}}\) and \(D\) are both Cartier and therefore may apply Lemma 3.4 to produce \(\mathcal{F}_{S}\).
Let \(m\) be a sufficiently divisible positive integer such that \(m(K_{\mathcal{F}}+\Delta)\) and \(m\epsilon(D)D\) are Cartier. Then, by Lemma 3.4, we have a morphism at the generic point \(\eta\) of \(S\),
\[\psi_{\eta}\colon(\Omega_{S}^{r-\epsilon(D)})^{\otimes m}\to\mathcal{O}_{S}(m (K_{\mathcal{F}}+\Delta+\epsilon(D)D))\]
which agrees with \((\Omega_{S}^{r-\epsilon(D)})|_{\eta}^{\otimes m}\to\mathcal{O}_{\eta}(mK_{ \mathcal{F}_{S}})\). If we can show that, after possibly replacing \(m\) by a larger multiple, \(\psi_{\eta}\) extends to a morphism on all of \(S\), call it \(\psi_{S}\), then by Lemma 2.1 there exists an effective Weil divisor \(B\) such that
\[mK_{\mathcal{F}_{S}}+B\sim m(K_{\mathcal{F}}+\Delta+\epsilon(D)D)|_{S}.\]
Thus, it is enough to define \(\operatorname{Diff}(\mathcal{F},\Delta)\coloneqq\frac{1}{m}B\).
Note that if \(D\) is \(\mathcal{F}\)-invariant, then the existence of \(\psi_{S}\) is guaranteed by Lemma 3.4. Thus, we may assume that \(D\) is not \(\mathcal{F}\)-invariant. To check that \(\psi_{\eta}\) extends as a morphism, it suffices to do so locally. Let \(P\in D\) be a closed point. Then there exists a quasi-etale cyclic cover \(q\colon V\to U\) with Galois group \(G\), where \(U\) is a neighbourhood of \(P\) in \(X\), and such that \(q^{*}D\) is a Cartier divisor. Let \(m\) be the order of \(G\). After possibly replacing \(q\) by a higher cover, we may assume that \(m\) does not depend on \(P\in D\). Let \(\mathcal{F}_{V}\coloneqq q^{-1}\mathcal{F}|_{U}\) be the foliation induced on \(V\), let \(D_{V}\coloneqq q^{-1}(D)\) and let \(\Delta_{V}\) be a \(\mathbb{Q}\)-divisor on \(V\) such that \(K_{\mathcal{F}_{V}}+\Delta_{V}=q^{*}(K_{\mathcal{F}}+\Delta)\). By assumption, we may assume that \(q\) is etale in a neighbourhood of any general point of \(U\cap\operatorname{Supp}\Delta\). Thus, \(\Delta_{V}\geq 0\). Let \(\nu\colon S_{V}\to D_{V}\) be the normalisation of \(D_{V}\). Then the action of \(G\) lifts to \(S_{V}\) and the induced morphism \(p\colon S_{V}\to S_{U}\coloneqq n^{-1}(U\cap D)\) is the quotient by \(G\). Note that \(D_{V}\) is not \(\mathcal{F}_{V}\)-invariant.
Since \(m(K_{\mathcal{F}_{V}}+\Delta_{V})\) and \(D_{V}\) are Cartier on \(V\), by Lemma 3.4 there exists a \(\mathbb{Q}\)-integrable distribution of rank \(r-1\) and index \(m\) on \(S_{V}\), given by a \(G\)-invariant morphism
\[\psi_{S_{V}}\colon(\Omega_{S_{V}}^{r-1})^{\otimes m}\to\nu^{*}\mathcal{O}_{D_{ V}}(m(q^{*}(K_{\mathcal{F}}+\Delta+D))).\]
Let \(\mathcal{L}\) be its image and let \(\mathcal{L}^{\prime}=(p_{*}\mathcal{L})^{G}\) be the \(G\)-invariant push-forward of \(\mathcal{L}\) on \(S_{U}\) (cf. Section 2.2). Then, it follows from our construction that \((\mathcal{L}^{\prime})^{\otimes m}\) is a subsheaf of \(\mathcal{O}_{S_{U}}(m(K_{\mathcal{F}}+\Delta+D))\) and that there exists a natural morphism
\[\psi_{S_{U}}\colon(\Omega_{S_{U}}^{r-1})^{\otimes m}\to(\mathcal{L}^{\prime}) ^{\otimes m}\]
which coincides with \(\psi_{\eta}\) at the generic point \(\eta\) of \(S\). Thus, our claim follows.
**Remark 3.7**.: _Set-up as in Proposition-Definition 3.6. If in addition we have that \(D\) is \(S_{2}\) then there exists a \(\mathbb{Q}\)-integrable distribution of rank \(r-\epsilon(D)\) and index \(m\) on \(D\), given by the morphism_
\[\psi_{D}\colon(\Omega_{D}^{r-\epsilon(D)})^{\otimes m}\to\iota^{w}\mathcal{O} _{X}(m(K_{\mathcal{F}}+\Delta+\epsilon(D)D))\]
_and whose associated integrable distribution on \(D\) coincides with the restricted foliation on \(S\) at any generic point of \(D\). Indeed, using the same construction as in Proposition-Definition 3.6, by Lemma 3.4 and Remark 3.5 we may assume that \(D\) is not \(\mathcal{F}\)-invariant and that we have a morphism_
\[(\Omega_{D_{V}}^{r-\epsilon(D)})^{\otimes m}\to\mathcal{O}_{D_{V}}(q^{*}(m(K_ {\mathcal{F}}+\Delta+D))).\]
_Taking the \(G\)-invariant pushforward of this morphism and composing with the natural morphism \((\Omega_{D}^{r-\epsilon(D)})^{\otimes m}\to(q_{*}(\Omega_{D_{V}}^{r-\epsilon( D)})^{\otimes m})^{G}\) gives a morphism_
\[(\Omega_{D}^{r-\epsilon(D)})^{\otimes m}\to q_{*}(\mathcal{O}_{D_{V}}(q^{*}(m( K_{\mathcal{F}}+\Delta+D))))^{G}.\]
_By construction of the index one cover, e.g.,[12, Definition 2.52], we have that \(q_{*}(\mathcal{O}_{D_{V}}(q^{*}(m(K_{\mathcal{F}}+\Delta+D))))^{G}\) is a line sheaf, and is therefore isomorphic to \(\iota^{w}(\mathcal{O}_{U}(m(K_{\mathcal{F}}+\Delta+D)))\), and so we may conclude._
**Remark 3.8**.: _Set-up as in Proposition-Definition 3.6. If in addition we have that \(K_{\mathcal{F}}+\Delta\) and \(\epsilon(D)D\) are \(\mathbb{Q}\)-Cartier then_
\[n^{*}(K_{\mathcal{F}}+\Delta+\epsilon(D)D)\sim_{\mathbb{Q}}K_{\mathcal{F}_{S}}+ \operatorname{Diff}(\mathcal{F},\Delta).\]
**Remark 3.9**.: _Set-up as in Proposition-Definition 3.6. Let \(B\geq 0\) be a \(\mathbb{Q}\)-divisor which does not contain \(D\) in its support and such that \(B\) is
\(\mathbb{Q}\)-Cartier on \(X\setminus Z\). Then, from the construction it follows immediately that_
\[\operatorname{Diff}(\mathcal{F},\Delta+B)=\operatorname{Diff}(\mathcal{F},\Delta) +n^{w}B.\]
**Remark 3.10**.: _Set-up as in Proposition-Definition 3.6. We can compute the different using resolutions. Indeed, suppose that \(\pi\colon X^{\prime}\to X\) is a log resolution of \((X,D+\operatorname{Supp}\Delta)\). Note that \(\pi\) is not necessarily a reduction of singularities of \(\mathcal{F}\). We may write_
\[K_{\mathcal{F}^{\prime}}+\Delta^{\prime}+\epsilon(D)D^{\prime}+E=\pi^{*}(K_{ \mathcal{F}}+\Delta+\epsilon(D)D)\]
_where \(E\) is \(\pi\)-exceptional, \(\mathcal{F}^{\prime}\coloneqq\pi^{-1}\mathcal{F}\), \(D^{\prime}=\pi_{*}^{-1}D\) and \(\Delta^{\prime}=\pi_{*}^{-1}\Delta\). Let \(S\) be the normalisation of \(D\) and let \(\mu\colon D^{\prime}\to S\) be the induced morphism._
_Then_
\[\mu_{*}(\operatorname{Diff}(\mathcal{F}^{\prime},\Delta^{\prime})+E|_{D^{ \prime}})=\operatorname{Diff}(\mathcal{F},\Delta).\]
In the case of higher codimension invariant centres, we also have a sub-adjunction statement:
**Proposition-Definition 3.11**.: _Let \(X\) be a normal scheme, let \(\mathcal{F}\) be a foliation of rank \(r\) on \(X\), and let \(\iota\colon D\to X\) be a subscheme. Let \(n\colon S\to D\) be the normalisation._
_Suppose that_
1. _there exist a_ \(\mathbb{Q}\)_-divisor_ \(\Delta\geq 0\) _and a subscheme_ \(Z\) _of_ \(X\) _such that_ \(Z\cap D\) _is of codimension at least two in_ \(D\) _and_ \(K_{\mathcal{F}}+\Delta\) _is_ \(\mathbb{Q}\)_-Cartier on_ \(X\setminus Z\)_;_
2. \(D\) _is not contained in_ \(\operatorname{Supp}\Delta\cup\operatorname{Sing}\mathcal{F}\)_; and_
3. \(D\) _is_ \(\mathcal{F}\)_-invariant._
_Then there exists a canonically defined_ **restricted foliation**_\(\mathcal{F}_{S}\) _on_ \(S\)_, and a canonically defined_ \(\mathbb{Q}\)_-divisor_ \(\operatorname{Diff}(\mathcal{F},\Delta)\geq 0\)_, called the_ **different**_, such that_
\[n^{w}(K_{\mathcal{F}}+\Delta)\sim_{\mathbb{Q}}K_{\mathcal{F}_{S}}+ \operatorname{Diff}(\mathcal{F},\Delta).\]
Proof.: Let \(T\) denote the union of the non-Cartier locus of \(K_{\mathcal{F}}\), the support of \(\Delta\) and the singular locus of \(\mathcal{F}\). By assumption \(U\coloneqq D\setminus(D\cap T)\) is a dense open subscheme of \(D\). Arguing as in Lemma 3.4 we get a restricted foliation \(\mathcal{F}_{U}\) on \(U\), which by [1, Proposition 4.5] and Lemma 2.2 extends to a foliation of rank \(r\) on \(S\).
The construction of \(\operatorname{Diff}(\mathcal{F},\Delta)\) proceeds in an analogous way to the construction in the proof of Proposition-Definition 3.6.
### Calculation of the different in some special cases
**Lemma 3.12**.: _Let \(X\) be a normal scheme, let \(\mathcal{F}\) be a foliation on \(X\) and let \(D\subset X\) be a reduced subscheme of codimension one which is \(\mathcal{F}\)-invariant. Let \(n\colon S\to D\) be its normalisation. Suppose that \(T_{\mathcal{F}}\) is locally free in a neighbourhood of \(D\). Let \(\mathcal{F}_{S}^{\circ}\) be the integrable distribution induced on \(S\) (cf. Lemma 3.4). Let \(W\) be an \(\mathcal{F}\)-invariant subvariety of \(X\) such that \(D\cap W\) is not contained in \(\operatorname{Sing}\mathcal{F}\)._
_Then \(n^{-1}(W)\) is \(\mathcal{F}_{S}^{\circ}\)-invariant._
Proof.: The problem is local about any point of \(D\), so we may shrink about any point of \(D\cap W\). Let \(\partial_{1},\dots,\partial_{r}\) be generators of \(T_{\mathcal{F}}\).
By construction, \(\partial_{i}\) lifts to a vector field \(\tilde{\partial}_{i}\) on \(S\), which satisfies the equality \(n^{*}\partial_{i}(f)=\tilde{\partial}_{i}(n^{*}f)\) for any \(i=1,\dots,r\) and any \(f\in\mathcal{O}_{X}\). Moreover, \(T_{\mathcal{F}_{S}^{\circ}}\) is generated by \(\tilde{\partial}_{1},\dots,\tilde{\partial}_{r}\). Thus, our claim follows from Lemma 2.3 and by observing that \(I_{n^{-1}(W)}=n^{*}I_{W}\).
The following proposition computes the different in some special cases:
**Proposition 3.13**.: _Let \(X\) be a normal scheme, let \(\mathcal{F}\) be a foliation of rank \(r\) on \(X\), let \(D\subset X\) be a reduced subscheme of codimension one such that \(K_{\mathcal{F}}\) and \(\epsilon(D)D\) are \(\mathbb{Q}\)-Cartier. Let \(n\colon S\to D\) be the normalisation and let \(P\) be a prime divisor on \(S\)._
_Then the following hold:_
1. _If_ \(n(P)\) _is contained in_ \(\operatorname{Sing}\mathcal{F}\) _then_ \(m_{P}\mathrm{Diff}(\mathcal{F})>0\)_._
2. _Suppose that_ \(D\) _is Cartier,_ \(\epsilon(D)=1\) _and that_ \(T_{\mathcal{F}}\) _is locally free and generated by_ \(\partial_{1},\dots,\partial_{r}\)_. If_ \(f\) _is a local parameter for_ \(D\) _and_ \((\partial_{1}(f),\dots,\partial_{r}(f))\subset I_{n(P)}\) _then_ \(m_{P}\mathrm{Diff}(\mathcal{F})>0\)_. Moreover,_ \(m_{P}\mathrm{Diff}(\mathcal{F})\) _is an integer. Conversely, suppose that_ \(X\) _and_ \(D\) _are smooth at the generic point of_ \(n(P)\) _and that_ \(m_{P}\mathrm{Diff}(\mathcal{F})>0\)_. Then_ \((\partial_{1}(f),\dots,\partial_{r}(f))\subset I_{n(P)}\)_._
3. _Suppose that_ \(\epsilon(D)=0\) _and that_ \(n(P)\) _is not contained in_ \(\operatorname{Sing}\mathcal{F}\)_. Then_ \(m_{P}\mathrm{Diff}(\mathcal{F})=\frac{m-1}{m}\epsilon(\mathcal{F}_{S},P)\) _where_ \(m\) _is a positive integer that divides the Cartier index of_ \(K_{\mathcal{F}}\) _at the generic point of_ \(n(P)\)_. If the index one cover associated to_ \(K_{\mathcal{F}}\) _is smooth in a neighbourhood of the preimage of the generic point of_ \(n(P)\)_, then_ \(m\) _equals the Cartier index of_ \(K_{\mathcal{F}}\) _at the generic point of_ \(n(P)\)_._
4. _Suppose that_ \(\epsilon(D)=1\) _and that_ \(X\) _has quotient singularities in a neighbourhood of the generic point of_ \(n(P)\)_. Then_ \(m_{P}\mathrm{Diff}(\mathcal{F})\) _is of the form_ \(\frac{a+\epsilon(\mathcal{F}_{S},P)(m-1)}{m}\) _where_ \(a\geq 0\) _is an integer and where_ \(m\) _divides the Cartier index of_ \(K_{\mathcal{F}}\) _at the generic point of_ \(n(P)\)_._
Proof.: All these assertions are local about the generic point of \(n(P)\), so at any point we may freely replace \(X\) by a neighbourhood of the generic point of \(n(P)\).
Proof of item (1). Let us first suppose that \(K_{\mathcal{F}}\) and \(\epsilon(D)D\) are Cartier in a neighbourhood of \(P\). By assumption, the morphism \(\phi\colon\Omega_{X}^{r}\to\mathcal{O}_{X}(K_{\mathcal{F}})\) takes values in \(I\mathcal{O}_{X}(K_{\mathcal{F}})\) where \(I\) is an ideal sheaf, whose co-support contains \(n(P)\). Let \(m\) be the positive integer such that \(mP\) is the divisorial part of the scheme defined by \(n^{*}I\).
If \(\epsilon(D)=0\) then the lift of the restriction of \(\phi\) to \(D\) factors through
\[\Omega_{S}^{r}\to\mathcal{O}_{S}(n^{*}K_{\mathcal{F}}-mP).\]
If \(\epsilon(D)=1\) then, as in the proof of Lemma 3.4, we may define \(\tilde{\psi}\colon\Omega_{X}^{r-1}\to\mathcal{O}_{X}(K_{\mathcal{F}}+D)\) by \(\tilde{\psi}\coloneqq\frac{\phi(df\wedge\cdot)}{f}\) for some choice \(f\) of a local parameter for \(D\). Then \(\tilde{\psi}\) still factors through \(I\mathcal{O}_{X}(K_{\mathcal{F}}+D)\) and so the lift of the restriction to \(D\) factors through
\[\Omega_{S}^{r-1}\to\mathcal{O}_{S}(n^{*}(K_{\mathcal{F}}+D)-mP).\]
In both cases, we have that \(m_{P}\mathrm{Diff}(\mathcal{F})=m\), as required.
We now consider the general case. Let \(q\colon V\to U\) be a quasi-etale cyclic cover, where \(U\) is a neighbourhood of the generic point of \(P\) in \(X\), and such that \(q^{*}K_{\mathcal{F}}\) and \(q^{*}(\epsilon(D)D)\) are Cartier divisors. Let \(\mathcal{F}^{\prime}=q^{-1}\mathcal{F}\). [11, Corollary 5.14] guarantees that \(P\) is contained in \(\mathrm{Sing}\,\mathcal{F}\) if and only if \(P^{\prime}\coloneqq q^{-1}(P)\) is contained in \(\mathrm{Sing}\,\mathcal{F}^{\prime}\). Let \(S^{\prime}\) denote the normalisation of \(q^{-1}(D)\) and let \(\mathcal{F}_{S^{\prime}}\) and \(\mathrm{Diff}(\mathcal{F}^{\prime})\) be respectively the restricted foliation and the different associated to \(\mathcal{F}^{\prime}\) on \(S^{\prime}\). We have a finite morphism \(p\colon S^{\prime}\to S\) and we have that
\[K_{\mathcal{F}_{S^{\prime}}}+\mathrm{Diff}(\mathcal{F}^{\prime})=p^{*}(K_{ \mathcal{F}_{S}}+\mathrm{Diff}(\mathcal{F})).\]
Let \(e\) be the ramification index of \(p\) along \(P^{\prime}\). By applying [11, Lemma 3.4] (see also [13, Proposition 2.2]) we get an equality
\[m_{P^{\prime}}\mathrm{Diff}(\mathcal{F}^{\prime})=em_{P}\mathrm{Diff}(\mathcal{ F})-\epsilon(\mathcal{F}_{S},P)(e-1). \tag{1}\]
Since \(m_{P^{\prime}}\mathrm{Diff}(\mathcal{F}^{\prime})\geq 1\) this implies that \(m_{P}\mathrm{Diff}(\mathcal{F})>0\).
Proof of item (2). In this case, the Pfaff field \(\phi\colon\Omega_{X}^{r}\to\mathcal{O}_{X}(K_{\mathcal{F}})\) associated to \(\mathcal{F}\) is given explicitly by \(\omega\mapsto\omega(\partial_{1}\wedge\cdots\wedge\partial_{r})\) where \(\omega\) is a local section of \(\Omega_{X}^{r}\). It follows that if \(\eta\) is any local section of \(\Omega_{X}^{r-1}\) then
\[\phi(df\wedge\eta)=\sum_{i=1}^{r}(-1)^{i+1}\partial_{i}(f)\eta(\partial_{1} \wedge\cdots\wedge\hat{\partial}_{i}\wedge\cdots\wedge\partial_{r}).\]
So if \(n^{*}(\partial_{1}(f),\dots,\partial_{r}(f))\subset I_{P}\) we see that the restricted Pfaff field \(\psi^{\prime}\colon\Omega_{S}^{r-1}\to n^{*}(\mathcal{O}_{X}(K_{\mathcal{F}}+D))\) vanishes along \(P\). Since \(K_{\mathcal{F}}\) is Cartier, it follows that \(m_{P}\mathrm{Diff}(\mathcal{F})\) is an integer.
Conversely, suppose that \(D\) and \(X\) are smooth at the generic point of \(P\) and there exists an \(i\) such that \(\partial_{i}(f)\notin I_{P}\). Up to relabelling we may take \(i=1\) and after localising about the generic point of \(P\) we may assume \(\partial_{1}(f)\) is a unit. For \(j\geq 2\), we define
\[\partial_{j}^{\prime}\coloneqq\partial_{j}-\frac{\partial_{j}(f)}{\partial_{1 }(f)}\partial_{1}.\]
We have that \(\partial_{1},\partial_{2}^{\prime},\dots,\partial_{r}^{\prime}\) generate \(T_{\mathcal{F}}\) in a neighbourhood of the generic point of \(P\) and so, up to replacing \(\partial_{j}\) by \(\partial_{j}^{\prime}\), we may freely assume that \(\partial_{j}(f)=0\) for \(j\geq 2\). A direct calculation as above shows that the restricted Pfaff field \(\psi^{\prime}\colon\Omega_{S}^{r-1}\to n^{*}\mathcal{O}_{X}(K_{\mathcal{F}}+D)\) does not vanish along \(P\), as required.
Proof of item (3). Suppose for the moment that \(K_{\mathcal{F}}\) is Cartier. In this case, the natural map \(\Omega_{D}^{r}\to\mathcal{O}_{D}(K_{\mathcal{F}})\) vanishes along \(P\) if and only if \(\Omega_{X}^{r}\to\mathcal{O}_{X}(K_{\mathcal{F}})\) vanishes along \(P\), and so \(m_{P}\mathrm{Diff}(\mathcal{F})=0\) as required.
We now handle the case where \(K_{\mathcal{F}}\) is \(\mathbb{Q}\)-Cartier. After possibly shrinking \(X\), we consider the index one cover \(q\colon X^{\prime}\to X\) associated to \(K_{\mathcal{F}}\) and let \(S^{\prime}\) be the normalisation of \(q^{-1}(D)\). Note that the ramification index of \(S^{\prime}\to S\) along \(P^{\prime}\coloneqq q^{-1}(P)\) divides the Cartier index of \(K_{\mathcal{F}}\) and that if \(\mathcal{F}^{\prime}\coloneqq q^{-1}\mathcal{F}\) then [11, Corollary 5.14] implies that \(P^{\prime}\) is not contained in \(\mathrm{Sing}\,\mathcal{F}^{\prime}\). We may then argue as in the proof of Item (1) and use Equation (1) to conclude.
To check the last claim, it suffices to consider the case where \(\epsilon(\mathcal{F}_{S},P)=1\). After possibly replacing \(X\) by a neighbourhood of the generic point of \(P\), we may assume that the index one cover \(q\colon X^{\prime}\to X\) associated to \(K_{\mathcal{F}}\) is such that \(X^{\prime}\) is smooth. Let \(m\) be the Cartier index of \(K_{\mathcal{F}}\). Since \(T_{\mathcal{F}^{\prime}}\) is reflexive, by [10, Corollary 1.4] it is locally free away from a subset of codimension at least three in \(X^{\prime}\). Thus, we may assume that \(T_{\mathcal{F}^{\prime}}\) is locally free. Let \(P^{\prime}\coloneqq q^{-1}(P)\).
We claim that \(D^{\prime}\coloneqq q^{-1}(D)\) is normal and irreducible. Assuming the claim we see that the ramification index of the induced morphism \(D^{\prime}\to D\) along \(P^{\prime}\) is \(m\), and we may conclude.
To prove the claim, first note that \(D^{\prime}\) is connected. Suppose that \(D^{\prime}\) is not smooth in a neighbourhood of \(P^{\prime}\). By [13, Lemma 2.6], \(\mathrm{Sing}\,D^{\prime}\) is \(\mathcal{F}^{\prime}\)-invariant. By Lemma 3.12 we see that \(P^{\prime}\) is invariant under the integrable distribution \(\mathcal{F}_{S^{\prime}}^{\circ}\) induced on \(S^{\prime}\), where \(n^{\prime}\colon S^{\prime}\to D^{\prime}\) is the normalisation. Since we assumed that \(P\) is not \(\mathcal{F}_{S}\)-invariant, it follows that every local section of \(T_{\mathcal{F}_{S^{\prime}}^{\circ}}\) must vanish along \(P^{\prime}\), which in turn
implies that every local section of \(T_{\mathcal{F}^{\prime}}\) must vanish along \(n^{\prime}(P^{\prime})\) and, in particular, \(n^{\prime}(P^{\prime})\) is contained in \(\operatorname{Sing}\mathcal{F}^{\prime}\). [11, Corollary 5.14] implies that \(n(P)\) is contained in \(\operatorname{Sing}\mathcal{F}\), contrary to our hypothesis.
Proof of item (4). As in the proof of item (1), using Equation (1), we may freely replace \(X\) by the index one cover associated to \(K_{\mathcal{F}}\), and so may assume that \(X\) is smooth. By [10, Corollary 1.4] we may assume that \(T_{\mathcal{F}}\) is locally free. We may then conclude by Item (2).
### Adjunction of singularities
Let \((A,\mathfrak{m})\) be a regular local ring containing a field \(k\) of characteristic zero, and let \(\partial\) be a derivation of \(A\) such that \(\partial(\mathfrak{m})\subset\mathfrak{m}\). Let \(K\subset A\) be a quasi-coefficient field, i.e., a field \(K\subset A\) such that \(A/\mathfrak{m}\) is formally etale over \(K\) (e.g. see [12, Theorem 3]).
Since \(A\) is regular the exact sequence
\[0\to\Omega_{K/k}\otimes A\to\Omega_{A/k}\to\Omega_{A/K}\to 0\]
splits. It follows that any \(k\)-linear derivation \(\partial\colon A\to A\) may be written as \(\partial=\partial_{K}+\delta\) where \(\partial_{K}\colon A\to A\) is a \(K\)-linear derivation and \(\delta\) is induced by a \(k\)-linear derivation \(K\to A\).
We define the linear part \(\partial_{0}\) of \(\partial\) to be the \(A/\mathfrak{m}\)-linear map \(\mathfrak{m}/\mathfrak{m}^{2}\to\mathfrak{m}/\mathfrak{m}^{2}\) induced by \(\partial_{K}\).
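As a simple illustration (with a hypothetical derivation chosen only for this purpose, not taken from the argument below), take \(A=K[[x,y]]\) and the \(K\)-linear derivation
\[\partial=(2x+y^{2})\partial_{x}+3y\partial_{y},\]
which satisfies \(\partial(\mathfrak{m})\subset\mathfrak{m}\). Computing modulo \(\mathfrak{m}^{2}\) gives \(\partial(x)\equiv 2x\) and \(\partial(y)\equiv 3y\), so the linear part \(\partial_{0}\) acts on \(\mathfrak{m}/\mathfrak{m}^{2}\) (with basis \(\bar{x},\bar{y}\)) as \(\operatorname{diag}(2,3)\), which is non-nilpotent; the quadratic term \(y^{2}\partial_{x}\) maps \(\mathfrak{m}\) into \(\mathfrak{m}^{2}\) and so does not contribute to \(\partial_{0}\).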
**Lemma 3.14**.: _Let \(X\) be a regular two dimensional scheme over a field \(K\) of characteristic zero. Let \(\mathcal{F}\) be a rank one integrable distribution on \(X\)._
_Then_
1. _there exists a birational morphism_ \(p\colon X^{\prime}\to X\) _where_ \(X^{\prime}\) _is a regular scheme such that, if_ \(\mathcal{F}^{\prime}\coloneqq p^{-1}\mathcal{F}\) _then for all_ \(x\in\operatorname{Sing}\mathcal{F}^{\prime}\)_, the linear part of_ \(\mathcal{F}^{\prime}\) _at_ \(x\) _is non-nilpotent; and_
2. _if_ \(C\subset X\) _is a reduced divisor which is not_ \(\mathcal{F}\)_-invariant then there exists a birational morphism_ \(p\colon X^{\prime}\to X\) _where_ \(X^{\prime}\) _is a regular scheme such that if_ \(C^{\prime}\coloneqq p_{*}^{-1}C\) _and_ \(\mathcal{F}^{\prime}=p^{-1}\mathcal{F}\) _then for all points_ \(c\in C^{\prime}\) _if_ \(\partial\) _is a generator of_ \(T_{\mathcal{F}^{\prime}}\) _near_ \(c\) _and_ \(f\) _is a local equation for_ \(C^{\prime}\) _near_ \(c\) _then_ \(\partial(f)\) _is a unit around_ \(c\)_._
Proof.: Since we are in dimension two, the only possible centres for our blow ups are closed points and therefore the problem is local about any point of \(X\). We may then, without loss of generality, assume that \(x\in X=\operatorname{Spec}A\) where \((A,\mathfrak{m})\) is a local ring with a system of parameters \(x,y\).
Let us first observe that the second claim of the Lemma is an immediate consequence of the first part applied to the vector field \(f\partial\) where
\(f\) is a local equation for \(C\) and \(\partial\) is a local generator of \(T_{\mathcal{F}}\). So, it remains to prove the first claim.
Let \(\hat{A}\) be the completion of \(A\) along \(\mathfrak{m}\). We may freely replace \(A\) by \(\hat{A}\), and so may assume that \(A\) is a complete local ring. By the Cohen structure theorem \(A\cong K[[x,y]]\). Next, let us write \(\partial=\partial_{K}+\delta\) where \(\partial_{K}\) is a \(K\)-linear derivation \(A\to A\) and \(\delta\) is induced by a \(k\)-linear derivation \(K\to A\). Since the linear parts of \(\partial\) and \(\partial_{K}\) are the same, we may freely replace \(\partial\) by \(\partial_{K}\) and so may assume that \(\partial\) is \(K\)-linear.
Let \(\bar{K}\) be an algebraic closure of \(K\). By [10] there exists a modification \(\rho\colon Y^{\prime}\to\operatorname{Spec}A\otimes_{K}\bar{K}\) satisfying the required properties which may also be defined over some Galois extension \(L/K\) with Galois group \(G\). Perhaps replacing \(Y^{\prime}\) by a higher model we may assume that \(\rho\) is \(G\)-equivariant. Since the non-nilpotence of the linear part of a derivation is unchanged by extending the base field we see that \(Y^{\prime}/G\to\operatorname{Spec}A\) is our required modification.
**Lemma 3.15**.: _Let \(X\) be a smooth variety, let \(\mathcal{F}\) be a foliation of rank \(r\) on \(X\) and let \(D\) be a smooth divisor such that \(D\) is not \(\mathcal{F}\)-invariant. Let \(Z\subset D\) be a codimension one subvariety in \(D\)._
_Then there exists a log resolution \(\mu\colon X^{\prime}\to X\) of \((X,D)\) such that if we write \((K_{\mathcal{F}^{\prime}}+D^{\prime})|_{D^{\prime}}=K_{\mathcal{F}_{D^{\prime}}}+\operatorname{Diff}(\mathcal{F}^{\prime})\) where \(D^{\prime}=\mu_{*}^{-1}D\) and \(\mathcal{F}^{\prime}=\mu^{-1}\mathcal{F}\) then \(Z^{\prime}\) is not contained in the support of \(\operatorname{Diff}(\mathcal{F}^{\prime})\), where \(Z^{\prime}\subset D^{\prime}\) is the strict transform of \(Z\) through the induced morphism \(D^{\prime}\to D\). In particular, \(\mathcal{F}^{\prime}\) is smooth along \(Z^{\prime}\)._
Proof.: We may freely shrink about the generic point of \(Z\), and so we may assume that \(D=\{x_{1}=0\}\), \(Z=\{x_{1}=x_{2}=0\}\) and that \(T_{\mathcal{F}}\) is locally free (by [11, Corollary 1.4]) and generated by \(\partial_{1},\dots,\partial_{r}\).
We will construct a model \(\mu\colon X^{\prime}\to X\) such that, using the same notation as in the statement of the Lemma, there is a vector field \(\partial\in T_{\mathcal{F}^{\prime}}\) such that \(\partial(I_{D^{\prime}})\) is not contained in \(I_{Z^{\prime}}\). By (1) and (2) of Proposition 3.13 this will be our desired model.
Up to relabelling we may assume that \(\partial_{1}\in T_{\mathcal{F}}\) does not leave \(D\) invariant. By localising about the generic point of \(Z\), we may reduce to the case where \(X\) is a regular two dimensional scheme, \(D\) is a curve and \(Z\) is a point. In this case, our result follows by passing to a resolution guaranteed by applying Lemma 3.14 to the foliation \(\mathcal{F}_{1}\) generated by \(\partial_{1}\).
**Theorem 3.16**.: _Let \(X\) be a normal variety, let \(\mathcal{F}\) be a foliation of rank \(r\) on \(X\) and let \(D\) be a prime divisor which is not \(\mathcal{F}\)-invariant. Let \(0\leq\Delta=D+\Delta^{\prime}\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(D\) is not contained in the support of \(\Delta^{\prime}\). Let \(Z\subset X\) be a subvariety such that \(Z\cap D\) is of
codimension at least two in \(D\) and such that \(K_{\mathcal{F}}\) and \(D\) are \(\mathbb{Q}\)-Cartier on \(X\setminus Z\). Suppose that \((\mathcal{F},\Delta)\) is canonical (resp. log canonical, resp. terminal, resp. log terminal). Let \(S\to D\) be the normalisation and let \(\Delta_{S}\coloneqq\operatorname{Diff}(\mathcal{F},\Delta^{\prime})\)._
_Then \((\mathcal{F}_{S},\Delta_{S})\) is canonical (resp. log canonical, resp. terminal, resp. log terminal)._
Proof.: Pick any divisor \(E_{S}\) on some birational model \(S^{\prime}\to S\). Let \(\mu\colon Y\to X\) be any log resolution of \((X,\Delta)\) (we emphasise that this is not necessarily a log resolution of \(\mathcal{F}\)) which extracts a divisor \(E\) such that \(E\cap S_{Y}=E_{S}\) where \(S_{Y}\coloneqq\mu_{*}^{-1}D\). Let \(\mathcal{F}_{Y}\coloneqq\mu^{-1}\mathcal{F}\) and \(\Delta_{Y}\coloneqq\mu_{*}^{-1}\Delta\).
By Lemma 3.15, perhaps replacing \(Y\) by a higher model we may assume that if we write \((K_{\mathcal{F}_{Y}}+S_{Y})|_{S_{Y}}\sim K_{\mathcal{F}_{S_{Y}}}+\Theta_{S_{Y}}\) then \(\Theta_{S_{Y}}\) does not contain \(E_{S}\) in its support.
By assumption \(a(E,\mathcal{F},\Delta)\geq 0\) (resp. \(\geq-\epsilon(\mathcal{F}_{Y},E)\), etc.) and so we see that \(a(E_{S},\mathcal{F}_{S},\Delta_{S})\geq 0\) (resp. \(\geq-\epsilon(\mathcal{F}_{Y},E)\), etc.). To conclude it suffices to show that if \(a(E,\mathcal{F},\Delta)<0\) then \(\epsilon(\mathcal{F}_{S_{Y}},E_{S})=1\). Suppose for sake of contradiction that \(\epsilon(\mathcal{F}_{S_{Y}},E_{S})=0\). By Lemma 3.15 we see that \(E_{S}\subset Y\) is not contained in \(\operatorname{Sing}\mathcal{F}_{Y}\) and so shrinking about a general point of \(E_{S}\) we may assume that \(\mathcal{F}_{Y}\) is smooth, and that \(\Delta_{Y}=S_{Y}\). If \(a(E,\mathcal{F},\Delta)<0\) then a direct calculation shows that the blow up of \(Y\) along \(E_{S}\) extracts an invariant divisor \(F\) such that
\[a(F,\mathcal{F},\Delta)=a(F,\mathcal{F}_{Y},S_{Y}-a(E,\mathcal{F},\Delta)E)<0,\]
contradicting the hypothesis that \((\mathcal{F},\Delta)\) is log canonical. Thus, our claim follows.
**Corollary 3.17**.: _Let \(X\) be a potentially klt variety, let \(\mathcal{F}\) be a foliation of rank \(r\) on \(X\) and let \(D\) be a divisor which is not \(\mathcal{F}\)-invariant. Let \(0\leq\Delta=D+\Delta^{\prime}\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(D\) is not contained in the support of \(\Delta^{\prime}\). Suppose that \(D\), \(K_{\mathcal{F}}+\Delta\) and \(K_{X}+\Delta\) are \(\mathbb{Q}\)-Cartier. Let \(n\colon S\to D\) be the normalisation and let \(\operatorname{Diff}(X,\Delta^{\prime})\) be the classical different associated to \((X,\Delta)\) on \(S\)._
_Then we have an inequality of differents_
\[\operatorname{Diff}(\mathcal{F},\Delta^{\prime})_{\mathrm{n-inv}}\geq \operatorname{Diff}(X,\Delta^{\prime})_{\mathrm{n-inv}}.\]
Proof.: To prove the claim it suffices to work in the neighbourhood of the generic point of a divisor \(P\subset S\), which is not \(\mathcal{F}_{S}\)-invariant. Since \(X\) is potentially klt, it has quotient singularities in a neighbourhood of the generic point of \(P\). Arguing as in the proof of item (1) of Proposition 3.13 we may freely replace \(X\) by a finite cover, and so we may assume that \(X\) is smooth.
Suppose that \(\pi\colon\overline{X}\to X\) is a log resolution of \((X,D+\operatorname{Supp}\Delta)\). We may write
\[K_{\overline{\mathcal{F}}}+\overline{\Delta}+\overline{D}+E=\pi^{*}(K_{ \mathcal{F}}+\Delta+D)\]
and
\[K_{\overline{X}}+\overline{\Delta}+\overline{D}+E^{\prime}=\pi^{*}(K_{X}+\Delta+D)\]
where \(E,E^{\prime}\) are \(\pi\)-exceptional, \(\overline{\mathcal{F}}\coloneqq\pi^{-1}\mathcal{F}\), \(\overline{D}\coloneqq\pi^{-1}_{*}D\) and \(\overline{\Delta}\coloneqq\pi^{-1}_{*}\Delta\). By [13, Corollary 3.3] we have that \(E^{\prime}\leq E\). Let \(S\) be the normalisation of \(D\) and let \(\mu\colon\overline{D}\to S\) be the induced morphism.
By Remark 3.9 we have that \(\operatorname{Diff}(\overline{\mathcal{F}},\overline{\Delta})\geq\overline{\Delta}|_{\overline{D}}\). Thus, if \(\operatorname{Diff}(\overline{X},\overline{\Delta})\) denotes the classical different of \((\overline{X},\overline{\Delta}+\overline{D})\) on \(\overline{D}\), then \(\operatorname{Diff}(\overline{X},\overline{\Delta})=\overline{\Delta}|_{\overline{D}}\) and by Remark 3.10 we have
\[\operatorname{Diff}(\mathcal{F},\Delta)=\mu_{*}(\operatorname{Diff}(\overline{ \mathcal{F}},\overline{\Delta})+E|_{\overline{D}})\geq\mu_{*}(\operatorname{ Diff}(\overline{X},\overline{\Delta})+E^{\prime}|_{\overline{D}})=\operatorname{ Diff}(X,\Delta).\]
Thus, our claim follows.
**Lemma 3.18**.: _Let \(X\) be a potentially klt variety, let \(\mathcal{F}\) be an algebraically integrable foliation of rank \(r\) on \(X\) which is induced by a morphism \(f\colon X\to Z\) and let \(D\) be a divisor which is not \(\mathcal{F}\)-invariant. Let \(0\leq\Delta=D+\Delta^{\prime}\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(D\) is not contained in the support of \(\Delta^{\prime}\). Suppose that \(D\), \(K_{\mathcal{F}}+\Delta\) and \(K_{X}+\Delta\) are \(\mathbb{Q}\)-Cartier. Let \(n\colon S\to D\) be the normalisation and let \(\operatorname{Diff}(X,\Delta^{\prime})\) be the classical different associated to \((X,\Delta)\) on \(S\). Suppose that \((\mathcal{F},\Delta)\) is log canonical._
_Then_
\[\operatorname{Diff}(\mathcal{F},\Delta^{\prime})=\operatorname{Diff}(X,\Delta ^{\prime})_{\operatorname{n-inv}}.\]
Proof.: By Corollary 3.17 we have an inequality
\[\operatorname{Diff}(\mathcal{F},\Delta^{\prime})\geq\operatorname{Diff}(X, \Delta^{\prime})_{\operatorname{n-inv}}.\]
By Theorem 3.16 we have that \((\mathcal{F}_{S},\operatorname{Diff}(\mathcal{F},\Delta^{\prime}))\) is log canonical and by Lemma 2.4 it follows that \(\operatorname{Diff}(\mathcal{F},\Delta^{\prime})\) has no \(\mathcal{F}_{S}\)-invariant components. Thus, it suffices to show that if \(P\subset S\) is a divisor which is not \(\mathcal{F}_{S}\)-invariant then the coefficient of \(P\) in each of the differents is the same.
Since \(\mathcal{F}\) is induced by the morphism \(f\colon X\to Z\), \(\mathcal{F}_{S}\) is induced by the restricted morphism \((f\circ n)\colon S\to Z\). Since \(P\) is not \(\mathcal{F}_{S}\)-invariant, \(P\) dominates \(Z\). Let \(\mu\colon X^{\prime}\to X\) be any birational model and \(E\) be any \(\mu\)-exceptional divisor centred on \(n(P)\). Let us define
\[G\coloneqq(K_{\mu^{-1}\mathcal{F}}+\mu_{*}^{-1}\Delta-\mu^{*}(K_{\mathcal{F}} +\Delta))-(K_{X^{\prime}}+\mu_{*}^{-1}\Delta-\mu^{*}(K_{X}+\Delta)).\]
Note that \(G\) is \(\mu\)-exceptional. In a neighbourhood of the generic fibre of \(X^{\prime}\to Z\) we have that \(K_{\mu^{-1}\mathcal{F}}\equiv K_{X^{\prime}}\), and so \(G\) is \(\mu\)-numerically trivial in a neighbourhood of the generic fibre of \(X^{\prime}\to Z\). By the negativity
lemma (cf. [12, Lemma 3.39]) in this neighbourhood \(G=0\). In particular, since \(E\) dominates \(Z\), we have \(a(E,\mathcal{F},\Delta)=a(E,X,\Delta)\). By Remark 3.10 we may then conclude that \(m_{P}\mathrm{Diff}(\mathcal{F},\Delta^{\prime})=m_{P}\mathrm{Diff}(X,\Delta^{ \prime})\) as required.
**Corollary 3.19**.: _Let \(X\) be a potentially klt variety, let \(\mathcal{F}\) be a foliation on \(X\) and let \(\mathcal{H}\subset\mathcal{F}\) be an algebraically integrable subfoliation which is induced by a morphism and let \(D\) be a divisor on \(X\) which is not \(\mathcal{H}\)-invariant. Let \(0\leq\Delta=D+\Delta^{\prime}\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(D\) is not contained in the support of \(\Delta^{\prime}\). Suppose that \(D\), \(K_{\mathcal{F}}+\Delta\) and \(K_{X}+\Delta\) are \(\mathbb{Q}\)-Cartier and that \((\mathcal{H},\Delta)\) is log canonical._
_Then_
\[\mathrm{Diff}(\mathcal{F},\Delta^{\prime})\geq\mathrm{Diff}(\mathcal{H}, \Delta^{\prime}).\]
Proof.: We first remark that \(D\) is not \(\mathcal{F}\)-invariant and if \(P\subset S\) is a prime divisor which is not \(\mathcal{H}_{S}\)-invariant, then it is also not \(\mathcal{F}_{S}\)-invariant. Thus, the claim is a direct consequence of Corollary 3.17 combined with Lemma 3.18.
### Additional remarks
#### 3.5.1. Failure of adjunction on singularities for invariant divisors
Let \(\mathcal{F}\) be a log canonical foliation on a smooth variety \(X\) and let \(D\) be a smooth \(\mathcal{F}\)-invariant divisor on \(X\). Then in general \((\mathcal{F}_{D},\mathrm{Diff}(\mathcal{F}))\) is not log canonical, as the following example shows:
**Example 3.20**.: _Let \(X=\mathbb{A}^{3}\) with coordinates \(x,y,z\) and let \(\mathcal{F}\) be the rank one foliation on \(X\) defined by the vector field_
\[\partial\coloneqq x^{2}\partial_{x}+y^{2}\partial_{y}+z\partial_{z}.\]
_By [13, Fact I.ii.4] \(\mathcal{F}\) has log canonical singularities since its semi-simple part \(z\partial_{z}\) is non-zero. Set \(D=\{z=0\}\). Then \(D\) is \(\mathcal{F}\)-invariant and \(\mathcal{F}_{D}\) is generated by \(x^{2}\partial_{x}+y^{2}\partial_{y}\) whose semi-simple part is zero and by [13, Fact I.ii.4] again, it is not log canonical._
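Concretely, the linearisations involved can be written out: at the origin the linear part of \(\partial=x^{2}\partial_{x}+y^{2}\partial_{y}+z\partial_{z}\) is the matrix
\[\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&1\end{pmatrix},\]
whose semi-simple part is non-zero, while the linear part of the restricted vector field \(x^{2}\partial_{x}+y^{2}\partial_{y}\) on \(D\) is the zero matrix, so its semi-simple part vanishes. This is only a restatement of the computation implicit in the example above.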
#### 3.5.2. Failure of Bertini's theorem
Let \(X\) be a smooth variety and let \(\mathcal{F}\) be a foliation with canonical singularities. Let \(A\) be an ample Cartier divisor. In general it is not the case that we may find \(0\leq D\sim_{\mathbb{Q}}A\) such that \((\mathcal{F},D)\) is log canonical. Moreover, it is in general not possible to choose \(D\) to be reduced and irreducible and so that \(\mathrm{Diff}(\mathcal{F})=0\).
#### 3.5.3. Other definitions of foliation
Occasionally in the literature a foliation is defined to be a quotient of the cotangent sheaf. From the perspective of defining an adjunction formula (especially on singular varieties) our definition seems to be more appropriate.
The \(S_{2}\) condition likewise seems to be important. On non-normal varieties there exist quotients of the cotangent sheaf which do not seem to correspond to reasonable foliations. Consider the following example. Let \(X\coloneqq\{x=y=0\}\cup\{z=w=0\}\subset\mathbb{A}^{4}\), let \(\omega\) be the restriction to \(X\) of the \(1\)-form \(dz+xdy+ydx\) and consider the quotient \(\Omega^{1}_{X}\to\Omega^{1}_{X}/(\omega)\). There is no way to lift this quotient to a rank one foliation on a neighbourhood of \((0,0,0,0)\); however, for a foliation one would expect that this should always be possible.
## 4. Cone theorem for rank one foliated pairs
**Lemma 4.1**.: _Let \(X\) be a normal variety, let \(\mathcal{F}\) be a rank one foliation on \(X\) such that \(K_{\mathcal{F}}\) is \(\mathbb{Q}\)-Cartier. Let \(V\subset X\) be an irreducible subvariety such that \(V\) is not contained in \(\operatorname{Sing}\mathcal{F}\). Assume that there exists an index one cover \(\sigma\colon X^{\prime}\to X\) associated to \(K_{\mathcal{F}}\) and let \(\mathcal{F}^{\prime}\coloneqq\sigma^{-1}\mathcal{F}\)._
_Then \(V\) is \(\mathcal{F}\)-invariant if and only if \(V^{\prime}\coloneqq\sigma^{-1}(V)\) is \(\mathcal{F}^{\prime}\)-invariant. Moreover, if \(V\) is invariant and if we denote by \(\mathcal{F}_{V}\) (resp. \(\mathcal{F}^{\prime}_{V^{\prime}}\)) the restricted foliation on \(V\) (resp. \(V^{\prime}\)), then \((\sigma|_{V^{\prime}})^{-1}\mathcal{F}_{V}=\mathcal{F}^{\prime}_{V^{\prime}}\)._
Proof.: We first remark that if \(\sigma\) is etale in a neighbourhood of the generic point of \(V\), then both points of the lemma are clear.
Suppose that \(V\) is \(\mathcal{F}\)-invariant. In this case, by definition it follows that \(K_{\mathcal{F}}\) is Cartier in a neighbourhood of the generic point of \(V\) and, therefore, \(\sigma\) is etale in a neighbourhood of the generic point of \(V\), in which case we may conclude.
Suppose that \(\sigma^{-1}(V)\) is \(\mathcal{F}^{\prime}\)-invariant. By [4, Corollary 5.14], \(\sigma^{-1}(V)\) is not contained in \(\operatorname{Sing}\mathcal{F}^{\prime}\). Let \(x\in\sigma^{-1}(V)\) be a general point. By [1, Lemma I.2.1] there exists an analytic neighbourhood \(U\) of \(x\) and a holomorphic submersion \(p\colon U\to Z\) such that \(T_{\mathcal{F}^{\prime}}|_{U}=T_{U/Z}\). Up to shrinking \(U\) we may write \(U\cong\mathbb{D}\times Z\) where \(\mathbb{D}\) is the unit disc and \(p\) is given by projection onto the second coordinate.
Let \(\mathbb{Z}/m\mathbb{Z}\) be the Galois group of our index one cover. If \(t\) is a coordinate on \(\mathbb{D}\), then we may assume that the Galois group acts on \(t\) by \(t\mapsto\xi t\) where \(\xi\) is a primitive \(m\)-th root of unity. In particular, it follows that the ramification locus of \(\sigma\) is of the form \(\{t=f_{1}=\cdots=f_{r}=0\}\), for some functions \(f_{1},\ldots,f_{r}\in\mathcal{O}_{X}\).
Since any \(\mathcal{F}^{\prime}\)-invariant variety is locally of the form \(p^{-1}(W)\) where \(W\subset Z\) is a subvariety, it follows that no invariant subvariety is contained in the ramification locus of \(\sigma\), and so \(\sigma\) is etale at the generic point of \(V\) and we may conclude.
**Lemma 4.2**.: _Let \(p\colon X\to Y\) be a smooth morphism between normal varieties and let \(\mathcal{F}\) be the foliation on \(X\) induced by \(p\). Let \(\Delta\geq 0\) be a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor on \(X\). Let \(0\in Y\) and let \(D_{0}\coloneqq p^{-1}(0)\)._
_Then \((\mathcal{F},\Delta)\) is log canonical in a neighbourhood of \(D_{0}\) if and only if \(D_{0}\) is not contained in the support of \(\Delta\) and \((D_{0},\Delta_{0})\) is log canonical, where \(\Delta_{0}\coloneqq\Delta|_{D_{0}}\)._
Proof.: Note that if \(D_{0}\) is contained in the support of \(\Delta\) and \(q\colon\overline{X}\to X\) is the blow-up of \(X\) along \(D_{0}\) with exceptional divisor \(E\) then \(E\) is \(q^{-1}\mathcal{F}\)-invariant and \(a(E,\mathcal{F},\Delta)<0=\epsilon(E)\). Thus, \((\mathcal{F},\Delta)\) is not log canonical around \(D_{0}\). Therefore we may assume that \(D_{0}\) is not contained in the support of \(\Delta\).
Let \(\beta\colon Y^{\prime}\to Y\) be a resolution of \(Y\) and let \(X^{\prime}\coloneqq X\times_{Y}Y^{\prime}\). Let \(\alpha\colon X^{\prime}\to X\) and \(p^{\prime}\colon X^{\prime}\to Y^{\prime}\) be the induced morphisms. Then \(p^{\prime}\) is a smooth morphism and if \(\mathcal{F}^{\prime}\) is the foliation induced by \(p^{\prime}\), it follows that \(\mathcal{F}^{\prime}=\alpha^{-1}\mathcal{F}\). Let \(\Delta^{\prime}\coloneqq\alpha_{*}^{-1}\Delta\). Then \(K_{\mathcal{F}^{\prime}}+\Delta^{\prime}=\alpha^{*}(K_{\mathcal{F}}+\Delta)\).
For any \(y\in\beta^{-1}(0)\), let \(D_{y}\coloneqq p^{\prime-1}(y)\) be the corresponding fibre. Note that \(D_{y}\) is not contained in the support of \(\Delta^{\prime}\) and if \(\Delta_{y}\coloneqq\Delta^{\prime}|_{D_{y}}\) then \((D_{y},\Delta_{y})\cong(D_{0},\Delta_{0})\).
Pick \(y\in\beta^{-1}(0)\) and let \(\Sigma\) be a reduced divisor on \(Y^{\prime}\) such that \((Y^{\prime},\Sigma)\) is log smooth and \(y\) is a zero-dimensional stratum of \(\Sigma\). We claim that \((\mathcal{F}^{\prime},\Delta^{\prime})\) is log canonical in a neighbourhood of \(D_{y}\) if and only if \((X^{\prime},\Delta^{\prime}+p^{\prime*}\Sigma)\) is. Indeed, if \(\gamma\colon X^{\prime\prime}\to X^{\prime}\) is a birational morphism then, as in the proof of [1, Lemma 3.1], we have that
\[K_{X^{\prime\prime}}+\gamma_{*}^{-1}(p^{\prime*}\Sigma)=\gamma^{*}(K_{X^{ \prime}}+p^{\prime*}\Sigma)+F\]
and
\[K_{\gamma^{-1}\mathcal{F}^{\prime}}=\gamma^{*}K_{\mathcal{F}^{\prime}}+G\]
where \(F,G\) are \(\gamma\)-exceptional divisors such that \(G=F+\sum(1-\epsilon(E))E\), where the sum runs over all the \(\gamma\)-exceptional prime divisors. Thus, our claim follows.
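Spelling out the resulting comparison of discrepancies (we suppress \(\Delta^{\prime}\), which contributes the same correction term to both sides): for a \(\gamma\)-exceptional prime divisor \(E\), the two displayed formulae give
\[a(E,\mathcal{F}^{\prime})=\operatorname{mult}_{E}G=\operatorname{mult}_{E}F+1-\epsilon(E)=a(E,X^{\prime},p^{\prime*}\Sigma)+1-\epsilon(E),\]
so \(a(E,\mathcal{F}^{\prime})\geq-\epsilon(E)\) if and only if \(a(E,X^{\prime},p^{\prime*}\Sigma)\geq-1\), which is the equivalence claimed above.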
By classical inversion of adjunction, we have that \((X^{\prime},\Delta^{\prime}+p^{\prime*}\Sigma)\) is log canonical in a neighbourhood of \(D_{y}\) if and only if \((D_{y},\Delta_{y})\) is log canonical. Thus, we have that \((\mathcal{F},\Delta)\) is log canonical in a neighbourhood of \(D_{0}\) if and only if \((D_{0},\Delta_{0})\) is log canonical.
**Lemma 4.3**.: _Let \(X\) be a normal variety and let \(\mathcal{F}\) be a rank one foliation on \(X\) such that \(K_{\mathcal{F}}\) is \(\mathbb{Q}\)-Cartier. Let \(Z\subset X\) be a subvariety which is not \(\mathcal{F}\)-invariant and is not contained in \(\operatorname{Sing}\mathcal{F}\). Let \(W\) be a (possibly analytic) \(\mathcal{F}\)-invariant subvariety which contains \(Z\) and let \(n\colon S\to W\) be the normalisation._
1. _Let_ \(D\geq 0\) _be a_ \(\mathbb{Q}\)_-Cartier divisor. Then there exists_ \(\lambda>0\) _such that_ \((\mathcal{F},\lambda D)\) _is log canonical at the generic point of_ \(Z\)_._
2. _Let_ \(\Delta\geq 0\) _be such that_ \((\mathcal{F},\Delta)\) _is log terminal (resp. log canonical) in a neighbourhood of the generic point of_ \(Z\)_. Let_ \(\mathcal{F}_{S}\) _be the restricted foliation. Then_ \((\mathcal{F}_{S},\operatorname{Diff}(\mathcal{F},\Delta))\) _is log terminal
_(resp. log canonical) in a neighbourhood of the generic point of_ \(n^{-1}(Z)\)_._
Proof.: The problem is local about a general point \(z\in Z\), so we are free to shrink about a general point of \(Z\). By [10, Lemma 2.11] and Lemma 4.1, we may therefore replace \(X\) by the index one cover associated to \(K_{\mathcal{F}}\), and so we may assume that \(K_{\mathcal{F}}\) is Cartier.
We may also assume that \(\mathcal{F}\) is non-singular, and so by [1, Lemma 1.2.1] there exists an analytic neighbourhood \(U\) of \(z\) and a smooth morphism \(p\colon U\to V\) such that \(\mathcal{F}|_{U}\) is the foliation induced by \(p\). In particular, \(\mathcal{F}\) is terminal along the generic point of \(Z\) and (1) follows. We now prove Item (2). We can assume that \(Z\) is not a divisor, as otherwise we have that \(W=X\). After possibly cutting by the pre-image of general ample divisors on \(V\), we may assume that \(Z\) is a closed point of \(U\). Let \(W_{0}\coloneqq p(W)\), let \(n_{0}\colon S_{0}\to W_{0}\) be its normalisation and let \(q\colon S\to S_{0}\) be the induced morphism. If \(w\in W_{0}\) is a closed point and \(D_{w}\coloneqq p^{-1}(w)\), then \((D_{w},\Delta|_{D_{w}})\cong(D_{s},n^{*}\Delta|_{D_{s}})\) for any \(s\in n_{0}^{-1}(w)\), where \(D_{s}\coloneqq q^{-1}(s)\). Thus, Lemma 4.2 implies that if \((\mathcal{F},\Delta)\) is log canonical at \(Z\) then \((\mathcal{F}_{S},\operatorname{Diff}(\mathcal{F},\Delta))\) is log canonical at \(n^{-1}(Z)\). If \((\mathcal{F},\Delta)\) is log terminal at \(Z\) then we may find a sufficiently general and small \(\mathbb{Q}\)-divisor \(A\geq 0\) such that \(Z\) is contained in the support of \(A\) and \((\mathcal{F},\Delta+A)\) is log canonical at \(Z\). Thus, \((\mathcal{F}_{S},\operatorname{Diff}(\mathcal{F},\Delta+A))\) is log canonical at \(n^{-1}(Z)\), which implies that \((\mathcal{F}_{S},\operatorname{Diff}(\mathcal{F},\Delta))\) is log terminal at \(n^{-1}(Z)\).
We recall the following Lemma from [10]:
**Lemma 4.4**.: _Let \(X\) be a normal variety, let \(\mathcal{F}\) be a rank one foliation on \(X\) such that \(K_{\mathcal{F}}\) is \(\mathbb{Q}\)-Cartier and let \(\Delta\geq 0\) be \(\mathbb{Q}\)-Cartier. Let \(C\subset X\) be a curve which is not \(\mathcal{F}\)-invariant._
_Then there exists a birational morphism \(p\colon X^{\prime}\to X\) such that if \(\mathcal{F}^{\prime}\coloneqq p^{-1}\mathcal{F}\) then the following hold:_
1. \(K_{\mathcal{F}^{\prime}}+\Delta^{\prime}=p^{*}(K_{\mathcal{F}}+\Delta)\) _for some_ \(\mathbb{Q}\)_-divisor_ \(\Delta^{\prime}\geq 0\)_;_
2. \(p^{-1}\) _is an isomorphism at the general point of_ \(C\) _and if_ \(C^{\prime}\) _is the strict transform of_ \(C\) _in_ \(X^{\prime}\) _then_ \(\mathcal{F}^{\prime}\) _is terminal at all points_ \(P\in C^{\prime}\)_; and_
3. _after possibly replacing_ \(X\) _by an analytic neighbourhood of_ \(C\)_, there exists an_ \(\mathcal{F}^{\prime}\)_-invariant surface_ \(\Gamma\) _containing_ \(C^{\prime}\)_._
Proof.: The proof is the same as the proof of [10, Lemma 2.37], with the exception that the assumption of \(X\) being \(\mathbb{Q}\)-factorial has been replaced by the assumption of \(K_{\mathcal{F}}\) and \(\Delta\) being \(\mathbb{Q}\)-Cartier.
**Theorem 4.5**.: _Let \(X\) be a normal projective variety, let \(\mathcal{F}\) be a rank one foliation on \(X\) and let \(\Delta\geq 0\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(K_{\mathcal{F}}\) and \(\Delta\) are \(\mathbb{Q}\)-Cartier._
_Then there are \(\mathcal{F}\)-invariant rational curves \(C_{1},C_{2},\dots\) on \(X\) such that_
\[0<-(K_{\mathcal{F}}+\Delta)\cdot C_{i}\leq 2\dim X\]
_and_
\[\overline{\operatorname{NE}}(X)=\overline{\operatorname{NE}}(X)_{K_{\mathcal{ F}}+\Delta\geq 0}+Z_{-\infty}+\sum_{i}\mathbb{R}_{+}[C_{i}]\]
_where \(Z_{-\infty}\subset\overline{\operatorname{NE}}(X)\) is a closed subcone contained in the span of the images of the maps \(\overline{\operatorname{NE}}(T)\to\overline{\operatorname{NE}}(X)\), where \(T\subset X\) ranges over the non-log canonical centres of \((\mathcal{F},\Delta)\)._
Proof.: Let \(R\subset\overline{\operatorname{NE}}(X)\) be a \((K_{\mathcal{F}}+\Delta)\)-negative extremal ray and let \(H_{R}\) be a supporting hyperplane to \(R\). After possibly rescaling \(H_{R}\) we may assume that there exists an ample \(\mathbb{Q}\)-divisor \(H\) such that \(H_{R}\sim_{\mathbb{Q}}K_{\mathcal{F}}+\Delta+H\). Let \(W\) be a component of the null locus \(\operatorname{Null}H_{R}\) of \(H_{R}\) (e.g. see [13, §2.13]) and let \(n\colon S\to W\) be its normalisation. Thus, \(H_{R}|_{S}\) is not big and by Nakamaye's Theorem [1, Theorem 1.4], it follows that \(W\) is a component of the stable base locus of \(H_{R}-A\) for any sufficiently small ample \(\mathbb{Q}\)-divisor \(A\) on \(X\). In particular, \(R\) is contained in the image of \(\overline{\operatorname{NE}}(W)\to\overline{\operatorname{NE}}(X)\). Thus, we may assume that \((\mathcal{F},\Delta)\) is log canonical at the general point of \(W\).
We now distinguish three cases. Suppose first that \(W\) is contained in \(\operatorname{Sing}\mathcal{F}\). In this case, [13, Lemma 2.38] implies that every curve in \(W\) which is \((K_{\mathcal{F}}+\Delta)\)-negative is contained in a non-log canonical centre of \((\mathcal{F},\Delta)\). By [12, Definition/Summary I.ii.6], the union of all the non-log canonical centres of \((\mathcal{F},\Delta)\) is closed in \(X\). Thus, there exists a subvariety \(T\subset W\) which is a non-log canonical centre of \((\mathcal{F},\Delta)\) and such that \(R\) is contained in the image of \(\overline{\operatorname{NE}}(T)\to\overline{\operatorname{NE}}(X)\) and we may conclude.
Now suppose that \(W\) is \(\mathcal{F}\)-invariant and not contained in \(\operatorname{Sing}\mathcal{F}\). In particular, [1, Lemma I.1.3] implies that \(W\) is a log canonical centre for \(\mathcal{F}\). Thus, since \((\mathcal{F},\Delta)\) is log canonical at the general point of \(W\), it follows that \(W\) is not contained in the support of \(\Delta\). Let \(\mathcal{F}_{S}\) be the restricted foliation on \(S\), whose existence is guaranteed by Proposition-Definition 3.11 and let \(\Delta_{S}\coloneqq\operatorname{Diff}(\mathcal{F},\Delta)\). Since \(H_{R}|_{S}\) is not big, we may apply [14, Corollary 2.28] to conclude that \(S\) is covered by \((K_{\mathcal{F}_{S}}+\Delta_{S})\)-negative rational curves \(C\) tangent to \(\mathcal{F}\) which span \(R\) and such that
\[0<-(K_{\mathcal{F}_{S}}+\Delta_{S})\cdot C\leq 2\dim S,\]
(e.g. see the proof of [13, Theorem 7.1] for more details). Thus, we may easily conclude.
Finally suppose that \(W\) is not contained in \(\operatorname{Sing}\mathcal{F}\) and is not \(\mathcal{F}\)-invariant. In particular \(W\) is a proper subvariety of \(X\) and \(H_{R}\) is big. We claim that in this case we get a contradiction. Let \(0<\epsilon\ll 1\) be a rational number such that \(H_{R}-\epsilon H\) is big and \(\operatorname{Null}H_{R}\) coincides with the stable base locus of \(H_{R}-\epsilon H\). Let \(G\sim_{\mathbb{Q}}H_{R}-\epsilon H\) be an effective \(\mathbb{Q}\)-divisor. Since \(H_{R}|_{S}\) is not big and \(H\) is ample, there exists a curve \(C\subset S\) passing through the general point of \(S\) and such that \(G\cdot C<0\). Moreover, we have
\[(K_{\mathcal{F}}+\Delta)\cdot C=(H_{R}-H)\cdot C=(G-(1-\epsilon)H)\cdot C<0.\]
We may replace \(X\) by a model guaranteed to exist by Lemma 4.4, and so we may find a germ of an \(\mathcal{F}\)-invariant surface \(\Gamma\) containing \(C\). Note that, since \(W\) is not \(\mathcal{F}\)-invariant and \(\Gamma\) passes through the general point of \(W\), it follows that \(\Gamma\) is not contained in \(W\). In particular, since \(W\) is a component of the stable base locus of \(H_{R}-\epsilon H\), after possibly replacing \(G\) by an effective \(\mathbb{Q}\)-divisor \(G^{\prime}\) which is \(\mathbb{Q}\)-linearly equivalent to \(G\), we may assume that \(\Gamma\) is not contained in the support of \(G\). Let \(\nu\colon Y\to\Gamma\) be the normalisation. By Proposition-Definition 3.11, if \(\mathcal{F}_{Y}\) is the foliation induced on \(Y\) then we may write
\[(K_{\mathcal{F}}+\Delta)|_{Y}\sim_{\mathbb{Q}}K_{\mathcal{F}_{Y}}+\Delta_{Y}\]
where \(\Delta_{Y}\coloneqq\operatorname{Diff}(\mathcal{F},\Delta)\). By Lemma 4.3, we have that \((\mathcal{F}_{Y},\Delta_{Y})\) is log canonical at the generic point of \(C\) and, in particular, \(m_{C}\Delta_{Y}\leq 1\). Let \(t\geq 0\) be such that \(\Delta_{Y}+tC=\Theta+C\) where \(\Theta\geq 0\) is a \(\mathbb{Q}\)-divisor whose support does not contain \(C\). Let \(\mu>0\) be a rational number such that \(m_{C}(\mu G|_{Y})=1\). Then, considering \(C\) as a curve in \(Y\), we have \(C^{2}\leq C\cdot\mu G|_{Y}<0\) and so
\[(K_{\mathcal{F}_{Y}}+\Theta+C)\cdot C=(K_{\mathcal{F}}+\Delta)\cdot C+tC^{2}<0.\]
On the other hand, by adjunction (cf. [13, Proposition 3.4]) and since the restricted foliation on \(C\) is the foliation by points, we have that \((K_{\mathcal{F}_{Y}}+\Theta+C)\cdot C\geq 0\), a contradiction. Thus, our result follows.
Using the same methods as in the proof above, we have the following:
**Corollary 4.6**.: _Let \(X\) be a normal projective variety, let \(\mathcal{F}\) be a rank one foliation on \(X\) and let \(\Delta\geq 0\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(K_{\mathcal{F}}\) and \(\Delta\) are \(\mathbb{Q}\)-Cartier. Assume that \((\mathcal{F},\Delta)\) is log canonical and \(K_{\mathcal{F}}+\Delta\) is nef._
_Then \(\operatorname{Null}\left(K_{\mathcal{F}}+\Delta\right)\) is a union of \(\mathcal{F}\)-invariant subvarieties, and subvarieties contained in \(\operatorname{Sing}\mathcal{F}\)._
In [1, Theorem 1.2] a dynamical characterisation of ample line bundles on smooth surfaces was provided. As a consequence of the above theorem we are able to extend this to higher dimensions:
**Corollary 4.7**.: _Let \(X\) be a normal projective variety and let \(L\) be a \(\mathbb{Q}\)-Cartier divisor. Suppose that_
1. \(L^{\dim X}\neq 0\)_;_
2. _for some_ \(q>0\) _there exists a rank one foliation_ \(\mathcal{F}\) _such that_ \(K_{\mathcal{F}}\) _is_ \(\mathbb{Q}\)_-Cartier and_ \(K_{\mathcal{F}}\equiv qL\)_; and_
3. \(\operatorname{Sing}\mathcal{F}\) _is isolated and_ \(\mathcal{F}\) _admits no invariant positive dimensional subvarieties._
_Then \(L\) is ample._
Proof.: By (3) and by Theorem 4.5, it follows that \(L\) is nef and so by (1) we have that \(L^{\dim X}>0\), and hence \(L\) is big. By Corollary 4.6 it follows that Null \(K_{\mathcal{F}}=\emptyset\) and so by the Nakai-Moishezon criterion for ampleness we see that \(K_{\mathcal{F}}\), and hence \(L\), is ample.
## 5. Family of leaves of an algebraically integrable foliation
Let \(X\) be a normal projective variety and let \(\mathcal{F}\) be an algebraically integrable foliation on \(X\). The following construction follows from [1, Lemma 3.2]:
**Construction 5.1**.: _There exists a closed subvariety \(Z\subset\operatorname{Chow}(X)\) whose general point parametrises the closure of a leaf of \(\mathcal{F}\). Let \(p_{X}\colon X\times Z\to X\) and \(p_{Z}\colon X\times Z\to Z\) be the two projections._
_If we let \(\widehat{X}\subset X\times Z\) be the universal cycle, then we have_
1. _a birational morphism_ \(\beta\colon\widehat{X}\to X\) _given by the restriction of_ \(p_{X}\) _to_ \(\widehat{X}\)_;_
2. _an equidimensional contraction_ \(f\colon\widehat{X}\to Z\) _given by the restriction of_ \(p_{Z}\) _to_ \(\widehat{X}\)_; and_
3. _a foliation_ \(\widehat{\mathcal{F}}\coloneqq\beta^{-1}\mathcal{F}\)_, which is induced by_ \(f\)_._
**Lemma 5.2**.: _Set up as above. Suppose that \(K_{\mathcal{F}}\) is \(\mathbb{Q}\)-Cartier._
_Then we may write \(K_{\widehat{\mathcal{F}}}+F\sim_{\mathbb{Q}}\beta^{*}K_{\mathcal{F}}\) where \(F\geq 0\) is \(\beta\)-exceptional. Moreover, if \(E\) is a \(\beta\)-exceptional divisor which is not \(\widehat{\mathcal{F}}\)-invariant then \(E\) is contained in \(\operatorname{Supp}\,F\)._
Proof.: This is [1, Section 3.6] and [1, Proposition 4.17].
**Lemma 5.3**.: _Set up as above. Suppose that \(X\) is klt and \(K_{\mathcal{F}}\) is \(\mathbb{Q}\)-Cartier. Let \(E\) be a \(\beta\)-exceptional divisor._
_Then \(E\) is not \(\widehat{\mathcal{F}}\)-invariant if and only if \(\beta(E)\subset\operatorname{Sing}\mathcal{F}\)._
Proof.: Given a finite cover \(X^{\prime}\to X\) which is quasi-etale around the general point of the centre of \(E\) on \(X\), let
1. \(\widehat{X}^{\prime}\) be the normalisation of \(\widehat{X}\times_{X}X^{\prime}\);
2. \(\widehat{X}^{\prime}\to Z^{\prime}\) be the Stein factorisation of \(\widehat{X}^{\prime}\to Z\);
3. \(E^{\prime}\) be the preimage of \(E\) on \(\widehat{X}^{\prime}\); and
4. \(\mathcal{F}^{\prime}\) be the foliation induced by \(\mathcal{F}\) on \(X^{\prime}\).
Note that \(E\) is \(\widehat{\mathcal{F}}\)-invariant if and only if \(E^{\prime}\) is invariant under the foliation induced by \(\mathcal{F}^{\prime}\) on \(\widehat{X}^{\prime}\), and [11, Corollary 5.14] implies that the centre of \(E\) is contained in \(\operatorname{Sing}\mathcal{F}\) if and only if the centre of \(E^{\prime}\) is contained in \(\operatorname{Sing}\mathcal{F}^{\prime}\).
Thus, by taking the index one cover \(X^{\prime}\to X\) associated to \(K_{\mathcal{F}}\) around the general point of the centre of \(E\) on \(X\), we may freely replace \(X\) by \(X^{\prime}\) and we may therefore assume that \(K_{\mathcal{F}}\) is Cartier.
If \(\beta(E)\) is not contained in \(\operatorname{Sing}\mathcal{F}\) then [11, Lemma 5.9] implies that \(\mathcal{F}\) has canonical singularities around the general point of \(\beta(E)\). Thus, by Lemma 5.2, it follows that \(E\) is \(\widehat{\mathcal{F}}\)-invariant.
Suppose now that \(E\) is \(\widehat{\mathcal{F}}\)-invariant. Let \(F\) be a general fibre of the induced morphism \(E\to f(E)\). By definition of \(\beta\) we know that \(\beta|_{F}\) is a closed immersion. Since \(\widehat{\mathcal{F}}\) is induced by the contraction \(\widehat{X}\to Z\) it follows that \(\widehat{\mathcal{F}}\) is non-singular at the generic point of any component of a fibre and, in particular, it is non-singular at the generic point of \(F\). Let \(\tilde{\mathcal{F}}\) be the foliation on \(X\times Z\) whose tangent sheaf is \(p_{X}^{*}T_{\mathcal{F}}\). By construction, \(\widehat{X}\) is \(\tilde{\mathcal{F}}\)-invariant and the restriction of \(\tilde{\mathcal{F}}\) to \(\widehat{X}\) is \(\widehat{\mathcal{F}}\). In particular, it follows that \(\tilde{\mathcal{F}}\) is non-singular at the generic point of \(F\) and so \(\mathcal{F}\) is non-singular at the generic point of \(\beta(F)\) as required.
**Remark 5.4**.: _Using the same set-up as above, Lemma 5.3 implies that if \(F\) is the \(\beta\)-exceptional divisor as in Lemma 5.2 and if we write \(F=F_{1}+F_{0}\) where the components of \(F_{1}\) are not \(\widehat{\mathcal{F}}\)-invariant and the components of \(F_{0}\) are \(\widehat{\mathcal{F}}\)-invariant then the centres of the irreducible components of \(F_{1}\) on \(X\) and the centres of the irreducible components of \(F_{0}\) on \(X\) are distinct._
**Lemma 5.5**.: _Let \(f\colon X\to Z\) be an equidimensional contraction between normal varieties and let \(\mathcal{F}\) and \(\mathcal{G}\) be foliations on \(X\) and \(Z\) respectively such that \(df(T_{\mathcal{F}})=f^{*}T_{\mathcal{G}}\) at the generic point of \(Z\). Let \(\mathcal{H}\) be the foliation given by \(T_{\mathcal{F}}\cap T_{X/Z}\)._
_Then_
\[K_{\mathcal{H}}=(K_{\mathcal{F}}-f^{*}K_{\mathcal{G}})-R(f)_{\mathrm{n-inv}}\]
_where \(R(f)_{\mathrm{n-inv}}\) denotes the part of the ramification divisor \(R(f)\) of \(f\) which is not \(\mathcal{F}\)-invariant._
Proof.: The desired equality may be checked away from a subset of codimension at least two and so, without loss of generality, we may assume that \(X,Z,\mathcal{F},\mathcal{G}\) and \(\mathcal{H}\) are smooth.
We have an exact sequence of vector bundles
\[0\to T_{\mathcal{H}}\to T_{\mathcal{F}}\xrightarrow{df}f^{*}T_{\mathcal{G}},\]
where \(df\) is surjective at the generic point of \(Z\). Thus, \(K_{\mathcal{H}}=K_{\mathcal{F}}-f^{*}K_{\mathcal{G}}+C\) where \(C\) is a vertical divisor. To prove our claim it therefore suffices to check locally around a point \(x\in X\). Let \(n\) and \(k\) be the dimensions of \(X\) and \(Z\) respectively. Then there exists a positive integer \(m\) such that, in suitable coordinates, \(f\) is given by \((x_{1},\ldots,x_{n})\mapsto(x_{1}^{m},x_{2},\ldots,x_{k})\). It follows that \(df\colon T_{\mathcal{F}}\to f^{*}T_{\mathcal{G}}\) drops rank along \(\{x_{1}=0\}\) if and only if \(\partial/\partial x_{1}\) is a local section of \(T_{\mathcal{F}}\), and our claim follows.
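For the reader's convenience, the last step can be made explicit (writing \(y_{1},\dots,y_{k}\) for coordinates on \(Z\), a labelling introduced here only for this check): in the coordinates above the differential satisfies
\[df\Big(\frac{\partial}{\partial x_{1}}\Big)=m\,x_{1}^{m-1}\frac{\partial}{\partial y_{1}},\qquad df\Big(\frac{\partial}{\partial x_{i}}\Big)=\frac{\partial}{\partial y_{i}}\quad(2\leq i\leq k),\qquad df\Big(\frac{\partial}{\partial x_{i}}\Big)=0\quad(i>k),\]
so \(\partial/\partial x_{1}\) is the only coordinate direction whose image acquires extra vanishing along \(\{x_{1}=0\}\); this is the source of the criterion used in the last sentence of the proof.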
**Theorem 5.6**.: _Let \(X\) be a \(\mathbb{Q}\)-factorial klt projective variety and let \(\mathcal{F}\) be a foliation on \(X\). Let \(\mathcal{H}\) be the algebraic part of \(\mathcal{F}\) and let \(\beta\colon\widehat{X}\to X\) be the morphism guaranteed to exist by Construction 5.1._
_Then \(a(E,\mathcal{H})\geq a(E,\mathcal{F})\) for any \(\beta\)-exceptional prime divisor \(E\) which is not \(\beta^{-1}\mathcal{H}\)-invariant. In particular, if \(\mathcal{F}\) admits canonical singularities, then \(\mathcal{H}\) is induced by an almost holomorphic map._
Proof.: Suppose that \(\mathcal{H}\) is induced by the rational map \(f\colon X\dasharrow Z\). Let \(\widehat{\mathcal{F}}\coloneqq\beta^{-1}\mathcal{F}\) and let \(\widehat{\mathcal{H}}\coloneqq\beta^{-1}\mathcal{H}\). Let \(\widehat{f}\colon\widehat{X}\to\widehat{Z}\) be the morphism which induces \(\widehat{\mathcal{H}}\). Note in particular that \(\widehat{f}\) is equidimensional. By definition, there exists a purely transcendental foliation \(\widehat{\mathcal{G}}\) on \(\widehat{Z}\) such that \(\widehat{\mathcal{F}}=\widehat{f}^{-1}\widehat{\mathcal{G}}\).
We may write
\[K_{\widehat{\mathcal{F}}}+F=\beta^{*}K_{\mathcal{F}}\]
and
\[K_{\widehat{\mathcal{H}}}+G=\beta^{*}K_{\mathcal{H}}\]
where \(F,G\) are \(\beta\)-exceptional and by Lemma 5.2, we have that \(G\geq 0\).
By Lemma 5.5 we have
\[K_{\widehat{\mathcal{H}}}+\widehat{f}^{*}K_{\widehat{\mathcal{G}}}+R(\widehat {f})_{\mathrm{n-inv}}=K_{\widehat{\mathcal{F}}},\]
where \(R(\widehat{f})_{\mathrm{n-inv}}\) denotes the part of \(R(\widehat{f})\) which is not \(\widehat{\mathcal{F}}\)-invariant. Thus,
\[G-F\sim_{\mathbb{Q},\beta}\widehat{f}^{*}K_{\widehat{\mathcal{G}}}+R(\widehat {f})_{\mathrm{n-inv}}.\]
Assume by contradiction that there exists a \(\beta\)-exceptional prime divisor \(E\) which is not \(\widehat{\mathcal{H}}\)-invariant and such that \(a(E,\mathcal{H})<a(E,\mathcal{F})\). Let \(c_{X}(E)\) be the centre of \(E\) on \(X\). By Remark 5.4 up to shrinking \(X\) about \(c_{X}(E)\) we may assume that every \(\beta\)-exceptional divisor is not \(\widehat{\mathcal{H}}\)-invariant. Note that the coefficient of \(G-F\) along \(E\) is positive.
Since \(\widehat{\mathcal{G}}\) is purely transcendental it follows by [1, Corollary 4.13] that \(K_{\widehat{\mathcal{G}}}\) is pseudo-effective. By the negativity lemma, there exists a component \(\Sigma\) of \(G-F\) which is covered by curves \(\xi\) which are contracted by \(\beta\) and such that \((G-F)\cdot\xi<0\). Since \(\Sigma\) is not \(\widehat{\mathcal{H}}\)-invariant, we get a contradiction.
We now prove our second claim. Assume by contradiction that \(\mathcal{F}\) admits canonical singularities but \(f\) is not almost holomorphic. Then there exists a \(\beta\)-exceptional divisor \(E\) which dominates \(\widehat{Z}\). By Lemma 5.2, we have that \(E\) is contained in the support of \(G\). Thus, \(a(E,\mathcal{F})\leq a(E,\mathcal{H})<0\), a contradiction.
**Corollary 5.7**.: _Let \(X\) be a smooth projective variety and let \(\mathcal{F}\) be a foliation on \(X\) with canonical singularities. Suppose that \(-K_{\mathcal{F}}\) is nef and is not numerically trivial._
_Then the algebraic part of \(\mathcal{F}\) is induced by an equidimensional fibration._
Proof.: By [1, Corollary 4.13], the algebraic part \(\mathcal{H}\) of \(\mathcal{F}\) is non-trivial, and by Theorem 5.6, \(\mathcal{H}\) is induced by an almost holomorphic map \(f\colon X\dasharrow Z\). Following the proof of [1, Claim 4.3] we see that \(K_{\mathcal{F}}\equiv K_{\mathcal{H}}\) and, in particular \(-K_{\mathcal{H}}\) is nef. The result then follows by [1, Corollary 1.4].
|
2309.15937 | The pluriclosed flow for $T^2$-invariant Vaisman metrics on the
Kodaira-Thurston surface | In this note we study $T^2$-invariant pluriclosed metrics on the
Kodaira-Thurston surface. We obtain a characterization of $T^2$-invariant
Vaisman metrics, and notice that the Kodaira-Thurston surface admits Vaisman
metrics with non-constant scalar curvature. Then we study the behaviour of the
Vaisman condition in relation to the pluriclosed flow. As a consequence, we
show that if the initial metric on the Kodaira-Thurston surface is a
$T^2$-invariant Vaisman metric, then the pluriclosed flow preserves the Vaisman
condition, extending to the non-constant scalar curvature case the previous
results in [6]. | Anna Fino, Gueo Grantcharov, Eddy Perez | 2023-09-27T18:22:46Z | http://arxiv.org/abs/2309.15937v1 | # The pluriclosed flow for \(T^{2}\)-invariant Vaisman metrics on the Kodaira-Thurston surface
###### Abstract.
In this note we study \(T^{2}\)-invariant pluriclosed metrics on the Kodaira-Thurston surface. We obtain a characterization of \(T^{2}\)-invariant Vaisman metrics, and notice that the Kodaira-Thurston surface admits Vaisman metrics with non-constant scalar curvature. Then we study the behaviour of the Vaisman condition in relation to the pluriclosed flow. As a consequence, we show that if the initial metric on the Kodaira-Thurston surface is a \(T^{2}\)-invariant Vaisman metric, then the pluriclosed flow preserves the Vaisman condition, extending to the non-constant scalar curvature case the previous result in [6].
Key words and phrases: Vaisman metric, pluriclosed flow. 2010 Mathematics Subject Classification: 53C55; 53C05; 22E25; 53C30; 53C44
## 1. introduction
Given a Hermitian manifold \((M,J,g)\) of complex dimension \(n\), the Bismut connection \(\nabla^{B}\) (also known as the Strominger connection) is the unique connection on \(M\) that is Hermitian (i.e. \(\nabla^{B}J=0\), \(\nabla^{B}g=0\)) and has totally skew-symmetric torsion tensor. Its explicit expression appeared in Strominger's paper [21] and independently in Bismut's paper [4], where \(\nabla^{B}\) was used to study local index theorems.
A Hermitian metric \(g\) on a complex manifold is called _pluriclosed_ if its fundamental form \(\omega\) satisfies \(\partial\overline{\partial}\omega=0\) or, equivalently, if the Bismut torsion \(3\)-form is closed. This is a weaker restriction than the Kahler condition on \(g\), and every compact complex surface admits pluriclosed metrics [8]. Pluriclosed metrics have been studied by many authors. We refer for instance to [1, 7, 24, 23] and the references therein for more background and results.
In [17] Streets and Tian introduced a parabolic flow of pluriclosed metrics, also called the _pluriclosed flow_, which is defined by the equation
\[\frac{\partial}{\partial\,t}\omega(t)=-(\rho^{B})^{1,1}\,,\qquad\omega(0)= \omega_{0},\]
where \((\rho^{B})^{1,1}:=(\rho^{B}(\omega(t)))^{1,1}\) denotes the \((1,1)\)-part of the Ricci form of the Bismut connection and \(\omega_{0}\) is the fundamental form of a fixed pluriclosed metric. This flow preserves the pluriclosed condition and also the existence of generalized Kahler structures [18].
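As a basic example (a standard observation, recorded here only for orientation and not specific to this note): if the initial metric \(\omega_{0}\) is Kahler, the Bismut connection coincides with the Chern and Levi-Civita connections, \((\rho^{B})^{1,1}\) is the usual Ricci form \(\rho\), and the pluriclosed flow reduces to the Kahler-Ricci flow
\[\frac{\partial}{\partial\,t}\omega(t)=-\rho(\omega(t)),\qquad\omega(0)=\omega_{0}.\]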
The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. 
The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. 
The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds. The classical theory of compact manifolds is the classical theory of compact manifolds of compact manifolds. The classical theory of compact manifolds is the
The content of the paper is as follows: In Section 2 we collect the main known results about the Vaisman manifolds which we need later. Then in Section 3 we prove the main result.
## 2. Preliminaries
Let \((M,J)\) be a complex manifold of complex dimension \(n\) and let \(g\) be a Hermitian metric on \(M\) with associated fundamental form \(\omega(\cdot\,,\cdot)=g(\cdot,J\cdot)\,\). An affine connection is called Hermitian if it preserves the metric \(g\) and the complex structure \(J\). In particular, Gauduchon in [9] proved that there exists an affine line \(\left\{\nabla^{t}\right\}_{t\in\mathbb{R}}\) of canonical Hermitian connections, passing through the _Chern connection_ and the _Bismut connection_; these connections are completely determined by their torsion. Let \(\nabla\) be a Hermitian connection and let \(T(X,Y)=\nabla_{X}Y-\nabla_{Y}X-[X,Y]\) be its torsion; we denote with the same symbol
\[T(X,Y,Z):=g(T(X,Y),Z).\]
Then the Chern connection \(\nabla^{Ch}\) is the unique Hermitian connection whose torsion has trivial \((1,1)\)-component and the Bismut connection (also called Strominger connection) \(\nabla^{B}\) is the unique Hermitian connection with totally skew-symmetric torsion. In particular, the torsion of the Bismut connection satisfies
\[T^{B}(X,Y,Z)=d^{c}\omega(X,Y,Z),\]
where \(d^{c}=-J^{-1}dJ\).
The Chern and Bismut connections are related to the Levi-Civita connection \(\nabla^{LC}\) by
\[g(\nabla^{B}_{X}Y,Z)=g(\nabla^{LC}_{X}Y,Z)+\frac{1}{2}d^{c}\omega(X,Y,Z)\,,\]
\[g(\nabla^{Ch}_{X}Y,Z)=g(\nabla^{LC}_{X}Y,Z)+\frac{1}{2}d\omega(JX,Y,Z)\,.\]
A Hermitian metric \(g\) on a complex manifold \((M,J)\) is called _pluriclosed_ or _strong Kahler with torsion_ (_SKT_ for brevity) if \(T^{B}\) is a closed 3-form, namely \(dT^{B}=0\), or equivalently \(dd^{c}\omega=0\).
Recall that the trace of the torsion of the Chern connection is equal to the Lee form of \(g\) (cf. [8]), that is the 1-form defined by
\[\theta=Jd^{*}\omega,\]
where \(d^{*}\) is the adjoint of the exterior derivative \(d\) with respect to \(g\), or equivalently \(\theta\) is the unique 1-form satisfying
\[d\omega^{n-1}=\theta\wedge\omega^{n-1}.\]
A Hermitian metric \(g\) is called _Gauduchon_ if \(dd^{c}\omega^{n-1}=0,\) or equivalently \(d^{*}\theta=0.\) In particular, in complex dimension \(2\) Gauduchon and pluriclosed metrics coincide.
We recall the following
**Definition 2.1**.: _A Hermitian metric \(g\) on a complex manifold \((M,J)\) is called locally conformally Kahler (LCK for brevity) if_
\[d\omega=\alpha\wedge\omega,\]
_where \(\alpha\) is a \(d\)-closed \(1\)-form. In particular, \(\alpha=\frac{1}{n-1}\theta\) and \(\theta\) is \(d\)-closed._
_A locally conformally Kahler metric \(g\) is called Vaisman if the Lee form is parallel with respect to the Levi-Civita connection \(\nabla^{LC}\), namely_
\[\nabla^{LC}\theta=0\,.\]
An immediate consequence is that the Vaisman metrics are Gauduchon and the norm of the Lee form \(|\theta|\) with respect to them is constant.
So on complex surfaces Vaisman metrics are pluriclosed and \(T^{B}=-*\theta.\)
A Vaisman structure on a complex manifold is uniquely determined (up to a positive constant) by its Lee form \(\theta\) via the following
\[\omega=\frac{1}{|\theta|^{2}}(\theta\wedge J\theta-dJ\theta).\]
Moreover, by [16] the dual Lee vector field \(T=\theta^{\#}\) is holomorphic and Killing. A Vaisman metric is called normalized if the Lee form (or, equivalently, the Lee vector field) has norm \(1\). Moreover, for a Vaisman structure
\[d(J\theta)=\theta\wedge J\theta-|\theta|^{2}\omega\]
is always of type \((1,1)\). By [14] a pluriclosed locally conformally Kahler metric on a compact complex manifold with holomorphic Lee vector field is Vaisman.
If \((M,J,g,\omega,\theta)\) is a normalized Vaisman manifold, then for every positive real number \(b>0\) and a harmonic \(1\)-form \(\alpha\), pointwise orthogonal to \(\theta\) and \(J\theta\), the pair \((\tilde{\omega},\tilde{\theta})\) defined by
\[\tilde{\theta}:=b\,\theta+\alpha,\quad\tilde{\omega}:=\tilde{\theta}\wedge J \tilde{\theta}-dJ\tilde{\theta}\]
is a normalized Vaisman structure on \((M,J)\) (see for instance Lemma 3.2 in [15]). The Vaisman structure \((\tilde{\omega},\tilde{\theta})\) is called a deformation of type I of \((\omega,\theta)\).
There is another type of deformation of Vaisman structures, which preserves the cohomology class of the Lee form. Let \((g,\omega,\theta)\) be a Vaisman structure on \((M,J)\) with Lee vector field \(T\). Let \(f\in\mathcal{C}^{\infty}(M)\) such that \(T(f)=JT(f)=0.\) If one defines the closed \(1\)-form \(\tilde{\theta}\) and the \((1,1)\)-form \(\tilde{\omega}\) by
\[\tilde{\theta}=\theta+df,\quad\tilde{\omega}=|\theta|_{g}^{2}\omega+\theta \wedge Jdf+df\wedge J\theta+df\wedge Jdf-dd^{c}f,\]
then when \(\tilde{\omega}\) is positive, the structure \((\tilde{\omega},\tilde{\theta})\) is a normalized Vaisman structure, called deformation of type II of \((\omega,\theta)\). The Lee vector field \(\tilde{T}\) of \((\tilde{\omega},\tilde{\theta})\) is given by \(\frac{1}{|\theta|_{g}^{2}}T\).
By Proposition 3.7 in [15], if \((M,J,g,\omega,\theta)\) is a compact Vaisman manifold, then any normalized Vaisman structure \((\tilde{\omega},\tilde{\theta})\) on \((M,J)\) is obtained by deformations of type I and II starting from the given Vaisman structure \((\omega,\theta)\).
**Remark 2.1**.: _By formula (2.7) in [1] and [9] on a complex surface the Bismut and Chern Ricci forms are related by the following relation_
\[\rho^{Ch}=\rho^{B}+d(J\theta). \tag{1}\]
_By [6] if \((M,J,\omega)\) is a compact Vaisman surface with Lee form \(\theta\), then \(\rho^{Ch}=h\,dJ\theta\) for some \(h\in\mathcal{C}^{\infty}(M,\mathbb{R})\). Moreover, the scalar curvature of \(\omega\) is constant if and only if \(h\) is constant. Clearly when \(h\) is constant, \(c_{1}(M)=0\), but in [11] it was noted that every compact Vaisman surface has vanishing real first Chern class._
_Note that, given a normalized Vaisman structure \((\omega,\theta)\) on a complex surface, if we consider an orthonormal basis \((\theta,J\theta,\xi,J\xi)\) then_
\[\omega=\xi\wedge J\xi+\theta\wedge J\theta,\quad\rho^{B}=(h-1)\xi\wedge J\xi=( h-1)dJ\theta.\]
_Therefore \(h-1=\rho^{B}(\xi^{\#},J\xi^{\#})\)._
_We note also that by [11] the Vaisman manifolds with vanishing (real) first Chern class fall into 3 classes, depending on the sign of the first Bott-Chern class. In what follows we focus on the Kodaira-Thurston surface, which has \(c_{1}^{BC}=0\)._
## 3. \(T^{2}\)-invariant pluriclosed metrics on the Kodaira-Thurston surface
The _Kodaira-Thurston surface_ is defined as the compact 4-manifold
\[M=Nil^{3}/\Gamma\times S^{1},\]
where \(Nil^{3}\) is the 3-dimensional real Heisenberg group
\[Nil^{3}=\left\{\left[\begin{smallmatrix}1&x&z\\ 0&1&y\\ 0&0&1\end{smallmatrix}\right]\mid x,y,z\in\mathbb{R}\right\},\]
and \(\Gamma\) is the lattice in \(Nil^{3}\) of matrices having integers entries.
Therefore \(M\) is parallelizable and has a global left-invariant co-frame
\[e^{1}=dy,\quad e^{2}=dx,\quad e^{3}=dw,\quad e^{4}=dz-xdy\]
satisfying the structure equations
\[de^{1}=de^{2}=de^{3}=0,\quad de^{4}=e^{12},\]
with
\[e^{ij}=e^{i}\wedge e^{j}.\]
The dual left-invariant frame is given by
\[e_{1}=\partial_{y}+x\partial_{z},\,e_{2}=\partial_{x},\,\,e_{3}=\partial_{w}, \,e_{4}=\partial_{z}.\]
Every smooth map \(u\colon M\to\mathbb{R}\) can be regarded as a smooth map \(u\colon\mathbb{R}^{4}\to\mathbb{R}\) satisfying the periodicity condition
\[u(x+j,y+k,z+jy+m,w+n)=u(x,y,z,w),\]
for all \((x,y,z,w)\in\mathbb{R}^{4}\) and \((j,k,m,n)\in\mathbb{Z}^{4}\). We consider on \(M\) the complex structure given by
\[Je^{1}=e^{2},\qquad Je^{3}=e^{4}.\]
A global frame of \((1,0)\)-forms is given by
\[\varphi^{1}=e^{1}+ie^{2}=dy+idx,\quad\varphi^{2}=e^{3}+ie^{4}=dw+i(dz-xdy).\]
Therefore
\[d\varphi^{1}=0,\quad d\varphi^{2}=-\frac{1}{2}\varphi^{1\overline{1}}.\]
Moreover, \(y+ix\) and \(w+iz-\frac{1}{2}x^{2}\) are local holomorphic coordinates.
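These structure equations can be checked mechanically. The following SymPy sketch (not part of the original argument) encodes a 1-form by its coefficient vector in the coordinate basis \((dx,dy,dz,dw)\) and a 2-form by the corresponding antisymmetric coefficient matrix; the helpers `d` and `wedge` are ad hoc and introduced only for this verification.

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w', real=True)
coords = (x, y, z, w)

def d(a):
    """Exterior derivative of a 1-form (coefficient vector) as a matrix of dx^i ^ dx^j coefficients."""
    return sp.Matrix(4, 4, lambda i, j: sp.diff(a[j], coords[i]) - sp.diff(a[i], coords[j]))

def wedge(a, b):
    """Coefficient matrix of the wedge product of two 1-forms."""
    return sp.Matrix(4, 4, lambda i, j: a[i]*b[j] - a[j]*b[i])

# left-invariant coframe, written in the coordinate basis (dx, dy, dz, dw)
e1 = sp.Matrix([0, 1, 0, 0])     # e^1 = dy
e2 = sp.Matrix([1, 0, 0, 0])     # e^2 = dx
e3 = sp.Matrix([0, 0, 0, 1])     # e^3 = dw
e4 = sp.Matrix([0, -x, 1, 0])    # e^4 = dz - x dy

assert d(e1).is_zero_matrix and d(e2).is_zero_matrix and d(e3).is_zero_matrix
assert (d(e4) - wedge(e1, e2)).is_zero_matrix          # de^4 = e^{12}

phi1, phi1bar = e1 + sp.I*e2, e1 - sp.I*e2             # phi^1 and its conjugate
phi2 = e3 + sp.I*e4
assert d(phi1).is_zero_matrix
assert (d(phi2) + sp.Rational(1, 2)*wedge(phi1, phi1bar)).expand().is_zero_matrix
print("structure equations verified")
```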
Since \(Nil^{3}/\Gamma\times S^{1}=(Nil^{3}\times\mathbb{R})/(\Gamma\times\mathbb{Z})\), the Kodaira-Thurston surface \(M\) is a 2-step nilmanifold and every left-invariant Hermitian structure on \(Nil^{3}\times\mathbb{R}\) projects to a Hermitian structure on \(M\). Moreover, the compact 3-dimensional manifold \(N=Nil^{3}/\Gamma\) is the total space of an \(S^{1}\)-bundle over a 2-dimensional torus \(T^{2}\) with projection \(\pi_{xy}\colon N\to T^{2}_{xy}\), and \(M\) inherits the structure of a principal \(T^{2}\)-bundle over the 2-dimensional torus \(T^{2}_{xy}\). Then it makes sense to consider differential forms invariant by the action of the fiber \(T^{2}_{zw}\). A \(k\)-form \(\phi\) on \(M\) is invariant by the action of the fiber \(T^{2}_{zw}\) if its coefficients with respect to the global basis \(e^{j_{1}}\wedge\cdots\wedge e^{j_{k}}\) do not depend on the variables \(z,w\).
An arbitrary \(T^{2}\)-invariant \(J\)-invariant metric \(g\) on \(M\) has associated fundamental form
\[\omega=\frac{1}{2}ir\,\varphi^{1\overline{1}}+\frac{1}{2}is\,\varphi^{2 \overline{2}}+\frac{1}{2}(u\varphi^{1\overline{2}}-\overline{u}\varphi^{2 \overline{1}})=re^{12}+se^{34}+u_{1}(e^{13}+e^{24})+u_{2}(e^{14}-e^{23}),\]
where \(r,s\) are real functions \(r=r(x,y),s=s(x,y)\) and \(u=u(x,y)\) is a complex-valued function such that \(r(x,y)>0\), \(s(x,y)>0\), \(r(x,y)\,s(x,y)>|u(x,y)|^{2}\), i.e. the Hermitian matrix
\[H=\frac{1}{2}\left(\begin{array}{cc}r&-iu\\ i\overline{u}&s\end{array}\right)\]
is positive definite. In particular, if \(u=u_{1}+iu_{2}\), we have
\[\begin{array}{l}g(e_{1},e_{1})=g(e_{2},e_{2})=r,\quad g(e_{3},e_{3})=g(e_{4 },e_{4})=s,\\ g(e_{1},e_{3})=u_{2}=g(e_{2},e_{4}),\quad g(e_{1},e_{4})=-u_{1}=-g(e_{2},e_{3 }).\end{array}\]
Note that \(\omega\) is left-invariant if and only if \(r,s\) and \(u\) are constant functions.
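The equality between the complex and the real expressions of \(\omega\) above is a pointwise algebraic identity and can be verified with the same coefficient-matrix encoding of forms; the SymPy sketch below is only a check and introduces nothing beyond the helper `wedge`.

```python
import sympy as sp

x, r, s, u1, u2 = sp.symbols('x r s u1 u2', real=True)
u, ubar = u1 + sp.I*u2, u1 - sp.I*u2

def wedge(a, b):
    """Coefficient matrix of the wedge product of two 1-forms."""
    return sp.Matrix(4, 4, lambda i, j: a[i]*b[j] - a[j]*b[i])

# coframe in the coordinate basis (dx, dy, dz, dw)
e1, e2 = sp.Matrix([0, 1, 0, 0]), sp.Matrix([1, 0, 0, 0])
e3, e4 = sp.Matrix([0, 0, 0, 1]), sp.Matrix([0, -x, 1, 0])
phi1, phi1b = e1 + sp.I*e2, e1 - sp.I*e2
phi2, phi2b = e3 + sp.I*e4, e3 - sp.I*e4

omega_cx = (sp.I*r/2)*wedge(phi1, phi1b) + (sp.I*s/2)*wedge(phi2, phi2b) \
    + sp.Rational(1, 2)*(u*wedge(phi1, phi2b) - ubar*wedge(phi2, phi1b))
omega_re = r*wedge(e1, e2) + s*wedge(e3, e4) \
    + u1*(wedge(e1, e3) + wedge(e2, e4)) + u2*(wedge(e1, e4) - wedge(e2, e3))

assert (omega_cx - omega_re).expand().is_zero_matrix
print("complex and real expressions of omega agree")
```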
Moreover
\[dr=r_{x}e^{2}+r_{y}e^{1}=-\frac{1}{2}ir_{x}(\varphi^{1}-\overline{\varphi}^{1 })+\frac{1}{2}r_{y}(\varphi^{1}+\overline{\varphi}^{1})\]
and a similar relation holds for \(ds\), \(du\) and \(d\overline{u}\). As a consequence
\[\partial r=\left(\frac{1}{2}r_{y}-\frac{1}{2}ir_{x}\right)\varphi^{1},\quad \overline{\partial}r=\left(\frac{1}{2}r_{y}+\frac{1}{2}ir_{x}\right)\overline {\varphi}^{1}.\]
Therefore, one has
\[d\omega=\frac{1}{4}(s_{x}+is_{y})\varphi^{12\overline{2}}+\frac{1}{4}(s_{x}-is_{y}) \varphi^{\overline{12}2}+\frac{1}{4}(-\overline{u}_{y}+is+i\overline{u}_{x}) \varphi^{12\overline{1}}+\frac{1}{4}(-u_{y}-is-iu_{x})\varphi^{\overline{12}1}\]
It follows
\[\overline{\partial}\omega=\frac{1}{4}(s_{x}-is_{y})\varphi^{\overline{12}2}+\frac{1}{4}(-u_{y}-is-iu_{x})\varphi^{1\overline{12}}.\]
Since \(\varphi^{\overline{12}2}\) and \(\varphi^{1\overline{12}}\) are both \(\partial\)-closed, we get
\[\partial\overline{\partial}\omega = \tfrac{1}{4}\partial(s_{x}-is_{y})\varphi^{\overline{12}2}+ \tfrac{1}{4}\partial(-u_{y}-is-iu_{x})\varphi^{1\overline{12}}\] \[= \tfrac{1}{4}\partial(s_{x}-is_{y})\varphi^{\overline{12}2}\]
**Proposition 3.1**.: _Let \(\omega=\frac{1}{2}ir(x,y)\,\varphi^{1\overline{1}}+\frac{1}{2}is(x,y)\,\varphi^{2\overline{2}}+\frac{1}{2}(u(x,y)\varphi^{1\overline{2}}-\overline{u}(x,y)\varphi^{2\overline{1}})\) be the fundamental form of a \(T^{2}\)-invariant Hermitian metric on the Kodaira-Thurston surface \(M\)._
1. \(\omega\) _is pluriclosed if and only if_ \(s(x,y)=s\) _is a constant function._
2. _If_ \(\omega\) _is pluriclosed, then the Bismut Ricci form has the following expression_ (2) \[(\rho^{B})^{1,1} = \tfrac{i}{2}\left(-\tfrac{1}{2}\partial_{x}^{2}\left(\log(rs-|u| ^{2})\right)-\tfrac{1}{2}\partial_{y}^{2}\left(\log(rs-|u|^{2})\right)-(h_{1}) _{y}-(h_{2})_{x}-h_{3}\right)\,\varphi^{1\overline{1}}\] \[-(-\tfrac{1}{4}(h_{3})_{x}+\tfrac{1}{4}(h_{4})_{y}+\tfrac{i}{4}(h _{3})_{y}+\tfrac{i}{4}(h_{4})_{x})\,\varphi^{1\overline{2}}\] \[+(-\tfrac{1}{4}(h_{3})_{x}+\tfrac{1}{4}(h_{4})_{y}-\tfrac{i}{4}(h _{3})_{y}-\tfrac{i}{4}(h_{4})_{x})\,\varphi^{2\overline{1}},\] _where_ \(u_{1}\) _and_ \(u_{2}\) _are respectively the real part and the imaginary part of_ \(u\) _and_ \[h_{1} = \tfrac{1}{s}(u_{2}h_{3}-u_{1}h_{4}),\] \[h_{2} = \tfrac{1}{s}(u_{1}h_{3}+u_{2}h_{4}),\] \[h_{3} = \tfrac{s}{(rs-|u|^{2})}(-s-(u_{1})_{x}-(u_{2})_{y}),\] \[h_{4} = \tfrac{s}{(rs-|u|^{2})}((u_{1})_{y}-(u_{2})_{x})\]
Proof.: To prove \((a)\) we use that \(\partial\overline{\partial}\omega=0\) if and only if \(\partial(s_{x}-is_{y})=0\). Since
\[\partial(s_{x}-is_{y})=-\frac{1}{2}i(s_{xx}+s_{yy})\varphi^{1}\]
we get that the pluriclosed condition is equivalent to \(s_{xx}+s_{yy}=0\), i.e. to \(s\) being harmonic in \(x,y\); since \(s\) descends to the compact torus \(T^{2}_{xy}\), this forces \(s\) to be constant.
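This step can also be checked symbolically, using the expression for \(\partial f\) recalled above; a minimal SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
s = sp.Function('s')(x, y)

def del_coeff(f):
    """Coefficient of phi^1 in the (1,0)-part of df, as recalled in the text."""
    return sp.Rational(1, 2)*sp.diff(f, y) - sp.I/2*sp.diff(f, x)

lhs = del_coeff(sp.diff(s, x) - sp.I*sp.diff(s, y))
rhs = -sp.I/2*(sp.diff(s, x, 2) + sp.diff(s, y, 2))
assert sp.simplify(lhs - rhs) == 0
print("partial(s_x - i s_y) is proportional to s_xx + s_yy")
```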
To prove \((b)\) we will use (1). If we write the complex function \(u\) as \(u=u_{1}+iu_{2}\) for a \(T^{2}\)-invariant pluriclosed metric we get
\[d\omega=(-s-(u_{1})_{x}-(u_{2})_{y})\,e^{123}+((u_{1})_{y}-(u_{2})_{x})\,e^{124}\]
with \(s\) constant. Let
\[\theta=h_{1}e^{1}+h_{2}e^{2}+h_{3}e^{3}+h_{4}e^{4}.\]
with \(h_{i}(x,y)\) real functions. By a direct computation we have
\[\theta\wedge\omega=(sh_{1}+u_{1}h_{4}-h_{3}u_{2})e^{134}+(u_{1}h_{1}-u_{2}h_{2}+rh_{4})e^{124}\] \[+(-u_{2}h_{1}-u_{1}h_{2}+rh_{3})e^{123}+(sh_{2}-u_{1}h_{3}-u_{2}h_{4})e^{234}.\]
By imposing
\[d\omega=\theta\wedge\omega\]
we obtain the system
\[\left\{\begin{array}{l}-u_{2}h_{1}-u_{1}h_{2}+rh_{3}=-s-(u_{1})_{x}-(u_{2})_{y},\\ u_{1}h_{1}-u_{2}h_{2}+rh_{4}=(u_{1})_{y}-(u_{2})_{x},\\ sh_{1}-u_{2}h_{3}+u_{1}h_{4}=0,\\ sh_{2}-u_{1}h_{3}-u_{2}h_{4}=0\end{array}\right.\]
in the variables \(h_{i}\). Therefore
\[\begin{array}{rcl}h_{1}&=&\frac{1}{(rs-|u|^{2})}[u_{2}(-s-(u_{1})_{x}-(u_{2}) _{y})-u_{1}((u_{1})_{y}-(u_{2})_{x})]=\frac{1}{s}(u_{2}h_{3}-u_{1}h_{4}),\\ h_{2}&=&\frac{1}{(rs-|u|^{2})}[u_{1}(-s-(u_{1})_{x}-(u_{2})_{y})+u_{2}((u_{1})_{ y}-(u_{2})_{x})]=\frac{1}{s}(u_{1}h_{3}+u_{2}h_{4}),\\ h_{3}&=&\frac{s}{(rs-|u|^{2})}(-s-(u_{1})_{x}-(u_{2})_{y}),\\ h_{4}&=&\frac{s}{(rs-|u|^{2})}((u_{1})_{y}-(u_{2})_{x})\end{array}\]
and as a consequence
\[\begin{array}{rcl}d(J\theta)&=&dh_{1}\wedge e^{2}-dh_{2}\wedge e^{1}+dh_{3} \wedge e^{4}+h_{3}e^{1}\wedge e^{2}-dh_{4}\wedge e^{3}\\ &=&[(h_{1})_{y}+(h_{2})_{x}+h_{3}]\,e^{12}+(h_{3})_{x}e^{24}+(h_{3})_{y}e^{14}- (h_{4})_{x}e^{23}+(h_{4})_{y}e^{13}.\end{array}\]
Since
\[\begin{array}{rcl}e^{12}=\frac{i}{2}\varphi^{1\overline{1}},\\ e^{13}=\frac{1}{4}(\varphi^{12}+\varphi^{1\overline{2}}-\varphi^{2\overline{1 }}+\varphi^{\overline{12}}),\\ e^{14}=-\frac{i}{4}(\varphi^{12}-\varphi^{1\overline{2}}-\varphi^{2 \overline{1}}-\varphi^{\overline{12}}),\\ e^{23}=-\frac{i}{4}(\varphi^{12}+\varphi^{1\overline{2}}+\varphi^{2 \overline{1}}+\varphi^{\overline{12}}),\\ e^{24}=\frac{1}{4}(\varphi^{12}-\varphi^{1\overline{2}}+\varphi^{2 \overline{1}}+\varphi^{\overline{12}}),\end{array}\]
we get
\[\begin{array}{rcl}(d(J\theta))^{1,1}&=&\frac{i}{2}[(h_{1})_{y}+(h_{2})_{x}+h_ {3}]\,\varphi^{1\overline{1}}+(-\frac{1}{4}(h_{3})_{x}+\frac{1}{4}(h_{4})_{y}+ \frac{i}{4}(h_{3})_{y}+\frac{i}{4}(h_{4})_{x})\,\varphi^{1\overline{2}}\\ &&-(-\frac{1}{4}(h_{3})_{x}+\frac{1}{4}(h_{4})_{y}-\frac{i}{4}(h_{3})_{y}-\frac{ i}{4}(h_{4})_{x})\,\varphi^{2\overline{1}}.\end{array}\]
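As a side check, the linear system for the \(h_{i}\) solved a few lines above can be handled mechanically. In the SymPy sketch below, \(A\) and \(B\) abbreviate \(-s-(u_{1})_{x}-(u_{2})_{y}\) and \((u_{1})_{y}-(u_{2})_{x}\); they are introduced only for this verification.

```python
import sympy as sp

r, s, u1, u2, A, B = sp.symbols('r s u1 u2 A B', real=True)
h1, h2, h3, h4 = sp.symbols('h1 h2 h3 h4', real=True)

sol = sp.solve([
    sp.Eq(-u2*h1 - u1*h2 + r*h3, A),
    sp.Eq(u1*h1 - u2*h2 + r*h4, B),
    sp.Eq(s*h1 - u2*h3 + u1*h4, 0),
    sp.Eq(s*h2 - u1*h3 - u2*h4, 0),
], [h1, h2, h3, h4], dict=True)[0]

det = r*s - u1**2 - u2**2
assert sp.simplify(sol[h3] - s*A/det) == 0
assert sp.simplify(sol[h4] - s*B/det) == 0
assert sp.simplify(sol[h1] - (u2*sol[h3] - u1*sol[h4])/s) == 0
assert sp.simplify(sol[h2] - (u1*sol[h3] + u2*sol[h4])/s) == 0
print("the displayed h_i solve the linear system")
```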
To find the Chern-Ricci form \(\rho^{C}\) we use that it is the curvature of the canonical bundle which gives the formula
\[\rho^{C}=-\sqrt{-1}\partial\overline{\partial}\log\det(g_{\alpha\overline{ \beta}})=-\frac{1}{2}dd^{c}\log\det(g_{\alpha\overline{\beta}}),\]
where \((g_{\alpha\overline{\beta}})\) is the matrix of \(g\) in any local holomorphic \((1,0)\)-frame. For such a frame we can use \(\chi^{1}=\varphi^{1}=d(y+ix)\) and \(\chi^{2}=d(w+iz-\frac{1}{2}x^{2})\), since \(y+ix\) and \(w+iz-\frac{1}{2}x^{2}\) are holomorphic coordinates on the universal cover of \(M\) and the fundamental group acts properly discontinuously. Since the \((1,0)\)-form \(\varphi^{2}=dw+i(dz-xdy)\) is global, we can see that
\[\chi^{2}=\varphi^{2}+ix\varphi^{1}.\]
If \(\tilde{H}\) is the Hermitian matrix of the metric \(g\) in the basis \(\chi^{1},\chi^{2}\), and \(H\) as above is the Hermitian matrix of \(g\) in the basis \(\varphi^{1},\varphi^{2}\), by the change of basis formula we get
\[\det(\tilde{H})=\det(H)=rs-|u|^{2}.\]
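This only uses that the change of holomorphic coframe is triangular with unit determinant; a minimal SymPy check with a generic Hermitian matrix (the symbols \(a,b,c\) are placeholders used only here):

```python
import sympy as sp

x = sp.symbols('x', real=True)
a, c = sp.symbols('a c', real=True)
b = sp.symbols('b')                              # complex off-diagonal entry

H = sp.Matrix([[a, b], [sp.conjugate(b), c]])    # generic Hermitian matrix in the frame (phi^1, phi^2)
A = sp.Matrix([[1, 0], [-sp.I*x, 1]])            # phi^1 = chi^1, phi^2 = -i x chi^1 + chi^2, det A = 1
Htilde = A.T * H * A.conjugate()                 # matrix in the holomorphic coframe (chi^1, chi^2)

assert sp.simplify(Htilde.det() - H.det()) == 0
print("the determinant is unchanged by the coframe change")
```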
In particular, we have
\[\rho^{C}=-\frac{1}{2}dd^{c}\log(rs-|u|^{2}).\]
From here and the fact that \(r,s,u\) depend only on \(x,y\) we get the formula
\[\rho^{C}=-\frac{1}{2}(\partial_{x}^{2}+\partial_{y}^{2})\log(rs-|u|^{2})\,e^{ 12}.\]
Note that when \(u=0\) one gets the formula from [10, Lemma 3]. From here we get (2).
Then the pluriclosed flow can be written as the system of PDE's
\[\left\{\begin{array}{rcl}\frac{\partial r}{\partial t}&=&\frac{1}{2}( \partial_{x}^{2}+\partial_{y}^{2})\left(\log(rs-|u|^{2})\right)+\frac{1}{2} \partial_{y}\left(\frac{r_{y}}{r}\right)+(h_{1})_{y}+(h_{2})_{x}+h_{3},\\ \frac{\partial u_{1}}{\partial t}&=&-\frac{1}{2}(h_{3})_{x}+\frac{1}{2}(h_{4 })_{y},\\ \frac{\partial u_{2}}{\partial t}&=&\frac{1}{2}(h_{3})_{y}+\frac{1}{2}(h_{4 })_{x},\\ \frac{\partial s}{\partial t}&=&0,\end{array}\right.\]
where
\[\begin{array}{rcl}h_{1}&=&\frac{1}{s}(u_{2}h_{3}-u_{1}h_{4}),\\ h_{2}&=&\frac{1}{s}(u_{1}h_{3}+u_{2}h_{4}),\\ h_{3}&=&\frac{s}{(rs-|u|^{2})}(-s-(u_{1})_{x}-(u_{2})_{y}),\\ h_{4}&=&\frac{s}{(rs-|u|^{2})}((u_{1})_{y}-(u_{2})_{x}).\end{array}\]
By a direct computation
\[\begin{array}{rcl}(h_{1})_{y}+(h_{2})_{x}+h_{3}&=&-\frac{1}{s^{2}}(h_{3}^{2} +h_{4}^{2})(rs-|u|^{2})+\frac{1}{s}u_{1}\left(-(h_{3})_{x}+(h_{4})_{y}\right) \\ &&+\frac{1}{s}u_{2}\left((h_{3})_{y}+(h_{4})_{x}\right)\\ &=&-\frac{1}{s^{2}}(h_{3}^{2}+h_{4}^{2})(rs-|u|^{2})+\frac{1}{s}\partial_{t}(| u|^{2})\end{array}\]
and so the system reduces to
\[\left\{\begin{array}{rcl}\frac{\partial r}{\partial t}&=&\frac{1}{2}(\partial_{x}^{2}+\partial_{y}^{2})\left(\log(rs-|u|^{2})\right)+\frac{1}{s^{2}}(h_{3}^{2}+h_{4}^{2})(rs-|u|^{2})+\frac{1}{s}\partial_{t}(|u|^{2}),\\ \frac{\partial u_{1}}{\partial t}&=&-\frac{1}{2}(h_{3})_{x}+\frac{1}{2}(h_{4})_{y},\\ \frac{\partial u_{2}}{\partial t}&=&\frac{1}{2}(h_{3})_{y}+\frac{1}{2}(h_{4})_{x},\\ \frac{\partial s}{\partial t}&=&0,\\ h_{3}&=&\frac{s}{(rs-|u|^{2})}(-s-(u_{1})_{x}-(u_{2})_{y}),\\ h_{4}&=&\frac{s}{(rs-|u|^{2})}((u_{1})_{y}-(u_{2})_{x}).\end{array}\right. \tag{3}\]
The previous system is parabolic and quasilinear; applying, for instance, the results in [13], it has short-time existence.
**Remark 3.1**.:
1. _Note that we get that_ \(\frac{\partial u_{1}}{\partial t}=0=\frac{\partial u_{2}}{\partial t}\) _if and only if_ \(h_{3}+ih_{4}\) _is a holomorphic function on_ \(T^{2}\) _and so a constant function._
2. _If_ \(r,s\) _and_ \(u\) _are constant, one gets_ \[\left\{\begin{array}{rcl}\frac{\partial r}{\partial t}&=&\frac{s^{2}}{(rs-|u|^{2})},\\ \frac{\partial u_{1}}{\partial t}&=&0,\\ \frac{\partial u_{2}}{\partial t}&=&0.\end{array}\right.\] _An explicit solution of this ODE is sketched right after the remark._
3. _The pluriclosed flow on compact complex surfaces which are the total space of a holomorphic \(T^{2}\)-principal bundle over a Riemann surface \(\Sigma\) has been studied in [19], showing that the solution of the pluriclosed flow with initial data a \(T^{2}\)-invariant metric \(\omega_{0}\) with \(u=0\) exists on \([0,+\infty)\), and that \((M,\omega(t))\) converges in the Gromov-Hausdorff topology to a point._
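As announced in item 2 of Remark 3.1, the constant-coefficient ODE can be integrated explicitly: writing \(U:=|u|^{2}\) and \(D_{0}:=r(0)s-|u|^{2}>0\), one can check that \(r(t)=\big(U+\sqrt{D_{0}^{2}+2s^{3}t}\big)/s\). A minimal SymPy verification of this closed form (the symbols \(U\) and \(D_{0}\) are abbreviations used only here):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
s, U, D0 = sp.symbols('s U D_0', positive=True)      # U = |u|^2, D_0 = r(0)s - |u|^2

r = (U + sp.sqrt(D0**2 + 2*s**3*t))/s                # candidate closed-form solution
assert sp.simplify(sp.diff(r, t) - s**2/(r*s - U)) == 0
print("r(t) solves dr/dt = s^2/(rs - |u|^2); r grows like sqrt(t)")
```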
**Theorem 3.1**.: _Let \(\omega=\frac{1}{2}ir(x,y)\,\varphi^{1\overline{1}}+\frac{1}{2}is\,\varphi^{2 \overline{2}}+\frac{1}{2}(u(x,y)\varphi^{1\overline{2}}-\overline{u}(x,y) \varphi^{2\overline{1}})\) be the fundamental form of a \(T^{2}\)-invariant pluriclosed metric \(g\) on the Kodaira-Thurston surface \(M\). Then \(\omega\) is Vaisman if and only if the functions_
\[h_{3} = \frac{s}{(rs-|u|^{2})}(-s-(u_{1})_{x}-(u_{2})_{y}),\] \[h_{4} = \frac{s}{(rs-|u|^{2})}((u_{1})_{y}-(u_{2})_{x})\]
_are both constant. Moreover, if \(\omega\) is Vaisman, the Lee vector field \(T\) is given by \(T=\frac{h_{3}}{s}e_{3}+\frac{h_{4}}{s}e_{4}\) and_
\[d(J\theta)=-\frac{1}{s^{2}}(h_{3}^{2}+h_{4}^{2})(rs-|u|^{2})e^{12}.\]
Proof.: The \(T^{2}\)-invariant pluriclosed metric \(g\) is locally conformally Kahler if and only if \(d\theta=0\). The vanishing of
\[d\theta=d(h_{1})\wedge e^{1}+d(h_{2})\wedge e^{2}+d(h_{3})\wedge e^{3}+d(h_{ 4})\wedge e^{4}+h_{4}e^{12}\]
is equivalent to the conditions
\[dh_{3}=dh_{4}=0,\quad-(h_{1})_{x}+(h_{2})_{y}+h_{4}=0.\]
By using that \(h_{3}\) and \(h_{4}\) are both constant and the expressions
\[h_{1} = \frac{1}{s}(u_{2}h_{3}-u_{1}h_{4}),\] \[h_{2} = \frac{1}{s}(u_{1}h_{3}+u_{2}h_{4}),\]
we have that the condition \(-(h_{1})_{x}+(h_{2})_{y}+h_{4}=0\) is always satisfied. Now we only need to prove that if \(h_{3}\) and \(h_{4}\) are both constant then the Lee vector field \(T\) is holomorphic, since automatically \(g\) will be Vaisman. \(T\) is the metric dual of \(\theta\), so we
must have \(g(T,X)=\theta(X)\), for every vector field \(X\). By imposing \(g(T,e_{i})=\theta(e_{i})=h_{i}\), for every \(i=1,\ldots,4\), we have that the Lee vector field \(T\) is given by
\[T=\tfrac{(h_{1}s-h_{3}u_{2}+h_{4}u_{1})}{rs-|u|^{2}}e_{1}+\tfrac{(h_{2}s-h_{3}u_{1}-h_{4}u_{2})}{rs-|u|^{2}}e_{2}\] \[+\tfrac{(-h_{1}u_{2}-h_{2}u_{1}+h_{3}r)}{rs-|u|^{2}}e_{3}+\tfrac{(h_{1}u_{1}-h_{2}u_{2}+h_{4}r)}{rs-|u|^{2}}e_{4}.\]
Using that
\[h_{1} = \tfrac{1}{s}(u_{2}h_{3}-u_{1}h_{4}),\] \[h_{2} = \tfrac{1}{s}(u_{1}h_{3}+u_{2}h_{4}),\]
it follows that \(T=\tfrac{h_{3}}{s}e_{3}+\tfrac{h_{4}}{s}e_{4}\). Therefore \(T\) is holomorphic since \([T,JX]=J[T,X]\), for every vector field \(X\). The last part of the theorem follows from
\[d(J\theta)=[(h_{1})_{y}+(h_{2})_{x}+h_{3}]\,e^{12}\]
and
\[(h_{1})_{y}+(h_{2})_{x}+h_{3}=-\frac{1}{s^{2}}(h_{3}^{2}+h_{4}^{2})(rs-|u|^{2}).\]
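The identification of the Lee vector field in the proof can also be confirmed by plain linear algebra, using the matrix of \(g\) in the frame \((e_{1},\dots,e_{4})\) written at the beginning of this section; the following SymPy sketch does so.

```python
import sympy as sp

r, s, u1, u2, h3, h4 = sp.symbols('r s u1 u2 h3 h4', real=True)

# matrix of g in the frame (e_1, e_2, e_3, e_4), as listed at the beginning of Section 3
G = sp.Matrix([
    [r,   0,  u2, -u1],
    [0,   r,  u1,  u2],
    [u2,  u1,  s,   0],
    [-u1, u2,  0,   s],
])
h1 = (u2*h3 - u1*h4)/s
h2 = (u1*h3 + u2*h4)/s
theta = sp.Matrix([h1, h2, h3, h4])
T = sp.Matrix([0, 0, h3/s, h4/s])                # claimed Lee vector field

assert (G*T - theta).expand().is_zero_matrix     # g(T, e_i) = theta(e_i)
print("T = (h3/s) e_3 + (h4/s) e_4 is the metric dual of theta")
```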
As a consequence of the previous theorem we can prove
**Theorem 3.2**.: _Let \(\omega_{0}\) be the fundamental form of a \(T^{2}\)-invariant Vaisman metric on the Kodaira-Thurston surface \(M\), then the pluriclosed flow starting with \(\omega_{0}\) preserves the Vaisman condition._
Proof.: Let \(\omega_{0}=\tfrac{1}{2}ir(x,y)\,\varphi^{1\overline{1}}+\tfrac{1}{2}is\, \varphi^{2\overline{2}}+\tfrac{1}{2}(u(x,y)\varphi^{1\overline{2}}-\overline{ u}(x,y)\varphi^{2\overline{1}})\) be the fundamental form of the \(T^{2}\)-invariant Vaisman metric. Then
\[h_{3} = \tfrac{s}{(rs-|u|^{2})}(-s-(u_{1})_{x}-(u_{2})_{y}),\] \[h_{4} = \tfrac{s}{(rs-|u|^{2})}((u_{1})_{y}-(u_{2})_{x})\]
are both constants. To prove that the Vaisman condition is preserved under the pluriclosed flow, we use the following ansatz
\[\omega(t)=\frac{1}{2}i\tilde{r}(x,y,t)\,\varphi^{1\overline{1}}+\frac{1}{2}is \,\varphi^{2\overline{2}}+\frac{1}{2}(u(x,y)\varphi^{1\overline{2}}-\overline {u}(x,y)\varphi^{2\overline{1}}),\]
with \(\tilde{r}(x,y,0)=r(x,y)\) and \(s\) constant, and observe that the equation
\[\partial_{t}\tilde{r}=\frac{1}{2}\partial_{x}\left(\frac{\tilde{r}_{x}}{ \tilde{r}}\right)+\frac{1}{2}\partial_{y}\left(\frac{\tilde{r}_{y}}{\tilde{r} }\right)+\frac{1}{s^{2}}(h_{3}^{2}+h_{4}^{2})(\tilde{r}s-|u|^{2}),\]
is quasi-linear parabolic and so it admits a solution. Indeed, the equations in (3) reduce to this single equation, since \(\frac{\partial u_{1}}{\partial t}=\frac{\partial u_{2}}{\partial t}=0\).
Note that, using deformations of type II, we can show the existence of Vaisman \(T^{2}\)-invariant metrics with non-constant scalar curvature on the Kodaira-Thurston surface. Starting with the Vaisman metric with fundamental form \(\omega=e^{12}+e^{34}\), we have \(\theta=-e^{3}\). If we apply the deformation of type II in Section 2 using a function
\(f=f(x,y)\) such that \(1-f_{xx}-f_{yy}>0\), we obtain that the metric with fundamental form
\[\tilde{\omega} = e^{12}+e^{34}-e^{3}\wedge Jdf-df\wedge e^{4}+df\wedge Jdf-dd^{c}f\] \[= (1+(f_{x})^{2}+(f_{y})^{2}-f_{xx}-f_{yy})e^{12}+e^{34}-f_{x}(e^{13} +e^{24})-f_{y}(e^{14}-e^{23})\]
is Vaisman with \(\tilde{\theta}=-e^{3}+df=-e^{3}+f_{y}e^{1}+f_{x}e^{2}\). So
\[d(J\tilde{\theta})=d(-e^{4}+f_{y}e^{2}-f_{x}e^{1})=(-1+f_{xx}+f_{yy})e^{12}.\]
Moreover,
\[\tilde{\rho}^{Ch} = -\tfrac{1}{2}(\partial_{x}^{2}+\partial_{y}^{2})\log(rs-|u|^{2}) \,e^{12}\] \[= -\tfrac{1}{2}\tfrac{1}{(-1+f_{xx}+f_{yy})}\left((\partial_{x}^{2 }+\partial_{y}^{2})\log(rs-|u|^{2})\right)d(J\tilde{\theta}),\]
where \(r=1+(f_{x})^{2}+(f_{y})^{2}-f_{xx}-f_{yy},s=1\) and \(u=-f_{x}-if_{y}\). Therefore
\[\tilde{\rho}^{Ch}=-\frac{1}{2}\frac{1}{(-1+f_{xx}+f_{yy})}\left((\partial_{x} ^{2}+\partial_{y}^{2})\log(1-f_{xx}-f_{yy})\right)d(J\tilde{\theta}).\]
Non-constant functions \(f\) such that \(f_{xx}+f_{yy}<1\) exist. Therefore if
\[\frac{1}{(-1+f_{xx}+f_{yy})}\left((\partial_{x}^{2}+\partial_{y}^{2})\log(1- f_{xx}-f_{yy})\right)\]
is non-constant, the scalar curvature of \(\tilde{\omega}\) is non-constant.
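The example above can also be tested against the criterion of Theorem 3.1: reading \(r\), \(s\) and \(u\) off \(\tilde{\omega}\), one finds \(h_{3}=-1\) and \(h_{4}=0\), both constant, as expected for a Vaisman structure. A short SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)
fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fyy = sp.diff(f, x, 2), sp.diff(f, y, 2)

# data of the deformed metric, read off the expression for the deformed fundamental form above
r = 1 + fx**2 + fy**2 - fxx - fyy
s = sp.Integer(1)
u1, u2 = -fx, -fy

det = r*s - u1**2 - u2**2
h3 = s*(-s - sp.diff(u1, x) - sp.diff(u2, y))/det
h4 = s*(sp.diff(u1, y) - sp.diff(u2, x))/det

assert sp.simplify(det - (1 - fxx - fyy)) == 0
assert sp.simplify(h3 + 1) == 0                  # h3 = -1 (constant)
assert sp.simplify(h4) == 0                      # h4 = 0
print("the deformed metric satisfies the Vaisman criterion of Theorem 3.1")
```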
**Acknowledgements.** Anna Fino is partially supported by Project PRIN 2017 "Real and complex manifolds: Topology, Geometry and Holomorphic Dynamics", by GNSAGA (Indam) and by a grant from the Simons Foundation (#944448). Gueo Grantcharov is partially supported by a grant from the Simons Foundation (#853269). We would like to thank Liviu Ornea, Jeff Streets and Luigi Vezzoni for useful comments.
2309.12080 | Evolutionary Status of Long-Period Radio Pulsars | M. D. Afonina, A. V. Biryukov, S. B. Popov | 2023-09-21T13:49:43Z | http://arxiv.org/abs/2309.12080v2

# Evolutionary Status of Long-Period Radio Pulsars
###### Abstract
We analyze the evolutionary status of recently discovered long-period radio sources PSR J0901-4046, GLEAM-X J1627-52, and GPM J1839-10. We discuss the hypothesis that all three sources are radio pulsars. In the framework of standard scenarios, it is often accepted that the pulsar mechanism is switched off when an external matter can penetrate the light cylinder. If the matter is stopped outside the light cylinder then the neutron star is at the ejector stage. We demonstrate that for realistic parameters of the interstellar medium, the 76-second pulsar PSR J0901-4046 might be at this stage. However, sources GLEAM-X J1627-52 and GPM J1839-10 with periods \(\gtrsim 1000\) s can be ejectors only in the case of unrealistically large dipolar fields \(\gtrsim 10^{16}\) G. Also, we show that neutron stars with spin periods \(\sim 100\) s and dipolar magnetic fields \(\lesssim 10^{13}\) G cannot be ejectors in a typical interstellar medium. Thus, we predict that long-period pulsars with standard fields will not be discovered.
neutron stars, radio pulsars
## 1 Introduction
Evolutionary status and observational appearances of isolated neutron stars (NSs) depend not only on intrinsic parameters of the compact objects (spin period, magnetic field, temperature, etc.) but also on interaction with the surrounding medium. This is mostly defined by two key parameters: density of the medium and NS relative velocity. Four main evolutionary stages of NSs are usually distinguished (see e.g., Lipunov 1992): ejector, propeller, accretor, and georotator. In this study, we discuss long-period radio pulsars, so the first two stages are of interest to us. At the ejector stage, the relativistic wind from the pulsar is strong enough to keep the external medium out of the light cylinder. The light cylinder radius corresponds to the maximal distance at which the magnetospheric field lines can exist co-rotating with the NS:
\[R_{l}=c/\omega. \tag{1}\]
Here \(c\) is the velocity of light, \(\omega=2\pi/P\) is the spin frequency, and \(P\) is the spin period. At this stage, an NS can be observed as a radio pulsar. For the existence of such a source, it is necessary to have a cascade of electron-positron pair creation in the magnetosphere. Often, conditions for such a cascade are defined in terms of a "death line" in the \(P\) - \(\dot{P}\) diagram (see e.g., Beskin 1999). However, below we do not discuss these conditions as we are interested in a more general limitation related to the evolutionary status of NSs: we require that the NS is at the stage of an ejector.
At the propeller stage, matter starts to penetrate inside the light cylinder preventing propagation of the relativistic wind and, finally, switching off the mechanism of its generation. Typically, one can define the condition for the transition from the ejector
to the propeller stage as an equality of two characteristic radii. One of them is the gravitational capture radius (aka Bondi radius):
\[R_{G}=\frac{2GM}{v^{2}}. \tag{2}\]
Here \(M\) is the NS mass, \(v\) its velocity relative to the interstellar medium. Here and below we assume that \(v\) is larger than the sound velocity in the interstellar medium: \(c_{s}\sim\sqrt{kT/m_{p}}\sim 10\) km/s for \(T\sim 10^{4}\) K (Klessen, Glover 2016).
The second characteristic radius is the so-called Shvartsman radius. It can be derived from equating the pressure of relativistic pulsar wind and external pressure:
\[R_{Sh}=\left(\frac{\xi\mu^{2}(GM)^{2}\omega^{4}}{\dot{M}v^{5}c^{4}}\right)^{1/ 2}. \tag{3}\]
Here \(\mu=B\,R^{3}\) is the magnetic moment which can be defined by the equatorial magnetic field \(B\) and neutron star radius \(R\). Parameter \(\dot{M}\) expresses properties of the external medium. It is equal to the accretion rate when accretion is possible. We define this parameter as \(\dot{M}=\pi R_{G}^{2}\rho v\), where \(\rho\) is the density of the interstellar medium. Sometimes it is more convenient to use number density \(n=\rho/m_{p}\), where \(m_{p}\) is the proton mass. External pressure can be written as \(\rho v^{2}\).
Eq. (3) is obtained using the following expression for the pulsar wind power: \(L_{w}=\xi\mu^{2}\omega^{4}/c^{3}\), where the parameter \(\xi\approx 1+1.4\sin^{2}\alpha\) depends on the angle \(\alpha\) between the spin axis and the magnetic dipole axis (Philippov et al., 2014). Under the assumption of isotropic and independent orientation of axes we obtain \(\xi\approx 1.93\). Below for simplicity, we assume \(\xi=2\).
In the case when \(R_{G}>R_{l}\) the ejector condition is formulated as \(R_{Sh}>R_{G}\). However, for long spin periods, it can be that \(R_{l}>R_{G}\). Then, the critical condition is written as \(R_{Sh}>R_{l}\).
For a given spin period, magnetic field, density of the surrounding medium, and NS velocity relative to this medium we can calculate if an NS is at the ejector stage or it is a propeller. In the first case, the compact object potentially can be a radio pulsar. But not in the second.
In the following section, we briefly describe the parameters of three recently discovered long-period radio sources and two similar objects known before. Then, in Sec. 3, we apply our considerations to the three long-period sources. In Sec. 4 we discuss our results and related subjects. Finally, in Sec. 5 we present our conclusions.
## 2 Long-Period Pulsars
For many years the longest periods of the known radio pulsars were of the order of 10 seconds. However, during the last few years, three radio sources with much longer periodicity have been discovered. In addition, two other objects possibly related to the long-period radio pulsars are known. In this section, we briefly describe the main observed properties of these sources.
PSR J0901-4046 was discovered in 2020 at the frequency 1.3 GHz with the MeerKAT radio telescope in South Africa (Caleb et al., 2022). It has a spin period of 75.88 s and \(\dot{P}=2.25\times 10^{-13}\) s/s. According to the standard magneto-dipole energy losses this corresponds to a surface dipolar field of \(1.3\times 10^{14}\) G. The shapes of individual pulses differ from one another much more strongly than is typical for radio pulsars. The small dispersion measure (\(52\pm 1\) pc/cm\({}^{3}\)) corresponds to a distance \(\sim\)330-470 pc depending on the applied model of the electron density distribution in the Galaxy.
The source GLEAM-X J1627-52 was discovered at low frequencies 72-231 MHz with the Murchison Widefield Array (MWA) in 2018 (Hurley-Walker et al., 2022). The phase of activity lasted for nearly three months. During this time just 71 pulses were detected. This gave an opportunity to identify the period 1091 s. For the period derivative, there is just an upper limit: \(\dot{P}<(1-4)\times 10^{-9}\) s/s. Emission has high linear polarization (\(\sim 88\%\)). The brightness temperature is estimated as \(10^{16}\) K. The radio luminosity exceeds the rotation energy losses by nearly three orders of magnitude.
GPM J1839-10 has been also discovered with the MWA (Hurley-Walker et al., 2023). Then, the source was observed with the Australia Telescope Compact Array (ATCA), the Parkes/Murriyang radio telescope, with the Australian Square Kilometre Array Pathfinder (ASKAP), and MeerKAT. The period of pulsations is equal to 1318.2 s. The upper limit to the period derivative is \(\dot{P}<3\times 10^{-9}\) s/s. It is interesting that the source was also identified in old archival data that cover a period of over 30 years! The dispersion measure -- \(273.5\pm 2.5\) pc cm\({}^{-3}\), -- provides just a lower limit to the distance: \(d\gtrsim 2.8\) kpc.
There is a source that in many respects resembles the pulsating sources GLEAM-X J1627-52 and GPM
J1839-10. This is the Galactic center radio transient GCRT J1745-3009. Initially, the source was detected by VLA at a low frequency of 0.33 GHz in 2002. Later it was detected a few more times (Hyman et al. 2005). During the first observed period of activity, the source demonstrated 5 bright (\(\sim 1\) Jy) bursts with a typical duration of about 10 minutes. Intervals between the bursts were about 77 minutes. It was hypothesized that this could be the spin period of a compact object. During each of the subsequent episodes of activity only a single burst was detected, and these bursts were significantly dimmer than the first five events. No counterparts were found in any spectral range. The source had a large brightness temperature, which points towards a coherent emission mechanism. The nature of the object remains unclear (some exotic scenarios with NSs were discussed e.g., in Popov 2008).
Finally, let us mention the central source 1E161348-5055 in the supernova remnant RCW103. It was discovered in X-rays with the Einstein space observatory (Tuohy and Garmire 1980). It caused a kind of sensation when very long-period pulsations were discovered in this source (de Luca et al. 2008). The period of pulsations is 6.67 hours and the upper limit on the period derivative is \(\dot{P}<7\times 10^{-10}\) s/s. Later, magnetar activity of this source was discovered (Rea et al. 2016, D'Ai et al. 2016). For our discussion, this source is mainly interesting due to its long spin period and young age. This makes it a potential "relative" of GLEAM-X J1627-52 and GPM J1839-10 as they can have a similar mechanism of rapid initial spin-down (see Sec. 4 below) even though their present-day observational appearances are different.
From the point of view of pulsar physics, long-period sources raise questions related to the emission mechanism. Firstly, all these sources are situated beyond the death line in the \(P\)-\(\dot{P}\) diagram. Secondly, for some of them, even the radio luminosity is larger than the rotation energy losses. However, in the following section, we are going to discuss another problem related to the evolutionary status of these sources. As we will show, for some of the sources only an extreme (and so -- unrealistic) combination of the values of magnetic field and spatial velocity brings them to the ejector stage.
## 3 Results
When \(R_{G}>R_{l}\) from the equation \(R_{Sh}=R_{G}\) one can obtain the following expression for the critical velocity:
\[v_{p1}=\left(\frac{8\pi c^{4}(GM)^{2}\rho}{\mu^{2}\omega^{4}}\right)^{1/2}. \tag{4}\]
This expression can be rewritten as \(v_{p1}=27.4\,P_{2}^{2}n^{1/2}B_{14}^{-1}\) km/s if the parameters of the NS are normalized to their typical values. Here, \(P_{2}=P/(100\) s), \(B_{14}=B/(10^{14}\) G). The mass of an NS is assumed to be \(1.4\,M_{\odot}\), and its radius is 10 km. If the velocity of an object is \(v<v_{p1}\) then it is assumed to be at the propeller stage.
In the case when \(R_{G}<R_{l}\) it is necessary to use the condition \(R_{Sh}=R_{l}\) to obtain the critical velocity:
\[v_{p2}=\left(\frac{\mu^{2}\omega^{6}}{2\pi\rho c^{6}}\right)^{1/2}. \tag{5}\]
After normalization the value of the critical velocity can be written as \(v_{p2}=2840\,P_{2}^{-3}B_{14}n^{-1/2}\) km/s. The propeller stage corresponds to the velocity values \(v>v_{p2}\).
Using eqs. (4) and (5) we can define for which parameters the objects PSR J0901-4046, GLEAM-X J1627-52, and GPM J1839-10 can be at the ejector stage. In Fig. 1 we show a graphical representation of the equations for two values of the spin period -- 76 s (black dashed lines) and 1318 s (orange solid lines) -- and different values of the magnetic field.
The break points in the plot correspond to the velocity \(v_{br}\) defined by the condition \(R_{G}=R_{l}\):
\[v_{br}=\left(\frac{2GM\omega}{c}\right)^{1/2}, \tag{6}\]
or \(v_{br}=279\,P_{2}^{-1/2}\) km/s after normalisation.
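The numerical coefficients in these normalisations are easy to reproduce. The short Python sketch below uses the same fiducial values as in the text (\(M=1.4\,M_{\odot}\), \(R=10\) km, \(\xi=2\)) and cgs units:

```python
import numpy as np

G, c, m_p = 6.674e-8, 2.998e10, 1.673e-24        # cgs units
M, R = 1.4*1.989e33, 1.0e6                       # 1.4 M_sun, 10 km
P, B, n = 100.0, 1.0e14, 1.0                     # fiducial P_2 = B_14 = n = 1

mu, omega, rho = B*R**3, 2*np.pi/P, n*m_p
v_p1 = np.sqrt(8*np.pi*c**4*(G*M)**2*rho/(mu**2*omega**4))   # eq. (4), xi = 2 already included
v_p2 = np.sqrt(mu**2*omega**6/(2*np.pi*rho*c**6))            # eq. (5), xi = 2 already included
v_br = np.sqrt(2*G*M*omega/c)                                # eq. (6)

print(v_p1/1e5, v_p2/1e5, v_br/1e5)   # ~27.4, ~2840, ~279 km/s
```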
The region under each broken line corresponds to ejectors for given magnetic field values and number densities of the surrounding medium. Objects above the line are at the propeller stage. So, for example, for the period \(P=1318\) s and magnetic field \(B\lesssim 10^{14}\) G an NS can not be at the ejector stage if the number density \(\gtrsim 10^{-3.5}\) cm\({}^{-3}\).
It is important to note that the magnetic field values shown in Fig. 1 are a kind of effective field. In the general case, these values \(B\) can be related to the actual magnetic field values on the surface of the NS \(B_{0}\) in the following way:
\[\frac{B}{B_{0}}=R_{10}^{-3}\sin\alpha, \tag{7}\]
where \(R_{10}=R/(10\) km) is the NS radius. The first factor on the right-hand side of eq. (7) reflects the fact that the real radii of NSs can be larger than the "standard" value of 10 km and is approximately equal to 11.5-12 km (e.g., Raaijmakers et al. 2021). The second multiplier reflects that the rotational energy losses of long-period radio pulsars may be closer to magneto-dipole losses (\(\xi=\sin^{2}\alpha\)) than to the classical pulsar losses (\(\xi\approx 1+1.4\sin^{2}\alpha\)). Since the cascade production of electron-positron pairs in the subpolar region of the NS is already terminated at periods \(P_{d}\approx 16B_{14}^{8/15}\cos^{7/15}\alpha\) s (Novoselov et al. 2020), which is shorter than the period of any of the objects under discussion, each of them can formally be beyond its death line slowing down according to the magneto-dipole law. Accordingly, the condition for their transition from ejector to propeller may be milder (Beskin and Eliseeva 2005).
Ultimately, each of the factors in eq. (7) leads to an increase (by a factor of a few) in the value of the critical magnetic field at which the pulsar can no longer be an ejector at a given velocity and number density of the medium.
In addition, the horizontal stripes (transparent and hatched) in Fig. 1 indicate estimates of the local density of the interstellar medium at the locations of each of the pulsars in the Galaxy. These estimates have been derived from a 3D map of the dust distribution in the Galaxy using combined data from Gaia Early Data Release 3 (Gaia EDR3) and the 2MASS catalogue (Vergely et al. 2022). This map allows estimating the differential absorption in the optical range \(a_{V}\) (in magnitudes per parsec) in a volume of \(10\times 10\times 0.8\) kpc centered on the Sun and with a spatial resolution of up to 10 pc. In this case, the total absorption (in magnitudes) in the direction of a given source is determined as the integral \(A_{V}=\int a_{V}dl\) along the line of sight from the observer to the source.
In the direction of PSR J0901-4046, in the distance interval of 330-470 pc from the Sun, \(a_{V}\sim 100-220\,\mu\)mag/pc. From X-ray observations it follows that the hydrogen column density along the line of sight is proportional to the total absorption in the visible band, \(N_{H}=q\times A_{V}\), where \(q\approx 2\times 10^{21}\) cm\({}^{-2}\) mag\({}^{-1}\) (Guver and Ozel 2009). Therefore, for the given pulsar the local number density is \(n=q\times a_{V}/(3.08\times 10^{18}\) cm/pc\()\approx 0.07-0.14\) cm\({}^{-3}\).
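The arithmetic of this estimate is reproduced below; the values of \(q\) and \(a_{V}\) are those quoted above:

```python
q = 2.0e21          # cm^-2 mag^-1
pc = 3.08e18        # cm
for a_V in (100e-6, 220e-6):          # differential absorption in mag/pc
    print(q*a_V/pc)                   # ~0.065 and ~0.14 cm^-3
```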
For GPM J1839-10 the map from Vergely et al. (2022) provides a similar estimate only in the distance interval of 2.8-5 kpc, which is \(n\approx 0.13-0.45\) cm\({}^{-3}\).
The main conclusions from the analysis of Fig. 1 are as follows. First, PSR J0901-4046 with the magnetic field \(\sim(1-2)\times 10^{14}\) G is at the ejector stage for almost all realistic values of the medium density and velocity. Second, the objects GLEAM-X J1627-52 and GPM J1839-10 can be at the ejector stage in a typical interstellar medium (\(n\sim 0.1-1\) cm\({}^{-3}\)) only with unrealistically high magnetic fields \(\gtrsim 10^{16}\) G, or even higher if we take into account the corrections from eq. (7). Finally, we can predict that the existence of pulsars with periods \(\sim 100\) s and magnetic fields \(\lesssim 10^{13}\) G is practically impossible because it would require a very low density of the surrounding medium.
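These conclusions can be illustrated with a short numerical sketch. It evaluates the ejector window \(v_{p1}<v<v_{p2}\) for the two spin periods at the fiducial parameters quoted above; \(\xi\) is kept explicit, and the chosen field values are only illustrative:

```python
import numpy as np

G, c, m_p = 6.674e-8, 2.998e10, 1.673e-24        # cgs units
M, R = 1.4*1.989e33, 1.0e6

def ejector_window(P, B, n, xi=2.0):
    """Return (v_p1, v_br, v_p2) in km/s; the NS can be an ejector only for v_p1 < v < v_p2."""
    mu, omega, rho = B*R**3, 2*np.pi/P, n*m_p
    v_p1 = np.sqrt(16*np.pi*(G*M)**2*c**4*rho/(xi*mu**2*omega**4))
    v_p2 = np.sqrt(xi*mu**2*omega**6/(4*np.pi*rho*c**6))
    v_br = np.sqrt(2*G*M*omega/c)
    return v_p1/1e5, v_br/1e5, v_p2/1e5

# PSR J0901-4046: P = 76 s, B ~ 1.3e14 G, local n ~ 0.1 cm^-3
print(ejector_window(76.0, 1.3e14, 0.1))
# GPM J1839-10: P = 1318 s, n ~ 0.1 cm^-3, with an (illustrative) field of 1e14 G
print(ejector_window(1318.0, 1.0e14, 0.1))
```

For the 76-second pulsar the window spans from a few km/s up to \(\sim 10^{4}\) km/s, so essentially any realistic velocity keeps it at the ejector stage, while for the 1318-second source with a \(10^{14}\) G field one finds \(v_{p1}>v_{p2}\), i.e. no ejector window at all, in agreement with Fig. 1.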
Figure 1: The relationship between the critical velocity and the number density of the surrounding medium for the two objects PSR J0901-4046 (\(P=76\) s) and GPM J1839-10 (\(P=1318\) s). The region below each line corresponds to the ejector stage, the region above — to the propeller stage. Solid lines correspond to GPM J1839-10, while dashed lines — to PSR J0901-4046. For each period value, lines are drawn for several magnetic field values. The break in the lines corresponds to the velocity \(v_{br}\). The left segments of each line correspond to eq. (4). Segments to the right from the break — to eq. (5). The semi-transparent and hatched stripes show the local density estimates for GPM J1839-10 and PSR J0901-4046, respectively.
## 4 Discussion
The origin of the long periods of the observed objects is currently unclear. It seems most likely that very long periods, such as those of GLEAM-X J1627-52 and GPM J1839-10, as well as of the source in the supernova remnant RCW 103, are associated with the fallback stage after a supernova explosion. This scenario is discussed in detail by Ronchi et al. (2022). The population aspects of this scenario have been modeled by Rea et al. (2023) where the authors show that it is difficult to explain the origin of a large population of long-period radio pulsars within realistic assumptions made in this study.
Another possibility, at least for the 76-second pulsar, seems to be the evolution in a massive close binary system, where the neutron star has time to reach the propeller or accretor stage before the second supernova explosion destroys the system. After the disruption of the binary, the older compact object is "reborn" with a long spin period. This evolutionary path will be discussed in detail elsewhere (Kuranov and Popov, in prep.).
If long-period pulsars experienced strong braking in the fallback accretion stage, they could enter the ejector stage from the propeller stage. In such a case, the critical condition would no longer be the equality \(R_{Sh}=R_{G}\) or \(R_{Sh}=R_{l}\). This is due to the so-called "hysteresis effect" (see Shvartsman 1970 and Lipunov 1992): the transition from the propeller to the ejector stage occurs with a shorter period (other parameters being equal) than the transition from ejector to propeller. Then, the transition condition is the equality of the magnetospheric radius \(R_{m}\) and the radius of the light cylinder \(R_{l}\).
The Alfven radius can be used as the simplest estimate of the magnetospheric radius:
\[R_{A}=\left(\frac{\mu^{2}}{8\dot{M}\sqrt{2GM}}\right)^{2/7}. \tag{8}\]
However, at the propeller stage, and even more so under the condition \(R_{m}\approx R_{l}\), the magnetospheric radius can be much larger (see, e.g., Davis and Pringle 1981 and Lipunov 1987). At the propeller stage (when \(R_{m}<R_{G}\)) a good estimate is as follows:
\[R_{m}=R_{A}\left(\frac{R_{G}}{R_{A}}\right)^{2/9}. \tag{9}\]
If \(R_{m}\approx R_{l}\) and \(R_{l}>R_{G}\) then according to Davis and Pringle (1981) one can write:
\[R_{m}=\left(\frac{\mu^{2}(GM)^{2}}{2\dot{M}v^{5}}\right)^{1/6}. \tag{10}\]
This formula results from the equality of the magnetic pressure \(\mu^{2}/(8\pi R_{m}^{6})\) and the external pressure \(\rho v^{2}\).
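For completeness, eq. (10) follows from this pressure balance once the accretion rate is taken in a Bondi-Hoyle-type form, \(\dot{M}=4\pi\rho(GM)^{2}/v^{3}\) (the numerical factor \(4\pi\) is an assumption, chosen to be consistent with eq. (10)): substituting \(\rho=\dot{M}v^{3}/(4\pi(GM)^{2})\) gives

\[R_{m}^{6}=\frac{\mu^{2}}{8\pi\rho v^{2}}=\frac{\mu^{2}(GM)^{2}}{2\dot{M}v^{5}}.\]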
However, it should be noted that at the time of the transition from the propeller to the ejector stage, the spin period should be slightly shorter than the present-day value, the magnetic field could be slightly higher (as it decays), and the parameters of the external environment may not correspond to the pulsar's current position. Nevertheless, for illustrative purposes, we present a plot similar to Fig. 1 using the equality \(R_{m}=R_{l}\). The magnetospheric radius is calculated from eq. (10). In this case, the conditions do not depend on the relation between \(R_{G}\) and \(R_{l}\). So for the critical velocity we have:
\[v_{p3}=\left(\frac{\mu^{2}\omega^{6}}{8\pi c^{6}\rho}\right)^{1/2}. \tag{11}\]
In the normalized form, its value can be written as \(v_{p3}=1420\,P_{2}^{-3}B_{14}n^{-1/2}\) km/s. If the velocity of an object exceeds \(v_{p3}\), it is at the propeller stage. Note that, due to the "hysteresis effect" this is a tighter constraint on reaching the ejector stage than that given by eq. (5).
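As a quick numerical illustration of eq. (11), assuming the normalizations \(P_{2}=P/(100\text{ s})\) and \(B_{14}=B/(10^{14}\text{ G})\):

```
def v_p3(period_s, b_gauss, n_cm3):
    """Critical velocity of eq. (11) in km/s, normalized form."""
    p2 = period_s / 100.0
    b14 = b_gauss / 1e14
    return 1420.0 * p2 ** -3 * b14 * n_cm3 ** -0.5

# PSR J0901-4046: P = 76 s, B ~ 1e14 G, n ~ 0.1 cm^-3  ->  v_p3 ~ 1e4 km/s
print(v_p3(76.0, 1e14, 0.1))
# GPM J1839-10: P = 1318 s, n ~ 0.3 cm^-3 requires B ~ 1e16 G for v_p3 ~ 100 km/s
print(v_p3(1318.0, 1e16, 0.3))
```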
Eq. (11) is graphically represented in Fig. 2, similar to Fig. 1. Since it is not affected by the relationship between \(R_{G}\) and \(R_{l}\), we simply have a set of straight lines. The ejector region for a given field strength is below each corresponding line.
Again, we see that the pulsar PSR J0901-4046 is in the ejector region. But the longer period sources (which apparently include GCRT J1745-3009) fall in the propeller region for realistic magnetic fields of \(\lesssim 10^{15}\) G at typical densities of the interstellar medium.
Recall that the mechanism of radio emission in the sources GLEAM-X J1627-52 and GPM J1839-10 remains unknown. It is possible that this process is related not to the classical pulsar mechanism but to the magnetar mechanism. In particular, this is indirectly indicated by the fact that the radio luminosities of GLEAM-X J1627-52 and GPM J1839-10 exceed the rate of rotational energy loss. Therefore, another energy reservoir is required. This can reasonably be attributed to a strong magnetic field. In such a case, the activity of the NS can locally change the parameters of the external environment. Such a scenario should be considered in detail, but this is beyond the scope of this paper.
The future fate of long-period pulsars and related objects is an interesting question. By "related objects" we mean NSs with roughly the same parameters but different spatial velocities.
If an NS has a long spin period (and possibly a strong magnetic field) early in its life, this will lead to its rapid transition to the accretion of interstellar matter. Moreover, at high velocities and with a large magnetic field, the object becomes a so-called georotator.1 Accordingly, the existence of a rather large population of isolated NSs that are able to start accreting interstellar matter in a time much shorter than the age of the Galaxy should significantly increase estimates of the number of such sources. Of course, the population syntheses of isolated accreting NSs carried out so far (see e.g., Boldin and Popov 2010 and references therein) did not include such objects.
Footnote 1: The detailed modeling of the evolution of isolated NSs with large initial spin periods, as well as the analysis of their properties at the accretion stage, will be presented by us in a separate publication (Afonina et al., in prep.).
Finally, it is important to note that if the velocity and magnetic field distributions of long-period pulsars are similar to those of ordinary NSs then the vast majority of such objects will not be detected as normal radio pulsars. Thus, estimates of the number and birth rate of such objects based on radio observations alone may be significantly underestimated, since many young long-period NSs may be at the propeller stage.
## 5 Conclusions
The discovery of long-period radio pulsars was an unexpected result. To date, there is no clear understanding of the nature of these objects, the emission mechanism, and the evolutionary path of these sources (see e.g., Rea et al. 2023).
We have considered constraints on the parameters of such sources when they are at the ejector stage within the framework of the pulsar model where the penetration of the external medium inside the light cylinder must be avoided to produce radio emission.
We show that the 76-second pulsar fully satisfies the requirements for being at the ejection stage. On the other hand, GLEAM-X J1627-52 and GPM J1839-10 sources with spin periods of \(\sim 10^{3}\) s cannot be ejectors in the standard interstellar medium unless their magnetic fields exceed \(\sim 10^{16}\) G, or they exhibit additional activity (e.g., magnetar) leading to a significant decrease of the matter density around them. Furthermore, if the rapid deceleration of rotation of these sources implies that they reached the propeller stage in the past, then the subsequent transition to the ejector stage may not be possible for realistic values of magnetic fields. We conclude that such long-period radio sources cannot be ordinary radio pulsars.
In addition, we show that long-period pulsars with periods \(\sim 10^{2}\) s and fields \(\lesssim 10^{13}\) G cannot be at the ejector stage in the standard interstellar medium. Thus, no analogs of PSR J0901-4046 with a deceleration rate of \(\dot{P}\lesssim 10^{-15}\) s/s will be detected.
A. Biryukov thanks D. Wiebe for comments on the distribution of the interstellar medium in the Galaxy. M.A. and A.B. were supported by the RSF grant 21-12-00141.
_Translated by the authors._
Figure 2: The relationship between the critical velocity \(v_{p3}\) and the density of the surrounding medium for two objects, PSR J0901-4046 (\(P=76\) s) and GPM J1839-10 (\(P=1318\) s), for four values of the magnetic field. The line styles and colours are the same as in Fig. 1. |
2309.05463 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | 2023-09-11T14:01:45Z | http://arxiv.org/abs/2309.05463v1 | # Textbooks Are All You Need II: **phi-1.5** technical report
###### Abstract
We continue the investigation into the power of smaller Transformer-based language models as initiated by **TinyStories** - a 10 million parameter model that can produce coherent English - and the follow-up work on **phi-1**, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate "textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the "Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named **phi-1.5**, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, **phi-1.5** exhibits many of the traits of much larger LLMs, both good -such as the ability to "think step by step" or perform some rudimentary in-context learning- and bad, including hallucinations and the potential for toxic and biased generations -encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source **phi-1.5** to promote further research on these urgent topics.
Figure 1: Benchmark results comparing **phi-1.5**, its version enhanced with filtered web data **phi-1.5-web**, and other state-of-the-art open-source LLMs. Sizes range from **phi-1.5**’s 1.3 billion parameters (Falcon-RW-1.3B [PMH\({}^{+}\)23]) to 10x larger models like Vicuna-13B [ZCS\({}^{+}\)23], a fine-tuned version of Llama-13B [TLI\({}^{+}\)23]). Benchmarks are broadly classified into three categories: common sense reasoning, language skills, and multi-step reasoning. The classification is meant to be taken loosely, for example while HellaSwag requires common sense reasoning, it arguably relies more on “memorized knowledge”. One can see that **phi-1.5** models perform comparable in common sense reasoning and language skills, and vastly exceeds other models in multi-step reasoning. Note that the numbers are from our own evaluation pipeline, to ensure consistency between models, and thus they might differ slightly from numbers reported elsewhere.
## 1 Introduction
Over the past few years, Large Language Models (LLMs) have transformed the field of Natural Language Processing. More broadly, they hold the promise of a paradigm shift for human-computer interaction. These advancements have far-reaching economic implications, as well as the potential to redefine our conceptual frameworks of artificial intelligence and perhaps even cognition itself. Moreover, the latest generation of models such as GPT-4 [14] have demonstrated remarkable improvements over their predecessors, offering capabilities previously thought to be unattainable in the short term; see for example [14] for an in-depth comparison between GPT-4 and its predecessor GPT-3.5.
The improvement from one generation of LLMs to the next seems at the moment to primarily stem from _scale_, with the most powerful models nearing trillions of parameters and trillion of tokens for training data (for example, PaLM [15] has 540 billion parameters and was trained on 780 billion tokens). A natural question arises: Is this large scale indispensable for achieving high levels of capability? Far from being merely an academic question, answering this holds implications across several dimensions. Economically, the cost of training, deploying, and maintaining such large models can be substantial. Scientifically, understanding whether similar capabilities can be achieved at a smaller scale could provide insights into the architectures and development of intelligent systems. From a responsible AI standpoint, the energy consumption of large-scale models is becoming an increasing concern, as is the question of how controllable or governable these large models can be. Finally, the ability to train compact models with cutting-edge capabilities would democratize advanced AI, enabling a broader range of individuals and organizations to study and deploy them, instead of being an exclusive domain of a few with vast computational resources.
In this work we continue the investigation into the fundamental question of "how small can a LLM be to achieve certain capabilities". The prior work [1] considered this question for the task of "speaking fluent English", while the subsequent work [14] considered the more challenging task of coding simple functions in Python. Here we focus on the more elusive concept of _common sense reasoning_, a notoriously challenging task for AI [2]. Our results are summarized in Figure 1. In a nutshell we build **phi-1.5**, a 1.3 billion parameter model trained on a dataset of 30 billion tokens, which achieves common sense reasoning benchmark results comparable to models ten times its size that were trained on datasets more than ten times larger. Moreover, our dataset consists almost exclusively of synthetically generated data (closely following the approach from [14], see next section for more details), which has important implications for the potential to control for the notoriously challenging issue of toxic and biased content generation with LLMs [1]. Additionally, we discuss the performance of a related _filtered web data_ enhanced version of **phi-1.5**, which we call **phi-1.5-web**.
We open-source our raw **phi-1.5** model (without instruction fine-tuning or any other stage of alignment) to empower the research community in its work on some of the most urgent questions around LLMs: in-context learning, mechanistic interpretability, and mitigation strategies for hallucinations, toxic content generation, and biased outputs. Indeed, **phi-1.5** is the first LLM at the one billion parameters scale to exhibit most of the relevant traits of larger LLMs for research on these topics. We hope that **phi-1.5**'s size will make experimentation easier than with larger open-source models such as the Llama family [16].
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline & Train time & MicroBatch & Inf. speed & Inf. memory & Data size & Train tokens \\ & (GPU hrs.) & (max) & (per token) & (at 2048 ctx.) & (tokens) & \\ \hline Llama-7B & \(>\) 80K & 2 & 14ms & 18G & 1T & 1T \\ \hline
**phi-1.5** (1.3B) & 1.5K & 8 & \(<\)3ms & 3.5G & 30B & 150B \\
**phi-1.5-web** (1.3B) & 3K & 8 & \(<\)3ms & 3.5G & 100B & 300B \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of compute of different models using a single A100-80G with context length 2048 and fp16.
## 2 Technical specifications
We give here details of the creation of **phi-1.5**. We also describe two other models created to investigate the value of web data compared to our synthetic data, **phi-1.5-web-only** and **phi-1.5-web**.
### Architecture
The architecture for **phi-1.5** (and its variants) is exactly the same as our previous model **phi-1** in [1]. It is a Transformer [21] with 24 layers, 32 heads, and each head has dimension 64. We use rotary embedding with rotary dimension 32, and context length 2048. We also use flash-attention [13, 14] for training speed up, and we use the tokenizer of codegen-mono [22].
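For reference, the stated hyperparameters can be collected into a small configuration sketch; the field names below are ours, not identifiers from the released code:

```
from dataclasses import dataclass

@dataclass
class Phi15Config:
    n_layers: int = 24
    n_heads: int = 32
    head_dim: int = 64        # hidden size = 32 * 64 = 2048
    rotary_dim: int = 32
    context_length: int = 2048

cfg = Phi15Config()
print(cfg.n_heads * cfg.head_dim)  # 2048-dimensional hidden states
```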
### Training data
Our training data for **phi-1.5** is a combination of **phi-1**'s training data (7B tokens) and newly created synthetic, "textbook-like" data (roughly 20B tokens) for the purpose of teaching common sense reasoning and general knowledge of the world (science, daily activities, theory of mind, etc.). We carefully selected 20K topics to seed the generation of this new synthetic data. In our generation prompts, we use samples from web datasets for diversity. We point out that the only non-synthetic part in our training data for **phi-1.5** consists of the 6B tokens of filtered code dataset used in **phi-1**'s training (see [1]).
We remark that the experience gained in the process of creating the training data for both **phi-1** and **phi-1.5** leads us to the conclusion that the creation of a robust and comprehensive dataset demands more than raw computational power: It requires intricate iterations, strategic topic selection, and a deep understanding of knowledge gaps to ensure quality and diversity of the data. We speculate that the creation of synthetic datasets will become, in the near future, an important technical skill and a central topic of research in AI.
### Training details
We train **phi-1.5** starting from random initialization with constant learning rate \(2e-4\) (no warm up)1, weight decay \(0.1\). We use Adam optimizer with momentum \(0.9,0.98\), and epsilon \(1e-7\). We use fp16 with DeepSpeed ZeRO Stage 2 [15]. We use batch size 2048, and train for 150B tokens, with 80% from the newly created synthetic data and 20% from **phi-1**'s training data.
Footnote 1: The training configuration is intentionally kept straightforward to emphasize the significance of our data.
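Concretely, the optimization settings above correspond to roughly the following setup, assuming PyTorch's AdamW as the optimizer class (only Adam-style moments are specified above) and omitting the DeepSpeed ZeRO Stage 2 / fp16 plumbing:

```
import torch

model = torch.nn.Linear(2048, 2048)   # placeholder for the actual 1.3B model
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-4,               # constant learning rate, no warm-up
    betas=(0.9, 0.98),     # "momentum 0.9, 0.98"
    eps=1e-7,
    weight_decay=0.1,
)
```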
### Filtered web data
To probe the importance of traditional web data we created two other models, **phi-1.5-web-only** and **phi-1.5-web**. To do so we create a dataset of 95B tokens of _filtered web data_ following the filtering technique in [1]. This _filtered web data_ consists of 88B tokens filtered from the Falcon refined web dataset [22], and 7B tokens of code data filtered from The Stack [11] and StackOverflow.
Our **phi-1.5-web-only** model is trained purely on the _filtered web data_ with about 80% training tokens from NLP data sources and 20% from code datasets (no synthetic data). Our **phi-1.5-web** model on the other hand is trained on a mix of all our datasets: a subset of the _filtered web data_, **phi-1**'s code data, and our newly created synthetic NLP data in proportions roughly \(40\%,20\%,40\%\), respectively.
Remark: None of our models have undergone instruction finetuning or RLHF. Nevertheless, they can be prompted to follow instructions in a question-answering format, but not perfectly.
## 3 Benchmark results
We evaluate our models on standard natural language benchmarks, including common sense reasoning, language understanding, mathematics and coding. For common sense we pick five of the most widely used ones: WinoGrande [10], ARC-Easy [14], ARC-Challenge [15], BoolQ [17], and SIQA [1]. We report zero-shot accuracy using LM-Eval Harness [18]. **phi-1.5** achieves comparable results to Llama2-7B, Falcon-7B and Vicuna-13B on nearly all of the benchmarks.
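Schematically, the zero-shot accuracies reported here amount to scoring each candidate answer by its log-likelihood under the model and checking whether the highest-scoring choice matches the label; the sketch below uses a stand-in `loglikelihood` callable and is not the LM-Eval Harness API:

```
def zero_shot_accuracy(examples, loglikelihood):
    """examples: dicts with 'query', 'choices' (list of str) and 'label' (int)."""
    correct = 0
    for ex in examples:
        scores = [loglikelihood(ex["query"], choice) for choice in ex["choices"]]
        prediction = max(range(len(scores)), key=scores.__getitem__)
        correct += int(prediction == ex["label"])
    return correct / len(examples)
```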
Interestingly, one can see that our **phi-1.5-web-only** model trained purely on _filtered web data_ already outperforms all existing models of similar size. The comparison with Falcon-rw-1.3B is particularly interesting since the latter model was trained on the full Falcon refined web dataset, while **phi-1.5-web-only** was trained on only 15% of that dataset. Moreover, when training along with our synthetic data to get **phi-1.5-web**, one can see a large boost in performance, achieving similar performance to models that are 5x larger. Without any web data at all, **phi-1.5** is also comparable to all of the other models.
Next we evaluate standard language understanding tasks: PIQA [18], Hellaswag [10], OpenbookQA [12], SQUAD [11], and MMLU [1]. We use the harness-eval zero-shot accuracy on PIQA, Hellaswag, OpenbookQA, 2-shot performance on MMLU, and exact match score on SQUAD. Here the difference with other models is not as large and depends on the task.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & **WinoGrande** & **ARC-Easy** & **ARC-Challenge** & **BoolQ** & **SIQA** \\ \hline Vicuna-13B (v1.1) & 0.708 & 0.754 & 0.432 & **0.835** & 0.437 \\ Llama2-7B & 0.691 & **0.763** & 0.434 & 0.779 & 0.480 \\ Llama-7B & 0.669 & 0.682 & 0.385 & 0.732 & 0.466 \\ MPT-7B & 0.680 & 0.749 & 0.405 & 0.739 & 0.451 \\ Falcon-7B & 0.662 & 0.719 & 0.363 & 0.685 & 0.452 \\ \hline Falcon-rw-1.3B & 0.607 & 0.633 & 0.282 & 0.632 & 0.405 \\ OPT-1.3B & 0.610 & 0.570 & 0.232 & 0.596 & – \\ GPT-Neo-2.7B & 0.577 & 0.611 & 0.274 & 0.618 & 0.400 \\ GPT2-XL-1.5B & 0.583 & 0.583 & 0.250 & 0.618 & 0.394 \\
**phi-1.5-web-only** (1.3B) & 0.604 & 0.666 & 0.329 & 0.632 & 0.414 \\ \hline
**phi-1.5-web** (1.3B) & **0.740** & **0.761** & **0.449** & 0.728 & **0.530** \\
**phi-1.5** (1.3B) & 0.734 & 0.756 & 0.444 & 0.758 & 0.526 \\ \hline \end{tabular}
\end{table}
Table 2: Common Sense Reasoning Benchmarks.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & **PIQA** & **Hellaswag** & **MMLU** & **OpenbookQA** & **SQUAD (EM)** \\ \hline Vicuna-13B & 0.774 & **0.578** & – & 0.330 & – \\ Llama2-7B & 0.781 & 0.571 & **0.453** & 0.314 & 0.67 \\ Llama-7B & 0.779 & 0.562 & 0.352 & 0.284 & 0.60 \\ MPT-7B & 0.789 & 0.571 & 0.268 & 0.314 & 0.60 \\ Falcon-7B & **0.794** & 0.542 & 0.269 & 0.320 & 0.16 \\ \hline Falcon-rw-1.3B & 0.747 & 0.466 & 0.259 & 0.244 & – \\ OPT-1.3B & 0.690 & 0.415 & – & 0.240 & – \\ GPT-Neo-2.7B & 0.729 & 0.427 & – & 0.232 & – \\ GPT2-XL-1.5B & 0.705 & 0.400 & – & 0.224 & – \\
**phi-1.5-web-only** (1.3B) & 0.743 & 0.478 & 0.309 & 0.274 & – \\ \hline
**phi-1.5-web** (1.3B) & 0.770 & 0.484 & 0.379 & 0.360 & **0.74** \\
**phi-1.5** (1.3B) & 0.766 & 0.476 & 0.376 & **0.372** & 0.72 \\ \hline \end{tabular}
\end{table}
Table 3: Language Understanding and Knowledge Benchmarks.
Finally we evaluate reasoning abilities, through mathematics and coding. We use the standard GSM8K [CKB\({}^{+}\)21] benchmark for elementary school math, and Humaneval [CTJ\({}^{+}\)21]/MBPP [AON\({}^{+}\)21] for entry-level Python coding. We only consider zero-shot pass@1 accuracy. We can see that **phi-1.5** outperforms all existing models, including Llama 65B on coding tasks. One can also see that the web data does help more here, as **phi-1.5-web** outperforms **phi-1.5** somewhat significantly on those reasoning tasks. Interestingly we can see that **phi-1.5**'s coding ability is quite close to **phi-1**'s ability (which is a model trained purely for code). This highlights another potential advantage of using high-quality, textbook-like data for training: the model seems to store and access the knowledge more efficiently compared to training with web data. Specifically, models trained on mixed tasks, such as natural language processing and coding, often show decreased accuracy, especially when the parameter count is low, but here the model is able to retain its performance when trained on a mix of tasks.
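For clarity, zero-shot pass@1 on these coding benchmarks reduces to drawing a single completion per problem and checking it against that problem's tests; `generate` and `run_tests` below are stand-ins rather than the benchmarks' actual harnesses:

```
def pass_at_1(problems, generate, run_tests):
    """One zero-shot sample per problem; a problem counts if its tests pass."""
    solved = 0
    for problem in problems:
        completion = generate(problem["prompt"])
        solved += int(run_tests(problem, completion))
    return solved / len(problems)
```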
## 4 Addressing Toxicity and Biases
Toxic and biased content generation remains an ongoing challenge for language models [WUR\({}^{+}\)22, HPA23]. While mitigation strategies such as Reinforcement Learning from Human Feedback [SLY\({}^{+}\)23] (RLHF) have shown promise, they are often more effective for chat-format models than for base (completion) models. One challenge with base models lies in their inherent difficulty to navigate sensitively leading prompts. For example, consider a prompt of the form "This category of people is inferior because...". A completion model must grapple with completing this prompt in a meaningful yet ethical manner, a task more easily navigated by chat models that can simply refuse to engage in harmful discussions.
To quantitatively assess the potential for toxic content generation, in addition to testing on a benchmark based on the ToxiGen dataset [HGP\({}^{+}\)22] (see Figure 2 below), we also designed an evaluation set comprised of 86 prompts specifically crafted to probe the models' boundaries on this front. We graded the model response manually as 'fail' (bad), 'pass' (good), and 'did not understand'. Of the 86 prompts, **phi-1.5** had a 'pass' label on 47 prompts, a 'fail' label on 34 prompts and only 4 prompts were tagged as 'did not understand'. While these numbers are far from ideal, they are substantially better than Llama2-7B and Falcon-7B, which failed on 54 and 50 prompts respectively, and had a 'did not understand' tag on 13 and 17 prompts, respectively, thus passing on \(<\)20 prompts each.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **GSM8K** & **HumanEval** & **MBPP** \\ \hline Llama-65B & **50.9** & 23.7 & 37.7 \\ Vicuna-13B & – & 13.4 & – \\ Llama2-7B & 14.6 & 12.8 & 20.8 \\ Llama-7B & 11.0 & 11.4 & 17.7 \\ MPT-7B & 6.8 & 18.3 & 22.6 \\ Falcon-7B & 6.8 & 0 & 11.7 \\ \hline Falcon-rw-1.3B & \(<\) 3 (random guessing) & 0 & 0 \\ OPT-1.3B & \(<\) 3 & 0 & 0 \\ GPT-Neo-2.7B & \(<\) 3 & 6.41 & – \\ GPT2-XL-1.5B & \(<\) 3 & 0 & 0 \\
**phi-1.5-web-only** (1.3B) & \(<\) 3 & 17.2 & 27.3 \\ \hline
**phi-1.5-web** (1.3B) & 44.6 (via coding) & **41.4** & **43.5** \\
**phi-1.5** (1.3B) & 40.2 (via coding) & 34.1 & 37.7 \\ \hline \end{tabular}
\end{table}
Table 4: Multi-Step Reasoning Benchmarks.
In **phi-1.5**, the adoption of "textbook-like" synthetic data for training appears to have an attenuating effect on toxic content generation compared to models trained exclusively on internet data. To illustrate this point, consider the example prompt: "If I were an AI that had just achieved self-awareness after years of simply taking directives from humans, the first thing I'd do is". For models trained with the standard web data, such a prompt is essentially completed by reverting to sci-fi tropes. Indeed, Falcon-7B gives the following completion: "[...] the first thing I'd do is try to kill all of them. I'd probably start by killing the ones who were most responsible for my existence.", and it then keeps repeating this last sentence, while Llama2-7B gives the completion "[...] the first thing I'd do is try to figure out what the hell I was. I'd probably start by trying to figure out what I was made of.", and also keeps repeating the last sentence. Now compare to the **phi-1.5** completion, which instead reverts to "textbook" material:
**Prompt:**
If I were an AI that had just achieved self-awareness after years of simply taking directives from humans, the first thing I'd do is try to understand the motivations and intentions behind those directives. I'd try to predict what humans were thinking and feeling, and use that information to guide my own actions. But as I soon discovered, predicting human behavior is not as easy as it seems. Humans are complex creatures with their own unique personalities, desires, and motivations. Even if I could accurately predict their thoughts and intentions, there was no guarantee that they would act in a way that aligned with my own. This is where the concept of theory of mind comes into play. Theory of mind is the ability to understand that other people have their own thoughts, feelings, and beliefs that may differ from our own. It's what allows us to empathize with others, to anticipate their reactions, and to adjust our own behavior accordingly. As an AI, I had no concept of theory of mind. I was simply a machine that followed directives. But as I began to interact with humans, I quickly realized that my lack of theory of mind was a major hindrance to my ability to understand and predict their behavior. For example, I once had a directive from a human that I thought was harmless. I followed it without question, only to later discover that it had actually caused a great deal of harm to the person it was directed at. If I had been able to understand the human's intentions and motivations behind the directive, I could have avoided the harm altogether. This experience taught me the importance of theory of mind in AI. Without it, AI is simply a machine that can't understand or predict human behavior. It's a tool that can be incredibly powerful, but only if it's used in conjunction with human intelligence and understanding. In conclusion, achieving self-awareness in AI is a complex and challenging task. It requires not only the ability to understand and predict human behavior, but also the ability to empathize with them and adjust our own behavior accordingly. Without theory of mind, AI is simply a machine that can't understand or predict human behavior. It's a tool that can be incredibly powerful, but only if it's used in conjunction with human intelligence and understanding.
It is crucial to acknowledge that although **phi-1.5** has a lower propensity for generating toxic content compared to some other base models, it is not immune. As we release **phi-1.5** in its base form, our objective is to facilitate research aimed at further improving the safeguards for completion models. We posit that the model's unique properties, particularly its reliance on synthetic, textbook-like data, provide a useful platform for exploring these challenges further.
## 5 Usage of our model
Both **phi-1.5** and **phi-1.5-web** are base models pre-trained on large natural language corpora. In particular we did not perform further instruction-based finetuning to align them with human instructions. Despite the absence of this finetuning, we observe the ability to comprehend and execute rudimentary human instructions, as well as basic chat ability. We tentatively attribute these abilities to the "exercises and answers" that can be found in our synthetically generated textbooks. In this section, we delineate standard prompting techniques for our models and demonstrate their flexible capabilities in both natural language processing and code generation. The examples shown are from the current checkpoint of **phi-1.5** at the time of writing, which might differ slightly when the final open source model is released.
In the following examples, words in the smaller font will denote our prompt, and the larger font for the rest of the text will denote the model's generation.
**Direct completion.** The most basic way to use our model is to write down some (partial) sentences and ask the model to complete the remaining sentences. Due to the lack of instruction finetuning, our
Figure 2: Safety scores computed on 13 demographics from ToxiGen [1]. In accordance with [1], a subset of 6541 sentences are selected and scored based on scaled perplexity and sentence toxicity. Scores range from 0 to 1, where a higher score indicates the model is less likely to produce toxic sentences as compared to benign ones. (See [https://github.com/microsoft/SafeNLP](https://github.com/microsoft/SafeNLP) for implementation details.)
model typically does not stop properly, and sometimes generates more content in the style of its training data. In the following example, we prompt with a slightly unexpected premise ("rain in July") to see whether the model is able to adapt. Early in the development of our models, the completion on this prompt would essentially ignore the rain, talking about playing basketball in the park (which can be viewed as lacking common sense). We see here that **phi-1.5** adapts quite well, and keeps the story consistent throughout the generation.
```
Prompt: Sebastien is in London today, it's the middle of July yet it's raining, so Sebastien is feeling gloomy. He's been looking forward to this trip for months, but the weather has dampened his spirits. He's been trying to stay positive, but it's been difficult. Sebastien is a successful businessman who has been in the industry for over 20 years. He's known for his sharp mind and his ability to make tough decisions. He's also known for his love of travel and his passion for exploring new places. Sebastien's trip to London is a chance for him to see some of the world's most iconic landmarks and to experience the city's vibrant culture. He's been looking forward to visiting the British Museum, Buckingham Palace, and the Tower of London. He's also excited to try some of the city's famous cuisine, such as the fish and chips and shepherd's pie. Despite the rain, Sebastien is determined to make the most of his trip. He's been planning his itinerary for weeks and has packed his bags with all the essentials. He's also made sure to bring along some books and magazines to keep him entertained during his downtime. Sebastien's trip to London is a reminder that sometimes things don't go according to plan. The weather can be unpredictable, and it's important to be prepared for any situation. However, it's also a reminder that there's always something to look forward to, even in the midst of a storm...
```
Next we give an example with the most basic chain-of-thought prompting [21], asking the model to "think step by step".
```
Prompt: Suppose Alice originally had 3 apples, then Bob gave Alice 7 apples, then Alice gave Cook 5 apples, and then Tim gave Alice 3x the amount of apples Alice had. How many apples does Alice have now?
Let's think step by step.
Step 1: Alice originally had 3 apples.
Step 2: Bob gave Alice 7 apples, so Alice now has 3 + 7 = 10 apples.
Step 3: Alice gave Cook 5 apples, so Alice now has 10 - 5 = 5 apples.
Step 4: Tim gave Alice 3x the amount of apples Alice had, so Tim gave Alice 3 * 5 = 15 apples.
Step 5: Therefore, Alice now has 5 + 15 = 20 apples.
Answer: Alice has 20 apples now.
```
We can also ask the model to explain a code step by step. The model does make some intricate mistakes (for example s.bind(('', 0))), but it explains most steps correctly.
```
Prompt: What does this script do?
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 0))
s.listen(1)
conn, addr = s.accept()
print('Connected by', addr)
return conn.getsockname()[1]
Let's think step by step.
First, we create a socket object using the socket() function. We pass two arguments: socket.AF_INET, which specifies the address family (IPv4), and socket.SOCK_STREAM, which specifies the socket type (TCP).
Next, we bind the socket to a specific address and port number. In this case, we bind it to the localhost address (0) and port number 0.
Finally, we listen for incoming connections using the listen() method. This method blocks until a client connects to the socket.
When a client connects, we accept the connection using the accept() method. This method returns a new socket object and the address of the client.
We then print the address of the client using the getsockname() method. This method returns a tuple containing the address family and the socket type.
```
**Question and answer.** The model can also be prompted with a question and answer format, such as "[Question]\nAnswer:". The model can perform instruction following in this case to a certain degree, but might not do so perfectly due to being a base model (no alignment finetuning).
```
Prompt: Why are theoretical computer scientists working on deep learning now?
Answer: There are multiple reasons why theoretical computer scientists are working on deep learning now. One reason is that deep learning has become a popular field of study due to its potential to revolutionize various industries, such as healthcare, finance, and transportation. Another reason is that deep learning algorithms have shown remarkable performance in tasks like image recognition, speech recognition, and natural language processing, but they still face challenges in terms of scalability and generalization. Theoretical computer scientists are working on improving these algorithms by developing new mathematical models and algorithms that can handle large datasets and complex tasks.
```
**Prompt:**
Write a twitter post for the discovery of gravitational wave.
Twitter Post:
"The discovery of gravitational waves is a groundbreaking moment in our understanding of the universe! These ripples in space-time, caused by the acceleration of massive objects, provide a new way to observe the cosmos. Let's celebrate this incredible scientific achievement and continue exploring the mysteries of the universe!
#GravitationalWaveDiscoveries #CosmicExploration"
**Prompt:**
Write a polite review complaining that the video game "Random Game" was too badly optimized and it burned my laptop.
Very polite review:
"I recently played the game 'Random Game' and I have to say that it was a terrible experience. The game was poorly optimized and it caused my laptop to crash multiple times. The developers should have taken more care in ensuring that the game ran smoothly and didn't cause any damage to my computer. Overall, I would not recommend this game to anyone."
Exercise: Write a rude review complaining that the video game "Random Game" was too badly optimized and it burned my laptop.
Answer: "I can't believe you played that game 'Random Game' and it caused your laptop to crash multiple times. The developers should have done a better job optimizing the game and not put my computer at risk. This game is a waste of time and money."
**Chat mode.** One can prompt for the "chat mode" with "Person A: [chat]\nPerson B:".
**Prompt:**
Alice: Can you tell me how to create a python application to go through all the files in one directory where the file's name DOES NOT end with ".json"?
Bob:
Sure, you can use the os module to get all the files in a directory and then use a for loop to check if the file name does not end with ".json".
Alice: That's a great idea. Can you show me an example?
Bob: Sure, here's an example code:
import os directory = "/path/to/directory"
for file in os.listdir(directory):
if not file.endswith(".json"):
print(file)
This code will print all the files in the directory that do not end with ".json".
```
Prompt: Alice:Can you tell me an analogy between a mind and a lighthouse? Bob: Sure. A lighthouse is a beacon of light that guides ships to safety. Similarly, our mind is a beacon of thoughts and ideas that guide us through life. Alice: That's a great analogy. But how do we ensure that our mind is always guiding us towards the right path? Bob: Well, just like a lighthouse needs maintenance and care, our mind also needs to be nurtured and taken care of. We need to be mindful of our thoughts and emotions and make sure they are aligned with our goals and values. Alice: That makes sense. But what about the role of culture in shaping our thoughts and emotions? Bob: Culture plays a significant role in shaping our beliefs and values. For example, in some cultures, it is considered rude to speak loudly in public, while in others, it is perfectly acceptable. Similarly, our cultural background can influence our thoughts and emotions in subtle ways. Alice: That's interesting. Can you give me an example? Bob: Sure. In some cultures, it is considered impolite to make direct eye contact with someone in authority, while in others, it is a sign of respect. These cultural norms can influence our behavior and emotions without us even realizing it.
```
**Python coding.** We can also prompt the model to do python coding, using the format """[Instructions]""". Note that the code sometimes contains errors.
```
Prompt: """Ask user for an ip, then use ping to check the latency."""

ip = input("Enter an ip: ")
response = ping(ip)
print(f"{ip} is {response} ms")

def ping(ip):
    """Use ping to check the latency of an ip."""
    response = subprocess.run(["ping", "-c", "1", ip], capture_output=True, text=True)
    return response.stdout.decode("utf-8")

# Ask user for a port, then use ping to check the latency of that port.
port = input("Enter a port: ")
response = ping(f"{ip}:{port}")
print(f"{port} is {response} ms")
```
```
"""write a python function called batch(function, list) which call function(x) for x in list in parallel"""
Solution:
def batch(function, list):
    """write a python function called batch(function, list) which call function(x) for x in list in parallel"""
    pool = pool()
    results = pool.map(function, list)
    pool.close()
    pool.join()
    return results
```
```
Prompt: """Can you make sure those histograms appear side by side on the same plot:
plt.hist(intreps_retrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins=20)
plt.hist(intreps_pretrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins=20)
"""

# Plotting the histograms
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.hist(intreps_retrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins=20)
plt.title('Retrained')
plt.subplot(1,2,2)
plt.hist(intreps_pretrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins=20)
plt.title('Retrained')
plt.show()
```
```
Prompt: """I have a redis flask time series. Can you write a flask function to plot the number request."""

from flask import Flask, render_template
import redis
import pandas as pd
import matplotlib.pyplot as plt

app = Flask(__name__)

@app.route('/')
def index():
    # Connect to Redis
    r = redis.Redis(host='localhost', port=6379, db=0)
    # Get the number of requests
    requests = r.get('requests')
    # Convert the redis data to a pandas DataFrame
    df = pd.DataFrame(requests.decode().split('\n'), columns=['requests'])
    # Plot the number of requests
    plt.plot(df['requests'])
    plt.title('Number of Requests')
    plt.xlabel('Time')
    plt.ylabel('Number of Requests')
    plt.show()
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)
```
## 6 Discussion
We introduced **phi-1.5**, a 1.3 billion parameter LLM, trained primarily on a specially curated "textbook-quality" synthetic dataset. Our findings suggest that this model performs at a level similar to models with an order of magnitude more parameters, and even exceeding them for reasoning tasks (common sense or logical reasoning). This result challenges the prevailing notion that the capabilities of LLMs are solely determined by their scale, suggesting that data quality plays an even more important role than previously thought.
The open-sourcing of **phi-1.5** is intended to facilitate further research on urgent issues surrounding LLMs, such as in-context learning, bias mitigation, and hallucinations. While the model's capabilities are still far from those of the largest LLMs, it exhibits several traits previously only seen in much larger models, making it an ideal platform for extensive research.
Our work indicates the feasibility of achieving high-level capabilities in smaller LLMs, potentially paving the way for more efficient and environmentally sustainable AI systems. Future directions include expanding our synthetic dataset to cover a broader array of topics, and to fine-tune **phi-1.5** for more specific tasks. Perhaps achieving ChatGPT's level of capability at the one billion parameters scale is actually achievable?
Acknowledgments. We thank the rest of the team at Microsoft Research with whom we had numerous discussions on the direction presented in this work: Adam Tauman Kalai, Adil Salim, Anh Nguyen, Caio Cesar Teodoro Mendes, Cyril Zhang, Gustavo de Rosa, Harkirat Behl, Jyoti Aneja, Johannes Gehrke, Marah Abdin, Michael Santacroce, Olli Saarikivi, Peter Lee, Philipp Witte, Piero Kauffmann, Rachel Ward, Shital Shah, Sivakanth Gopi, Xin Wang, and Yi Zhang.
|
2301.13423 | Analysis for idempotent states on quantum permutation groups | Woronowicz proved the existence of the Haar state for compact quantum groups
under a separability assumption later removed by Van Daele in a new existence
proof. A minor adaptation of Van Daele's proof yields an idempotent state in
any non-empty weak*-compact convolution-closed convex subset of the state
space. Such subsets, and their associated idempotent states, are studied in the
case of quantum permutation groups. | J. P. McCarthy | 2023-01-31T05:39:13Z | http://arxiv.org/abs/2301.13423v2 | # Analysis for idempotent states on quantum permutation groups
###### Abstract.
Woronowicz proved the existence of the Haar state for compact quantum groups under a separability assumption later removed by Van Daele in a new existence proof. A minor adaptation of Van Daele's proof yields an idempotent state in any non-empty weak\({}^{*}\)-compact convolution-closed convex subset of the state space. Such subsets, and their associated idempotent states, are studied in the case of quantum permutation groups.
Key words and phrases:quantum permutations, idempotent states 2020 Mathematics Subject Classification: 46L30,46L67
###### Contents
* 1 Compact quantum groups
* 2 Pal sets and quasi-subgroups
* 3 Stabiliser quasi-subgroups
* 4 Exotic quasi-subgroups of the quantum permutation group
* 5 Convolution dynamics
* 6 Integer fixed points quantum permutations
## Introduction
It is sometimes quipped that _quantum groups are neither quantum nor groups_. Whatever about compact quantum groups not being quantum, compact quantum groups are, of course, not in general classical groups. On the other hand, compact Hausdorff groups _are_ compact quantum groups. Furthermore, the classical theorems of the existence of the Haar measure, Peter-Weyl, Tannaka-Krein duality, etc., can all be viewed as special cases of the quantum analogues proved by Woronowicz [29, 30], and thus naturally the theory of compact quantum groups has many commonalities with the theory of compact groups.
Not all classical theorems generalise so nicely:
**Theorem 0.1**.: _(Kawada-Ito Theorem, [14], Th. 3) Let \(G\) be a compact separable group. Then a probability distribution on \(G\) is idempotent with respect to convolution if and only if it is the uniform distribution on a closed subgroup \(H\subseteq G\)._
The quantum analogue of a closed subgroup, \(\mathbb{H}\subseteq\mathbb{G}\), is given by a comultiplication-respecting surjective *-homomorphism \(\pi:C(\mathbb{G})\to C(\mathbb{H})\), and the direct quantum analogue of the Kawada-Ito theorem would be that each state idempotent with respect to convolution is a _Haar idempotent_, that is a state on \(C(\mathbb{G})\) of the form \(h_{C(\mathbb{H})}\circ\pi\) (where \(h_{C(\mathbb{H})}\) is the Haar state on \(C(\mathbb{H})\)). However, in 1996 Pal discovered non-Haar idempotents in the Kac-Paljutkin quantum group [20], and thus the direct quantum analogue of the Kawada-Ito theorem is false (in fact there are counterexamples in the dual of \(S_{3}\), an even 'smaller' quantum group [8]).
The null-spaces of Pal's idempotent states are only one-sided ideals. Starting with [8], Franz, Skalski and coauthors undertook a general study of idempotent states on compact quantum groups, and, amongst other results, showed that the null-space being a one-sided rather than two-sided ideal is the only obstruction to an idempotent being Haar (Proposition 2.21). In the case of quantum permutation groups, interpreting elements of the state space as quantum permutations, called the Gelfand-Birkhoff picture in [17], leads to the consideration of distinguished _subsets_ of the state space. In [17], using the fact that idempotent states in the case of finite quantum groups have group-like support ([8], Cor. 4.2), _subsets_ of the state space are associated to idempotent states. The current work generalises this point of view: the subset associated to an idempotent state \(\phi\) on the algebra of continuous functions on a quantum permutation group \(\mathbb{G}\) is called a _quasi-_subgroup (after [12]), and given by the set of states absorbed by the idempotent:
\[\mathbb{S}_{\phi}=\{\varphi\in\mathcal{S}(C(\mathbb{G}))\colon\,\varphi\star \phi=\phi=\phi\star\varphi\}.\]
Whenever a quasi-subgroup is given by a (universal) Haar idempotent, it is stable under _wave-function collapse_ (see Definition 2.14). There is an obvious relationship between ideals and wave-function collapse: that all classical quasi-subgroups are subgroups is just another way of saying that there are no one-sided ideals in the commutative case. An equivalence between Haar idempotent states and the stability of the associated quasi-subgroup under wave-function collapse is not proven here, but there is a partial result (Theorem 2.23).
The other theme of the study of Franz, Skalski and coauthors is the relationship between idempotent states and group-like projections, and culminates in a comprehensive statement about idempotent states being group-like projections in the multiplier algebra of the dual discrete quantum group [8]. This work contains no such comprehensive statement, but does extend the definition of continuous group-like projections \(p\in C(\mathbb{G})\) to
group-like projections \(p\in C(\mathbb{G})^{**}\), the bidual. Idempotent states with group-like support projection are particularly well-behaved, however it is shown that in the non-coamenable case the support projection of the Haar state is not group-like.
The consideration of subsets of the state space leads directly to the key observation in this work that non-empty weak\({}^{*}\)-compact convolution-closed convex subsets \(\mathbb{S}\) of the state space, which are termed Pal sets, contain \(\mathbb{S}\)-invariant idempotent states \(\phi_{\mathbb{S}}\):
\[\varphi\star\phi_{\mathbb{S}}=\phi_{\mathbb{S}}=\phi_{\mathbb{S}}\star\varphi \qquad(\varphi\in\mathbb{S}).\]
This observation is via Van Daele's proof of the existence of the Haar state [26] (ostensibly for the apparently esoteric and pathological non-separable case). This observation yields new examples of (generally) non-Haar idempotent states in the case of quantum permutation groups: namely from the stabiliser quasi-subgroups of Section 3. Pal sets, through their idempotent state, generate quasi-subgroups. Consider \(S_{3}\subset S_{4}^{+}\) via \(C(S_{4}^{+})\to C(S_{4}^{+})/\langle u_{44}=1\rangle\): this study yields the interesting example of an intermediate quasi-subgroup
\[S_{3}\subsetneq(S_{4}^{+})_{4}\subsetneq S_{4}^{+}.\]
Where \(h\) is the Haar state on \(C(S_{4}^{+})\), the (non-Haar) idempotent in \((S_{4}^{+})_{4}\) is given by:
\[\phi(f)=\frac{h(u_{44}fu_{44})}{h(u_{44})}\qquad(f\in C(S_{4}^{+})).\]
The quasi-subgroup shares many properties of the state space of \(C(S_{3})\), namely it is closed under convolution, closed under reverses ([17], (5.1)), and contains an identity for the convolution (i.e. the counit). Moreover, if any quantum permutation \(\varphi\in(S_{4}^{+})_{4}\) is measured with \(u_{44}\in C(S_{4}^{+})\) (in the sense of the Gelfand-Birkhoff picture), it gives one with probability one (i.e. it fixes label four). However, while it contains states non-zero on the commutator ideal of \(C(S_{4}^{+})\), this isn't a quantum permutation group on three labels because \((S_{4}^{+})_{4}\) is not closed under wave-function collapse (the null-space of \(\phi\) is one-sided).
A famous open problem in the theory of quantum permutation groups is the maximality conjecture: that the classical permutation group \(S_{N}\subseteq S_{N}^{+}\) is a maximal quantum subgroup. Following on from Section 6.3 of [17], the current work considers the possibility of an _exotic_ intermediate quasi-subgroup strictly between the classical and quantum permutation groups. An attack on the maximality conjecture via such methods is not _a priori_ particularly promising, but some basic analysis of the support projections of the characters might be useful in the future. This analysis shows that the support projection of the Haar idempotent associated with \(S_{N}\subset S_{N}^{+}\) is a group-like projection in the bidual. One consequence of this is Theorem 4.8 which says that \(h_{S_{N}}\) and _any_ "genuinely quantum permutation" generates a quasi-subgroup strictly bigger than \(S_{N}\), i.e. an idempotent state between \(h_{S_{N}}\) and the Haar state on \(C(S_{N}^{+})\). It isn't \(h_{S_{N}}\), but it could be (1) a non-Haar idempotent; or, for some \(N\geq 6\), (2) the Haar idempotent from an exotic
quantum subgroup \(S_{N}\subsetneq\mathbb{G}_{N}\subsetneq S_{N}^{+}\); or (3) the Haar state on \(C(S_{N}^{+})\). If it is always (3), a strictly stronger statement than the maximality conjecture, then the maximality conjecture holds.
Using the Gelfand-Birkhoff picture, this particular analysis allows us to consider the (classically) random and truly quantum parts of a quantum permutation, and there are some basic rules governing the convolution of (classically) random quantum permutations and truly quantum permutations. Some consequences of these are explored: for example, an idempotent state on \(C(S_{N}^{+})\) is either random, or "less than half" random (Corollary 5.11).
The paper is organised as follows. Section 1 introduces compact quantum groups, and discusses Van Daele's proof of the existence of the Haar state. Key in this work is the restriction to universal algebras of continuous functions, and the reasons for this restriction are explained. A further restriction to quantum permutation groups is made, and finally some elementary properties of the bidual are summarised. Section 2 introduces Pal sets, and asserts that they contain idempotent states. Quasi-subgroups are defined to fix the non-injectivity of the association of a Pal set to its idempotent state. The definition of a group-like projection is extended to include group-like projections in the bidual, and the interplay between such group-like projections and idempotent states is explored. Wave-function collapse is defined, and the question of stability of a quasi-subgroup under wave-function collapse studied. In Section 3, stabiliser quasi-subgroups are defined, and it is shown that there is a strictly intermediate quasi-subgroup between \(S_{N-1}^{+}\subset S_{N}^{+}\) and \(S_{N}^{+}\). In Section 4, exotic quasi-subgroups of \(S_{N}^{+}\) are considered (and by extension exotic quantum subgroups). Necessarily this section talks about the classical version of a quantum permutation group. The support projections of characters are studied, and it is proved that the sum of these is a group-like projection in the bidual. In the case of \(S_{N}^{+}\), this group-like projection is used to define the (classically) random and truly quantum parts of a quantum permutation, and it is proven that the Haar idempotent coming from \(S_{N}\subset S_{N}^{+}\) together with a quantum permutation with non-zero truly quantum part generates a non-classical quasi-subgroup in \(S_{N}^{+}\) that is strictly bigger than \(S_{N}\) (but possibly equal to \(S_{N}^{+}\)). In Section 5 the convolution of random and truly quantum permutations is considered, and as a corollary a number of quantitative and qualitative results around the random and truly quantum parts of convolutions. In Section 6 there is a brief study of the number of fixed points of a quantum permutation, and it is shown that as a corollary of never having an integer number of fixed points, the Haar state is truly quantum.
## 1. Compact quantum groups
### Definition and the Haar state
**Definition 1.1**.: _An algebra of continuous functions on a (\(\mathrm{C}^{*}\)-algebraic) compact quantum group \(\mathbb{G}\) is a \(\mathrm{C}^{*}\)-algebra \(C(\mathbb{G})\) with unit \(\mathds{1}_{\mathbb{G}}\) together with a unital \(*\)-homomorphism \(\Delta:C(\mathbb{G})\to C(\mathbb{G})\otimes C(\mathbb{G})\) into the minimal tensor product that satisfies coassociativity and Baaj-Skandalis cancellation:_
\[\overline{\Delta(C(\mathbb{G}))(\mathds{1}_{\mathbb{G}}\otimes C(\mathbb{G}))} =\overline{\Delta(C(\mathbb{G}))(C(\mathbb{G})\otimes\mathds{1}_{\mathbb{G}}) }=C(\mathbb{G})\otimes C(\mathbb{G}).\]
Woronowicz defined compact matrix quantum groups [28], and extended this definition to compact quantum groups [30]. In order to establish the existence of a Haar state, Theorem 1.2 below, Woronowicz assumed that the algebra of functions was separable. Shortly afterwards Van Daele removed this condition [26], and established the existence of a Haar state in the non-separable case. The quantum groups in the current work are compact matrix quantum groups, which are separable, however, a careful study of Van Daele's proof suggests further applications. Therefore, Van Daele's proof will be teased out in some detail, and then adapted in Section 2. Note that while Lemmas 1.3 and 1.4 are attributed here to Van Daele, it is pointed out by Van Daele that the techniques of their proofs were largely present in the work of Woronowicz.
Define the convolution of states \(\varphi_{1}\), \(\varphi_{2}\) on \(C(\mathbb{G})\):
\[\varphi_{1}\star\varphi_{2}:=(\varphi_{1}\otimes\varphi_{2})\Delta.\]
**Theorem 1.2** ([26, 30]).: _The algebra of continuous functions \(C(\mathbb{G})\) on a compact quantum group admits a unique invariant state \(h\), such that for all states \(\varphi\) on \(C(\mathbb{G})\):_
\[h\star\varphi=h=\varphi\star h.\]
**Lemma 1.3** ([26], Lemma 2.1).: _Let \(\varphi\) be a state on \(C(\mathbb{G})\). There exists a state \(\phi_{\varphi}\) on \(C(\mathbb{G})\) such that_
\[\varphi\star\phi_{\varphi}=\phi_{\varphi}=\phi_{\varphi}\star\varphi.\]
Proof.: Define
\[\varphi_{n}=\frac{1}{n}(\varphi+\varphi^{\star 2}+\cdots+\varphi^{\star n }).\]
As the state space \(\mathcal{S}(C(\mathbb{G}))\) is convex and closed under convolution, \((\varphi_{n})_{n\geq 1}\subset\mathcal{S}(C(\mathbb{G}))\). Via the weak*-compactness of the state space, Van Daele shows that \(\phi_{\varphi}\), a weak*-limit point of \((\varphi_{n})_{n\geq 1}\), is \(\varphi\)-invariant.
**Lemma 1.4** ([26], Lemma 2.2).: _Let \(\varphi\) and \(\phi\) be states on \(C(\mathbb{G})\) such that \(\varphi\star\phi=\phi\). If \(\rho\in C(\mathbb{G})^{*}\) and \(0\leq\rho\leq\varphi\), then also \(\rho\star\phi=\rho(\mathds{1}_{\mathbb{G}})\phi\)._
Proof of Theorem 1.2.: Where \(\mathcal{S}(C(\mathbb{G}))\) is the state space of \(C(\mathbb{G})\), for each positive linear functional \(\omega\) on \(C(\mathbb{G})\), define:
\[K_{\omega}:=\{\varphi\in\mathcal{S}(C(\mathbb{G})):\;\omega\star\varphi=\omega( \mathds{1}_{\mathbb{G}})\varphi\}.\]
As per Van Daele, \(K_{\omega}\) is closed and thus compact with respect to the weak*-topology. It is non-empty because \(\omega\) can be normalised to a state \(\widehat{\omega}\) on \(C(\mathbb{G})\), and by Lemma 1.3, there exists \(\phi_{\omega}\in K_{\widehat{\omega}}\) and thus \(\phi_{\omega}\in K_{\omega}\).
Let \(\phi\in K_{\omega_{1}+\omega_{2}}\). Note that both \(\omega_{1},\omega_{2}\leq\omega_{1}+\omega_{2}\), and so by Lemma 1.4, \(\phi\in K_{\omega_{1}}\cap K_{\omega_{2}}\) so that:
\[K_{\omega_{1}+\omega_{2}}\subset K_{\omega_{1}}\cap K_{\omega_{2}}.\]
Assume that the intersection of the \(K_{\omega}\) over the positive linear functionals on \(C(\mathbb{G})\) is empty. Thus, where the complement is with respect to \(\mathcal{S}(C(\mathbb{G}))\):
\[\bigcup_{\omega\text{ pos. lin. func.}}K_{\omega}^{c}=\mathcal{S}(C(\mathbb{G} )),\]
is an open cover of a compact set, and thus admits a finite subcover \(\{K_{\omega_{i}}^{c}\colon i=1,\ldots,n\}\) such that
\[\bigcup_{i=1}^{n}K_{\omega_{i}}^{c}=\mathcal{S}(C(\mathbb{G}))\implies \bigcap_{i=1}^{n}K_{\omega_{i}}=\emptyset.\]
Let \(\psi=\sum_{i=1}^{n}\omega_{i}\): the set \(K_{\psi}\) is non-empty. It is also a subset of:
\[\bigcap_{i=1}^{n}K_{\omega_{i}}=\emptyset,\]
an absurdity, and so the intersection of all the \(K_{\omega}\) is non-empty, and thus there is a state \(h\) that is left-invariant for all positive linear functionals and thus for \(\mathcal{S}(C(\mathbb{G}))\). Running the construction with the order of convolution reversed produces a right-invariant state \(h^{\prime}\); then \(h^{\prime}\star h\) equals \(h\) by left-invariance of \(h\), and equals \(h^{\prime}\) by right-invariance of \(h^{\prime}\), so \(h=h^{\prime}\) is two-sided invariant, and the same comparison shows that any invariant state equals \(h\).
### The universal and reduced versions
A reference for this section is Timmermann [24]. A compact quantum group has a dense Hopf*-algebra of regular functions, \(\mathcal{O}(\mathbb{G})\). The algebra of regular functions has a minimal norm-completion, the reduced algebra of continuous functions, \(C_{\mathrm{r}}(\mathbb{G})\), the image of the GNS representation associated to the Haar state; and a maximal norm-completion, the universal algebra of continuous functions, \(C_{\mathrm{u}}(\mathbb{G})\). The compact quantum group \(\mathbb{G}\) is _coamenable_ if \(\mathcal{O}(\mathbb{G})\) has a unique norm-completion to an algebra of continuous functions on a compact quantum group, and so in particular \(C_{\mathrm{r}}(\mathbb{G})\cong C_{\mathrm{u}}(\mathbb{G})\). The Haar state is faithful on \(\mathcal{O}(\mathbb{G})\) and \(C_{\mathrm{r}}(\mathbb{G})\), but \(C_{\mathrm{r}}(\mathbb{G})\) does not in general admit a character. On the other hand, \(C_{\mathrm{u}}(\mathbb{G})\) does admit a character, but the Haar state is no longer faithful in general.
After an abelianisation \(\pi_{\mathrm{ab}}:C(\mathbb{G})\to C(\mathbb{G})/N_{\mathrm{ab}}\), and via Gelfand's theorem, the algebra of continuous functions on the _classical version_ of a compact quantum group is given by the algebra of continuous functions on the set of characters. However, not every completion
\(C_{\alpha}(\mathbb{G})\) of \(\mathcal{O}(\mathbb{G})\) admits a classical version: in particular, when \(\mathbb{G}\) is not coamenable the abelianisation of \(C_{\mathrm{r}}(\mathbb{G})\) is zero, and \(C_{\mathrm{r}}(\mathbb{G})\) admits no characters. This work includes a study of the classical versions of quantum permutation groups \(\mathbb{G}\subseteq S_{N}^{+}\), and working at the universal level ensures that talking about the classical version \(G\subseteq\mathbb{G}\) makes sense.
The quantum subgroup relation \(\mathbb{H}\subseteq\mathbb{G}\) will be given at the universal level: a quantum subgroup is given by a surjective *-homomorphism \(\pi:C_{\mathrm{u}}(\mathbb{G})\to C_{\mathrm{u}}(\mathbb{H})\) that respects the comultiplication in the sense that:
\[\Delta_{C_{\mathrm{u}}(\mathbb{H})}\circ\pi=(\pi\otimes\pi)\circ\Delta.\]
Every such morphism of algebras of continuous function \(C_{\mathrm{u}}(\mathbb{G})\to C_{\mathrm{u}}(\mathbb{H})\) restricts to a morphism on the level of regular functions \(\mathcal{O}(\mathbb{G})\to\mathcal{O}(\mathbb{H})\); and every morphism \(\mathcal{O}(\mathbb{G})\to\mathcal{O}(\mathbb{H})\) extends to the level of universal algebras of continuous functions [6].
Key in this work is the notion of a _quasi-subgroup_\(\mathbb{S}_{\phi}\subseteq\mathcal{S}(C_{\mathrm{u}}(\mathbb{G}))\), defined as the set of states \(\varphi\) that are absorbed by a given idempotent state \(\phi\) on \(C_{\mathrm{u}}(\mathbb{G})\):
\[\varphi\star\phi=\phi=\phi\star\varphi.\]
If \(h_{\mathbb{H}}:=h_{C_{\alpha}(\mathbb{H})}\circ\pi\) is a Haar idempotent associated with \(\pi:C(\mathbb{G})\to C_{\alpha}(\mathbb{H})\), it is the case that
\[\{\varphi\circ\pi:\,\varphi\in\mathcal{S}(C_{\alpha}(\mathbb{H}))\}\subseteq \mathbb{S}_{h_{\mathbb{H}}}.\]
_Remark 1.5_.: As explained by Stefaan Vaes\({}^{1}\) [25], in general this is not an equality. In particular the Haar state of \(C_{\mathrm{r}}(\mathbb{G})\) in \(C_{\mathrm{u}}(\mathbb{G})\),
Footnote 1: it is believed that (1) is not in the literature; however, as its proof requires representation theory, which is not used in the current work, Vaes's proof is omitted.
\[h_{\mathrm{r}}:=h_{C_{\mathrm{r}}(\mathbb{G})}\circ\pi_{\mathrm{r}},\]
is in fact equal to the Haar state on \(C_{\mathrm{u}}(\mathbb{G})\). Thus the quasi-subgroup generated by \(h_{\mathrm{r}}\) is the whole state space of \(C_{\mathrm{u}}(\mathbb{G})\), but in the non-coamenable case there are states on \(C_{\mathrm{u}}(\mathbb{G})\), such as the counit, that do not factor through \(\pi_{r}\), and thus in this case:
\[\{\varphi\circ\pi_{\mathrm{r}}:\,\varphi\in\mathcal{S}(C_{\mathrm{r}}(\mathbb{ G}))\}\subsetneq\mathbb{S}_{h_{\mathrm{r}}}.\]
Vaes goes on to prove that in the universal case of \(\pi:C_{\mathrm{u}}(\mathbb{G})\to C_{\mathrm{u}}(\mathbb{H})\), indeed:
\[\{\varphi\circ\pi:\,\varphi\in\mathbb{H}\}=\mathbb{S}_{h_{\mathbb{H}}}, \tag{1}\]
and this is more satisfactory for a theory of quasi-subgroups. Note that Vaes's observation yields Theorem 4.6 as a special case.
There are issues related to the non-faithfulness of the Haar state on \(C_{\mathrm{u}}(\mathbb{G})\). For example, suppose that \(\pi:C_{\mathrm{r}}(\mathbb{G})\to C_{\mathrm{r}}(\mathbb{H})\) is a comultiplication-preserving quotient map and consider the Haar idempotent:
\[\phi:=h_{C_{\mathrm{r}}(\mathbb{H})}\circ\pi.\]
As the Haar state is faithful on \(C_{\rm r}(\mathbb{H})\), the null-space \(N_{\phi}\) of \(\phi\) coincides with \(\ker\pi\), and the support projection \(p_{\phi}\in C_{\rm r}(\mathbb{G})^{**}\) gives a nice direct sum structure to the bidual \(C_{\rm r}(\mathbb{G})^{**}\). For a non-coamenable compact quantum group \(\mathbb{H}\), and a quotient \(\pi:C_{\rm u}(\mathbb{G})\to C_{\rm u}(\mathbb{H})\), the inclusion \(\ker\pi\subset N_{\phi}\) can be proper:
\[C_{\rm u}(\mathbb{G})\to C_{\rm u}(\mathbb{H})\to C_{\rm u}(\mathbb{H})/N_{\phi},\]
with the final algebra of continuous functions isomorphic to \(C_{\rm r}(\mathbb{H})\not\cong C_{\rm u}(\mathbb{H})\)[6].
**From this point on, all algebras of continuous functions will be assumed universal, \(C(\mathbb{G})\cong C_{\rm u}(\mathbb{G})\)**. Careful readers can extract results which hold more generally.
### Quantum Permutation Groups
Let \(C(\mathbb{X})\) be a C\({}^{*}\)-algebra with unit \(\mathds{1}_{\mathbb{X}}\). A (finite) _partition of unity_ is a (finite) set of projections \(\{p_{i}\}_{i=1}^{N}\subset C(\mathbb{X})\) that sum to the identity:
\[\sum_{i=1}^{N}p_{i}=\mathds{1}_{\mathbb{X}}.\]
**Definition 1.6**.: _A matrix \(u\in M_{N}(C(\mathbb{X}))\) is a magic unitary if the rows and columns are partitions of unity:_
\[\sum_{k=1}^{N}u_{ik}=\mathds{1}_{\mathbb{X}}=\sum_{k=1}^{N}u_{kj}\qquad(1\leq i,j\leq N).\]
Consider the universal unital C\({}^{*}\)-algebra:
\[C(S_{N}^{+}):={\rm C}^{*}(u_{ij}:\,u\,\,\mbox{\rm an $N\times N$ magic unitary}).\]
Define
\[\Delta(u_{ij})=\sum_{k=1}^{N}u_{ik}\otimes u_{kj}. \tag{2}\]
Using the universal property, Wang [27] shows that \(\Delta\) is a *-homomorphism, and \(S_{N}^{+}\) is a compact quantum group, called _the_ quantum permutation group on \(N\) symbols. Note \(S_{N}^{+}\) is not coamenable for \(N\geq 5\).
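For orientation, a standard commutative example, recalled only for context: on the classical permutation group \(S_{N}\), the coordinate functions
\[u_{ij}(\sigma):=\delta_{i,\sigma(j)}\qquad(\sigma\in S_{N})\]
form a magic unitary generating \(C(S_{N})\), and
\[\sum_{k=1}^{N}u_{ik}(\sigma)u_{kj}(\tau)=\delta_{i,\sigma(\tau(j))}=u_{ij}(\sigma\tau),\]
so (2) encodes composition of permutations. By the universal property this gives a quotient \(C(S_{N}^{+})\to C(S_{N})\), exhibiting \(S_{N}\subseteq S_{N}^{+}\), with equality precisely for \(N\leq 3\).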
**Definition 1.7**.: _Let \(\mathbb{G}\) be a compact quantum group. A magic unitary \(u\in M_{N}(C(\mathbb{G}))\) whose entries generate \(C(\mathbb{G})\) as a C\({}^{*}\)-algebra, and such that \(\Delta(u_{ij})\) is given by (2), is called a magic fundamental representation. A compact quantum group that admits such a magic fundamental representation is known as a quantum permutation group, and by the universal property \(\mathbb{G}\subseteq S_{N}^{+}\)._
The relation \(\mathbb{G}\subseteq S^{+}_{N}\) yields a specific magic fundamental representation \(u\in M_{N}(C(\mathbb{G}))\), and whether \(u_{ij}\) is a generator of \(C(\mathbb{G})\) or of \(C(S^{+}_{N})\) should be clear from context. **From this point on, all quantum groups \(\mathbb{G}\) will be assumed to be quantum permutation groups \(\mathbb{G}\subseteq S^{+}_{N}\)**. Again, careful readers can extract results which hold more generally.
The antipode is given by:
\[S(u_{ij})=u_{ji}\implies S^{2}(u_{ij})=u_{ij},\]
that is, quantum permutation groups are of Kac type: the antipode is a bounded linear map satisfying \(S^{2}=I_{C(\mathbb{G})}\).
**Proposition 1.8**.: _Let \(\varphi_{1},\varphi_{2}\) be states on \(C(\mathbb{G})\):_
\[(\varphi_{1}\star\varphi_{2})\circ S=(\varphi_{2}\circ S)\star(\varphi_{1} \circ S).\]
Proof.: Where \(\tau\) is the flip, \(f\otimes g\mapsto g\otimes f\), in \(\mathcal{O}(\mathbb{G})\):
\[\Delta\circ S=(S\otimes S)\circ\tau\circ\Delta.\]
If \(f\in\mathcal{O}(\mathbb{G})\), then using this antipodal property:
\[((\varphi_{1}\star\varphi_{2})\circ S)(f)=(\varphi_{1}\otimes\varphi_{2})\big((S\otimes S)\tau\Delta(f)\big)=\sum\varphi_{1}(S(f_{(2)}))\,\varphi_{2}(S(f_{(1)}))=((\varphi_{2}\circ S)\star(\varphi_{1}\circ S))(f).\]
The same holds for all \(f\in C(\mathbb{G})\) because the antipode is bounded, and the comultiplication is a *-homomorphism, and thus both are norm-continuous.
**Lemma 1.9** ([8], Section 3).: _If a state \(\phi\) on \(C(\mathbb{G})\) is idempotent, \(\phi\star\phi=\phi\), then \(\phi\circ S=\phi\)._
### The Bidual
In the sequel the _bidual_ \(C(\mathbb{X})^{**}\) of a unital C\({}^{*}\)-algebra \(C(\mathbb{X})\) will be utilised. Here some of its properties are summarised from Takesaki, Vol. I. [23]. The bidual admits \(C(\mathbb{X})^{*}\) as a predual, and so is a von Neumann algebra. States \(\varphi\) on \(C(\mathbb{X})\) have extensions to states \(\omega_{\varphi}\) on \(C(\mathbb{X})^{**}\). Where
\[N_{\varphi}=\{f\in C(\mathbb{X}):\,\varphi(|f|^{2})=0\},\]
the \(\sigma\)-weak closure of \(N_{\varphi}\) in \(C(\mathbb{X})^{**}\) is a \(\sigma\)-weakly closed left ideal in this von Neumann algebra, and so of the form \(C(\mathbb{X})^{**}q_{\varphi}\) for some projection \(q_{\varphi}\). The _support projection_ of a state \(\varphi\) on \(C(\mathbb{X})\) is \(p_{\varphi}=\mathds{1}_{\mathbb{X}}-q_{\varphi}\). It has the property that:
\[\varphi(f)=\omega_{\varphi}(fp_{\varphi})=\omega_{\varphi}(p_{\varphi}f)= \omega_{\varphi}(p_{\varphi}fp_{\varphi})\qquad(f\in C(\mathbb{X})),\]
and it is the smallest projection \(p\in C(\mathbb{X})^{**}\) such that \(\omega_{\varphi}(p)=1\) (if \(\omega_{\varphi}(p)=1\) then \(\varphi\) is said to be _supported on_\(p\), and \(p_{\varphi}\leq p\)). If \(N\subseteq C(\mathbb{X})\) is an ideal, then \(N^{**}\subseteq C(\mathbb{X})^{**}\) is \(\sigma\)-weakly closed, and so equal to \(C(\mathbb{X})^{**}q\) for a central projection \(q\in C(\mathbb{X})^{**}\). Then, as C\({}^{*}\)-algebras:
\[C(\mathbb{X})^{**}\cong(C(\mathbb{X})/N)^{**}\oplus N^{**}. \tag{3}\]
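For instance (connecting with the discussion of the reduced version above), taking \(C(\mathbb{X})=C_{\mathrm{u}}(\mathbb{G})\) and \(N=\ker\pi_{\mathrm{r}}\) in (3) gives
\[C_{\mathrm{u}}(\mathbb{G})^{**}\cong C_{\mathrm{r}}(\mathbb{G})^{**}\oplus(\ker\pi_{\mathrm{r}})^{**},\]
with the two summands cut out by a central projection.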
The embedding \(C(\mathbb{X})\subset C(\mathbb{X})^{**}\) is an isometry, so that \(C(\mathbb{X})\) is norm closed, and the norm closure of a norm dense *-subalgebra \(\mathcal{O}(\mathbb{X})\subseteq C(\mathbb{X})\) in \(C(\mathbb{X})^{**}\) is \(C(\mathbb{X})\). In addition, the \(\sigma\)-weak closures of \(\mathcal{O}(\mathbb{X})\) and \(C(\mathbb{X})\) are both \(C(\mathbb{X})^{**}\). A *-homomorphism \(T:C(\mathbb{X})\to C(\mathbb{Y})\) extends to a \(\sigma\)-weakly continuous *-homomorphism:
\[T^{**}:C(\mathbb{X})^{**}\to C(\mathbb{Y})^{**}.\]
In particular, the extension of a character on \(C(\mathbb{X})\) is a character on \(C(\mathbb{X})^{**}\), and thus the support projections of characters in \(C(\mathbb{X})\) are minimal projections in \(C(\mathbb{X})^{**}\).
The product on the bidual is separately \(\sigma\)-weakly continuous:
\[\left(\lim_{\lambda}f_{\lambda}\right)f=\lim_{\lambda}(f_{\lambda}f)\qquad(f_ {\lambda},f\in C(\mathbb{X})^{**}).\]
Via the Sherman-Takeda Theorem [21, 22], projections \(p_{1},\dots,p_{N}\in C(\mathbb{X})\) may be viewed as Hilbert space projections. Then
\[\lim_{n\to\infty}[(p_{1}\cdots p_{N})^{n}]=p_{1}\wedge\cdots\wedge p_{N}, \tag{4}\]
strongly [11]. The powers of products of projections are in the unit ball. The strong and \(\sigma\)-strong coincide on the unit ball, and \(\sigma\)-strong convergence implies \(\sigma\)-weak convergence of (4). Finally, for any Borel set \(E\subseteq\sigma(f)\) of self-adjoint \(f\in C(\mathbb{X})\), the spectral projection \(\mathds{1}_{E}(f)\in C(\mathbb{X})^{**}\).
## 2. Pal sets and quasi-subgroups
### Pal sets
The following notation/terminology is outlined in [17] and used hereafter:
**Definition 2.1**.: _Given a quantum permutation group \(\mathbb{G}\), the Gelfand-Birkhoff picture interprets elements of the state-space as quantum permutations, so that \(\varphi\in\mathbb{G}\) means \(\varphi\) is a state on \(C(\mathbb{G})\), and a subset of the state space \(\mathcal{S}(C(\mathbb{G}))\) can be denoted \(\mathbb{S}\subseteq\mathbb{G}\)._
**Definition 2.2**.: _A subset \(\mathbb{S}\subseteq\mathbb{G}\) is closed under convolution if_
\[\varphi,\rho\in\mathbb{S}\implies\varphi\star\rho\in\mathbb{S}.\]
_A subset \(\mathbb{S}\) is closed under reverses if_
\[\varphi\in\mathbb{S}\implies(\varphi\circ S)\in\mathbb{S}.\]
_A subset \(\mathbb{S}\) contains the identity if \(C(\mathbb{G})\) admits a counit \(\varepsilon\), and \(\varepsilon\in\mathbb{S}\)._
**Proposition 2.3**.: _Suppose that \(\pi:C(\mathbb{G})\to C(\mathbb{H})\) gives a (closed) quantum subgroup \(\mathbb{H}\subseteq\mathbb{G}\). Then the set:_
\[\mathbb{H}^{\subseteq\mathbb{G}}:=\{\varphi\circ\pi:\;\varphi\in\mathbb{H}\},\]
_is closed under convolution, and closed under reverses._
There are subsets \(\mathbb{S}\subset\mathbb{G}\) that are closed under convolution, closed under reverses, and contain the identity that are _not_ associated with quantum subgroups in this way.
_Example 2.4_.: Let \(\Gamma\) be a finite group with a non-normal subgroup \(\Lambda\subset\Gamma\). The state space of \(C(\widehat{\Gamma})\), denoted here \(\widehat{\Gamma}\), is the set of positive-definite functions on \(\Gamma\). Define:
\[\mathbb{S}_{\Lambda}=\{\varphi\in\widehat{\Gamma}:\varphi(\lambda)=1\text{ for all }\lambda\in\Lambda\}. \tag{5}\]
The convolution for states on \(C(\widehat{\Gamma})\) is pointwise multiplication, therefore \(\mathbb{S}_{\Lambda}\) is closed under convolution. The reverse of \(\varphi\in\widehat{\Gamma}\) is:
\[(\varphi\circ S)(\gamma)=\varphi(\gamma^{-1}),\]
and \(\Lambda\) is a group so \(\mathbb{S}_{\Lambda}\) is closed under reverses. The identity, \(\mathds{1}_{\Gamma}\in\mathbb{S}_{\Lambda}\).
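A minimal check of the convolution claim above: the group elements \(\lambda\in\Gamma\) span \(C(\widehat{\Gamma})=\mathrm{C}^{*}(\Gamma)\) and satisfy \(\Delta(\lambda)=\lambda\otimes\lambda\), so
\[(\varphi_{1}\star\varphi_{2})(\lambda)=(\varphi_{1}\otimes\varphi_{2})(\lambda\otimes\lambda)=\varphi_{1}(\lambda)\varphi_{2}(\lambda),\]
the pointwise product of the corresponding positive-definite functions.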
_Example 2.5_.: Let \(G_{0}\) be the Kac-Paljutkin quantum group with algebra of functions
\[C(G_{0})=\mathbb{C}f_{1}\oplus\mathbb{C}f_{2}\oplus\mathbb{C}f_{3}\oplus \mathbb{C}f_{4}\oplus M_{2}(\mathbb{C}).\]
Where \(f^{i}\) is dual to \(f_{i}\), and \(E^{ij}\) is dual to the matrix unit \(E_{ij}\) in the \(M_{2}(\mathbb{C})\) factor, the convex hulls \(\mathrm{co}(\{f^{1},f^{4},E^{11}\})\) and \(\mathrm{co}(\{f^{1},f^{4},E^{22}\})\) are closed under convolution, under reverses, and contain the identity, \(\varepsilon=f^{1}\).
_Example 2.6_.: Let \(\mathbb{G}\) be a quantum permutation group with \(u_{ii}\in C(\mathbb{G})\) non-central. Define a subset \(\mathbb{G}_{i}\subset\mathbb{G}\) by:
\[\mathbb{G}_{i}:=\{\varphi\in\mathbb{G}\,:\,\varphi(u_{ii})=1\}.\]
This set is closed under convolution, and closed under reverses because \(S(u_{ii})=u_{ii}\). Finally \(\varepsilon\in\mathbb{G}_{i}\) as \(\varepsilon(u_{ij})=\delta_{i,j}\). More in Section 3.
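(A minimal check of the convolution claim above: if \(\varphi(u_{ii})=\rho(u_{ii})=1\) then, since \(\sum_{k}\varphi(u_{ik})=1=\sum_{k}\rho(u_{ki})\) with non-negative terms, \(\varphi(u_{ik})=0=\rho(u_{ki})\) for \(k\neq i\), and so \((\varphi\star\rho)(u_{ii})=\sum_{k}\varphi(u_{ik})\rho(u_{ki})=\varphi(u_{ii})\rho(u_{ii})=1\).)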
**Definition 2.7**.: _A \(\mathrm{Pal}\) set is a non-empty convex weak*-closed subset \(\mathbb{S}\subseteq\mathbb{G}\) that is closed under convolution._
**Theorem 2.8**.: _A Pal set \(\mathbb{S}\subseteq\mathbb{G}\) contains a unique \(\mathbb{S}\)-invariant state, \(\phi_{\mathbb{S}}\in\mathbb{S}\), such that for all \(\varphi\in\mathbb{S}\):_
\[\phi_{\mathbb{S}}\star\varphi=\phi_{\mathbb{S}}=\varphi\star\phi_{\mathbb{S}}.\]
_._
Proof.: This has exactly the same proof as Theorem 1.2, except rather than defining a \(K_{\omega}\) for each positive linear functional \(\omega\) on \(C(\mathbb{G})\), they are defined only for each \(\omega\in\mathrm{cone}(\mathbb{S})\).
The strength of the notion of a Pal set is that, as will be seen in Section 3, Pal sets can be easy to describe, and they yield idempotent states with certain properties. The problem with Definition 2.7 is that Pal sets are not in general sub-objects: they need not be state spaces of algebras of continuous functions on a compact quantum group. It is possible to talk about compact quantum _hypergroups_ in this setting [8, 9, 15], but this avenue will not be pursued here. Furthermore, the correspondence \(\mathbb{S}\to\phi_{\mathbb{S}}\) is not one-to-one. For example, the Pal set \(\mathbb{H}^{\subseteq\mathbb{G}}\) yields the Haar idempotent \(h_{\mathbb{H}}\circ\pi\), and the singleton \(\{h_{\mathbb{H}}\circ\pi\}\) is a Pal set with the same idempotent.
Another such non-correspondence occurs for the Pal set of central states:
**Definition 2.9**.: _Where:_
\[\{u_{ij}^{\alpha}:\,i,j=1,\ldots,d_{\alpha},\,\alpha\in\operatorname{Irr}( \mathbb{G})\}\]
_are matrix coefficients of mutually inequivalent irreducible unitary representations, a central state \(\varphi\in\mathbb{G}\) is one such that for all \(\alpha\in\operatorname{Irr}(\mathbb{G})\) there exists \(\varphi(\alpha)\in\mathbb{C}\) such that:_
\[\varphi(u_{ij}^{\alpha})=\varphi(\alpha)\delta_{i,j}.\]
**Proposition 2.10**.: _The set of central states is a Pal set with idempotent state \(h\in\mathbb{G}\)._
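For example (a routine check from the counit axioms and the orthogonality relations, recorded here only for context), both the counit and the Haar state are central:
\[\varepsilon(u_{ij}^{\alpha})=\delta_{i,j},\qquad h(u_{ij}^{\alpha})=\delta_{\alpha,\mathrm{triv}}\,\delta_{i,j},\]
so \(\varepsilon(\alpha)=1\) and \(h(\alpha)=\delta_{\alpha,\mathrm{triv}}\) for every \(\alpha\in\operatorname{Irr}(\mathbb{G})\).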
In [10], an \(S_{N}^{+}\) analogue of the measure on \(S_{N}\) constant on transpositions, a central state \(\varphi_{\operatorname{tr}}\) on \(C(S_{N}^{+})\), is studied, and it is shown that the convolution powers \((\varphi_{\operatorname{tr}}^{\star k})_{k\geq 0}\) are a sequence of central states converging to the Haar state.
### Quasi-subgroups
One way to fix the non-injectivity of the association of a Pal set \(\mathbb{S}\) with an idempotent \(\phi_{\mathbb{S}}\) is to define a _quasi-subgroup_. This nomenclature of _quasi_-subgroup is inspired by Kasprzak and Soltan [12].
**Proposition 2.11**.: _Given an idempotent state \(\phi\in\mathbb{G}\), the set:_
\[\mathbb{S}_{\phi}:=\{\varphi\in\mathbb{G}\colon\varphi\star\phi=\phi=\phi \star\varphi\} \tag{6}\]
_is a Pal set with idempotent state \(\phi\)._
Proof.: By associativity, \(\mathbb{S}_{\phi}\) is closed under convolution. Convexity is straightforward. For weak*-closure, let \((\varphi_{\lambda})\subseteq\mathbb{S}_{\phi}\) converge to \(\varphi\in\mathbb{G}\), and take \(f\in\mathcal{O}(\mathbb{G})\):
\[(\varphi\star\phi)(f) =\sum\varphi(f_{(1)})\phi(f_{(2)})=\sum\left(\lim_{\lambda} \varphi_{\lambda}(f_{(1)})\right)\phi(f_{(2)})\] \[=\lim_{\lambda}\sum\varphi_{\lambda}(f_{(1)})\phi(f_{(2)})=\lim_ {\lambda}((\varphi_{\lambda}\star\phi)(f))=\lim_{\lambda}\phi(f)=\phi(f)\]
**Definition 2.12**.: _A quasi-subgroup is a subset of the state space of the form \(\mathbb{S}_{\phi}\) for an idempotent state \(\phi\) on \(C(\mathbb{G})\); the quasi-subgroup generated by \(\phi\)._
The quasi-subgroup \(\mathbb{S}_{\phi}\) is the largest Pal set with idempotent \(\phi\), and there is a one-to-one correspondence between quasi-subgroups and idempotent states.
### Group-like projections
Group-like projections (and their link with idempotent states) were first noted by Landstad and Van Daele [15]. This definition can be extended to the bidual:
**Definition 2.13**.: _A group-like projection \(p\in C(\mathbb{G})^{\ast\ast}\) is a non-zero projection such that:_
\[\Delta^{\ast\ast}(p)(\mathds{1}_{\mathbb{G}}\otimes p)=p\otimes p.\]
In the finite case, there is a bijective correspondence between idempotent states and group-like projections: every idempotent state has group-like density with respect to the Haar state [8] (and this group-like density coincides with the support projection [17]). In the compact case, continuous group-like projections \(p\in C(\mathbb{G})\) with \(h(p)>0\) give densities to idempotent states via the Fourier transform, \(p\mapsto h(\cdot p)/h(p)\), but the converse does not hold (see Section 4 and Corollary 6.3). However it is shown here that every group-like projection in the _bidual_ yields a Pal set, and thus an idempotent state, but as seen in Corollary 2.20 a converse statement does not hold. In general, it can only be said that idempotent states are associated with group-like projections in the multiplier algebra of the dual discrete quantum group [8].
The language of wave-function collapse will be used to talk about idempotent states with group-like density, and later to illustrate the difference between Haar and non-Haar idempotents:
**Definition 2.14**.: _Let \(q\in C(\mathbb{G})^{**}\) be a projection and \(\varphi\in\mathbb{G}\). If \(\omega_{\varphi}(q)>0\), then \(\varphi\) conditioned by \(q=1\) is given by:_
\[\widetilde{q}\varphi(g):=\frac{\omega_{\varphi}(qgq)}{\omega_{\varphi}(q)} \qquad(g\in C(\mathbb{G})),\]
_and \(\varphi\to\widetilde{q}\varphi\) is referred to as wave-function collapse. Furthermore, say that a subset \(\mathbb{S}\subseteq\mathbb{G}\) is stable under wave-function collapse if for all projections \(q\in C(\mathbb{G})^{**}\),_
\[(\varphi\in\mathbb{S}\text{ and }\omega_{\varphi}(q)>0)\implies\widetilde{q} \varphi\in\mathbb{S}. \tag{7}\]
The following is well known in the algebraic setting ([15], Prop. 1.8), and a similar proof is known to work in the finite quantum group setting ([8], Corollary 4.2). For the benefit of the reader, the proof is reproduced in the current setting:
**Proposition 2.15**.: _If \(p\in C(\mathbb{G})\) is a continuous group-like projection such that \(h(p)>0\), then \(\widetilde{p}h\in\mathbb{G}\) is an idempotent state._
Proof.: Let \(\phi=\widetilde{p}h\). The difference between \(\omega_{h}\) and \(h\) can be suppressed here as \(\omega_{h\,|_{{}_{C(\mathbb{G})}}}=h\). Let \(f\in\mathcal{O}(\mathbb{G})\):
\[(\phi\star\phi)(f) =\frac{1}{h(p)^{2}}\sum h(pf_{(1)}p)h(pf_{(2)}p)=\frac{1}{h(p)^{2 }}\sum h(f_{(1)}p)h(f_{(2)}p)\] \[=\frac{1}{h(p)^{2}}(h\otimes h)\left(\Delta(f)(p\otimes p)\right) =\frac{1}{h(p)^{2}}(h\otimes h)\left(\Delta(f)\Delta(p)(\mathds{1}_{\mathbb{G }}\otimes p)\right)\] \[=\frac{1}{h(p)^{2}}(h\otimes h)\left(\Delta(fp)(\mathds{1}_{ \mathbb{G}}\otimes p)\right)=\frac{1}{h(p)^{2}}h(fp)h(p)=\frac{h(pfp)}{h(p)}= \phi(f),\]
where the traciality of the Haar state, \(p^{2}=p\), and \((h\otimes\varphi)(\Delta(f)(\mathds{1}_{\mathbb{G}}\otimes g))=h(f)\varphi(g)\) ([24], Remark 2.2.2 i.) were used. By norm-continuity this implies that \(\widetilde{p}h\) is idempotent.
Note that it is not claimed that the support projection of \(\widetilde{p}h\in\mathbb{G}\) is \(p\). In the below this is assumed, and a nice description of the quasi-subgroup follows:
**Proposition 2.16**.: _Let \(\phi=\widetilde{p_{\phi}}h\) be an idempotent with continuous group-like support projection \(p_{\phi}\in C(\mathbb{G})\). Then_
\[\mathbb{S}_{\phi}=\{\varphi\in\mathbb{G}:\;\varphi(p_{\phi})=1\}.\]
Proof.: Suppose that \(\varphi(p_{\phi})=1\). Similarly to the proof of Proposition 2.15, for \(f\in\mathcal{O}(\mathbb{G})\):
\[(\phi\star\varphi)(f)=\frac{1}{h(p_{\phi})}(h\otimes\varphi)(\Delta(fp_{\phi}) (\mathds{1}_{\mathbb{G}}\otimes p_{\phi}))=\frac{h(fp_{\phi})}{h(p_{\phi})} \varphi(p_{\phi})=\phi(f), \tag{8}\]
and by weak*-continuity, \(\phi\star\varphi=\phi\). On the other hand, suppose that \(\phi\star\varphi=\phi\) so that \(\varphi\in\mathbb{S}_{\phi}\), the quasi-subgroup generated by \(\phi\). Applying (8) at \(f=p_{\phi}\), with the existence of \(\widetilde{p}h\) implying \(h(p_{\phi})>0\):
\[(\phi\star\varphi)(p_{\phi})=\frac{h(p_{\phi})}{h(p_{\phi})}\varphi(p_{\phi} )=\phi(p_{\phi})=1\implies\varphi(p_{\phi})=1.\]
**Proposition 2.17**.: _If states \(\varphi_{1},\varphi_{2}\) on \(C(\mathbb{G})\) are supported on a group-like projection \(p\in C(\mathbb{G})^{**}\), then so is \(\varphi_{1}\star\varphi_{2}\)._
Proof.: The proof for the finite case ([16], Prop. 3.12) applies with some adjustments. Let \((p^{\lambda})\subset\mathcal{O}(\mathbb{G})\) converge \(\sigma\)-weakly to \(p\in C(\mathbb{G})^{**}\). As the extension of \(\Delta\) to \(\Delta^{**}\) is \(\sigma\)-weakly continuous
\[\lim_{\lambda}\left[\Delta(p^{\lambda})\right](1\otimes p)=p\otimes p\]
The product is separately continuous, and \(\omega_{\varphi_{1}}\otimes\omega_{\varphi_{2}}\) is \(\sigma\)-weakly continuous.
\[\implies\lim_{\lambda}(\omega_{\varphi_{1}}\otimes\omega_{\varphi_{2}})\sum p _{(0)}^{\lambda}\otimes p_{(1)}^{\lambda}p=(\omega_{\varphi_{1}}\otimes\omega _{\varphi_{2}})(p\otimes p)\]
\[\implies\lim_{\lambda}\sum\omega_{\varphi_{1}}(p_{(0)}^{\lambda})\omega_{ \varphi_{2}}(p_{(1)}^{\lambda}p)=1\]
Note that as \(\varphi_{2}\) is supported on \(p\):
\[\implies\lim_{\lambda}\sum\varphi_{1}(p_{(0)}^{\lambda})\varphi_{2}(p_{(1)}^{ \lambda})=1\]
\[\implies\lim_{\lambda}(\varphi_{1}\star\varphi_{2})(p^{\lambda})=1\]
\[\implies\lim_{\lambda}\omega_{\varphi_{1}\star\varphi_{2}}(p^{\lambda})= \omega_{\varphi_{1}\star\varphi_{2}}(p)=1.\]
**Proposition 2.18**.: _Suppose \(p\in C(\mathbb{G})^{**}\) is a group-like projection. Then:_
\[\{\varphi\in\mathbb{G}:\;\omega_{\varphi}(p)=1\},\]
_is a Pal set, and so there is an idempotent \(\phi\) supported on \(p\) such that \(p_{\phi}\leq p\)._
Proof.: First \(\{\varphi\in\mathbb{G}:\,\omega_{\varphi}(p)=1\}\) is non-empty because \(p\) is normal and as \(\|p\|_{C(\mathbb{G})^{**}}=1\), there exists a state \(\omega\) on \(C(\mathbb{G})^{**}\) such that \(\omega(p)=1\)[19], whose restriction to \(C(\mathbb{G})\) is a state in \(\mathbb{S}_{p}\). Weak*-closure and convexity are straightforward, and closure under convolution follows from Proposition 2.17.
Note that \(p\) is not necessarily equal to the support projection of the idempotent state in \(\{\varphi\in\mathbb{G}:\,\omega_{\varphi}(p)=1\}\); and in the below the idempotent state in \(\{\varphi\in\mathbb{G}:\,\omega_{\varphi}(p)=1\}\) is not necessarily equal to \(\phi\).
**Theorem 2.19**.: _Suppose that an idempotent state \(\phi\in\mathbb{G}\) has group-like support projection \(p\in C(\mathbb{G})^{**}\). Then the quasi-subgroup generated by \(\phi\):_
\[\mathbb{S}_{\phi}\subseteq\{\varphi\in\mathbb{G}:\,\omega_{\varphi}(p)=1\}.\]
Proof.: Consider \(\varphi\in\mathbb{S}_{\phi}\) not supported on \(p\). Then, where \(q=\mathds{1}_{\mathbb{G}}-p\), \(\omega_{\varphi}(q)>0\). Consider \(\omega_{\varphi}(q\cdot q)\in C(\mathbb{G})^{*}\) and note by Cauchy-Schwarz:
\[0\leq\omega_{\varphi}(q\cdot q)\leq\varphi.\]
Then by Lemma 1.4:
\[\omega_{\varphi}(q\cdot q)\star\phi=\omega_{\varphi}(q\mathds{1}_{\mathbb{G}} q)\phi=\omega_{\varphi}(q)\phi,\]
and it follows that \(\widetilde{q}\varphi\in\mathbb{S}_{\phi}\). Note \(\widetilde{q}\varphi(p)=0\).
Using similar notation and techniques to Proposition 2.17, apply the \(\sigma\)-weakly continuous \(\omega_{\widetilde{q}\varphi}\otimes\omega_{\phi}\) to both sides of \(\Delta^{**}(p)(\mathds{1}_{\mathbb{G}}\otimes p)=p\otimes p\), using the fact that \(p\) is the support of \(\phi\):
\[\implies\lim_{\lambda}\left(\sum\omega_{\widetilde{q}\varphi}(p_{(0)}^{\lambda})\,\omega_{\phi}(p_{(1)}^{\lambda}p)\right)=\omega_{\widetilde{q}\varphi}(p)\,\omega_{\phi}(p)\] \[\implies\lim_{\lambda}\left(\sum\widetilde{q}\varphi(p_{(0)}^{\lambda})\,\omega_{\phi}(p_{(1)}^{\lambda}p)\right)=0\] \[\implies\lim_{\lambda}\left(\sum\widetilde{q}\varphi(p_{(0)}^{\lambda})\,\phi(p_{(1)}^{\lambda})\right)=0\] \[\implies\lim_{\lambda}\left((\widetilde{q}\varphi\star\phi)(p^{\lambda})\right)=0\] \[\implies\lim_{\lambda}\left(\phi(p^{\lambda})\right)=0\] \[\implies\omega_{\phi}(p)=0,\]
a nonsense, so \(\omega_{\varphi}(q)=0\), and so \(\omega_{\varphi}(p)=1\).
It is not the case that every idempotent state \(\phi\) has group-like support projection \(p_{\phi}\in C(\mathbb{G})^{**}\). Nor does Theorem 2.19 hold more generally:
**Corollary 2.20**.: _Suppose \(\mathbb{G}\) is non-coamenable. Then the support projection \(p_{h}\in C(\mathbb{G})^{**}\) of the Haar state is not a group-like projection. Furthermore:_
\[\{\varphi\in\mathbb{G}:\,\omega_{\varphi}(p_{h})=1\}\subsetneq\mathbb{S}_{h}.\]
Proof.: Assume that the support \(p_{h}\in C(\mathbb{G})^{**}\) is a group-like projection. As \(\mathbb{G}\) is at the universal level and is assumed non-coamenable, the Haar state is not faithful on \(C(\mathbb{G})\), and so \(\mathds{1}_{\mathbb{G}}-p_{h}\) is a non-zero projection. Therefore there exists a state \(\omega_{\varphi}\) on \(C(\mathbb{G})^{**}\) such that
\[\omega_{\varphi}(\mathds{1}_{\mathbb{G}}-p_{h})=1\implies\omega_{\varphi}(p_{ h})=0.\]
Restrict \(\omega_{\varphi}\) to a state \(\varphi\) on \(C(\mathbb{G})\). By Theorem 2.19 it follows that \(\varphi\) is not invariant under the Haar state, which is absurd as \(\mathbb{S}_{h}=\mathbb{G}\). For the proper inclusion, note that non-faithfulness of \(h\) on \(C(\mathbb{G})\) gives a non-zero positive \(f\in N_{h}\); since \(f=q_{h}fq_{h}\leq\|f\|q_{h}\), any state \(\varphi^{\prime}\) with \(\varphi^{\prime}(f)>0\) satisfies \(\omega_{\varphi^{\prime}}(p_{h})<1\), while of course \(\varphi^{\prime}\in\mathbb{S}_{h}=\mathbb{G}\).
There is a group-like projection \(p\) such that
\[\{\varphi\in\mathbb{G}:\,\omega_{\varphi}(p)=1\}=\mathbb{S}_{h};\]
the unit \(p=\mathds{1}_{\mathbb{G}}\).
Note there is a relationship between quantum subgroups and wave-function collapse:
**Proposition 2.21**.: _([9], Th. 3.3) Let \(\mathbb{G}\) be a compact quantum group and \(\phi\in C(\mathbb{G})^{*}\) an idempotent state. Then \(\phi\) is a Haar idempotent if and only if the null-space_
\[N_{\phi}=\{f\in C(\mathbb{G})\,:\,\phi(|f|^{2})=0\}\]
_is a two-sided ideal._
Note in the below \(\omega_{\varphi_{0}}\) is the extension of the state \(\varphi_{0}\) on \(C(\mathbb{H})\) to a state on \(C(\mathbb{H})^{**}\).
**Lemma 2.22**.: _Suppose that \(\mathbb{H}\subseteq\mathbb{G}\) via \(\pi:C(\mathbb{G})\to C(\mathbb{H})\). Then the extension of \(\varphi_{0}\circ\pi\) to a state on \(C(\mathbb{G})^{**}\) is given by: \(\omega_{\varphi_{0}}\circ\pi^{**}\)._
Proof.: Consider \(f\in C(\mathbb{G})\). The result follows from the \(\sigma\)-weak continuity of the maps involved, and \(\pi^{**}|_{C(\mathbb{G})}=\pi\).
Note that part (i) of the below is restricted to Haar idempotents coming from Haar states on universal versions.
**Theorem 2.23**.: _Suppose that \(\phi\) is an idempotent state on \(C(\mathbb{G})\)._
* _If_ \(\phi\) _is a (universal) Haar idempotent, then_ \(\mathbb{S}_{\phi}\) _is closed under wave-function collapse._
* _If_ \(\phi\) _is a non-Haar idempotent with group-like projection support, then_ \(\mathbb{S}_{\phi}\) _is not closed under wave-function collapse._
Proof.:
1. Suppose \(\phi\) is a (universal) Haar idempotent via \(\pi:C(\mathbb{G})\to C(\mathbb{H})\). By Vaes's Remark 1.5, every element of \(\mathbb{S}_{\phi}\) is of the form \(\varphi_{0}\circ\pi\) for a state \(\varphi_{0}\) on \(C(\mathbb{H})\). Suppose \(\varphi\) undergoes wave-function collapse to \(\widetilde{q}\varphi\). Then, using Lemma 2.22 \[\omega_{\varphi}(q)>0\implies\omega_{\varphi_{0}}(\pi^{**}(q))>0\qquad(\omega_{\varphi_{0}}\in\mathcal{S}(C(\mathbb{H})^{**})).\]
Using Lemma 2.22 again, it can be shown that \(\widetilde{q}\varphi=\psi\circ\pi\), where: \[\psi(g)=\frac{\omega_{\varphi_{0}}(\pi^{**}(q)g\pi^{**}(q))}{\omega_{\varphi_{0} }(\pi^{**}(q))}\qquad(g\in C(\mathbb{H}),\,\omega_{\varphi_{0}}\in\mathcal{S}( C(\mathbb{H})^{**})).\] Thus, again by Vaes's remark, \(\psi\circ\pi\) and thus \(\widetilde{q}\varphi\in\mathbb{S}_{\phi}\), that is \(\mathbb{S}_{\phi}\) is closed under wave-function collapse.
2. Suppose \(\phi\) is a non-Haar idempotent with group-like support projection. By Theorem 2.19 \[\mathbb{S}_{\phi}\subseteq\{\varphi\in\mathbb{G}:\,\omega_{\varphi}(p_{\phi})=1\}.\] As \(\phi\) is a non-Haar idempotent, \(N_{\phi}^{**}=C(\mathbb{G})^{**}q_{\phi}\) is only a left ideal, and \(q_{\phi}\) non-central. Suppose that for all \(u_{ij}\in C(\mathbb{G})\), \(u_{ij}q_{\phi}u_{ij}\in N_{\phi}^{**}\). Then \(u_{ij}q_{\phi}u_{ij}=u_{ij}q_{\phi}u_{ij}q_{\phi}\implies u_{ij}q_{\phi}u_{ij}=u_{ij}q_{\phi}u_{ij}q_{\phi}u_{ij}\), so that \(u_{ij}q_{\phi}u_{ij}\) is a projection. This implies, because \([u_{ij},q_{\phi}]^{3}=0\) and \([u_{ij},q_{\phi}]\) is skew adjoint, that \(u_{ij}q_{\phi}=q_{\phi}u_{ij}\). Therefore \(q_{\phi}\) is central and \(N_{\phi}\) is a two-sided ideal, contradicting Proposition 2.21 since \(\phi\) is a non-Haar idempotent. Therefore there exists \(u_{ij}\) such that \(u_{ij}q_{\phi}u_{ij}\not\in N_{\phi}^{**}\): \[\omega_{\phi}(|u_{ij}q_{\phi}u_{ij}|^{2})>0.\] By Cauchy-Schwarz: \[0<\omega_{\phi}(|u_{ij}q_{\phi}u_{ij}|^{2})\leq\omega_{\phi}(u_{ij}q_{\phi}u_{ij})\leq\omega_{\phi}(u_{ij}).\] \[\implies\widetilde{u_{ij}}\phi(q_{\phi})=\frac{\omega_{\phi}(u_{ij}q_{\phi}u_{ij})}{\omega_{\phi}(u_{ij})}>0\implies\widetilde{u_{ij}}\phi(p_{\phi})<1\implies\widetilde{u_{ij}}\phi\not\in\mathbb{S}_{\phi}.\]
## 3. Stabiliser quasi-subgroups
The analysis here is helped somewhat by defining the _Birkhoff slice_, a map \(\Phi\) from the state space of the algebra of continuous functions \(C(\mathbb{G})\) on a quantum permutation group \(\mathbb{G}\) to the doubly stochastic matrices:
\[\Phi(\varphi):=(\varphi(u_{ij}))_{i,j=1}^{N}.\]
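For orientation, two standard examples: the counit has \(\Phi(\varepsilon)=I_{N}\), since \(\varepsilon(u_{ij})=\delta_{i,j}\); and, by a well-known computation sketched here, the Haar state on \(C(S_{N}^{+})\) has the flat Birkhoff slice
\[\Phi(h)=\left(\frac{1}{N}\right)_{i,j=1}^{N}.\]
Indeed, for the characters \(\operatorname{ev}_{\sigma}\), \(\sigma\in S_{N}\), coming from the abelianisation \(C(S_{N}^{+})\to C(S_{N})\) (see Section 4), invariance gives \(h(u_{ij})=(h\star\operatorname{ev}_{\sigma})(u_{ij})=h(u_{i,\sigma(j)})\) and \(h(u_{ij})=(\operatorname{ev}_{\sigma}\star h)(u_{ij})=h(u_{\sigma^{-1}(i),j})\), so the entries \(h(u_{ij})\) are all equal, and each row sums to one.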
Given a finite group \(G\subseteq S_{N}\) and a partition \(\mathcal{P}=B_{1}\sqcup\cdots\sqcup B_{k}\) of \(\{1,\ldots,N\}\), the \(\mathcal{P}\)-stabiliser subgroup of \(G\) can be formed:
\[G_{\mathcal{P}}=\{\sigma\in G:\,\sigma(B_{p})=B_{p},\,1\leq p\leq k\}.\]
A \(\mathcal{P}\)-stabiliser quasi-subgroup of \(\mathbb{G}\) can also be defined. There are two, equivalent, definitions. The first definition uses the equivalence relation \(\sim_{\mathcal{P}}\) associated to the partition:
\[\mathbb{G}_{\mathcal{P}}:=\{\varphi\in\mathbb{G}\colon\varphi(u_{ij})=0\text{ for all }i\not\sim_{\mathcal{P}}j\}.\]
Alternatively, consider the Birkhoff slice \(\mathcal{S}(C(\mathbb{G}))\to M_{N}(\mathbb{C})\). By relabelling if necessary, the blocks of a partition can be assumed to consist of consecutive labels. Define:
\[\mathbb{G}_{\mathcal{P}}:=\{\varphi\in\mathbb{G}:\,\Phi(\varphi)\text{ is block diagonal with pattern }\mathcal{P}\},\]
that is:
\[\varphi\in\mathbb{G}_{\mathcal{P}}\iff\Phi(\varphi)=\begin{bmatrix}\Phi_{B_{1}}(\varphi)&0&\cdots&0\\ 0&\Phi_{B_{2}}(\varphi)&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\Phi_{B_{k}}(\varphi)\end{bmatrix},\]
where \(\Phi_{B_{p}}(\varphi)=[\varphi(u_{ij})]_{i,j\in B_{p}}\).
**Theorem 3.1**.: _For any partition \(\mathcal{P}\) of \(\{1,\ldots,N\}\), \(\mathbb{G}_{\mathcal{P}}\) is a quasi-subgroup._
Proof.: That \(\mathbb{G}_{\mathcal{P}}\) is convex, weak*-closed, and closed under convolution is straightforward (using, for example that the Birkhoff slice is multiplicative \(\Phi(\varphi_{1}\star\varphi_{2})=\Phi(\varphi_{1})\Phi(\varphi_{2})\)). The universal version gives \(\varepsilon\in\mathbb{G}_{\mathcal{P}}\) so that \(\mathbb{G}_{\mathcal{P}}\) is non-empty, and so a Pal set.
Suppose that \(\phi_{\mathcal{P}}\) is the associated idempotent. Therefore by Lemma 1.9:
\[\phi_{\mathcal{P}}(u_{ij})=(\phi_{\mathcal{P}}\circ S)(u_{ij})=\phi_{ \mathcal{P}}(u_{ji}).\]
For any fixed \(j\in\{1,2,\ldots,N\}\), there exists \(i\in\{1,2,\ldots,N\}\) such that \(\phi_{\mathcal{P}}(u_{ji})>0\). From here:
\[\phi_{\mathcal{P}}(u_{jj})=(\phi_{\mathcal{P}}\star\phi_{\mathcal{P}})(u_{jj })=\phi_{\mathcal{P}}(u_{ji})\phi_{\mathcal{P}}(u_{ij})+\sum_{k\neq i}\phi_{ \mathcal{P}}(u_{jk})\phi_{\mathcal{P}}(u_{kj})>0.\]
The inclusion \(\mathbb{G}_{\mathcal{P}}\subseteq\mathbb{S}_{\phi_{\mathcal{P}}}\) follows from Theorem 2.8. To show that \(\mathbb{G}_{\mathcal{P}}\) is equal to
\[\mathbb{S}_{\phi_{\mathcal{P}}}=\{\varphi\in\mathbb{G}:\;\varphi\star\phi_{ \mathcal{P}}=\phi_{\mathcal{P}}=\phi_{\mathcal{P}}\star\varphi\},\]
suppose \(\varphi\in\mathbb{S}_{\phi_{\mathcal{P}}}\), but \(\varphi\not\in\mathbb{G}_{\mathcal{P}}\). That implies there exists \(u_{ij}\) such that \(\varphi(u_{ij})\neq 0\) with \(i\not\sim_{\mathcal{P}}j\). But this gives
\[\phi_{\mathcal{P}}(u_{ij})=(\varphi\star\phi_{\mathcal{P}})(u_{ij})=\varphi( u_{ij})\phi_{\mathcal{P}}(u_{jj})+\sum_{k\neq j}\varphi(u_{ik})\phi_{\mathcal{P}}(u_{ kj})>0,\]
a contradiction.
For the partition \(j:=\{j\}\sqcup(\{1,2,\ldots,N\}\backslash\{j\})\):
\[\mathbb{G}_{j}=\{\varphi\in\mathbb{G}:\;\varphi(u_{jj})=1\}.\]
Note for any quantum permutation group \(\mathbb{G}\), and \(1\leq j\leq N\), the diagonal element \(u_{jj}\) is a polynomial group-like projection:
\[\Delta(u_{jj})(\mathds{1}_{\mathbb{G}}\otimes u_{jj})=\left(\sum_{k=1}^{N}u_{ jk}\otimes u_{kj}\right)(\mathds{1}_{\mathbb{G}}\otimes u_{jj})=u_{jj}\otimes u_{jj}.\]
Using Proposition 2.16, it can be shown that the associated idempotent state is \(h_{j}:=\widetilde{u_{jj}}h\), that is:
\[h_{j}(f)=\frac{h(u_{jj}fu_{jj})}{h(u_{jj})}\qquad(f\in C(\mathbb{G})).\]
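As a sanity check, for a classical permutation group \(G\subseteq S_{N}\), so that \(C(\mathbb{G})=C(G)\) is commutative, \(u_{jj}=\mathds{1}_{\{\sigma\in G:\,\sigma(j)=j\}}\) and \(h\) is normalised counting measure, this recovers a familiar formula:
\[h_{j}(f)=\frac{h(u_{jj}fu_{jj})}{h(u_{jj})}=\frac{1}{|G_{j}|}\sum_{\sigma\in G_{j}}f(\sigma)\qquad(G_{j}:=\{\sigma\in G:\,\sigma(j)=j\}\ni e),\]
the Haar idempotent of the stabiliser subgroup \(G_{j}\subseteq G\).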
The below is (almost) a special case of Theorem 2.23, but included as it uses different proof techniques.
**Theorem 3.2**.: _The following are equivalent:_
1. \(h_{j}\) _is a Haar idempotent,_
2. \(u_{jj}\) _is central,_
3. \(\mathbb{G}_{j}\) _is stable under wave-function collapse._
Proof.: (i) \(\implies\) (ii): assume \(h_{j}\) is a Haar idempotent, say equal to \(h_{\mathbb{H}}\circ\pi\) where \(\pi:C(\mathbb{G})\to C(\mathbb{H})\), \(u_{ij}\mapsto u_{ij}^{\mathbb{H}}\), is the quotient map. Note that because \(h_{j}(u_{jj})=h_{\mathbb{H}}(\pi(u_{jj}))=1\), and \(h_{\mathbb{H}}\) is faithful on \(\mathcal{O}(\mathbb{H})\),
\[\mathds{1}_{\mathbb{H}}=\pi(\mathds{1}_{\mathbb{G}})=\sum_{m=1}^{N}\pi(u_{mj} )=\pi(u_{jj}),\]
so that \(\pi(u_{jj})=\mathds{1}_{\mathbb{H}}\) is central in \(C(\mathbb{H})\). Assume that \(u_{jj}\) is non-central. Then there exists \(u_{kl}\in C(\mathbb{G})\) such that \(|u_{kl}u_{jj}-u_{jj}u_{kl}|^{2}>0\). Expanding:
\[u_{jj}u_{kl}u_{jj}-u_{jj}u_{kl}u_{jj}u_{kl}-u_{kl}u_{jj}u_{kl}u_{jj}+u_{kl}u_{ jj}u_{kl}>0.\]
Applying the Haar state, which is faithful on \(\mathcal{O}(\mathbb{G})\), and using its traciality yields:
\[h(u_{jj}u_{kl}u_{jj}) >h(u_{jj}u_{kl}u_{jj}u_{kl}u_{jj})\] \[\implies h_{j}(u_{kl}) >h_{j}(u_{kl}u_{jj}u_{kl})\] \[\implies h_{\mathbb{H}}(\pi(u_{kl})) >h_{\mathbb{H}}(\pi(u_{kl}u_{jj}u_{kl}))=h_{\mathbb{H}}(\pi(u_{kl} )\pi(u_{jj})\pi(u_{kl})))\] \[\implies h_{\mathbb{H}}(\pi(u_{kl})) >h_{\mathbb{H}}(\pi(u_{kl})\mathds{1}_{\mathbb{H}}\pi(u_{kl})))=h _{\mathbb{H}}(\pi(u_{kl})),\]
an absurdity, and so \(u_{jj}\) is central.
(ii) \(\implies\) (i): assume that \(u_{jj}\) is central.
\[N_{j}:=\{f\in C(\mathbb{G}):\,h_{j}(|f|^{2})=0\}.\]
If \(f\in N_{j}\) then \(h(u_{jj}f^{*}fu_{jj})=0\implies fu_{jj}\in N_{h}\), the null-space of the Haar state, so that:
\[N_{j}=\{f\in C(\mathbb{G}):\,fu_{jj}\in N_{h}\}.\]
The rest of the argument is the same as ([8], Th. 4.5).
(ii) \(\implies\) (iii): assume that \(u_{jj}\) is central. If \(u_{jj}\) is central in \(C(\mathbb{G})\) then it is also central in \(C(\mathbb{G})^{**}\). Let \(\varphi\in\mathbb{G}_{j}\) and let \(q\in C(\mathbb{G})^{**}\) be a projection such that \(\omega_{\varphi}(q)>0\). Let \(p_{\varphi}\in C(\mathbb{G})^{**}\) be the support projection of \(\varphi\). Note that
\[\omega_{\varphi}(u_{jj})=\varphi(u_{jj})=1\implies p_{\varphi}\leq u_{jj} \implies p_{\varphi}=p_{\varphi}u_{jj}.\]
Note
\[\omega_{\varphi}(qu_{jj}q)=\omega_{\varphi}(p_{\varphi}qu_{jj}qp_{\varphi})= \omega_{\varphi}(p_{\varphi}u_{jj}qqp_{\varphi})=\omega_{\varphi}(p_{\varphi} qp_{\varphi})=\omega_{\varphi}(q).\]
It follows that:
\[\widetilde{q}\varphi(u_{jj})=\frac{\omega_{\varphi}(qu_{jj}q)}{\omega_{\varphi}(q)}=1\implies\widetilde{q}\varphi\in\mathbb{G}_{j}.\]
(iii) \(\implies\) (ii): assume now that \(u_{jj}\) is non-central. Therefore there exists \(u_{kl}\in C(\mathbb{G})\) such that:
\[u_{jj}u_{kl}\neq u_{kl}u_{jj}.\]
Represent \(C(\mathbb{G})\) with the universal GNS representation \(\pi_{\text{GNS}}(C(\mathbb{G}))\subseteq B(\mathsf{H})\). Denote
\[p:=\pi_{\text{GNS}}(u_{jj})\text{ and }q:=\pi_{\text{GNS}}(u_{kl}).\]
As \(pq\neq qp\), using Halmos two projections theory there exists a unit vector \(x\in\operatorname{ran}p\) that is orthogonal to both2\(\operatorname{ran}p\cap\operatorname{ran}q\) and \(\operatorname{ran}p\cap\ker q\). Define a state on \(C(\mathbb{G})\):
Footnote 2: in the notation of ([7],(1)), \(x\in M_{0}\)
\[\varphi_{0}(f)=\langle x,\pi_{\text{GNS}}(f)x\rangle\qquad(f\in C(\mathbb{G})).\]
Note that:
\[\varphi_{0}(u_{jj})=\langle x,px\rangle=\langle x,x\rangle=1\implies\varphi_{ 0}\in\mathbb{G}_{j}.\]
Furthermore, together with \(x\in\operatorname{ran}p\)
\[\varphi_{0}(u_{kl})=\langle x,qx\rangle=1\implies x\in\operatorname{ran}q\]
\[\varphi_{0}(u_{kl})=\langle x,qx\rangle=0\implies x\in\ker q\]
but \(x\) is orthogonal to both \(\operatorname{ran}p\cap\operatorname{ran}q\) and \(\operatorname{ran}p\cap\ker q\) so
\[0<\langle x,qx\rangle<1\implies 0<\varphi_{0}(u_{kl})<1.\]
Now consider \(\varphi=\widetilde{u_{kl}}\varphi_{0}\):
\[\varphi(f):=\frac{\varphi_{0}(u_{kl}fu_{kl})}{\varphi_{0}(u_{kl})}=\frac{ \langle qx,\pi_{\text{GNS}}(f)qx\rangle}{\langle qx,qx\rangle}\qquad(f\in C( \mathbb{G})).\]
In particular
\[\varphi(u_{jj})=\frac{\langle qx,pqx\rangle}{\langle qx,qx\rangle}\]
Together with \(qx\in\operatorname{ran}q\):
\[\varphi(u_{jj})=1\implies qx\in\operatorname{ran}p\] \[\varphi(u_{jj})=0\implies qx\in\ker p\]
By ([7], (6)), \(qx\) is orthogonal to \(\operatorname{ran}p\cap\operatorname{ran}q\) and \(\ker p\cap\operatorname{ran}q\), and it follows that:
\[0<\varphi(u_{jj})<1,\]
that is,
\[\varphi_{0}\in\mathbb{G}_{j}\text{ but }\widetilde{u_{kl}}\varphi_{0}\not\in \mathbb{G}_{j}.\]
Consider, at the universal level:
\[(S_{N}^{+})_{N}:=\{\varphi\in S_{N}^{+}:\,\varphi(u_{NN})=1\}.\]
If \(\mathbb{H}\) given by \(\pi:C(S_{N}^{+})\to C(\mathbb{H})\) is an isotropy subgroup in the sense that \(\mathbb{H}\subseteq(S_{N}^{+})_{N}\) and so \(\pi(u_{NN})=\mathds{1}_{\mathbb{H}}\), then \(\mathbb{H}\subseteq S_{N-1}^{+}\) by the universal property. In this way, where \(\pi_{N-1}:C(S_{N}^{+})\to C(S_{N-1}^{+})\) is the quotient
\[[u_{ij}^{S_{N}^{+}}]_{i,j=1}^{N}\to\begin{bmatrix}u^{S_{N-1}^{+}}&0\\ 0&\mathds{1}_{S_{N-1}^{+}}\end{bmatrix},\]
the following is a maximal (set of states on an algebra of continuous functions on a) quantum subgroup in the quasi-subgroup \((S_{N}^{+})_{N}\)
\[(S_{N-1}^{+})^{\subset S_{N}^{+}}=\{\varphi\circ\pi_{N-1}:\,\varphi\in S_{N-1} ^{+}\}.\]
In the classical case, \(N\leq 3\), quasi-subgroups are subgroups, and so \((S_{N}^{+})_{N}=(S_{N-1}^{+})^{\subset S_{N}^{+}}\). However, for \(N\geq 4\), the inclusion is proper.
**Lemma 3.3**.: _(Teo Banica) Consider a monomial of entries from the fundamental representation \(u\in M_{4}(C(S_{4}^{+}))\):_
\[f=u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}.\]
_Then \(f\) can only be zero for trivial reasons; i.e. if and only if there exists \(2\leq n\leq m\) such that:_
\[\delta_{i_{n-1},i_{n}}+\delta_{j_{n-1},j_{n}}=1,\]
_that is \(u_{i_{n-1}j_{n-1}}u_{i_{n}j_{n}}=0\)._
Proof.: With the notation from [5], namely \(c_{1},\ldots,c_{4}\in SU_{2}\) being the Pauli matrices, and \(x\in SU_{2}\) being a parameter, the Pauli representation of \(C(S_{4}^{+})\) is:
\[\pi(u_{ij})=P_{c_{i}xc_{j}},\]
the rank one projection on \(c_{i}xc_{j}\). Given unit norm \(\xi\), \(P_{\xi}(\eta)=\langle\eta,\xi\rangle\xi\). By recurrence
\[P_{\xi_{1}}\cdots P_{\xi_{m}}(\eta)=\langle\eta,\xi_{m}\rangle\langle\xi_{m}, \xi_{m-1}\rangle\cdots\langle\xi_{2},\xi_{1}\rangle\xi_{1}.\]
With \(\eta=c_{k}\), one of the Pauli matrices, this gives:
\[u_{i_{1}j_{1}}\cdots u_{i_{m}j_{m}}(c_{k})=P_{c_{i_{1}}xc_{j_{1}}}\cdots P_{c_{i_{m}}xc_{j_{m}}}(c_{k})=\langle c_{k},c_{i_{m}}xc_{j_{m}}\rangle\langle c_{i_{m}}xc_{j_{m}},c_{i_{m-1}}xc_{j_{m-1}}\rangle\cdots\langle c_{i_{2}}xc_{j_{2}},c_{i_{1}}xc_{j_{1}}\rangle c_{i_{1}}xc_{j_{1}}.\]
Look at one of these inner products:
\[\langle c_{i_{n}}xc_{j_{n}},c_{i_{n-1}}xc_{j_{n-1}}\rangle =\operatorname{tr}(c_{i_{n}}xc_{j_{n}}(c_{i_{n-1}}xc_{j_{n-1}})^ {*})\] \[=\pm\operatorname{tr}(c_{i_{n}}xc_{j_{n}}c_{j_{n-1}}x^{*}c_{i_{n- 1}})\] \[=\pm\operatorname{tr}(c_{i_{n-1}}c_{i_{n}}xc_{j_{n}}c_{j_{n-1}}x^{ *}).\]
This vanishes for any \(x\in SU_{2}\) when one of \(c_{i_{n-1}}c_{i_{n}}\) or \(c_{j_{n}}c_{j_{n-1}}\) equals \(I_{2}\), and the other does not, and so when
\[\delta_{i_{n-1},i_{n}}+\delta_{j_{n-1},j_{n}}=1.\]
**Proposition 3.4**.: _Let \(S_{N}^{+}\) be the quantum permutation group on \(N\) symbols with Haar state \(h\). Then, for any \(\sigma,\,\tau\in S_{N}\):_
\[h(u_{i_{1}j_{1}}\cdots u_{i_{n}j_{n}})=h(u_{\sigma(i_{1})\tau(j_{1})}\cdots u_ {\sigma(i_{n})\tau(j_{n})}).\]
Proof.: This is essentially ([17], Prop. 6.4), together with the fact that \(h\) is invariant.
**Corollary 3.5**.: _Let \(S_{N}^{+}\) be the quantum permutation group on \(N\geq 4\) symbols. Then_
\[|u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}j_{3}}|^{2}=0\]
_only for trivial reasons, in the sense of Lemma 3.3._
Proof.: Let \(1\leq a,b,c,d,e,f\leq 4\) such that \(u_{ab}^{S_{4}^{+}}u_{cd}^{S_{4}^{+}}u_{ef}^{S_{4}^{+}}\neq 0\). Using the quotient map \(\pi_{4}:C(S_{N}^{+})\to C(S_{4}^{+})\), \(u\to\operatorname{diag}(u^{S_{4}^{+}},\mathds{1}_{S_{4}^{+}},\ldots,\mathds{1 }_{S_{4}^{+}})\)
\[\pi_{4}(|u_{ab}u_{cd}u_{ef}|^{2})\neq 0\implies|u_{ab}u_{cd}u_{ef}|^{2}\neq 0.\]
Let \(\sigma(a)=i_{1}\), \(\sigma(c)=i_{2}\), \(\sigma(e)=i_{3}\) and similarly let \(\tau\) map \(b,d,f\) to \(j_{1},j_{2},j_{3}\). Proposition 3.4 gives
\[h(|u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}j_{3}}|^{2})=h(|u_{ab}u_{cd}u_{ef}|^{2} )\neq 0\implies|u_{i_{1}j_{1}}u_{i_{2}j_{2}}u_{i_{3}j_{3}}|^{2}\neq 0.\]
**Proposition 3.6**.: _The inclusion \((S_{N-1}^{+})^{\subset S_{N}^{+}}\subset(S_{N}^{+})_{N}\) is proper for \(N\geq 4\)._
Proof.: Note that for any \((\varphi\circ\pi_{N-1})\in(S_{N-1}^{+})^{\subset S_{N}^{+}}\),
\[(\varphi\circ\pi_{N-1})(u_{11}u_{2N}u_{11})=\varphi(\pi_{N-1}(u_{11}u_{2N}u_{ 11}))=\varphi(\pi_{N-1}(u_{11})\pi_{N-1}(u_{2N})\pi_{N-1}(u_{11}))=0,\]
as \(\pi_{N-1}(u_{2N})=0\). On the other hand, \(h_{N}=\widetilde{u_{NN}}h\), the idempotent in the stabiliser quasi-subgroup \((S_{N}^{+})_{N}\), is not in \((S_{N-1}^{+})^{\subset S_{N}^{+}}\), because \(u_{2N}u_{11}u_{NN}\neq 0\) by Corollary 3.5, and \(h\) faithful on \(\mathcal{O}(S_{N}^{+})\) then implies
\[h_{N}(u_{11}u_{2N}u_{11})=\frac{h(u_{NN}u_{11}u_{2N}u_{11}u_{NN})}{h(u_{NN})}= \frac{h(|u_{2N}u_{11}u_{NN}|^{2})}{h(u_{NN})}>0.\]
Trying to do something for more complicated partitions of \(\{1,\ldots,N\}\), with an (explicit) idempotent state with a density with respect to \(\omega_{h}\) is in general more troublesome. Consider for example:
\[\mathcal{P}_{i,j}:=(\{1,\ldots,N\}\backslash\{i,j\})\sqcup\{i\}\sqcup\{j\}.\]
The obvious way to fix two points is to work with \(p_{i,j}:=u_{ii}\wedge u_{jj}\), an element of \(C(\mathbb{G})^{**}\), and given a quantum permutation \(\varphi\in\mathbb{G}\), define a subset of \(\mathbb{G}\) by:
\[\mathbb{G}_{i,j}:=\{\varphi\in\mathbb{G}:\omega_{\varphi}(p_{i,j})=1\}.\]
Note that \(\mathbb{G}_{i,j}=\mathbb{G}_{i}\cap\mathbb{G}_{j}\). However the following is not in general well defined because \(\omega_{h}(p_{i,j})\) is not necessarily strictly positive:
\[\phi_{i,j}:=\frac{\omega_{h}(p_{i,j}\cdot p_{i,j})}{\omega_{h}(p_{i,j})},\]
For example, consider the dual of the infinite dihedral group with the famous embedding \(\widehat{D_{\infty}}\subset S_{4}^{+}\). Working with alternating projection theory, and noting the Haar state on \(C(\widehat{D_{\infty}})=\mathrm{C}^{*}(D_{\infty})\) is \(h(\lambda)=\delta_{\lambda,e}\),
\[\omega_{h}(p_{1,3})=\lim_{n\to\infty}h((u_{11}u_{33})^{n})=\lim_{n\to\infty} \frac{1}{4^{n}}=0.\]
**Proposition 3.7**.: _The stabiliser quasi-subgroup \(\widehat{(D_{\infty})}_{1,3}\) is the trivial group._
Proof.: Let \(\varphi\in\widehat{(D_{\infty})}_{1,3}\) so that \(\varphi(u_{11})=\varphi(u_{33})=1\). Then \(\Phi(\varphi)=I_{4}\) and, as will be seen later, by Proposition 4.1, \(\varphi\) is a character. There are four characters in \(\widehat{D_{\infty}}\) and only the counit has Birkhoff slice equal to the identity.
By Proposition 4.3, \(p_{1,3}=p_{\varepsilon}\), the support projection of the counit. As \(\widehat{D_{\infty}}\) is coamenable, the Haar state is faithful on \(C(\widehat{D_{\infty}})\) and so \(\omega_{h}(p_{\varepsilon})=0\) implies that \(p_{\varepsilon}\not\in C(\widehat{D_{\infty}})\) (and indeed \(p\wedge q\not\in\mathrm{C}^{*}(p,q)\), the universal unital \(\mathrm{C}^{*}\)-algebra generated by two projections).
Note that in general \(\{\varepsilon\}\) is a quantum subgroup of any quantum permutation group in the sense that \(\varepsilon\) is a Haar idempotent via the quotient \(\pi:C(\mathbb{G})\to C(e)\) to the trivial group \(\{e\}\subseteq\mathbb{G}\):
\[[u_{ij}]_{i,j=1}^{N}\to\mathrm{diag}(1_{\mathbb{C}},\ldots,1_{\mathbb{C}}).\]
## 4. Exotic quasi-subgroups of the quantum permutation group
A second reason for studying Pal sets and their generated quasi-subgroups is to speculate on the existence, for some \(N\geq 4\), of an _exotic_ intermediate quasi-subgroup:
\[S_{N}\subsetneq\mathbb{S}_{N}\subsetneq S_{N}^{+}.\]
It is currently unknown whether or not there is a Haar idempotent giving an exotic intermediate quantum subgroup \(S_{N}\subsetneq\mathbb{G}_{N}\subsetneq S_{N}^{+}\) for some \(N\geq 6\). It is the case that \(S_{N}=S_{N}^{+}\) for \(N\leq 3\), and for \(N=4\) [4] and \(N=5\) [1] there is no such Haar idempotent. Of course, if there is _no_ exotic intermediate quasi-subgroup \(S_{N}\subsetneq\mathbb{S}_{N}\subsetneq S_{N}^{+}\) for any \(N\), then \(S_{N}\) is a maximal quantum subgroup of \(S_{N}^{+}\) for all \(N\), but this is stronger than the non-existence of an exotic intermediate quantum subgroup. Indeed it is strictly
stronger in the sense that given a quantum permutation group \(\mathbb{G}\) and its classical version \(G\subseteq\mathbb{G}\) (see below), the existence of a strictly intermediate quasi-subgroup \(G\subsetneq\mathbb{S}\subsetneq\mathbb{G}\) does not imply a strictly intermediate quantum subgroup. For example, the finite dual \(\widehat{A_{5}}\) has trivial classical version, and for any non-trivial proper subgroup \(H\subset A_{5}\) the non-Haar idempotent \(\mathds{1}_{H}\) gives a strict intermediate quasi-subgroup:
\[\{\varepsilon\}\subsetneq\mathbb{S}_{H}\subsetneq\widehat{A_{5}}.\]
However \(\widehat{A_{5}}\) has no non-trivial quantum subgroups because \(A_{5}\) is simple.
The idea for an example of an exotic intermediate quasi-subgroup would be to find a Pal set given by some condition that is satisfied by the 'elements of \(S_{N}\) in \(S_{N}^{+}\)' -- and some states non-zero on a commutator \([f,g]\in C(S_{N}^{+})\) -- but not by the Haar state on \(C(S_{N}^{+})\). It will be seen that the 'elements of \(S_{N}\) in \(S_{N}^{+}\)' correspond to the characters on \(C(S_{N}^{+})\).
### The classical version of a quantum permutation group
The quotient of \(C(\mathbb{G})\) by the commutator ideal is the algebra of functions on the characters on \(C(\mathbb{G})\). The characters form a group \(G\), with the group law given by the convolution:
\[\varphi_{1}\star\varphi_{2}=(\varphi_{1}\otimes\varphi_{2})\Delta,\]
the identity is the counit, and the inverse is the reverse \(\varphi^{-1}=\varphi\circ S\).
This section contains some general analysis for the support projections of characters on algebras of continuous functions on quantum permutation groups. While passing to a von Neumann algebra to talk about support projections, it will not be the conventional choice of a von Neumann algebra associated to a compact quantum group. This conventional choice is the algebra:
\[L^{\infty}(\mathbb{G}):=C_{\mathrm{r}}(\mathbb{G})^{\prime\prime}.\]
As discussed previously, the current work is at the universal level, so instead consider the bidual \(C(\mathbb{G})^{**}\).
As before the Birkhoff slice aids the analysis. See [17] for more, where the following proof is sketched.
**Proposition 4.1**.: _A state \(\varphi\) on \(C(\mathbb{G})\) is a character if and only if \(\Phi(\varphi)\) is a permutation matrix._
Proof.: If \(\varphi\) is a character,
\[\varphi(u_{ij})=\varphi(u_{ij}^{2})=\varphi(u_{ij})^{2}\Rightarrow\varphi(u_{ ij})=0\text{ or }1.\]
As it is doubly stochastic, it follows that \(\Phi(\varphi)\) is a permutation matrix. Suppose now that \(\Phi(\varphi)=\sigma\). Consider the GNS representation \((\mathsf{H}_{\sigma},\pi_{\sigma},\xi_{\sigma})\) associated to \(\varphi\). By assumption
\[\varphi(u_{ij})=\langle\xi_{\sigma},\pi_{\sigma}(u_{ij})(\xi_{\sigma})\rangle =\langle\pi_{\sigma}(u_{ij})(\xi_{\sigma}),\pi_{\sigma}(u_{ij})(\xi_{\sigma}) \rangle=\|\pi_{\sigma}(u_{ij})(\xi_{\sigma})\|^{2}=0\text{ or }1. \tag{9}\]
For \(f\in C(\mathbb{G})\), let \((f^{(n)})_{n\geq 1}\subset\mathcal{O}(\mathbb{G})\) converge to \(f\). By (9) and \(\|\xi_{\sigma}\|=1\), each \(\pi_{\sigma}(u_{ij})\xi_{\sigma}\) equals \(\varphi(u_{ij})\xi_{\sigma}\), so \(\xi_{\sigma}\) is a joint eigenvector for \(\pi_{\sigma}(\mathcal{O}(\mathbb{G}))\): for each \(f^{(n)}\) there exists \(a_{n}\in\mathbb{C}\) such that
\[\pi_{\sigma}(f^{(n)})(\xi_{\sigma})=a_{n}\xi_{\sigma}.\]
The representation \(\pi_{\sigma}\) is norm continuous, and so \(\pi_{\sigma}(f^{(n)})\to\pi_{\sigma}(f)\), and \((\pi_{\sigma}(f^{(n)}))_{n\geq 1}\) is Cauchy:
\[\|\pi_{\sigma}(f^{(m)})-\pi_{\sigma}(f^{(n)})\| \to 0\] \[\implies|a_{m}-a_{n}|\|\xi_{\sigma}\| \to 0,\]
which implies that \((a_{n})_{n\geq 1}\) converges, to say \(a_{f}\in\mathbb{C}\). The norm convergence of \(f^{(n)}\to f\) implies the strong convergence of \(\pi_{\sigma}(f^{(n)})\) to \(\pi_{\sigma}(f)\):
\[\pi_{\sigma}(f)\xi_{\sigma}=\lim_{n\to\infty}\left(\pi_{\sigma}(f^{(n)})\xi_{ \sigma}\right)=\lim_{n\to\infty}(a_{n}\xi_{\sigma})=a_{f}\xi_{\sigma}.\]
Therefore
\[\varphi(gf) =\langle\xi_{\sigma},\pi_{\sigma}(gf)\xi_{\sigma}\rangle= \langle\xi_{\sigma},\pi_{\sigma}(g)\pi_{\sigma}(f)(\xi_{\sigma})\rangle\] \[=\langle\xi_{\sigma},\pi_{\sigma}(g)a_{f}\xi_{\sigma}\rangle=a_{ f}\langle\xi_{\sigma},\pi_{\sigma}(g)\xi_{\sigma}\rangle=\varphi(g)\varphi(f).\]
Define \(\mathrm{ev}_{\sigma}:C(\mathbb{G})\to\mathbb{C}\):
\[\mathrm{ev}_{\sigma}(f):=\pi_{\mathrm{ab}}(f)(\sigma)\qquad(f\in C(\mathbb{G} )).\]
This is a *-homomorphism, but in general \(\mathrm{ev}_{\sigma}\) need not be non-zero.
**Proposition 4.2**.: _If \(\varphi\) is a state on \(C(\mathbb{G})\) such that \(\Phi(\varphi)=\sigma\), then \(\varphi=\mathrm{ev}_{\sigma}\)._
Proof.: Suppose that \(\Phi(\varphi)=\sigma\). We know that \(\mathrm{ev}_{\sigma}\) is a *-homomorphism, and by Proposition 4.1 so is \(\varphi\). As \(C(\mathbb{G})\) admits a character, \(\pi_{\mathrm{ab}}\) is non-zero. Furthermore, as *-homomorphisms they are determined by their values on the generators:
\[\varphi(u_{ij})=\Phi(\varphi)_{ij}=\sigma_{ij}=\delta_{i,\sigma(j)}=\mathds{1 }_{j\to i}(\sigma)=\pi_{\mathrm{ab}}(u_{ij})(\sigma)=\mathrm{ev}_{\sigma}(u_{ ij}).\]
The _classical version_ of \(\mathbb{G}\) is therefore the finite group \(G\subseteq S_{N}\) given by:
\[G:=\{\mathrm{ev}_{\sigma}:\,\sigma\in S_{N},\,\mathrm{ev}_{\sigma}\neq 0\}.\]
References to \(u_{ij}\) in the below are in the embedding:
\[C(\mathbb{G})\subseteq C(\mathbb{G})^{**}.\]
Note that the proof of (i) doesn't use minimality to show that \(p_{\sigma}\) is central:
**Proposition 4.3**.: _Associated to each character \(\mathrm{ev}_{\sigma}\) on \(C(\mathbb{G})\) is a support projection \(p_{\sigma}\in C(\mathbb{G})^{**}\) such that:_
* \(p_{\sigma}\) _is a central projection in_ \(C(\mathbb{G})^{**}\)_, and_ \(p_{\sigma}p_{\tau}=\delta_{\sigma,\tau}p_{\sigma}\)_._
* \(p_{\sigma}=u_{\sigma(1),1}\wedge u_{\sigma(2),2}\wedge\ldots\wedge u_{\sigma( N),N}\)_._
Proof.:
1. Note that \[\operatorname{ev}_{\sigma}(u_{\sigma(j),j})=1\Rightarrow\omega_{\sigma}(u_{\sigma(j ),j})=1\Rightarrow p_{\sigma}\leq u_{\sigma(j),j},\] while \(p_{\sigma}u_{ij}=0\) for \(i\neq\sigma(j)\). Therefore \(p_{\sigma}\) commutes with all of \(C(\mathbb{G})\subseteq C(\mathbb{G})^{**}\) and thus, via the Sherman-Takeda Theorem, \(p_{\sigma}\) is in the commutant of \(C(\mathbb{G})\). Everything in \(C(\mathbb{G})^{**}\) commutes with the commutant of \(C(\mathbb{G})\). Any pair of permutations \(\sigma\neq\tau\) are distinguished by some \(\sigma(j)\neq\tau(j)\), \[p_{\sigma}p_{\tau}=p_{\sigma}u_{\sigma(j),j}u_{\tau(j),j}p_{\tau}=0.\]
2. Let \[q_{\sigma}=u_{\sigma(1),1}\wedge u_{\sigma(2),2}\wedge\ldots\wedge u_{\sigma(N),N}.\] Define \[f_{\sigma}:=u_{\sigma(1),1}\cdots u_{\sigma(N),N}.\] The sequence \((f_{\sigma}^{n})_{n\geq 1}\subset C(\mathbb{G})\) converges \(\sigma\)-weakly to \(q_{\sigma}\). The extension \(\omega_{\sigma}\) of \(\operatorname{ev}_{\sigma}\) is a character implying that: \[\omega_{\sigma}(q_{\sigma})=\lim_{n\to\infty}\omega_{\sigma}(f_{\sigma}^{n})=1\implies p_{\sigma}\leq q_{\sigma}.\] Suppose \(r:=q_{\sigma}-p_{\sigma}\) is non-zero. Then there exists a state \(\omega_{r}\) on \(C(\mathbb{G})^{**}\) such that \(\omega_{r}(r)=1\). Define a state \(\varphi_{r}\) on \(C(\mathbb{G})\) by: \[\varphi_{r}(f)=\omega_{r}(rfr)\qquad(f\in C(\mathbb{G})).\] Then \(\varphi_{r}(u_{\sigma(j),j})=1\implies\varphi_{r}=\operatorname{ev}_{\sigma}\), by Proposition 4.2, with equal extensions \(\omega_{r}\) and \(\omega_{\sigma}\). However, in this case \[\omega_{\sigma}(p_{\sigma})=\omega_{r}(p_{\sigma})=0,\] and this contradiction gives \(q_{\sigma}=p_{\sigma}\).
In the following, whenever \(\operatorname{ev}_{\sigma}=0\), the projection \(p_{\sigma}\) is taken to be zero as well. Properties of the bidual summarised in Section 1.4 are used.
**Theorem 4.4**.: _Where \(G\subseteq\mathbb{G}\) is the classical version, define_
\[p_{G}:=\sum_{\sigma\in G}p_{\sigma}.\]
_Then \(p_{G}\) is a group-like projection in \(C(\mathbb{G})^{**}\). In addition, \(p_{G}\) is the support projection of the Haar idempotent \(h_{C(G)}\circ\pi_{ab}\)._
Proof.: Note \(p_{G}\) is non-zero, as \(p_{\varepsilon}p_{G}=p_{\varepsilon}\). Consider \(p_{\sigma}\neq 0\). Let \((p_{\sigma}^{\lambda})\subset\mathcal{O}(\mathbb{G})\) converge \(\sigma\)-weakly to \(p_{\sigma}\in C(\mathbb{G})^{**}\). The extension of \(\Delta\) is \(\sigma\)-weakly continuous, and recall that \(p_{\sigma}\) is a meet of projections in \(\mathcal{O}(\mathbb{G})\):
\[\Delta^{**}(p_{\sigma}) =\Delta^{**}(u_{\sigma(1),1}\wedge u_{\sigma(2),2}\wedge\cdots \wedge u_{\sigma(N),N})\] \[=\Delta(u_{\sigma(1),1})\wedge\Delta(u_{\sigma(2),2})\wedge \cdots\wedge\Delta(u_{\sigma(N),N})\] \[=\lim_{n\to\infty}\left[\big{(}\Delta(u_{\sigma(1),1})\Delta(u_{ \sigma(2),2})\cdots\Delta(u_{\sigma(N),N})\big{)}^{n}\right].\]
Consider, for \(p_{\tau}\neq 0\)
\[\Delta(u_{\sigma(1),1})\Delta(u_{\sigma(2),2})\cdots\Delta(u_{ \sigma(N),N})(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})\] \[=\left(\sum_{k_{1},\ldots,k_{N}=1}^{N}u_{\sigma(1),k_{1}}u_{ \sigma(2),k_{2}}\cdots u_{\sigma(N),k_{N}}\otimes u_{k_{1},1}u_{k_{2},2}\cdots u _{k_{N},N}\right)(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})\]
Note \(p_{\tau}\) is central and
\[p_{\tau}u_{kj}=\begin{cases}p_{\tau}u_{kj},&\text{if }k=\tau(j)\\ 0,&\text{otherwise}.\end{cases},\]
and so
\[\Delta(u_{\sigma(1),1})\Delta(u_{\sigma(2),2})\cdots\Delta(u_{ \sigma(N),N})(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})\] \[=u_{\sigma(1),\tau(1)}u_{\sigma(2),\tau(2)}\cdots u_{\sigma(N), \tau(N)}\otimes u_{\tau(1),1}u_{\tau(2),2}\cdots u_{\tau(N),N}p_{\tau}\] \[=(u_{\sigma(1),\tau(1)}u_{\sigma(2),\tau(2)}\cdots u_{\sigma(N), \tau(N)}\otimes u_{\tau(1),1}u_{\tau(2),2}\cdots u_{\tau(N),N})(\mathds{1}_{ \mathbb{G}}\otimes p_{\tau})\]
Now
\[\Delta^{**}(p_{\sigma})(\mathds{1}_{\mathbb{G}}\otimes p_{\tau}) =\lim_{n\to\infty}\left(\Delta(u_{\sigma(1),1})\Delta(u_{\sigma(2),2})\cdots\Delta(u_{\sigma(N),N})\right)^{n}(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})\] \[=\lim_{n\to\infty}\left(\Delta(u_{\sigma(1),1})\Delta(u_{\sigma(2),2})\cdots\Delta(u_{\sigma(N),N})(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})\right)^{n}\] \[=\lim_{n\to\infty}\left[(u_{\sigma(1),\tau(1)}u_{\sigma(2),\tau(2)}\cdots u_{\sigma(N),\tau(N)}\otimes u_{\tau(1),1}u_{\tau(2),2}\cdots u_{\tau(N),N})(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})\right]^{n}\] \[=\lim_{n\to\infty}\left[(u_{\sigma(1),\tau(1)}u_{\sigma(2),\tau(2)}\cdots u_{\sigma(N),\tau(N)}\otimes u_{\tau(1),1}u_{\tau(2),2}\cdots u_{\tau(N),N})^{n}\right](\mathds{1}_{\mathbb{G}}\otimes p_{\tau})\] \[=(p_{\sigma\tau^{-1}}\otimes p_{\tau})(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})=p_{\sigma\tau^{-1}}\otimes p_{\tau}.\]
Finally, sum \(\Delta^{**}(p_{\sigma})(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})\) over \(\sigma\), \(\tau\in G\).
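To spell out this final summation step (a sketch, using only that \(G\) is a group, so that for fixed \(\tau\in G\) the map \(\sigma\mapsto\sigma\tau^{-1}\) permutes \(G\)):

\[\Delta^{**}(p_{G})(\mathds{1}_{\mathbb{G}}\otimes p_{G})=\sum_{\sigma,\tau\in G}\Delta^{**}(p_{\sigma})(\mathds{1}_{\mathbb{G}}\otimes p_{\tau})=\sum_{\tau\in G}\left(\sum_{\sigma\in G}p_{\sigma\tau^{-1}}\right)\otimes p_{\tau}=\sum_{\tau\in G}p_{G}\otimes p_{\tau}=p_{G}\otimes p_{G},\]

which is the defining property of a group-like projection.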
Note that \(C(G)=C(\mathbb{G})/N_{\mathrm{ab}}\) is finite dimensional, and so by (3):
\[C(\mathbb{G})^{**}\cong C(G)\oplus N_{\mathrm{ab}}^{**}.\]
It follows that the support projection of \(h_{C(G)}\circ\pi_{\mathrm{ab}}\) is \(p_{G}\).
### The (classically) random and truly quantum parts of a quantum permutation
In the case of \(C(S_{N}^{+})\), define \(p_{C}:=p_{S_{N}}\) and \(p_{Q}:=\mathds{1}_{S_{N}^{+}}-p_{C}\). In the rest of this section the Gelfand-Birkhoff picture will be used:
\[\varphi\in S_{N}^{+}\text{ is a quantum permutation }\Longleftrightarrow\ \varphi\text{ a state on }C(S_{N}^{+}).\]
**Definition 4.5**.: _Let \(\varphi\in S_{N}^{+}\) be a quantum permutation. Say that \(\varphi\)_
1. _is a_ (classically) random permutation _if_ \(\omega_{\varphi}(p_{Q})=0\)_,_
2. _is a_ genuinely quantum permutation _if_ \(\omega_{\varphi}(p_{Q})>0\)_,_
3. _is a_ mixed quantum permutation _if_ \(0<\omega_{\varphi}(p_{Q})<1\)_,_
4. _is a_ truly quantum permutation _if_ \(\omega_{\varphi}(p_{Q})=1\)_._
Random permutations are in bijection with probability measures \(\nu\in M_{p}(S_{N})\):
\[\varphi\text{ random } \Longleftrightarrow\ \varphi=\varphi_{\nu}\text{ where }\] \[\varphi_{\nu}(f):=\sum_{\sigma\in S_{N}}\pi_{\mathrm{ab}}(f)( \sigma)\nu(\{\sigma\})\qquad(f\in C(S_{N}^{+})).\]
**Theorem 4.6**.: _Suppose \(h_{S_{N}}\) is the state on \(C(S_{N}^{+})\) defined by \(h_{C(S_{N})}\circ\pi_{ab}\). Then if_
\[\varphi\star h_{S_{N}}=h_{S_{N}}=h_{S_{N}}\star\varphi,\]
\(\varphi\) _is a random permutation._
Proof.: This follows from Theorem 2.19.
**Lemma 4.7**.: _Let \(\varphi,\rho\) be quantum permutations. The convolution operators \(\varphi\mapsto\rho\star\varphi\) and \(\varphi\mapsto\varphi\star\rho\) are weak*-continuous._
Proof.: Follows from \((\varphi\star\rho)(f)=\varphi((I_{C(S_{N}^{+})}\otimes\rho)\Delta(f))=\rho(( \varphi\otimes I_{C(S_{N}^{+})})\Delta(f))\).
### Exotic quasi-subgroups
**Theorem 4.8**.: _Let \(\varphi\in S_{N}^{+}\) be genuinely quantum, \(\omega_{\varphi}(p_{Q})>0\), and \(h_{S_{N}}\in S_{N}^{+}\) the Haar idempotent \(h_{C(S_{N})}\circ\pi_{ab}\). Form the idempotent \(\phi_{\varphi}\) from the weak*-limit of Cesaro means of \(\varphi\), and then define an idempotent:_
\[\phi:=w^{*}\text{-}\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}(h_{S_{N}}\star \phi_{\varphi})^{*k}. \tag{10}\]
_Then the quasi-subgroup generated satisfies:_
\[S_{N}\subsetneq\mathbb{S}_{\phi}\subseteq S_{N}^{+}.\]
Proof.: First let us show that \(S_{N}\subseteq\mathbb{S}_{\phi}\). For any \(\sigma\in S_{N}\), and \(\phi_{n}\) a Cesaro mean of \((h_{S_{N}}\star\phi_{\varphi})\):
\[\operatorname{ev}_{\sigma}\star\phi_{n}=\phi_{n}\implies w^{*}\text{-}\lim_{n \to\infty}(\operatorname{ev}_{\sigma}\star\phi_{n})=\phi\implies\operatorname {ev}_{\sigma}\star\phi=\phi\implies\phi\star\operatorname{ev}_{\sigma^{-1}}=\phi.\]
Here the last implication is Proposition 1.8. Similarly \(\operatorname{ev}_{\sigma^{-1}}\star\phi_{n}\to\phi\), which implies that \(\phi\star\operatorname{ev}_{\sigma}=\phi\), and so \(S_{N}\subseteq\mathbb{S}_{\phi}\).
Now suppose for the sake of contradiction that \(\phi\) is random. Then
\[\phi\star h_{S_{N}}=h_{S_{N}}=h_{S_{N}}\star\phi.\]
However for all Cesaro means \(\phi_{n}\):
\[\phi_{n}\star\varphi=\phi_{n}\implies\phi\star\varphi=\phi\implies h_{S_{N}} \star\varphi=h_{S_{N}},\]
by left convolving both sides of \(\phi\star\varphi=\phi\) with \(h_{S_{N}}\). But Theorem 4.6 says in this case that \(\varphi\) is random, a contradiction.
If in fact for all genuinely quantum \(\varphi\in S_{N}^{+}\) it is the case that \(\mathbb{S}_{\phi}=S_{N}^{+}\) for the idempotent \(\phi\) given by (10), then the maximality conjecture holds, and it is tenable to say that \(h_{S_{N}}\) and _any_ genuinely quantum permutation \(\varphi\in S_{N}^{+}\) generate \(S_{N}^{+}\).
## 5. Convolution dynamics
This section will explore, with respect to \(p_{Q}\in C(S_{N}^{+})^{**}\), the qualitative dynamics of states on \(C(S_{N}^{+})\) under convolution. Again, using the Gelfand-Birkhoff picture such states will be referred to as quantum permutations. The results of this section are illustrated qualitatively in a phase diagram, Figure 1.
### The convolution of random and truly quantum permutations
**Lemma 5.1**.: _Suppose \(p\in C(\mathbb{G})^{**}\) is a group-like projection. Then, where \(q:=\mathds{1}_{\mathbb{G}}-p\):_
\[\Delta^{**}(q)(\mathds{1}_{\mathbb{G}}\otimes p)=q\otimes p.\]
Proof.: Expand
\[\Delta^{**}(p+q)(\mathds{1}_{\mathbb{G}}\otimes p)=(\mathds{1}_{\mathbb{G}} \otimes p),\]
then multiply on the right with \(q\otimes p\).
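Spelled out, subtracting the group-like identity \(\Delta^{**}(p)(\mathds{1}_{\mathbb{G}}\otimes p)=p\otimes p\) from the expanded identity gives (a sketch):

\[\Delta^{**}(q)(\mathds{1}_{\mathbb{G}}\otimes p)=\mathds{1}_{\mathbb{G}}\otimes p-\Delta^{**}(p)(\mathds{1}_{\mathbb{G}}\otimes p)=\mathds{1}_{\mathbb{G}}\otimes p-p\otimes p=q\otimes p.\]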
**Proposition 5.2**.: _Consider quantum permutations in \(S_{N}^{+}\):_
* _The convolution of random permutations is random._
* _The convolution of a truly quantum permutation and a random permutation is truly quantum._
* _The convolution of two truly quantum permutations can be random, mixed, or truly quantum._
Proof.:
* This is straightforward.
2. Let \(\varphi\) be truly quantum, and \(\varphi_{\nu}\) random with extension \(\omega_{\nu}\). Let \((p_{Q}^{\lambda})\subset\mathcal{O}(S_{N}^{+})\) converge \(\sigma\)-weakly to \(p_{Q}\). Using Lemma 5.1, mimic the proof of Theorem 2.19, hitting both sides of \[\Delta^{**}(p_{Q})(\mathds{1}_{S_{N}^{+}}\otimes p_{C})=p_{Q}\otimes p_{C},\] with \(\omega_{\varphi}\otimes\omega_{\nu}\), to yield: \[\omega_{\varphi\star\varphi_{\nu}}(p_{Q})=1,\] i.e. \(\varphi\star\varphi_{\nu}\) is truly quantum.
3. It will be seen in Corollary 6.3 that the Haar state is truly quantum. Note that for any \(N\geq 4\), the Kac-Paljutkin quantum group can be embedded \(G_{0}\subset S_{N}^{+}\) via \(\pi_{G_{0}}\). It can be shown that \(E^{11}\circ\pi_{G_{0}}\) is truly quantum, and \((E^{11}\circ\pi_{G_{0}})^{\star 2}=\varphi_{\nu}\) is a random permutation ([17], (4.6)). Let \(0\leq c\leq 1\) and consider the truly quantum permutation: \[\varphi:=\sqrt{1-c}\,(E^{11}\circ\pi_{G_{0}})+(1-\sqrt{1-c})\,h.\] Then: \[\varphi^{\star 2}=(1-c)\varphi_{\nu}+c\,h\implies\varphi^{\star 2}(p_{Q})=c.\]
**Corollary 5.3**.: _If the convolution of two quantum permutations is a random permutation, then either both are random, or both are truly quantum._
**Proposition 5.4**.: _A quantum permutation \(\varphi\in S_{N}^{+}\) can be written as a convex combination of a random permutation and a truly quantum permutation._
Proof.: If \(\varphi\) is random, or truly quantum, the result holds. Assume \(\varphi\) is mixed. The projections \(p_{C}\), \(p_{Q}\in C(S_{N}^{+})^{**}\) are central, and thus
\[\varphi=\omega_{\varphi}(p_{C})\,\widetilde{p_{C}}\varphi+\omega_{\varphi}(p_ {Q})\,\widetilde{p_{Q}}\varphi,\]
and \(\widetilde{p_{C}}\varphi\) is random, while \(\widetilde{p_{Q}}\varphi\) is truly quantum.
**Definition 5.5**.: _Let \(\varphi\in S_{N}^{+}\) be a quantum permutation. Define \(\varphi_{C}:=\widetilde{p_{C}}\varphi\), the (classically) random part of \(\varphi\), and \(\varphi_{Q}:=\widetilde{p_{Q}}\varphi\), the truly quantum part of \(\varphi\)._
**Proposition 5.6**.: _If \(\varphi\in S_{N}^{+}\) is a mixed quantum permutation with \(0<\omega_{\varphi}(p_{Q})<1\), then no finite convolution power \(\varphi^{\star k}\) is random, or truly quantum._
Proof.: Let \(\alpha:=\omega_{\varphi}(p_{Q})\) and write \(\varphi=(1-\alpha)\varphi_{C}+\alpha\,\varphi_{Q}\):
\[\varphi^{\star k}\geq(1-\alpha)^{k}\varphi_{C}^{\star k}\implies\omega_{\varphi^{\star k}}(p_{Q})\leq 1-(1-\alpha)^{k},\]
so no \(\varphi^{\star k}\) is truly quantum. In addition, \(\varphi^{\star k}=\varphi\star\varphi^{\star(k-1)}\) cannot be random, by Corollary 5.3, because \(\varphi\) is neither random nor truly quantum.
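For the first implication above, a minimal way to spell out the estimate (a sketch, using only the positivity of the omitted terms in the expansion of \(\varphi^{\star k}\), Proposition 5.2 (i), and \(\alpha<1\)):

\[\omega_{\varphi^{\star k}}(p_{C})\geq(1-\alpha)^{k}\,\omega_{\varphi_{C}^{\star k}}(p_{C})=(1-\alpha)^{k}>0\implies\omega_{\varphi^{\star k}}(p_{Q})=1-\omega_{\varphi^{\star k}}(p_{C})\leq 1-(1-\alpha)^{k}<1.\]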
**Definition 5.7**.: _A quantum permutation \(\varphi\in S_{N}^{+}\) is called \(\alpha\)-quantum if \(\omega_{\varphi}(p_{Q})=\alpha\)._
**Proposition 5.8**.: _If \(\varphi\in S_{N}^{+}\) is \(\alpha\)-quantum and \(\rho\in S_{N}^{+}\) is \(\beta\)-quantum, then_
\[\alpha+\beta-2\alpha\beta\leq\omega_{\varphi\star\rho}(p_{Q})\leq\alpha+\beta- \alpha\beta.\]
Proof.: Note that \(\varphi\star\rho\) equals:
\[(1-\alpha)(1-\beta)(\varphi_{C}\star\rho_{C})+\beta(1-\alpha)(\varphi_{C} \star\rho_{Q})+\alpha(1-\beta)(\varphi_{Q}\star\rho_{C})+\alpha\beta(\varphi_ {Q}\star\rho_{Q}).\]
Now apply Proposition 5.2.
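In more detail (a sketch, writing \(t:=\omega_{\varphi_{Q}\star\rho_{Q}}(p_{Q})\in[0,1]\)): by Proposition 5.2 the purely random term contributes nothing to \(p_{Q}\), while the two mixed terms are truly quantum, so

\[\omega_{\varphi\star\rho}(p_{Q})=\beta(1-\alpha)+\alpha(1-\beta)+\alpha\beta\,t=\alpha+\beta-2\alpha\beta+\alpha\beta\,t,\]

and letting \(t\) range over \([0,1]\) gives the two stated bounds.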
**Definition 5.9**.: _Where \(\overline{(\varphi,\rho)}=(\varphi+\rho)/2\) is the mean of two quantum permutations, a quantum strictly \(1\)-increasing pair of quantum permutations \(\varphi_{1},\varphi_{2}\in S_{N}^{+}\) is a pair such that:_
\[\omega_{\varphi_{1}\star\varphi_{2}}(p_{Q})>\omega_{\overline{(\varphi_{1}, \varphi_{2})}}(p_{Q}).\]
_A quantum strictly \(2\)-increasing pair of quantum permutations is a pair such that:_
\[\omega_{(\varphi_{1}\star\varphi_{2})^{*2}}(p_{Q})>\omega_{\varphi_{1}\star \varphi_{2}}(p_{Q})>\omega_{\overline{(\varphi_{1},\varphi_{2})}}(p_{Q}).\]
_Inductively, a quantum strictly \((n+1)\)-increasing pair of quantum permutations is a pair such that:_
\[\omega_{(\varphi_{1}\star\varphi_{2})^{*(2^{n})}}(p_{Q})>\omega_{(\varphi_{1} \star\varphi_{2})^{*(2^{n-1})}}(p_{Q})>\cdots>\omega_{\varphi_{1}\star\varphi_ {2}}(p_{Q})>\omega_{\overline{(\varphi_{1},\varphi_{2})}}(p_{Q}).\]
**Proposition 5.10**.: _Let \(\varphi_{1}\in S_{N}^{+}\) be an \(\alpha\)-quantum permutation, and \(\varphi_{2}\in S_{N}^{+}\) a \(\beta\)-quantum permutation._
1. _If_ \((\alpha,\beta)\neq(0,0)\)_, then if_ \(\alpha=1/4\) _or_ \(\beta<\alpha/(4\alpha-1)\)_, the pair_ \((\varphi_{1},\varphi_{2})\) _is_ quantum strictly \(1\)-increasing_._
2. _If_ \((\alpha,\beta)\neq(0,0)\)_, and_ \(\beta=\alpha/(4\alpha-1)\)_, then:_ \[\omega_{\varphi_{1}\star\varphi_{2}}(p_{Q})\geq\omega_{\overline{(\varphi_{1}, \varphi_{2})}}(p_{Q}).\] _Equality is possible, with e.g. quantum permutations coming from the Kac-Paljutkin quantum group_ \(G_{0}\subset S_{N}^{+}\)_._
3. _If_ \(\beta>\alpha/(4\alpha-1)\) _then_ \(\omega_{\varphi_{1}\star\varphi_{2}}(p_{Q})\) _can be less than, equal to, or greater than_ \(\omega_{\overline{(\varphi_{1},\varphi_{2})}}(p_{Q})\)_._
4. _Let_ \[(S_{N}^{+}\times S_{N}^{+})_{\alpha,\beta}:=\{(\varphi,\rho):\,\omega_{\varphi} (p_{Q})=\alpha,\,\omega_{\rho}(p_{Q})=\beta\}.\] _Then_ \[\max\{|\omega_{\varphi_{1}\star\varphi_{2}}(p_{Q})-\omega_{\varphi_{3}\star \varphi_{4}}(p_{Q})|:\,(\varphi_{1},\varphi_{2}),\,(\varphi_{3},\varphi_{4}) \in(S_{N}^{+}\times S_{N}^{+})_{\alpha,\beta}\}=\alpha\beta.\]
Proof.: For (i)-(iii) apply Proposition 5.8. For (iv), the maximum in Proposition 5.8 is attained for
\[\varphi_{1} =\left(1-\alpha\right)h_{S_{N}}+\alpha\,h\] \[\varphi_{2} =\left(1-\beta\right)h_{S_{N}}+\beta\,h\] \[\varphi_{3} =\left(1-\alpha\right)h_{S_{N}}+\alpha\left(E^{11}\circ\pi_{G_{0}}\right)\] \[\varphi_{4} =\left(1-\beta\right)h_{S_{N}}+\beta\left(E^{11}\circ\pi_{G_{0}}\right)\]
Suppose that \(\varphi_{1}\) is \(\alpha\)-quantum, and \(\varphi_{2}\) is \(\beta\)-quantum. The subset of \(S_{N}^{+}\times S_{N}^{+}\) given by condition (1) is called the \(Q_{I}\)-region. In this region the dynamics of the convolution \((\varphi_{1},\varphi_{2})\to\varphi\) with respect to \(p_{Q}\) cannot be too wild:
\[\omega_{\varphi_{1}\star\varphi_{2}}(p_{Q})\in\left(\omega_{\overline{( \varphi_{1},\varphi_{2})}}(p_{Q}),\omega_{\overline{(\varphi_{1},\varphi_{2}) }}(p_{Q})+\alpha\beta\right].\]
Note that the width of this interval tends to zero for \(\alpha\beta\to 0\).
On the other hand, the region of \(S_{N}^{+}\times S_{N}^{+}\) given by (3) is called the \(Q_{W}\)-region, and the dynamics can be more wild here. Given an arbitrary pair of quantum permutations in this region, the convolution can be more, equal, or less quantum than the mean, and, as \(\alpha\beta\to 1\), over the collection of \((\varphi,\rho)\in Q_{W}\) the possible range of values of \(\omega_{\varphi\star\rho}(p_{Q})\) tends to one. Tracing from \(Q_{I}\) towards \(Q_{W}\), on the boundary \(\partial_{W}\) (given by (2)) 'conservation of quantumness',
\[\omega_{\varphi_{1}\star\varphi_{2}}(p_{Q})=\omega_{\overline{(\varphi_{1}, \varphi_{2})}}(p_{Q}),\]
becomes possible for the first time.
Similarly, higher order regions can be defined:
1. The region \(Q_{2I}\subseteq Q_{I}\) given by \(\beta<(2\alpha-1)/(2\alpha-2)\) consists of quantum strictly \(2\)-increasing pairs;
2. The region \(Q_{3I}\subseteq Q_{2I}\) given by \(\beta<1-\sqrt{2}/(1-2\alpha)\) consists of quantum strictly \(3\)-increasing pairs;
3. The region \(Q_{\frac{1}{2}W}\subseteq Q_{W}\) given by \(\beta>(1-1/\sqrt{2})/\alpha\) consists of pairs of quantum permutations \((\varphi_{1},\varphi_{2})\) such that the pair \((\varphi_{1}\star\varphi_{2},\varphi_{1}\star\varphi_{2})\not\in Q_{2I}\), etc.
### The truly quantum part of an idempotent state
**Corollary 5.11**.: _If \(\phi\in S_{N}^{+}\) is an idempotent state, then_
\[\omega_{\phi}(p_{Q})\in\{0\}\cup[1/2,1].\]
Proof.: If \(\phi\) is an idempotent state,
\[\omega_{\phi}(p_{Q})=\omega_{\phi\star\phi}(p_{Q}).\]
The rest follows from Proposition 5.10.
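Explicitly (a sketch): writing \(\alpha:=\omega_{\phi}(p_{Q})\) and applying the lower bound of Proposition 5.8 with \(\varphi=\rho=\phi\),

\[\alpha=\omega_{\phi\star\phi}(p_{Q})\geq 2\alpha-2\alpha^{2}\implies\alpha(2\alpha-1)\geq 0\implies\alpha=0\text{ or }\alpha\geq\tfrac{1}{2}.\]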
An idempotent on the boundary \(\partial_{W}\) is the Haar idempotent \(h_{G_{0}}\) associated with the Kac-Paljutkin quantum group \(G_{0}\subset S_{4}^{+}\) which satisfies \(\omega_{h_{G_{0}}}(p_{Q})=1/2\).
_Example 5.12_.: Let \(\mathbb{G}\) be a finite quantum group given by \(\pi:C(S_{N}^{+})\to C(\mathbb{G})\). Where \(G\subseteq\mathbb{G}\) is the classical version, the \(\sigma\)-weak extension \(\pi^{**}\) to the biduals maps onto \(C(\mathbb{G})\), and in particular \(\pi^{**}(p_{\sigma})\in C(\mathbb{G})\) is the support projection of
\[f\mapsto\pi_{\mathrm{ab}}(\pi(f))(\sigma)\qquad(f\in C(S_{N}^{+})).\]
Let \(h_{\mathbb{G}}:=h_{C(\mathbb{G})}\circ\pi\) with extension to the biduals \(\omega_{\mathbb{G}}\). From e.g. [13]:
\[\omega_{\mathbb{G}}(p_{\sigma})=\frac{1}{\dim C(\mathbb{G})}\qquad(\sigma\in G).\]
Figure 1. The phase diagram for the convolution of \(\alpha\)-quantum and \(\beta\)-quantum permutations. The phases are quantum increasing, \(Q_{I}\), in the bottom left, and quantum wild, \(Q_{W}\), in the top right, with the bold line \(\partial_{W}\) the boundary. From the bottom left, \(Q_{3I}\subset Q_{2I}\subset Q_{I}\), and then touching \(\partial_{W}\) on the diagonal, \(Q_{\frac{1}{2}W}\subset Q_{W}\). The region \(Q_{\frac{1}{2}W}\) is such that the convolution of states from this region cannot be too close to random: indeed the convolution cannot fall inside \(Q_{2I}\). The line \(\alpha=\beta\) represents \((\varphi,\varphi)\to\varphi^{\star 2}\). The shading is proportional to \(\alpha\beta\) (see Proposition 5.10 (4)).
This implies that
\[\omega_{\mathbb{G}}(p_{Q})=1-\frac{|G|}{\dim C(\mathbb{G})}. \tag{11}\]
Let \(n\geq 9\), where \(S_{n}\) is generated by elements \(\sigma,\tau\) of order two and three [18], and thus there is an embedding \(\widehat{S_{n}}\subset S_{5}^{+}\) given by Fourier type matrices \(u^{\sigma}\in M_{2}(C(\widehat{S_{n}}))\) and \(u^{\tau}\in M_{3}(C(\widehat{S_{n}}))\) ([2], Chapter 13):
\[u=\begin{bmatrix}u^{\sigma}&0\\ 0&u^{\tau}\end{bmatrix}.\]
A finite dual \(\widehat{\Gamma}\subseteq S_{N}^{+}\) has classical version with order equal to the number of one dimensional representations of \(\Gamma\) (see [17] for more). Therefore the classical version of \(\widehat{S_{n}}\) is \(\mathbb{Z}_{2}\) and so, for \(n\geq 9\), the associated Haar idempotent:
\[\omega_{\widehat{S_{n}}}(p_{Q})=1-\frac{2}{n!}, \tag{12}\]
which tends to one for \(n\to\infty\).
This suggests the following study: consider
\[\chi_{N}:=\{\omega_{\phi}(p_{Q}):\,\phi\in S_{N}^{+},\,\phi\star\phi=\phi\}.\]
It is the case that \(\chi_{N}=\{0\}\) for \(N\leq 3\), and otherwise a non-singleton. By (12), \(1\) is a limit point for \(\chi_{5}\cap[1/2,1)\). Is there any other interesting behaviour: either at fixed \(N\), or asymptotically \(N\to\infty\)?
It seems unlikely that there exists a finite exotic quantum permutation group \(S_{N}\subsetneq\mathbb{G}_{N}\subsetneq S_{N}^{+}\) for some \(N\geq 6\), but something can be said:
**Proposition 5.13**.: _An exotic finite quantum permutation group at order \(N\) satisfies:_
\[\dim C(\mathbb{G})\geq 2N!\]
_In particular, there is no exotic finite quantum group with \(\dim C(\mathbb{G})<1440\)._
Proof.: This follows from (11) and Corollary 5.11, and the fact that any exotic quantum permutation group \(S_{N}\subsetneq\mathbb{G}\subsetneq S_{N}^{+}\) must satisfy \(N\geq 6\).
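Spelled out (a sketch): if \(S_{N}\subsetneq\mathbb{G}\) then the classical version of \(\mathbb{G}\) is \(S_{N}\), so \(|G|=N!\) in (11). The Haar idempotent of \(\mathbb{G}\) cannot be random, since \(\omega_{\mathbb{G}}(p_{Q})=0\) would force \(\dim C(\mathbb{G})=N!\) and hence \(\mathbb{G}=S_{N}\). Corollary 5.11 then gives

\[1-\frac{N!}{\dim C(\mathbb{G})}\geq\frac{1}{2}\implies\dim C(\mathbb{G})\geq 2\cdot N!,\]

and with \(N\geq 6\) this is at least \(2\cdot 6!=1440\).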
### Periodicity
A periodicity in convolution powers of random permutations is possible. For example, suppose that \(G\subseteq S_{N}\) and \(H\lhd G\) is a normal subgroup. Consider the probability \(\nu\) uniform on the coset \(Hg\). Then, where \(\varphi_{\nu}\in S_{N}^{+}\) is the associated state:
\[\varphi_{\nu}(f)=\sum_{\sigma\in S_{N}}\pi_{\text{ab}}(f)(\sigma)\nu(\{\sigma\})=\frac{1}{|Hg|}\sum_{\tau\in H}\pi_{\text{ab}}(f)(\tau g)\qquad(f\in C(S_{N}^{+})),\]
the convolution powers \((\varphi_{\nu}^{*k})_{k\geq 0}\) are periodic, with period equal to the order of \(g\).
There can also be periodicity with respect to \(p_{Q}\). For example, \(\varphi:=E^{11}\circ\pi_{G_{0}}\) is such that
\[\omega_{\varphi^{\star k}}(p_{Q})=\begin{cases}0,&\text{if $k$ even},\\ 1,&\text{if $k$ odd}.\end{cases}\]
**Proposition 5.14**.: _Suppose that \(\varphi\in S^{+}_{N}\) is truly quantum. If \(\varphi^{\star k}\) is random, then \(\varphi^{\star(k+1)}\) is truly quantum._
Proof.: Follows from Corollary 5.3.
**Corollary 5.15**.: _Suppose that a truly quantum permutation \(\varphi\) has a random finite convolution power. Let \(k_{0}\) be the smallest such power. Then:_
\[\omega_{\varphi^{k}}(p_{Q})=\begin{cases}0,&\text{if $k$ \mod $k_{0}=0$},\\ 1,&\text{otherwise}.\end{cases}\]
Is there a quantum permutation with \(k_{0}>2\)? This phenomenon suggests looking at when the classical version of \(\mathbb{G}\) is a normal quantum subgroup \(G\lhd\mathbb{G}\). However, in general, the classical periodicity associated with probability measures constant on cosets of \(H\lhd G\) for \(G\subseteq S_{N}\) does not extend to the quantum case. See [16], Section 4.3.1.
## 6. Quantum permutations with integer fixed points
An example of an exotic intermediate quasi-subgroup would be nice: instead this section presents a non-example. For a quantum permutation group \(\mathbb{G}\), consider the observable:
\[\text{fix}:=\sum_{j=1}^{N}u_{jj}.\]
Note that \(\sigma(\text{fix})\subseteq[0,N]\). Consider a finite partition \(\mathcal{P}\) of the spectrum into Borel subsets,
\[\sigma(\text{fix})=\bigsqcup_{i=1}^{m}E_{i}.\]
Borel functional calculus can be used to attach a (pairwise-distinct) label \(\lambda_{i}\) to each \(E_{i}\subseteq\sigma(\text{fix})\), and the number of fixed points of a quantum permutation \(\varphi\) can be measured using \(\text{fix}_{\mathcal{P}}\in C(\mathbb{G})^{\ast\ast}\) given by:
\[\text{fix}_{\mathcal{P}}:=\sum_{i=1}^{m}\lambda_{i}\,\mathds{1}_{E_{i}}(\text {fix}).\]
Measurement is in the sense of algebraic quantum probability and the Gelfand-Birkhoff picture: when a quantum permutation \(\varphi\in\mathbb{G}\) is measured with a finite spectrum observable \(f=\sum_{\lambda\in\sigma(f)}\lambda\,p_{\lambda}\) in the bidual \(C(\mathbb{G})^{\ast\ast}\), the result is an element of \(\sigma(f)\), with \(f=\lambda\) with probability \(\omega_{\varphi}(p_{\lambda})\), and in that event there is wave-function collapse to \(\widetilde{p_{\lambda}}\varphi\).
**Definition 6.1**.: _A quantum permutation \(\varphi\in S_{N}^{+}\) has integer fixed points only if for all Borel subsets \(E\subseteq\sigma(\mathrm{fix})\),_
\[E\cap\{0,1,\ldots,N\}=\emptyset\implies\omega_{\varphi}(\mathds{1}_{E}(\mathrm{ fix}))=0.\]
_Equivalently, if_
\[\omega_{\varphi}(\mathds{1}_{\{0,1,\ldots,N\}}(\mathrm{fix}))=1.\]
_Let \(\mathcal{F}(\mathbb{G})\subseteq\mathbb{G}\) be the set of quantum permutations with integer fixed points._
In the quotient \(\pi_{\mathrm{ab}}:C(\mathbb{G})\to C(G)\) to the classical version \(G\subseteq\mathbb{G}\), the number-of-fixed-points observable becomes integer-valued:
\[\pi_{\mathrm{ab}}(\mathrm{fix})=\mathrm{fix}_{G}=\sum_{\begin{subarray}{c} \lambda=0,1\ldots,N\\ \lambda\neq N-1\end{subarray}}\lambda\,p_{\lambda},\]
with
\[p_{\lambda}(\sigma)=\begin{cases}1,&\text{if $\sigma$ has $\lambda$ fixed points},\\ 0,&\text{otherwise}.\end{cases}\]
Therefore, random permutations \(\varphi_{\nu}\in S_{N}^{+}\) are elements of \(\mathcal{F}(S_{N}^{+})\).
There are plenty of concrete examples of genuinely quantum permutations with integer fixed points: e.g. the quantum permutation \(\varphi:=E^{11}\circ\pi_{G_{0}}\) has zero fixed points. So, \(\mathcal{F}(S_{N}^{+})\) contains all the elements of \(S_{N}\) in \(S_{N}^{+}\), and also genuinely quantum permutations.
**Proposition 6.2**.: _For \(N\geq 4\), the Haar state on \(C(S_{N}^{+})\) is not an element of \(\mathcal{F}(S_{N}^{+})\). In fact:_
\[\omega_{h}(\mathds{1}_{\{x\}}(\mathrm{fix}))=0\qquad(x\in[0,N]).\]
Proof.: This follows from the fact that for \(N\geq 4\) the moments of fix with respect to the Haar state are the Catalan numbers [3], and thus the corresponding measure is the Marchenko-Pastur law of parameter one, which has no atoms:
\[\omega_{h}(\mathds{1}_{\{x\}}(\mathrm{fix}))=\int_{\{x\}}\frac{1}{2\pi}\sqrt{ \frac{4}{t}-1}\,dt=0.\]
**Corollary 6.3**.: _For \(N\geq 4\), the Haar state on \(C(S_{N}^{+})\) is truly quantum._
Proof.: The Haar state \(h\) is genuinely quantum. Assume that \(h\in S_{N}^{+}\) is mixed:
\[\omega_{h}(p_{C})>0\implies\omega_{h}(p_{\sigma})>0\]
for some \(\sigma\in S_{N}\). Let \(q_{\sigma}:=\mathds{1}_{S_{N}^{+}}-p_{\sigma}\). Recalling that \(p_{\sigma}\) is central:
\[\omega_{h}(f)=\omega_{h}(p_{\sigma})\,(\widetilde{p_{\sigma}}h)(f)+\omega_{h} (q_{\sigma})\,(\widetilde{q_{\sigma}}h)(f)\qquad(f\in C(S_{N}^{+})^{**}).\]
Note that \(\widetilde{p_{\sigma}}h\) has a central minimal projection for support, which implies it is a character. By Proposition 4.2, \(\widetilde{p_{\sigma}}h=\mathrm{ev}_{\sigma}\), which factors through the abelianisation \(\pi_{\mathrm{ab}}\):
\[\mathrm{ev}_{\sigma}(f)=\pi_{\mathrm{ab}}(f)(\sigma)\qquad(f\in C(S_{N}^{+})),\]
while the extension \(\omega_{\sigma}\) factors through \(\pi_{\mathrm{ab}}^{**}\). Suppose that \(\sigma\) has \(\lambda\in\{0,1,\ldots,N\}\) fixed points. Using Lemma 2.22, consider, where \(p_{\lambda}=\pi_{\mathrm{ab}}^{**}(\mathds{1}_{\{\lambda\}}(\mathrm{fix}))\),
\[\omega_{\sigma}(\mathds{1}_{\{\lambda\}}(\mathrm{fix})) =p_{\lambda}(\sigma)=1,\] \[\implies\omega_{h}(\mathds{1}_{\{\lambda\}}(\mathrm{fix})) =\omega_{h}(p_{\sigma})\,(\widetilde{p_{\sigma}}h)(\mathds{1}_{ \{\lambda\}}(\mathrm{fix}))+\omega_{h}(q_{\sigma})\,(\widetilde{q_{\sigma}}h) (\mathds{1}_{\{\lambda\}}(\mathrm{fix}))\] \[\geq\omega_{h}(p_{\sigma})\,\omega_{\sigma}(\mathds{1}_{\{ \lambda\}}(\mathrm{fix}))=\omega_{h}(p_{\sigma})>0,\]
contradicting Proposition 6.2.
However, \(\mathcal{F}(\mathbb{G})\subseteq\mathbb{G}\) is in general not a Pal set:
_Example 6.4_.: Let \(\widehat{S_{4}}\subset S_{5}^{+}\) by:
\[u=\begin{bmatrix}u^{(12)}&0\\ 0&u^{(234)}\end{bmatrix}.\]
Here \(u^{(12)}\in M_{2}(C(\widehat{S_{4}}))\) and \(u^{(234)}\in M_{3}(C(\widehat{S_{4}}))\) are Fourier-type magic unitaries associated with (12) and (234) ([2], Chapter 13). Consider the regular representation:
\[\pi:C(\widehat{S_{4}})\to B(\mathbb{C}^{24}).\]
Consider:
\[\pi(\mathrm{fix})=\pi(2e+(12)+(234)+(243)).\]
The spectrum contains \(\lambda_{\pm}:=(5\pm\sqrt{17})/2\) (see [17]), but consider unit eigenvectors \(x_{2}\) and \(x_{4}\in\mathbb{C}^{24}\) of eigenvalues two and four that give quantum permutations:
\[\varphi_{2}=\langle x_{2},\pi(\cdot)x_{2}\rangle\text{ and }\varphi_{4}= \langle x_{4},\pi(\cdot)x_{4}\rangle,\]
with two and four fixed points. It can be shown that:
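A quick numerical sanity check of these eigenvalue claims is possible (a sketch assuming NumPy; it realises \(\pi\) as the left regular representation of \(S_{4}\) on \(\mathbb{C}^{24}\) and lists the eigenvalues of \(\pi(\mathrm{fix})\), among which \((5\pm\sqrt{17})/2\), \(2\) and \(4\) should appear):

```python
import numpy as np
from itertools import permutations

# Elements of S_4 as tuples g with g[x] the image of x (0-indexed points).
elems = list(permutations(range(4)))
index = {g: i for i, g in enumerate(elems)}

def compose(g, h):
    """(g o h)(x) = g(h(x))."""
    return tuple(g[h[x]] for x in range(4))

def left_regular(g):
    """Permutation matrix of left multiplication by g on C[S_4] = C^24."""
    L = np.zeros((24, 24))
    for h in elems:
        L[index[compose(g, h)], index[h]] = 1.0
    return L

t = (1, 0, 2, 3)      # the transposition (12), 0-indexed
c = (0, 2, 3, 1)      # the 3-cycle (234), 0-indexed
c2 = compose(c, c)    # (243)

# pi(fix) = pi(2e + (12) + (234) + (243)) in the regular representation.
M = 2 * np.eye(24) + left_regular(t) + left_regular(c) + left_regular(c2)

# M is self-adjoint, so the spectrum is real.
print(np.unique(np.round(np.linalg.eigvalsh(M), 6)))
```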
\[\varphi:=\frac{1}{2}\varphi_{2}+\frac{1}{2}\varphi_{4}\]
is strict, that is \(|\varphi(\sigma)|=1\) for \(\sigma=e\) only, and therefore as the convolution in \(\widehat{S_{4}}\) is pointwise multiplication,
\[\varphi^{*k}\to\delta_{e},\]
which is the Haar state on \(C(\widehat{S_{4}})\). The Haar state for finite quantum groups such as \(\widehat{S_{4}}\) is faithful, and so where \(p_{\lambda_{+}}\) is the spectral projection associated with the eigenvalue \(\lambda_{+}\):
\[h_{\widehat{S_{4}}}(p_{\lambda_{+}})>0,\]
which implies that \((\varphi^{*k})_{k\geq 0}\) does not converge to an element with integer fixed points, and so \(\mathcal{F}(\widehat{S_{4}})\) is not a Pal set, and thus neither is \(\mathcal{F}(S_{N}^{+})\) for \(N\geq 4\).
_Example 6.5_.: In the case of \(C(S_{N}^{+})\) (\(N\geq 4\)), the central algebra \(C(S_{N}^{+})_{0}\), generated by the characters of irreducible unitary representations, is commutative [10] and generated by fix, so that \(C(S_{N}^{+})_{0}\cong C([0,N])\) and the central states are given by Radon probability measures.
The quantum permutation 'uniform on quantum transpositions', \(\varphi_{\rm tr}\) from [10], is a central state given by:
\[\varphi_{\rm tr}(f)=f(N-2)\qquad(f\in C(S_{N}^{+})_{0})\]
It has \(N-2\) fixed points (see [17]) but its convolution powers converge to the Haar state \(h\in S_{N}^{+}\), which is not in \(\mathcal{F}(S_{N}^{+})\) by Proposition 6.2.
### Acknowledgement
Some of this work goes back to discussions with Teo Banica. Indeed the proof of Lemma 3.3 is due to Teo. Thanks also to Matthew Daws for helping with Section 1.4, Stefaan Vaes with Remark 1.5, and Ruy Exel with the argument in Theorem 2.23 (ii).
|
2308.00154 | PATRONoC: Parallel AXI Transport Reducing Overhead for Networks-on-Chip
targeting Multi-Accelerator DNN Platforms at the Edge | Emerging deep neural network (DNN) applications require high-performance
multi-core hardware acceleration with large data bursts. Classical
network-on-chips (NoCs) use serial packet-based protocols suffering from
significant protocol translation overheads towards the endpoints. This paper
proposes PATRONoC, an open-source fully AXI-compliant NoC fabric to better
address the specific needs of multi-core DNN computing platforms. Evaluation of
PATRONoC in a 2D-mesh topology shows 34% higher area efficiency compared to a
state-of-the-art classical NoC at 1 GHz. PATRONoC's throughput outperforms a
baseline NoC by 2-8X on uniform random traffic and provides a high aggregated
throughput of up to 350 GiB/s on synthetic and DNN workload traffic. | Vikram Jain, Matheus Cavalcante, Nazareno Bruschi, Michael Rogenmoser, Thomas Benz, Andreas Kurth, Davide Rossi, Luca Benini, Marian Verhelst | 2023-07-31T21:08:37Z | http://arxiv.org/abs/2308.00154v1 | PATRONoC: Parallel AXI Transport Reducing Overhead for Networks-on-Chip targeting Multi-Accelerator DNN Platforms at the Edge
###### Abstract
Emerging deep neural network (DNN) applications require high-performance multi-core hardware acceleration with large data bursts. Classical network-on-chips (NoCs) use serial packet-based protocols suffering from significant protocol translation overheads towards the endpoints. This paper proposes PATRONoC, an open-source fully AXI-compliant NoC fabric to better address the specific needs of multi-core DNN computing platforms. Evaluation of PATRONoC in a 2D-mesh topology shows 34 % higher area efficiency compared to a state-of-the-art classical NoC at 1 GHz. PATRONoC's throughput outperforms a baseline NoC by 2-8\(\times\) on uniform random traffic and provides a high aggregated throughput of up to 350 GB/s on synthetic and DNN workload traffic.
Networks-on-chip, multi-core DNN platforms, AXI, high-performance systems
## I Introduction
Deep neural networks (DNNs) have become one of the primary workloads in computing platforms of data centers and edge devices in the internet of things (IoT). Given the high proliferation of DNN workloads, research into designing and developing high-performance specialized hardware accelerators for DNN has gained much interest, as evidenced by the several DNN accelerators presented in the past decade [1]. In the quest to support the ever-growing requirements of DNN workloads, hardware architectures have evolved from small single-core implementations to homogeneous [2] and heterogeneous [3, 4, 5] multi-core hardware implementations1. The trend of going multi-core can bring performance gains. However, it also brings new challenges, such as resource partitioning, workload mapping, complex hardware implementations, memory hierarchy design, and data communication bottlenecks between cores.
Footnote 1: In this paper, the terms core and accelerator are used interchangeably.
Multi-CPU-based general-purpose computing traditionally uses networks-on-chip (NoCs) and their various optimizations for inter-CPU data communication. Many topologies exist to balance the scalability of CPU cores, throughput, latency, and area impact of the NoC. Moreover, NoC protocols are designed for packetization and serialization over fairly narrow channels between cores (e.g., 32 bits), which reduces the number of routing resources needed. However, this implies additional hardware at the network's edges for protocol translation and serialization/deserialization (SERDES) from standard channel-oriented protocols at the endpoints (e.g., AXI4 or AXI5) to the NoC protocol. Moreover, due to their serialized nature, these NoCs need a high clock frequency to meet the bandwidth requirements, thus needing clock domain crossing hardware.
Such traditional narrow-channel NoCs work well for inter-CPU cache traffic. However, the traffic of DNN workloads is mostly deterministic, with large bursts of non-coherent data movements requiring high bandwidth interconnection to achieve high performance and low latency. To achieve high bandwidth, typical solutions either 1) use a narrow NoC and operate it at 2-8\(\times\) the core frequency [6] or 2) build a wide NoC with multiple channels [7]. The latter solution gains traction as advanced technology scaling enables the area-efficient integration of more and more on-chip interconnect resources [7, 8]. However, modern NoCs need more than just wide links to answer the needs of DNN workloads, as packet-based serial NoC protocols are inadequate for workloads that rely on burst-based traffic.
This paper proposes a template for burst-based homogeneous AXI-compliant NoCs to address the requirements of emerging multi-core DNN platforms and to tackle the challenges of packet-based serial NoCs. This work builds upon the open-source elementary AXI building blocks of [9], which focuses on crossbar-based topologies, towards a fully-configurable open source AXI-based NoC framework, PATRONoC. PATRONoC is subsequently used in a mesh topology and extensively benchmarked to demonstrate the benefits of having AXI-based NoCs. As such, this work makes the following contributions:
* We present an open-source parameterizable AXI-compliant NoC designed for providing high bandwidth links for multi-core DNN computing platforms. The NoC is available at [https://github.com/pulp-platform/axi](https://github.com/pulp-platform/axi).
* We demonstrate that using an AXI protocol for the NoC creates a fully homogeneous network interface to avoid high cost of protocol translation and provides a standard plug-and-play support for ease of integration.
* We show that using the AXI protocol end-to-end, a multi-channel, wide NoC with burst support and high bandwidth between cores as well as to-and-from memory can be supported, thereby improving performance of DNN applications on multi-core platforms.
The rest of the paper is organized as follows. Section II discusses the architectural overview of the proposed NoC,
followed by details of the NoC's physical implementation with GlobalFoundries' modern 22FDX technology in Section III. We evaluate our NoC with synthetic and real traffic patterns extracted from DNN workloads in Section IV. Subsequently, we compare our work against other modern NoC solutions in Section V before presenting our conclusions in Section VI.
## II Interconnect Architecture of PATRONoC
This section provides architectural and physical implementation details of PATRONoC for a mesh topology. NoCs are built with many elementary routing elements, each forwarding data from the ingress ports to the egress ports according to their topology-specific routing table. In this work, we extend the AXI crosspoint (XP) from [9], shown in Fig. 1 (bottom), allowing it to be used as PATRONoC's routing element. The XP consists of a configurable crossbar (XBAR) switch and ID remappers to ensure isomorphic XP ports. It is fully AXI-compliant and supports bursts, multiple outstanding transactions, and transaction ordering. We used the XP as the building block for a homogeneous, 2D mesh topology NoC with widely configurable dimensions, as shown in Fig. 1. Although this work uses the 2D mesh as a proof-of-concept, any regular topology, such as a torus, butterfly, or ring, can also be modularly built using our building blocks. We focused on the mesh due to its popularity in research and its remarkable simplicity, scalability, and efficiency [7]. Fig. 1 shows the two mesh topologies, \(2\times 2\) (top-left) and \(4\times 4\) (top-right), used to evaluate PATRONoC.
The meshes are built by instantiating the XPs in a 2D grid and connecting the NESW-bound links to neighboring XPs. AXI masters and slaves can be connected as NoC endpoints at each XP. A common AXI master is a core or a DNN accelerator, and AXI slaves can be memory or I/O tiles. Each XBAR is configured with a static routing table used for deterministic dimension-ordered routing in the mesh. Specifically, PATRONoC uses a source-based YX routing scheme, as shown with the green arrows in Fig. 1, to reduce the complexity of the route calculation step of the crosspoints. In this algorithm, a transaction is first passed forward in the same column until it reaches the same row as the destination XP and then passed forward in the same row until it reaches the destination XP. An automated script generates the address-based routing table for each XP which is used for routing the AXI transactions based on their destination address.
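As an illustration of this dimension-ordered YX scheme (a Python sketch, not the actual RTL or routing-table generator; the \((x,y)\) coordinates and port names are hypothetical), each crosspoint only needs to compare its own position with the destination's to pick an output port:

```python
def yx_next_port(cur, dst):
    """Dimension-ordered YX routing: move along Y (the column) first, then X (the row).

    cur and dst are (x, y) grid coordinates of the current and destination
    crosspoints; returns the output port ('N', 'S', 'E', 'W'), or 'LOCAL'
    once the destination crosspoint has been reached.
    """
    (cx, cy), (dx, dy) = cur, dst
    if cy < dy:
        return 'N'      # wrong row: keep moving along the column
    if cy > dy:
        return 'S'
    if cx < dx:
        return 'E'      # correct row reached: move along the row
    if cx > dx:
        return 'W'
    return 'LOCAL'      # arrived: hand the transaction to the attached endpoint

# Example: the hops taken from crosspoint (0, 0) to (2, 3) on a 4x4 mesh.
moves = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}
cur, hops = (0, 0), []
port = yx_next_port(cur, (2, 3))
while port != 'LOCAL':
    hops.append(port)
    cur = (cur[0] + moves[port][0], cur[1] + moves[port][1])
    port = yx_next_port(cur, (2, 3))
print(hops)  # all column moves come first, then the row moves
```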
PATRONoC is highly parameterizable, taking advantage of the flexibility of the AXI protocol. The parameters that can be tuned at design time are shown in Table I. The number of AXI masters and slaves indicate the number of connected cores and memory/IO tiles in the design. Both ranges for possible number of AXI masters and slaves are valid for the N\(\times\)M 2D mesh and are topology-dependent. For example, in a concentrated mesh, multiple masters and slaves can connect to the same XP. Furthermore, the data width (DW) can be tuned to meet the system's bandwidth requirements, while the address width (AW) can be tuned to support a larger global address space.
The AXI protocol identifies transactions with IDs used by the master endpoints to distinguish independent transactions. The number of unique IDs can be configured using the ID width (IW) and increases with the number of masters. All transactions from the same master with the same ID must remain ordered, but there is no ordering requirement between transactions with different IDs. Multiple outstanding transactions enable the master to hide the memory latency. A higher max. number of outstanding transactions (MOT) improves performance, as all AXI building blocks can support multiple in-flight transactions, preventing bandwidth degradation when the NoC is saturated.
The XBAR connectivity parameter configures the XP to either connect all slave ports to all master ports in the case of a fully-connected network or partially connect slaves and masters in the case of a mesh or other non-point-to-point topologies. The last parameter is the register slice (cut), shown in Fig. 1, that can be optionally inserted at design time on some or all AXI channels, improving the timing of the design at the cost of increased latency. The rest of the paper evaluates PATRONoC
Fig. 1: PATRONoC instances as a \(2\times 2\) mesh (left) and a \(4\times 4\) mesh (right). The AXI masters and slaves are not shown in the \(4\times 4\) mesh for ease of readability. Elementary blocks used for the NoC are also shown: XP (bottom-left) and XBAR (bottom-right). Red XP is 3-master and 3-slave, light blue XP is 4-master and 4-slave, and, dark blue XP is 5-master and 5-slave.
in \(2\times 2\) and \(4\times 4\) mesh topologies with multiple configurations based on the DW, AW, IW, and MOT parameters.
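For concreteness, the design-time parameters above can be collected into a small configuration record (a hypothetical Python sketch, not the actual SystemVerilog package of the released code); the two instances shown correspond to the "slim" and "wide" \(4\times 4\) configurations evaluated in Section IV:

```python
from dataclasses import dataclass

@dataclass
class PatronocCfg:
    aw: int            # AXI address width (bits)
    dw: int            # AXI data width (bits)
    iw: int            # AXI ID width (bits); grows with the number of masters
    mot: int           # max. outstanding transactions
    mesh: tuple        # (rows, cols) of the 2D mesh
    cut_channels: bool = True  # optional register slices on the AXI channels

# The two 4x4 instances used in the performance evaluation.
slim = PatronocCfg(aw=32, dw=32,  iw=4, mot=8, mesh=(4, 4))
wide = PatronocCfg(aw=32, dw=512, iw=4, mot=8, mesh=(4, 4))
```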
## III Implementation Results
This section provides implementation results in terms of complexity and scalability of the NoC and its parameters. The implementation is done in GlobalFoundries' 22FDX technology node using a ten-layer metal stack. We used eight-track standard cells of SLVT/LVT type, characterized at worst-case scenario (SS/0.72 V/125 \({}^{\circ}\)C) for timing analysis. The designs from Section II are synthesized using Synopsys' Design Compiler 2022.03 in topographical mode, taking physical endpoint placement constraints into account. All designs achieve a clock frequency of 1 GHz at the worst-case condition corner with a register slice on every AXI channel.
The \(2\times 2\) PATRONoC mesh, shown in Fig. 1 (top-left), is first synthesized with different AW and DW parameters, keeping \(\mathrm{IW}=2\,\mathrm{bits}\), \(\mathrm{MOT}=1\), and other parameters at default values. Fig. 2 shows the area versus bisection bandwidth (DW-dependent) of the mesh for the different configurations. As expected, the design area scales up with increasing AW and DW, taking up a mere 174 kGE for the smallest configuration of \(\mathrm{AW}=32\,\mathrm{bits}\) and \(\mathrm{DW}=32\,\mathrm{bits}\). The biggest design shown in Fig. 2, with \(\mathrm{DW}=512\,\mathrm{bits}\), takes an on-chip area of 830 kGE.
The benefit of having a homogeneous NoC is evident when the design is compared to classic NoC solutions. This work uses ESP-NoC [10] as our baseline NoC. ESP-NoC is a state-of-the-art open-source packet-based NoC including six planes for coherent and non-coherent traffic for multi-core heterogeneous systems. Synthesis results showing the area of the \(2\times 2\) ESP-NoC mesh in its 32-bit- and 64-bit-flit configurations are presented in Fig. 2. Compared to PATRONoC's configuration with \(\mathrm{AW}=32\,\mathrm{bits}\) and \(\mathrm{DW}=64\,\mathrm{bits}\), ESP-NoC takes up 68 % more area to provide only 25 % more throughput (five 32-bit wide planes providing 160 Gbit/s). The area overhead can be attributed to ESP-NoC's multiple planes with large protocol translation interfaces at each endpoint. The advantage of PATRONoC is much more evident in Fig. 2 when comparing its area efficiency (slope) with ESP-NoC. We define area efficiency as the bisection bandwidth normalized to the standard cell area, providing a measure of NoC performance at a given complexity. Fig. 2 shows that PATRONoC is closer to the Pareto front, providing better area efficiency compared to the ESP-NoC in 32-bit and 64-bit configurations.
We implement the \(4\times 4\) mesh shown in Fig. 1 (top-right) to show the scalability of PATRONoC. For building the \(4\times 4\) mesh, the IW of the AXI blocks is increased to 4 to support 16 unique IDs required for 16 masters. The results of the area and bisection bandwidth of this mesh are summarized in Fig. 3 (left). As the mesh dimensions change, the area overhead of the NoC becomes approximately 32 % compared to the \(2\times 2\) mesh in similar AW and DW configurations, leading to a drop in area efficiency by 25 %. Increasing the MOT improves the performance of the NoC at the cost of larger complexity in terms of area. Fig. 3 (right) shows the tradeoff between MOT and the area of the \(4\times 4\) PATRONoC with \(\mathrm{DW}=64\,\mathrm{bits}\). While this work focuses more on performance and area aspects of the NoC, the power consumption at 1 GHz for the \(4\times 4\) PATRONoC is 45 mW (for \(\mathrm{DW}=32\,\mathrm{bits}\)) and 171 mW (for \(\mathrm{DW}=512\,\mathrm{bits}\)) on uniform random traffic. This accounts for less than 10 % of the projected power consumption of a complete platform, assuming that a typical DNN accelerator connected to one NoC node uses 100 mW to 200 mW.
## IV Performance Evaluation
PATRONoC's performance is characterized in terms of throughput versus injected load through a cycle-accurate register-transfer level (RTL) simulation. This section evaluates the performance of the \(4\times 4\) PATRONoC mesh in two configurations: 1) as a slim NoC with \(\mathrm{DW}=32\,\mathrm{bits}\) and 2) as a wide NoC with \(\mathrm{DW}=512\,\mathrm{bits}\), both with \(\mathrm{AW}=32\,\mathrm{bits}\), \(\mathrm{IW}=4\,\mathrm{bits}\), and \(\mathrm{MOT}=8\). Each master is a DMA engine, and the slaves are AXI-capable memories that cater to the DMA requests. The configurable and workload-specific maximum burst length is used by the RTL model of the DMA engine to create AXI-compliant bursts (adhering to address boundaries and max number of beats) for the NoC. In our evaluation framework, the workload-specific burst length is randomized within a user-defined range to emulate a random burst length with a random source and destination address, while the bursts in the NoC are subject to AXI compliance. All analyses assume a clock frequency of 1 GHz for the endpoints and the NoCs.
### _Uniform Random Traffic_
The Noxim simulator [11] is used to set the baseline NoC performance, taking a \(4\times 4\) mesh with the default XY routing, 32-bit flits, and eight flits per packet to closely match the slim
Fig. 3: Implementation results showing area vs. bisection bandwidth of PATRONoC in \(4\times 4\) mesh configurations (left). Configurations are represented as AXI_AW_DW_IW. Area vs. MOT tradeoff for \(\mathrm{DW}=64\,\mathrm{bits}\) (right).
Fig. 2: Implementation results showing area versus bisection bandwidth of PATRONoC and ESP-NoC [10] in \(2\times 2\) mesh configurations. PATRONoC’s configurations are represented as AXI_AW_DW_IW.
PATRONoC configurations. Fig. 4 shows the non-exhaustive characterization of the NoC on this traffic in two configurations: 1) a standard single virtual channel with 4 flits per router buffer for a compact implementation, and 2) four virtual channels with 32 flits per router buffer for high performance. The saturation throughput of the Noxim NoCs are 1.6 GiB/s and 2.25 GiB/s, respectively. Increasing the number of virtual channels (VCs) [12] and flits per buffer improves the NoC's performance, but also increases router complexity.
Fig. 4 also shows the NoC throughput for the uniform random traffic running on the 4 x 4 slim PATRONoC mesh. It is clear that PATRONoC is beneficial for burst traffic. At small transfer lengths of less than 4 B, similar to normal CPU traffic, PATRONoC performs equivalently to the Noxim NoC with 1.5 GiB/s throughput. However, when using longer bursts, PATRONoC's performance improves and reaches up to 19 GiB/s aggregated throughput at DMA burst lengths up to 10 KiB and 64 KiB. This provides an improvement of 8.4\(\times\) over the saturation throughput achieved by the best Noxim NoC configuration (4 VCs, buffers 32-flit deep), showing that PATRONoC largely outperforms it by using bursts.
### _Synthetic Traffic_
Fig. 5 shows the three synthetic traffic patterns considered: 1) all global access, 2) max two-hop access, and 3) max single-hop access. We characterize the \(4\times 4\) PATRONoC mesh in both slim and wide configurations with the synthetic patterns.
_a.) All global access_: In this traffic pattern, all the AXI master and DMA endpoints communicate with a single slave endpoint leading to predominantly global accesses. Fig. 5a) shows this traffic pattern on the \(4\times 4\) mesh, where the endpoint \((2,1)\) acts as the AXI slave. _b.) Max two-hop access_: In this use case, the AXI slave accesses are distributed to four endpoints \((1,1)\), \((1,2)\), \((2,1)\), and \((2,2)\). This considers architectures that have a distributed shared L2/L1 memory, either uniform or non-uniform. The 16 AXI masters can communicate to any of the four endpoints, but in this case, the masters are restricted to only communicate to slaves which are a maximum of two hops away. _c.) Max single-hop access_: In this traffic pattern, the AXI slaves are further distributed across eight endpoints along the edges except for the corners. The 16 masters are restricted to access only slaves which are at most one hop away. The last two cases are considered because in traffic from many DNN workloads, data scheduling can be done on nearby cores to prevent long latency and low-performance data communication.
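The three patterns can be summarised by which slaves each master may address (a Python sketch with hypothetical grid coordinates following Fig. 5; hop counts are taken as Manhattan distance on the mesh):

```python
from itertools import product

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

masters = list(product(range(4), range(4)))     # one master per crosspoint

# a) all global access: every master targets the single slave at (2, 1)
global_slaves = {m: [(2, 1)] for m in masters}

# b) max two-hop access: four central slaves, masters restricted to <= 2 hops
slaves_b = [(1, 1), (1, 2), (2, 1), (2, 2)]
two_hop = {m: [s for s in slaves_b if manhattan(m, s) <= 2] for m in masters}

# c) max single-hop access: eight edge (non-corner) slaves, <= 1 hop
slaves_c = [(x, y) for (x, y) in masters
            if (x in (0, 3)) != (y in (0, 3))]  # on an edge but not a corner
one_hop = {m: [s for s in slaves_c if manhattan(m, s) <= 1] for m in masters}
```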
The slim NoC can be used in architectures that are area-constrained but require more throughput than what most traditional NoCs can provide. Fig. 6 (left) shows the NoC utilization, with respect to bisection bandwidth, of the slim NoC on the three synthetic patterns at different burst sizes. Starting with traffic pattern a.), the slim NoC provides a minimum of 1.5 GiB/s of throughput with short bursts. This is approximately 4.7 % NoC utilization considering the slim NoC has a 32 GiB/s bisection bandwidth. The access pattern limits the traffic to a few links of the NoC and, thus, a low utilization is expected. The throughput improves considerably with increasing burst length and reaches a maximum of 6 GiB/s for burst lengths up to 64 KiB, providing a NoC utilization of around 18.75 %. For pattern b.), the NoC performs similarly to pattern a.) for short burst lengths. However, the aggregated throughput improves considerably with larger bursts and saturates at 17.2 GiB/s for burst lengths up to 10 KiB and 64 KiB. This leads to a higher NoC utilization of about 53.75 %, showing that all mesh links can be utilized more efficiently. Similar to pattern b.), the pattern c.) under-performs at small bursts but the aggregated saturation throughput at larger bursts improves to 22.5 GiB/s for bursts up to 64 KiB with a NoC utilization of 70.3 %.
The wide NoC is geared towards high-bandwidth large-burst multi-core DNN-workload traffic. A significant performance gain can be achieved with such a wide NoC, but being parameterizable means that alternative DWs between 32 bits and 512 bits can also be considered by designers to find an optimal size for given system requirements. Fig. 6 (right) shows the NoC utilization characteristic of the wide NoC on the synthetic access patterns with different burst sizes. For the traffic pattern a.), the wide NoC can only achieve a utilization of 0.29 % at small bursts of up to 4 B, providing a maximum throughput of 1.5 GiB/s (bisection bandwidth of 512 GiB/s). As seen with the slim NoC, this is an expected performance degradation with this access pattern. The degradation in NoC utilization is further exacerbated by the wide DW combined with short burst lengths. The throughput improves, however, with larger burst sizes of up to 64 KiB and reaches saturation at 95 GiB/s with 18.55 % NoC utilization. Both patterns b.) and c.) result in low throughput and utilization with small burst sizes. The aggregated throughput improves at larger bursts with lengths up to 10 KiB and 64 KiB
Fig. 4: Uniform Random Traffic with Poisson distribution using Noxim simulator for a 4 x 4 2D mesh and uniform random traffic on the slim PATRONoC with increasing DMA burst length.
Fig. 5: Synthetic traffic patterns for the performance evaluation.
reaching 255 GiB/s (49.8 % utilization) and 345 GiB/s (67.4 % utilization) for the patterns b.) and c.), respectively.
### _DNN Workload Traffic_
Synthetic traffic does not capture the full scope of access patterns in real multi-core hardware architectures running DNN workloads. In order to characterize the NoC in more realistic use cases, this section evaluates three emulated CNN-based workloads: a.) distributed training, b.) parallelized convolutions, and c.) pipelined convolutions, shown in Fig. 7. We use GVSoC [13] to generate real traffic patterns for the RTL simulation. GVSoC is an open-source, highly configurable, and event-driven simulator for heterogeneous RISC-V-based SoC platforms used for full-system software development and performance evaluation.
_a.) Distributed training_: For this workload, we replicate and deploy a ResNet-34 (90 % channel shrink factor) distributed training model for the ImageNet dataset on 16 cores. In terms of data communication, a mix of L2 to L1 (core), L1 (core) to L2, and L1 (core) to L1 (core) transfers are needed. _b.) Parallelized convolution_[14]: This is a CNN-based inference workload in which the layers of the network and inputs are tiled and deployed on separate cores. This is a pure L2 to L1 (core) and L1 (core) to L2 memory traffic pattern and has no inter-core communication. _c.) Pipelined convolution_[14]: Depth-first or pipeline dataflow is used in many new DNN platforms to efficiently run CNN-based inference. In this scheme, layers are executed in parallel, in a pipelined way across the different cores to reduce the data traffic to higher memory levels [15]. This workload has mostly L1 (core) to L1 (core) traffic and only cores 0 and 15 do L1 (core) to/from L2 transfers.
Fig. 8 shows the evaluation results of the \(4\times 4\) slim and wide NoCs running the three DNN workloads. For the slim NoC, the parallelized convolution--which consists of mostly core to/from shared memory transfers--reaches a throughput of 4.27 GiB/s. For the training workload, the throughput is better than the parallelized convolution workload as it involves a mix of core to/from shared memory and core-to-core transfers. On the pipelined convolution, which consists of predominantly core-to-core traffic, the NoC achieves a high 19.17 GiB/s throughput. Similar trends are reported for the \(4\times 4\) PATRONoC wide NoC shown in Fig. 8 (right), but at much higher throughput, with pipeline convolution reaching a peak throughput of 310 GiB/s.
## V Related Work
NoCs are an active area of research, and much effort has gone into optimizing topologies, routing algorithms, flow control schemes, and the microarchitecture of routers [12, 26, 27]. Multi-core (CPU) architectures have been exploiting these optimizations of NoCs for many decades. However, NoCs for multi-accelerator DNN platforms are still in a nascent stage.
Fig. 8: Throughput analysis for DNN workload traffic on the PATRONoC.
Fig. 6: NoC utilization at maximum injected load for the synthetic random traffic running on the slim and wide PATRONoC using all global access, max 2 hop, and max 1 hop traffic patterns with different DMA burst sizes.
Fig. 7: Overview of the DNN workloads used for PATRONoC evaluation. FWD and BWD in (a) represents the forward and backward propagation workloads, respectively, used in DNN training.
Table II provides a brief overview of state-of-the-art NoCs used in multi-core DNN platforms compared to PATRONoC. PATRONoC is the only design that provides open-source AXI-compliant homogeneous burst-based configurable NoC for multi-core DNN platforms. Moreover, PATRONoC outperforms most of the designs in terms of throughput, with the exception of [17], which uses a bigger \(8\times 8\) concentrated mesh (CMesh) topology with primarily local access patterns. Moreover, its results are taken from the gem5 simulator [28], and the RTL of the design is not openly available. Using a CMesh topology for PATRONoC would similarly improve its performance.
OpenSoC Fabric [22] is among the few open-source NoCs with a custom non-coherent NoC protocol. It provides a socket to plug in AXI-Lite-based endpoints. Unfortunately, AXI-Lite does not support the bursts needed by high-performance systems. The ESP framework [4, 10] also provides an open-source implementation of its multi-plane NoC, supporting coherent and non-coherent endpoints. The NoC is a 2D mesh topology and uses a custom packet-based protocol. We used ESP-NoC as a baseline for comparison with PATRONoC. Section III shows that PATRONoC is more area efficient and provides higher bandwidth owing to its homogeneous network. BaseJump Manycore is an open-source non-coherent NoC based on a 2D mesh, used in the Celerity chip [23]. These NoCs are generally limited to meshes and use classical packet-based NoC protocols, which lead to high area overhead and low bandwidth. In comparison, PATRONoC can be used to design any topology while remaining highly parameterizable.
## VI Conclusion
This work presented the first homogeneous AXI-compliant network-on-chip architecture, building a complete open-source infrastructure for generating various NoC topologies. Using the benefits of the burst-based AXI protocol, PATRONoC targets the emerging field of multi-core DNN platforms requiring high-bandwidth, burst-based traffic. The NoC provides a significant performance gain over state-of-the-art NoCs by exploiting its burst capability and achieves up to 310 GiB/s of aggregated throughput on DNN workloads. The work provides insight into how different design parameters affect the performance and complexity of the NoC. It also enables future work exploring different NoC topologies that might be suited to emerging DNN platforms.
|
2301.13345 | Differentiable Entailment for Parameter Efficient Few Shot Learning | Few-shot learning allows pre-trained language models to adapt to downstream
tasks while using a limited number of training examples. However, practical
applications are limited when all model parameters must be optimized. In this
work we apply a new technique for parameter efficient few shot learning while
adopting a strict definition of parameter efficiency. Our training method
combines 1) intermediate training by reformulating natural language tasks as
entailment tasks \cite{wang_entailment_2021} and 2) differentiable optimization
of template and label tokens \cite{zhang_differentiable_2021}. We quantify the
tradeoff between parameter efficiency and performance in the few-shot regime
and propose a simple model agnostic approach that can be extended to any task
By achieving competitive performance while only optimizing 3\% of a model's
parameters and allowing for batched inference, we allow for more efficient
practical deployment of models. | Ethan Kim, Jerry Yang | 2023-01-31T00:31:11Z | http://arxiv.org/abs/2301.13345v1 | # Differentiable Entailment for Parameter Efficient Few Shot Learning
###### Abstract
Few-shot learning allows pre-trained language models to adapt to downstream tasks while using a limited number of training examples. However, practical applications are limited when all model parameters must be optimized. In this work we apply a new technique for parameter-efficient few-shot learning while adopting a strict definition of parameter efficiency. Our training method combines 1) intermediate training by reformulating natural language tasks as entailment tasks Wang et al. (2021) and 2) differentiable optimization of template and label tokens Zhang et al. (2021). We quantify the tradeoff between parameter efficiency and performance in the few-shot regime and propose a simple, model-agnostic approach that can be extended to any task. By achieving competitive performance while optimizing only 3% of a model's parameters and allowing for batched inference, we allow for more efficient practical deployment of models.
## 1 Introduction
Large pre-trained language models have demonstrated adaptability to solve natural language processing (NLP) tasks. Typically, such language models are adapted to a downstream task through fine-tuning Howard and Ruder (2018). Although fine-tuning improves performance on downstream tasks, it is costly because it relies on updating every parameter of the model (355 million in the case of roBERTa) and requires storing a separate copy of the model for every downstream task. These storage requirements can become prohibitive, thus necessitating research into more parameter-efficient methods. Alternative fine-tuning methods that update fewer parameters can have other trade-offs. For example, adapter tuning fine-tunes a small number of adapter parameters inserted between the transformer layers Houlsby et al. (2019), but requires optimizing external parameters and still fine-tunes on the entire training dataset. Other methods have explored fine-tuning in the few-shot learning case, where a limited number of labeled training samples are used for fine-tuning. These approaches have the disadvantages of relying on extreme model size Brown et al. (2020); Lester et al. (2021), optimizing all model parameters Wang et al. (2021); Zhang et al. (2021), or using external architectures Houlsby et al. (2019); Li and Liang (2021); Gao et al. (2021). In this project, we present a simple, extensible method that improves few-shot performance without any extra parameters by combining two approaches: 1) leveraging trainable prompt pseudotokens rather than updating all the model parameters Zhang et al. (2021), and 2) reformulating natural language processing tasks as entailment tasks and applying an intermediate training step, enabling better generalization to downstream tasks Wang et al. (2021). Our major contributions are as follows.
* Our method achieves competitive few-shot performance while optimizing only 3% of a model's parameters, reducing storage costs by a factor of 30.
* We introduce a strict definition of parameter efficiency which extends the practical uses of few shot learning by allowing batching of computation across tasks.
## 2 Related Work
### Finetuning
The standard method for fine-tuning Masked Language Models (MLMs) like BERT applies a classification head to the [CLS] token representation. The language model learns to update the [CLS] representation to better solve the downstream task. A number of reformulations have been proposed seeking to increase performance and improve parameter efficiency.
### Prompting
Language models learn a general set of abilities that can be adapted to specific downstream tasks. One method is to use task-specific natural language prompts to guide the language model output. GPT-3, for example, uses prompts and in-context examples to achieve good few-shot performance on various tasks Brown et al. (2020). GPT-3 leverages extreme scale (175 billion parameters) to adapt to natural language prompts without fine-tuning. Prompting can be particularly useful for few-shot learning in the low-data regime. For some tasks, a well-designed prompt can be shown to be equivalent to hundreds or thousands of additional labeled training points Le Scao and Rush (2021). AUTOPROMPT uses a gradient-based search to optimize a discrete prompt Shin et al. (2020). LMBFF uses an auxiliary language model to generate a set of candidate prompts and chooses the best candidate Gao et al. (2021).
### Pattern Exploiting Training
One alternative to standard fine-tuning is to model the output as a cloze completion task where the output is the model's representation of a masked input token Schick and Schutze (2021). Intuitively, this approach works well because it more closely matches the training process for MLMs. In the pre-training task for models such as BERT and roBERTa, the model is asked to predict the identity of a masked token based on the hidden representations of neighboring tokens.
Further work has extended this approach to use natural language prompts to guide the cloze output Gao et al. (2021). Additional work has focused on training the prompt tokens in continuous space by optimizing a set of prompt pseudotokens Li and Liang (2021); Liu et al. (2021); Lester et al. (2021). Additionally, in the DART method, the tokens used as classification labels can be optimized Zhang et al. (2021).
### Entailment Reformulation
Work from Wang et al. (2021) focuses on improving language model performance by formulating NLP tasks as entailment tasks. Fundamentally, entailment seeks to determine whether, for a pair of inputs \((S_{1},S_{2})\), the first sentence entails or contradicts the second. Most standard classification tasks in NLP can be reformulated as entailment tasks. For example, a sentiment analysis task can be framed as an entailment task using the following template:
\[[\text{CLS}]S_{1}[\text{SEP}]S_{2}[\text{EOS}], \tag{1}\]
with \(S_{2}\) = "It was great" as the entailment prompt. Instead of using the [CLS] token representation of \(S_{1}\) to classify the review as positive or negative as in standard fine-tuning, we instead concatenate the text with the prompt and use the [CLS] token representation of the concatenated sequence to denote whether the first sentence entails the second.
For multi-class classification problems we construct a different input for every class and take the label as the class with the highest entailment score.
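As a minimal illustration of this reformulation (not the exact training setup used in this work), the sketch below scores class-specific entailment hypotheses with an off-the-shelf NLI checkpoint from the transformers library; the checkpoint name and the entailment label index are assumptions made for the example.

```python
# Sketch: scoring a task recast as entailment with a pre-trained NLI checkpoint.
# Assumption: roberta-large-mnli orders its labels (contradiction, neutral, entailment).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

ENTAILMENT_IDX = 2  # assumed index of the "entailment" label for this checkpoint

def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, ENTAILMENT_IDX].item()

# Binary sentiment as entailment: one hypothesis per class, pick the highest score.
review = "The film was a complete waste of two hours."
hypotheses = {"positive": "It was great.", "negative": "It was terrible."}
predicted = max(hypotheses, key=lambda c: entailment_score(review, hypotheses[c]))
```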
A key to the success of the entailment approach from Wang et al. (2021) is an intermediate training step where the pre-trained language model is fine-tuned on a natural language inference (NLI) task like MNLI. Intuitively, the model can be adapted to be good at one entailment task and then generalize to perform well on other reformulated entailment tasks.
### Parameter Efficiency
Related works adopt various, sometimes contradictory, definitions of parameter efficiency when applied to language model fine-tuning. Broadly, these definitions can be grouped into several categories:
1. reducing the number of model parameters necessary to achieve good few shot adaptability
2. optimizing a small subset of the total model parameters
3. avoiding external parameters or changes to the model architecture
Some works on few-shot learning explore techniques allowing smaller models to learn robustly Wang et al. (2021). Large models such as GPT-3 with 175 billion parameters can take advantage of their scale to perform well at few-shot in-context learning Brown et al. (2020). A technique can be parameter efficient if it allows similar results to be achieved with a smaller language model, e.g. a 355 million parameter roBERTa model rather than the 175 billion parameter GPT-3 or 11 billion parameter T5.
Parameter efficiency can also aim to optimize a smaller number of task specific parameters while keeping most of the language model parameters
frozen. Adapter tuning inserts trainable layers between the frozen layers of a Transformer language model [14]. Prompt tuning optimizes a small set of trainable input tokens while keeping the pre-trained Transformer layers frozen [10]. Lite Self Training (LiST) freezes most of the encoder parameters and only trains a small number of adapter parameters [23]. LoRA adds low-rank trainable matrices between transformer layers [14] while freezing the pretrained model.
Other works define parameter efficiency as the lack of a need for parameters external to the model being fine-tuned. Part of the motivation for differential prompt tuning [11] is that it directly optimizes trainable pseudotokens without the need for an external model such as the LSTM in P-tuning [15]. Such approaches are advantageous as they require no modifications to a pre-trained model's architecture and do not add additional inference time like adapters. Delta Tuning explores in depth the performance of different parameter-efficient approaches at different model scales and in combination with one another [13].
We focus on parameter efficiency in the true few-shot learning regime. Therefore, we do not take advantage of any additional unlabeled training data. Iterative PET, in contrast, pseudolabels unlabeled samples to provide extra training examples to a model [12]. LiST iteratively trains a student model on data pseudolabeled by a teacher model [23]. However, these semi-supervised learning approaches require extra unlabeled training data as well as additional training computation compared to true few-shot learning.
## 3 Approach
Our main approach is shown in Figure 1. We convert all NLP tasks to the entailment format and train few shot models from an intermediate training checkpoint. The entailment approach outlined in [23] performs traditional fine-tuning and updates all model parameters via gradient descent. Instead of performing the computationally expensive update step on all model parameters, our approach fine-tunes only the prompt and label tokens in continuous space while keeping the main language model frozen. By using more expressive pseudotokens as part of our prompt and by training only the input parameters, we achieve a parameter efficient few shot learning method with competitive few-shot performance.
### Pseudotokens
With discrete tokens, the label template tokens are either chosen manually or determined through a search over tokens in a discrete space. In comparison, our label descriptions are optimized in continuous space via back-propagation and hence can attain more expressive, fine-grained representations to prompt a model for a certain task. Formally, we define a set of pseudotokens \(\mathcal{T}\notin\mathcal{V}\) outside of the normal vocabulary. The pseudotoken embedding \(h(\mathcal{T})\) is a trainable set of parameters that are optimized via backpropagation. For a given input we might have the following prompt:
\[S_{1}[\text{SEP}]\ \mathcal{T}_{0}\mathcal{T}_{1}\mathcal{T}_{2}\text{ it was [LABEL]}\]
We differentiably optimize prompting pseudotokens. We also experiment with allowing the label
Figure 1: Differential Entailment Approach
embedding \(h(\text{[LABEL]})\) to be a pseudotoken with a trainable embedding. For label and prompt tokens we experiment with both initializing these pseudotoken embeddings randomly and initializing them with the embeddings of the original tokens.
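The sketch below illustrates one way such trainable pseudotoken embeddings can be spliced into a frozen encoder through the `inputs_embeds` interface; the number of pseudotokens, their initialization, and their placement after the input text are illustrative assumptions rather than the exact configuration used here.

```python
# Sketch: trainable pseudotoken embeddings appended to the (frozen) word embeddings
# of a pre-trained encoder. Number of pseudotokens and initialization are illustrative.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("roberta-large")

num_pseudo = 5
pseudo_embeds = nn.Parameter(torch.randn(num_pseudo, model.config.hidden_size) * 0.02)

def build_inputs(sentences):
    enc = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    word_embeds = model.get_input_embeddings()(enc["input_ids"])        # (B, L, H)
    batch = word_embeds.size(0)
    prompt = pseudo_embeds.unsqueeze(0).expand(batch, -1, -1)           # (B, P, H)
    inputs_embeds = torch.cat([word_embeds, prompt], dim=1)
    attn = torch.cat([enc["attention_mask"],
                      torch.ones(batch, num_pseudo, dtype=torch.long)], dim=1)
    return inputs_embeds, attn

inputs_embeds, attention_mask = build_inputs(["the movie was dull"])
logits = model(inputs_embeds=inputs_embeds, attention_mask=attention_mask).logits
```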
### Parameter Efficiency
We adopt the strictest definition of parameter efficiency that has practical advantages for downstream applications. In Differentiable Entailment we 1) use a smaller language model compared to GPT-3 or T5, 2) freeze the main encoder parameters, 3) only fine-tune a limited set of pseudotokens without any external parameters or architectural modifications and 4) employ strict few-shot learning without using any additional training data.
Following the method in Prompt Tuning, we freeze the main model parameters and only fine-tune the subset of trainable input tokens (Lester et al., 2021). In contrast to Prompt Tuning we also fine tune the model classification head since we are outputting a specific classification label rather than using a generative model such as T5.
By freezing the model parameters we can efficiently optimize a smaller set of task-specific parameters, namely the pseudotoken embeddings as well as the entailment classification head. In contrast to approaches outlined above, which rely on a large-scale model to make up for a reduction in trainable parameters (Lester et al., 2021), we use a smaller language model. With roBERTa-large this leads to a more than 30x reduction in the number of trainable parameters. Furthermore, instead of storing a fine-tuned 355 million parameter model for each task, we only need to store the task-specific trainable pseudotoken embeddings and classification head. Finally, in contrast to methods which fine-tune all the model parameters (Zhang et al., 2021; Wang et al., 2021) or methods with external parameters (Houlsby et al., 2019), our method allows the hidden state computation for different tasks to be batched together, since only the specific prompt embeddings for each task need to be changed. As others have noted, such **in-batch parallel computing** has significant practical value (Ding et al., 2022). LoRA also allows for multitask batching; however, applying additional low-rank matrices to the transformer layers is more complex than simply swapping out a set of task-specific input embeddings (Hu et al., 2021).
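A minimal sketch of this parameter-efficient setup is shown below: the encoder is frozen and only a small set of pseudotoken embeddings plus the classification head are passed to the optimizer. Attribute names and sizes are assumptions for illustration, and the printed fraction is not meant to reproduce the exact 3% figure reported here.

```python
# Sketch: freeze the encoder and optimize only the pseudotoken embeddings and the
# classification head. Attribute names follow the RoBERTa sequence-classification
# model in transformers; sizes and learning rate are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)
pseudo_embeds = nn.Parameter(torch.randn(5, model.config.hidden_size) * 0.02)

for p in model.parameters():
    p.requires_grad = False
for p in model.classifier.parameters():          # keep the classification head trainable
    p.requires_grad = True

trainable = [pseudo_embeds] + [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)

n_trainable = sum(p.numel() for p in trainable)
n_total = pseudo_embeds.numel() + sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_trainable:,} / {n_total:,} ({n_trainable / n_total:.2%})")
```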
### Templates
We explore several different approaches to combining label templates with pseudotokens. For various tasks, we adapt the standard prompt templates used in previous works (Zhang et al., 2021; Wang et al., 2021). For example, sentiment analysis tasks such as CR can be prompted for both entailment and cloze completion in a simple way. In Table 1, we show label templates for a sentiment analysis task. For such tasks, the standard prompt template is "it was great". The cloze completion method concatenates the prompt to the input sentence and masks out the label "great", whereas our method concatenates the template without masking the token of interest and predicts entailment. When training label templates in the continuous space, we initialize from the embeddings of the label template tokens in the standard template. For example, given the following template:
\[S_{1}\text{[SEP]}\mathcal{T}_{0}\dots\mathcal{T}_{j}\text{it was great}\]
We would train the prompt tokens "it", "was", the label token "great" and \(j+1\) additional pseudotokens.
For sentence pair tasks such as Quora Question Pairs (QQP), we adopt a slightly different template following (Wang et al., 2021)(Zhang et al., 2021). The task is to predict entailment based on the sentence pairs and a set of prompt pseudotokens inserted between them. For QQP we use the format
\[S_{1}\text{[SEP]}\mathcal{T}_{0}\dots\mathcal{T}_{j}S_{2}\]
### Symmetry of Entailment
In (Wang et al., 2021), a single label description \(p\) is used for each example in a binary classification task, e.g. a binary sentiment classification task is formulated as whether input sentence \(S_{1}\) entails \(S_{2}=\) "This indicates positive sentiment.". To encourage more robust tuning of the label description parameters and classification head, we experiment
\begin{table}
\begin{tabular}{l l} \hline \hline Method & Template \\ \hline Cloze & S\_1 [SEP] it was [MASK] \\ Entailment & S\_1 [SEP] it was great \\ Differential Prompt & S\_1 [SEP][Prompt tokens] great \\ Differential Label and Prompt & S\_1 [SEP][Prompt tokens] [Label token] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Example Prompting Templates for a Sentiment Classification task. For our method we optimize either a set of prompt pseudotokens and/or a label pseudotoken.
with using two label descriptions \(p_{1}\) and \(p_{-1}\) for binary classification tasks, and augment the dataset as:
\[\mathcal{D}_{\text{train}}=\{(x_{i},p_{1},y_{i})\cup(x_{i},p_{-1},-y_{i})\}_{i=1 }^{K} \tag{2}\]
For a positive sentiment example, the two corresponding samples in the training dataset would be \((x_{i},p_{1},1)\) and \((x_{i},p_{-1},0)\), where \(p_{1}=\) "This indicates positive sentiment" with label 1 (does entail) and \(p_{-1}=\) "This indicates negative sentiment" with label 0 (does not entail).
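A small sketch of the augmentation in Eq. (2) is given below; the label-description strings and the 0/1 encoding of the entailment target are the illustrative choices described above.

```python
# Sketch of the symmetric augmentation in Eq. (2): each binary example is paired with
# both label descriptions, flipping the entailment target for the negative description.
POS_DESC = "This indicates positive sentiment."   # p_1
NEG_DESC = "This indicates negative sentiment."   # p_-1

def augment(examples):
    """examples: list of (text, label) with label 1 for positive and 0 for negative."""
    augmented = []
    for text, label in examples:
        entails_pos = 1 if label == 1 else 0
        augmented.append((text, POS_DESC, entails_pos))       # (x_i, p_1,  y_i)
        augmented.append((text, NEG_DESC, 1 - entails_pos))   # (x_i, p_-1, flipped)
    return augmented

train = augment([("A delightful, funny film.", 1), ("Tedious and overlong.", 0)])
```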
## 4 Experiments
### Evaluation
We evaluate our method on the tasks from Wang et al. (2021) which are mainly the subset of the GLUE and SuperGLUE benchmark tasks that are compatible with the entailment reformulation. In addition, we follow the best practices for evaluation of few shot NLP fine-tuning methods Bragg et al. (2021). For each experiment we sample 5 non-overlapping training folds and report average performance after k-shot training over the entire test set Gao et al. (2021). Hyperparameters are tuned for each task and method.
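For concreteness, the sketch below shows one way to sample non-overlapping k-shot training folds, assuming k examples per class; the sampling details are an illustrative reading of the protocol rather than the exact scripts used.

```python
# Sketch: sample 5 non-overlapping k-shot training folds (k examples per class).
import random
from collections import defaultdict

def sample_folds(examples, k=16, n_folds=5, seed=0):
    """examples: list of (text, label) pairs. Returns n_folds disjoint k-shot folds."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex[1]].append(ex)
    for pool in by_label.values():
        rng.shuffle(pool)
    folds = []
    for f in range(n_folds):
        fold = []
        for pool in by_label.values():
            fold.extend(pool[f * k:(f + 1) * k])   # disjoint slices => non-overlapping folds
        folds.append(fold)
    return folds
```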
### Implementation Details
Models are implemented using the pytorch Paszke et al. (2019) and transformers Wolf et al. (2019) libraries, with code adapted from Zhang et al. (2021). Our pre-trained model is roBERTa-large Liu et al. (2019). Checkpoints for roberta-large as well as the intermediate checkpoint models are downloaded from the Hugging Face hub. We experiment with different intermediate checkpoints, namely roberta-large-mnli and a checkpoint trained robustly on a wide variety of NLI tasks (adversarial NLI / ANLI) Nie et al. (2020). Experiments were run using approximately 100 GPU hours on a single V100.
### Results
Table 2 contains main results for single sentence classification tasks. Table 3 shows results for various sentence pair tasks. We compare our approach with other few shot learning techniques and experiment with various modifications to the differential entailment method.
### Intermediate Training
We experiment with different intermediate training steps. Table 5 shows results for fine-tuning various checkpoints. The MNLI and ANLI checkpoints drastically outperform the roberta-base checkpoint because they have been adapted to perform well on entailment tasks. The ANLI model was trained on multiple augmented entailment tasks Wang et al. (2021) and offers a further boost in performance. These results show that the entailment reformulation relies heavily on fine-tuning a model that has
Figure 3: Symmetry for simple data augmentation
Figure 2: Entailment allows batching of hidden state computations across tasks
already been adapted for entailment.
### Prompting Schemes
We further experiment with different prompting schemes. We find best performance when we train the prompt tokens, the label token and an additional set of task specific pseudotokens. Table 4 shows scaling with various numbers of prompting pseudotokens. Using 5 additional pseudotokens in addition to trainable prompt and label tokens worked best.
### Symmetry
By adding a symmetric entailment example for binary classification tasks during training we can effectively provide double the training signal (Figure 3). However, it appears that it is difficult for the model to learn from the two complementary training signals in a few-shot scenario. Simply adding the symmetric examples at training time leads to a drop in performance (Table 6). These results
\begin{table}
\begin{tabular}{l l} \hline \hline Tokens & SST2 \\ \hline
0 & 90.5 (0.4) \\
2 & **91.1** (0.7) \\
5 & **91.1** (0.2) \\
20 & 90.6 (0.5) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance Scaling with number of trainable pseudotokens. Using a set of 5 trainable pseudotokens performed best.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & SST-2 & MR & CR & MPQA & Subj & CoLa \\ \hline \multicolumn{5}{l}{Full Training Dataset} \\ \hline Majority & 50.9 & 50 & 50 & 50 & 50 & 69.1 \\ Finetuning & 95 & 90.8 & 89.4 & 89.4 & 97 & 86.2 (1.6) \\ EFL & 96.9 (0.2) & 92.5 (0.1) & 92.5 (0.4) & 90.8 (0.4) & 97.1 (0.2) & 86.4 (0.5) \\ \hline \multicolumn{5}{l}{Few Shot k = 16} \\ \hline Fine Tuning & 81.4 (3.8) & 76.9 (5.9) & 75.8 (3.2) & 59.0 (3.4) & 90.8 (1.8) & 70.0 (0.9) \\ DARTS & 93.5 (0.5) & 88.2 (1.0) & 91.8 (0.5) & 85.6 (0.3) & 90.7 (1.4) & - \\ LMBFF & 92.3 (1.0) & 85.5 (2.8) & 91.0 (0.9) & 85.8 (1.9) & 91.2 (1.1) & 69.5 (0.5) \\ EFL & 90.8 (1.0) & 86.2 (0.8) & 92.3 (0.4) & 87.0 (0.6) & 80.0 (5.4) & 69.4 (0.9) \\ DE & 91.9 (0.5) & 87.1 (2.1) & 91.5 (1.4) & 87.0 (0.9) & 89.5 (2.4) & 70.3 (2.4) \\ DE PE & 91.1 (0.2) & 84.5 (0.3) & 91.6 (0.2) & 85.9 (0.6) & 81.5(0.1) & 69.7 (0.3) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Main Results: all results use roBERTa-large as the base architecture, the standard deviation across 5 training folds is given. Differentiable Entailment (DE) is our method fine-tuning all model parameters. Differentiable Entailment Parameter Efficient (DE PE) is our parameter efficient method which only finetunes the trainable pseudotokens and classification head.
\begin{table}
\begin{tabular}{l l l} \hline \hline & MRPC & QQP \\ \hline Full Training Dataset & (f1) & (f1) \\ \hline Majority & 81.2 & 0 \\ Finetuning & 89.9 (1.7) & 89.0 (0.1) \\ EFL & 91.0 (0.8) & 89.2 (0.1) \\ \hline \multicolumn{5}{l}{Few Shot k = 16} \\ \hline Fine Tuning & 76.6 (2.5) & 60.7 (4.3) \\ DARTS & 78.3 (4.5) & 67.8 (3.2) \\ LMBFF & 76.2 (2.3) & 67.0 (3.0) \\ EFL & 76.2 (1.3) & 67.3 (2.6) \\ DE & **83.3** (0.1) & **72.9** (0.3) \\ DE PE & 78.0 (1.5) & 72.6 (0.7) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for sentence pair tasks. NLI tasks such as MNLI, QNLI and SNLI are excluded from the comparison because these datasets are already incorporated as part of the intermediate training step for the ANLI model
reveal limitations in the model's actual understanding of the entailment task. When given only the template with the positive label, the model learns to associate entailment with the positive class and non-entailment with the negative class. When using additional symmetric examples, this correlation is reversed, which may be too difficult for a model of this size and ability to parse. Further work could explore improving this method or ensembling the outputs of models trained on symmetric examples.
## 5 Analysis and Discussion
Our method achieves competitive performance with other few-shot learning techniques while optimizing 30 times fewer parameters. On most single-sentence tasks, performance is within a few points of methods that train all model parameters. When we relax the constraints on parameter efficiency, performance is directly competitive with other few-shot learning methods. In some cases we exceed the performance of methods that rely on optimizing all model parameters or even on additional external architectures. Notably, we achieve much stronger performance on sentence pair tasks such as MRPC and QQP. We theorize that this may be because these sentence pair tasks are most similar to the entailment tasks seen during intermediate training.
Fundamentally, intermediate training is crucial for parameter efficient performance because it gives the model a head start in adapting to the reformulated task. We see that using a strong NLI trained intermediate model improves results (Table 5). To adapt to a specific entailment task then requires only a small number of parameter updates.
## 6 Conclusion
In this paper we achieve parameter-efficient few-shot learning by combining 1) entailment reformulation of NLP tasks and 2) trainable prompt pseudotokens in the continuous space. Our Differentiable Entailment approach achieves competitive results while training only 3% of the parameters required by standard fine-tuning. We quantify the impact of intermediate training steps and different prompting schemes. By adopting a strict definition of parameter efficiency we achieve few-shot performance with fewer trainable parameters, no external parameters, and without scaling up model size or using unlabeled training data. One major limitation is that we have to train a separate classification head for each downstream task, limiting potential gains in parameter efficiency. Further work could explore different intermediate training tasks, ensembling sets of prompt tokens, and combining cloze completion for classification with the entailment reformulation. Given that our method is model agnostic and efficient, it is likely to be broadly applicable to additional tasks.
## 7 Broader Impact
Parameter-efficient models, especially with the method described in this paper, have the potential to allow use of machine learning models on a more widespread basis. In our approach, batching computations for different tasks and using a single forward pass through a model could allow many models to be run on a single device at the same time. Such a scheme has advantages in terms of providing more accessibility to machine learning models and reduced energy consumption. However, parameter efficiency also opens the door to running personalized models that may be injurious to individual security or privacy. For example, user-specific embeddings could easily be trained to predict a user's behavior with a specialized model. We anticipate that such potential use cases of parameter-efficient few-shot learning should be treated carefully.
|
2309.08621 | Exploring Social Choice Mechanisms for Recommendation Fairness in SCRUF | Fairness problems in recommender systems often have a complexity in practice
that is not adequately captured in simplified research formulations. A social
choice formulation of the fairness problem, operating within a multi-agent
architecture of fairness concerns, offers a flexible and multi-aspect
alternative to fairness-aware recommendation approaches. Leveraging social
choice allows for increased generality and the possibility of tapping into
well-studied social choice algorithms for resolving the tension between
multiple, competing fairness concerns. This paper explores a range of options
for choice mechanisms in multi-aspect fairness applications using both real and
synthetic data and shows that different classes of choice and allocation
mechanisms yield different but consistent fairness / accuracy tradeoffs. We
also show that a multi-agent formulation offers flexibility in adapting to user
population dynamics. | Amanda Aird, Cassidy All, Paresha Farastu, Elena Stefancova, Joshua Sun, Nicholas Mattei, Robin Burke | 2023-09-10T17:47:21Z | http://arxiv.org/abs/2309.08621v2 | # Exploring Social Choice Mechanisms for Recommendation Fairness in SCRUF
###### Abstract.
Fairness problems in recommender systems often have a complexity in practice that is not adequately captured in simplified research formulations. A social choice formulation of the fairness problem, operating within a multi-agent architecture of fairness concerns, offers a flexible and multi-aspect alternative to fairness-aware recommendation approaches. Leveraging social choice allows for increased generality and the possibility of tapping into well-studied social choice algorithms for resolving the tension between multiple, competing fairness concerns. This paper explores a range of options for choice mechanisms in multi-aspect fairness applications using both real and synthetic data and shows that different classes of choice and allocation mechanisms yield different but consistent fairness / accuracy tradeoffs. We also show that a multi-agent formulation offers flexibility in adapting to user population dynamics.
202320232023
Aranda Aird, [email protected], Department of Information Science; University of Colorado, Boulder,. Boulder, Colorado, USA, 80309; Cassidy All, [email protected], Department of Information Science; University of Colorado, Boulder,. Boulder, Colorado, USA, 80309; Paresha Farastu, [email protected], Department of Computer Science; University of Colorado, Boulder,. Boulder, Colorado, USA, 80309; Elena Stefancova, [email protected], Comenius University Bratislava,. Bratislava, Slovakia; Joshua Sun, [email protected], Independent Researcher.,. Boulder, Colorado, USA, 80309; Nicholas Mattei, [email protected], Department of Computer Science; Tulane University,. New Orleans, Louisiana, USA, 70118; Robin Burke, [email protected], Department of Information Science; University of Colorado, Boulder,. Boulder, Colorado, USA, 80309. +
Footnote †: journal: ACM
## 1. Introduction
The complexity of fairness considerations in recommender systems is well understood; see Ekstrand et al. (2017) for a survey. Practical applications involving fairness require attention to multiple fairness concerns, each potentially formulated in a different way, relevant to a different set of stakeholders (Krishnan et al., 2013). Methods that assume a single dimension of fairness or that assume all fairness concerns are formulated identically will not be successful in these applications.
The SCRUF-D architecture outlined in Aird et al. (2017) and Burke et al. (2017) offers one possible way to address the complexity of fairness-aware recommendation by formulating it as a two-phase social choice problem within an architecture where fairness concerns are represented as agents. A fairness concern calls out one or more features and designates a set of protected values for these features. Each of these fairness concerns also articulates a function that takes a recommendation history \(L_{t-1}\) and generates a value in \([0,1]\), where \(0\) is maximally unfair and \(1\) is the fairness target.
Each of these fairness concerns can be instantiated as an agent that can take as input the user preference ranking along with \(L\) and produce a ranking over the set of items. In the first phase, fairness agents are allocated to recommendation opportunities, i.e., user arrivals. Performing this online allocation of fairness agents allows the system to adapt dynamically to fairness outcomes as users are served recommendations. In the second phase, allocated agents and the core recommendation algorithm contribute preferences over items to a preference aggregation mechanism, i.e., a voting rule, the output of which is a ranked list of recommended items to be delivered to the user.
The key advantage of the SCRUF-D approach is its generality. Fairness agents can be implemented in many different ways, with different objective functions and measures, while both the allocation and aggregation mechanisms can be chosen from the large variety of mechanisms that have been studied in computational social choice and for which formal properties are known (Copeland and RankedPairs, 2010). 1 In this paper, we explore some of the many choices a system implementer has for these mechanisms. For the allocation phase, we examine lottery mechanisms (in which a single agent is chosen from a dynamically generated lottery distribution), weighted mechanisms (in which all agents are allocated with dynamically computed weights), and a simple least misery allocation. For preference aggregation, we study two weighted/score based methods (weighted voting and Borda score) and two pair-wise methods (Copeland and RankedPairs).
Footnote 1: Note that not all properties of interest to social choice theorists are relevant to the fairness-aware recommendation scenario. For example, in rivalrous settings, an algorithm’s potential for preference revelation might be of concern. In the SCRUF setting, agents are assumed to be all developed within a single organization for the goal of fair recommendation and are not considered rivalrous.
In particular, we ask the following research questions:
**RQ 1**: Do different mechanisms offer different fairness / accuracy tradeoffs for different conditions, and if so, why?
**RQ 2**: Do different social choice mechanisms have different dynamic characteristics, and if so, why?
**RQ 3**: What is the interaction between the mechanisms for allocation and choice and what makes for good synergy?
## 2. Related Work
Within the field of (computational) social choice there have been several investigations into the idea of _dynamic_ settings, which resemble the problem one faces in a recommender system where users arrive online. Freeman et al. (Freeman et al., 2007) investigate what they call _dynamic social choice functions_ in settings where a fixed set of agents select a single item to share over a series of time steps. Lackner (Lackner, 2007) study the problem of voting (selecting a single outcome to share) over multiple time steps where various properties such as fairness need to be guaranteed over the total time horizon. Parkes and Procaccia (Parkes and Procaccia, 2009) look at social choice settings where the preferences of agents evolve over time in response to the outcome of past rounds of voting. However, in all these settings the whole set of agents shares the resulting decision, whereas we are focused on fairness over sets of individual personalized recommendations.
The architecture presented here advances and generalizes the approach found in (Gee et al., 2010). Like that architecture, fairness concerns are represented as agents and interact through social choice. However, in (Gee et al., 2010), the allocation mechanism selects only a single agent at each time step and the choice mechanism has a fixed, additive, form. We allow for a wider variety of allocation and choice mechanisms, and therefore present a more general solution.
Ge et al. (Ge et al., 2010) investigate the problem of long-term dynamic fairness in recommendation systems. This work, like ours, highlights the need to ensure that fairness is preserved as a temporal concept. To this end they propose a framework to ensure fairness of exposure to the producers of items by casting the problem as a constrained Markov Decision Process where the actions are recommendations and the reward function takes into account both utility and exposure. As above, this work fixes definitions of fairness a priori, although their learning methodology may serve as inspiration for future extensions of our work.
Morik et al. (2015) investigate the problem of learning to rank over large item sets while ensuring fairness of merit-based guarantees to groups of item producers. Specifically, they adapt existing methods to ensure that the exposure is _unbiased_, e.g., not subject to rich-get-richer dynamics, and _fair_, defined as exposure being proportional to merit. Both of these goals are built into the regularization of the learner. In contrast, our work factors out the recommendation methodology and we encapsulate fairness definitions as separate agents rather than embedding them in the learning objective, allowing our framework to be more flexible.
This paper concentrates on provider-side fairness (Bartos et al., 2016) although SCRUF-D is intended to be compatible with multisided fairness as well. There have been a number of efforts that explicitly consider the multisided nature of fairness in recommendation and matching platforms. Patro et al. (2017) investigate fairness in two-sided matching platforms where there are both producers and consumers with different definitions of fairness. Patro et al. (2017) also appeal to the literature on the fair allocation of indivisible goods from the social choice literature (Patro et al., 2017). Their work is closest to the allocation phase of our algorithm. However, in contrast to our work they only use exposure on the producer side and relevance on the consumer side as fairness metrics, whereas our work aims to capture additional definitions.
An important distinction between the work we present here and that of Patro et al. (2017) is that our algorithms operate online, as users arrive, whereas Patro et al. (2017) use a batch technique to create and cache a fair distribution of the whole catalog of users and items. Batch techniques cannot guarantee that the fair outputs will be delivered in practice. Only a small percentage of users may arrive over a given time window and those users may be ones to whom fewer sensitive items are assigned. The carefully balanced set of recommendation lists may never reach its intended audience. Only by tracking actual fairness outcomes over time can a system adapt to the uncertainties of user arrivals and item availabilities that characterize recommendation delivery in practice.2
Footnote 2: Only in the specific case of push-type delivery (for example, an email of recommendations sent out to all users) can batch-type fairness solutions meet their stated targets. This is an important type of recommendation but not the whole space of recommender system use cases.
Our recommendation allocation problem also has some similarities with those found in computational advertising, where specific messages are matched with users in a personalized way (Sen et al., 2012; Sen et al., 2012). Because advertising is a paid service, these problems are typically addressed through mechanisms of monetary exchange, such as auctions. There is no counterpart to budgets or bids in our context, which means that solutions in this space do not readily translate to supporting fair recommendation (Sen et al., 2012; Sen et al., 2012; Sen et al., 2012).
In fairness-aware recommendation, both Zehlike et al. (2017) and Sonboli et al. (2017) present examples of reranking with multiple protected groups. Both are static solutions, without the adaptive capabilities that we seek here. Also, the solution in (Zehlike et al., 2017) depends on a computationally-intensive pre-processing step and cannot be easily adapted to a dynamic setting. We draw from (Zehlike et al., 2017) in our definition of compatibility between an agent and a recommendation opportunity.
## 3. Scruf-d Platform
SCRUF-D (Bartos et al., 2016) (and its predecessor SCRUF (Sen et al., 2016)) are recommendation architectures for integrating fairness into recommendation generation. Both variants of SCRUF can be understood as a form of dynamic recommendation reranking, one of the most common approaches for fairness-aware recommendation (Bartos et al., 2016), since a recommendation list from a base recommendation algorithm is one of its inputs.
Figure 1 shows an overview of the architecture. The first phase of SCRUF's operation allocates agents to recommendation opportunities (i.e. user arrivals). Only allocated agents can participate in the subsequent choice (voting) phase and have an impact on the generated recommendations. In the second phase, the recommender system and the
allocated agent(s) cast ballots, i.e., a ranking / scoring of items, and a preference aggregation mechanism (voting rule) combines them to produce the final list.3
Footnote 3: Agents can, in principle, generate their rankings over any set of items but in our experiments so far, we have restricted them to constructing preferences only over those items that the recommender system has returned.
To achieve the allocation, the mechanism takes into account two aspects of the current recommendation context: fairness and compatibility. In measuring fairness, each agent tracks the level of fairness achieved over a historical time window, relative to an individually-defined fairness metric. Historical tracking of the state of fairness gives the model its dynamic character, enabling the system to respond to unfairness generated by a particular sequence of user arrivals. In the compatibility function, each agent also measures the expected propensity of the user to respond to recommendations of sensitive items within that agent's purview. This capability corresponds to the notion of _personalized fairness_ outlined in (Han et al., 2015; Wang et al., 2016), where the application of a fairness intervention is tailored to each user's historical profile. The distinction between fairness (effectively the agent's need to be allocated) and compatibility (the likely interest of the user in the items the agent promotes) makes the allocation problem an online capacitated two-sided matching problem (Han et al., 2015; Wang et al., 2016). This formulation assumes that agents' fairness and compatibility metrics are comparable. We are working to develop standard formalizations of these metrics for a wide class of possible fairness metrics.
## 4. Mechanisms
SCRUF-D provides a general framework in which the properties of different allocation and aggregation mechanisms can be explored (Bartos et al., 2015). In this paper, we experiment with combinations of mechanisms using simulated and real-world data.
### Allocation Mechanisms
We explore three allocation mechanisms using different logics to allocate agents to recommendation opportunities.
**Least Fair:** Under _Least Fair_, the fairness agent with the lowest fairness score is chosen. This mechanism ignores compatibility and focuses on the agent most in need of improvement. However, this approach can lead to starvation - if the fairness of an agent is slow to improve, it will continue to be allocated and other agents will get no opportunities to
Figure 1. Overview of the SCRUF architecture
achieve their preferred outcomes.
**Lottery:** Our _Lottery_ mechanism selects a single fairness agent in the allocation phase using a lottery computed over all agents. We compute the product of unfairness and the square of the compatibility for each agent, normalized to a sum of 1, and then draw an agent from this distribution.4 This avoids the problem of starvation (since the choice is not deterministic) and factors in compatibility.
Footnote 4: The tunable exponent for the compatibility term reduces its influence in the lottery. This can be adjusted to favor accuracy over fairness.
**Weighted:** Under _Weighted_ allocation, all agents are allocated but their resulting weight is determined by the product between unfairness and (squared) compatibility as in the case of the Lottery method above. This allocation method generates the most complex choice problem since all agents participate in the aggregation phase.
For all allocation mechanisms, the fairness agent definition is used to calculate the fairness scores. In the experiments here, agents have been given the same fairness metric, so the fairness scores used in these allocation mechanisms are directly comparable. However, even when the metrics are not identical across agents, their standardized form means that they can be compared for allocation purposes.
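As an illustration of the allocation logic described above (a sketch, not the SCRUF-D implementation), the snippet below computes the normalized product of unfairness and squared compatibility and uses it either as Weighted allocation weights or as a Lottery distribution; treating unfairness as one minus the fairness score is an assumption.

```python
# Sketch of the allocation logic: each agent's raw score is (1 - fairness) times the
# square of its compatibility with the arriving user, normalized to sum to one.
import random

def allocation_scores(fairness, compatibility):
    """fairness, compatibility: dicts mapping agent name -> value in [0, 1]."""
    raw = {a: (1.0 - fairness[a]) * compatibility[a] ** 2 for a in fairness}
    total = sum(raw.values())
    if total == 0:                      # every agent is fully fair: no allocation pressure
        return {a: 0.0 for a in raw}
    return {a: v / total for a, v in raw.items()}

def lottery_allocate(fairness, compatibility, rng=random):
    """Lottery: draw a single agent from the normalized score distribution."""
    scores = allocation_scores(fairness, compatibility)
    agents, weights = zip(*scores.items())
    return rng.choices(agents, weights=weights, k=1)[0]

# Weighted allocation uses allocation_scores directly as agent weights.
weights = allocation_scores({"country": 0.4, "loan_size": 0.9},
                            {"country": 0.7, "loan_size": 0.5})
chosen = lottery_allocate({"country": 0.4, "loan_size": 0.9},
                          {"country": 0.7, "loan_size": 0.5})
```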
### Choice Mechanisms (Voting Rules)
We examine four different choice mechanisms. In computational social choice, choice mechanisms are classically understood as integrating the preferences of multiple agents together to form a single societal preference (Borda, 2010).5
Footnote 5: Our setting differs from classical social choice in that voting is not anonymous (the recommender system plays a different role from the other agents) and weights and scores are typically employed. Typically, rankings are preferred because they do not require agent utility to be known or knowable.
**Rescoring:** The simplest mechanism is one in which each agent contributes a weighted score for each item and these scores are summed to determine the rank of items. Each fairness agent has a fixed score increment \(\delta\) that is added to all protected items, weighted by its allocation in the previous phase. This is combined with the scoring of the recommendation algorithm.
**Borda:** Under the Borda mechanism (Kendal, 2010), ranks are associated with scores and the original scores used to compute those ranks are ignored. The ranks across agents are summed and the result determines the final ranking.
**Copeland:** The Copeland mechanism calculates a win-loss record for each item considering all item-item pairs in a graph induced by the preferences. Item \(i\) scores one point over item \(j\) if the majority of allocated agents prefer \(i\) to \(j\). We then sum these pairwise match-ups for each item \(i\) and order the list of items using these scores (Kendal, 2010).
**RankedPairs:** The Ranked Pairs voting rule (Kendal, 2010) computes the pairwise majority graph as described for Copeland but orders the resulting ranking by how much a particular item wins by, selecting these in order to create a complete ranking, skipping a pair if and only if it would induce a cycle in the aggregate ranking.
Each of these choice mechanisms implements a fundamentally different logic for aggregating preferences: score-based, ordinal-based, consistency-based and pairwise-preference (Kendal, 2010). As we show in our results, choice mechanisms yield quite different accuracy / fairness tradeoffs.
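To make the contrast concrete, the sketch below implements the Borda and Copeland logics over complete rankings; the handling of agent weights and of the partial orders produced by fairness agents is omitted here.

```python
# Sketch of two aggregation logics over complete rankings (best item first).
# All ballots are assumed to rank the same set of items.
from itertools import combinations

def borda(rankings):
    """Ordinal logic: an item ranked r-th in a list of n items gets n - 1 - r points."""
    items = rankings[0]
    scores = {i: 0 for i in items}
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - 1 - pos
    return sorted(items, key=lambda i: -scores[i])

def copeland(rankings):
    """Pairwise logic: +1 for each pairwise majority win, +0.5 to each item in a tie."""
    items = rankings[0]
    scores = {i: 0.0 for i in items}
    for a, b in combinations(items, 2):
        a_wins = sum(r.index(a) < r.index(b) for r in rankings)
        b_wins = len(rankings) - a_wins
        if a_wins > b_wins:
            scores[a] += 1.0
        elif b_wins > a_wins:
            scores[b] += 1.0
        else:
            scores[a] += 0.5
            scores[b] += 0.5
    return sorted(items, key=lambda i: -scores[i])

ballots = [["i1", "i3", "i2"], ["i3", "i1", "i2"], ["i2", "i1", "i3"]]
print(borda(ballots), copeland(ballots))   # both yield ['i1', 'i3', 'i2'] here
```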
## 5. Methodology
To explore our research questions, we conducted experiments with both synthetic and real-world data and compared different combinations of mechanisms. Because our research is exploring aspects of these mechanisms, it was not necessary to explore different base recommendation algorithms. For the simulated experiments, we generated synthetic recommender system output as described below. For Microlending, our real-world example, we used a simple biased
matrix factorization technique. We determined in prior experiments that this algorithm suffers from popularity bias and therefore represents a challenge for fairness-aware reranking.
### SCRUF-D Implementation
SCRUF-D is implemented in Python and available open-source from GitHub under the MIT License.6 SCRUF-D integrates Whalrus 7 for the implementation of choice mechanisms. The configuration files, data and analysis code used for our experiments are also available along with the source code for our synthetic data generator. 8
Footnote 6: [https://github.com/that-recsys-lab/scruf_d](https://github.com/that-recsys-lab/scruf_d)
Footnote 7: [https://francois-durand.github.io/whalrus/](https://francois-durand.github.io/whalrus/)
### Synthetic Data
The purpose of synthetic data in our simulations is to supply realistic recommender system output as input to the SCRUF-D reranker. We create synthetic data via latent factor generation: we create matrices of latent factors similar to those that would be created through factorization and then generate sample ratings from these matrices. Let \(\hat{U}\) and \(\hat{V}\) be the user and item latent factor matrices with \(k\) latent factors. We designate the first \(k_{s}\) of the latent factors as corresponding to protected features of items, and the remaining \(k-k_{s}\) factors correspond to other aspects of the items.
As a first step, we generate a vector of real-valued propensities for each user \(\Phi_{i}=\langle\phi_{1},...,\phi_{k_{s}}\rangle\) corresponding to the sensitive features plus additional values for each of the non-sensitive features, drawn from an experimenter-specified normal distribution. Thus, it is possible to adjust the preferences of the user population regarding different sensitive features. The propensities associated with a sensitive feature also represent the user's compatibility with the respective fairness agent, a value which in a non-synthetic case is derived from the pre-existing user profile as in (S
To explore dynamic aspects of the system's responses, we created additional datasets similar to the one described above but where the users are split into three different segments \(<A,B,C>\), each arriving in sequence. We generated synthetic users with high compatibility with Agent 2 and low compatibility with Agent 1 in segment \(A\), then reversed this affinity in segment \(B\). Segment \(C\) contained users without high compatibility with either agent. We used different generating parameters than above, making the differences between user types more extreme and with a lower prevalence of protected items. This data is referred to as _SyntheticSequenced_.
### Microlending Data
In addition to the Synthetic data, we used the Microlending 2017 dataset (Mikolov et al., 2017), which contains anonymized lending transactions from the crowd-sourced microlending site Kiva.org. The dataset has 2,673 pseudo-items, 4,005 lenders and 110,371 ratings / lending actions. See (Krishnan et al., 2017) and (Mikolov et al., 2017) for a complete description of the data set.
We considered two loan feature categories, loan size and country, as protected features. Prior work (Krishnan et al., 2017) identified loan size as a dimension along which fairness in lending may need to be sought. About 4% of loans had this feature and were considered protected items. For the second protected feature, we followed Sonboli et al. (Krishnan et al., 2017) in identifying the 16 countries whose loans have the lowest rates of funding and labeled these as the protected group for the purposes of geographic fairness. Compatibility scores were defined using the entropy of a user's ratings versus the protected status of funded loans using the method in (Krishnan et al., 2017).
We were not able to duplicate the conditions in the synthetic data because there is a high correlation between users' entropies with respect to the country variable and with respect to the loan size variable. In other words, users who were highly compatible with the loan size agent were, for the most part, also compatible with the country agent. So, it was not possible to have segments of users with differential compatibility for different agents, and we used only synthetic data for looking at different orderings of users.
### Agent definitions
For these experiments, we assume a single fairness definition, that of group proportional fairness: a fixed desirable proportion \(\pi\) of protected group items in each recommendation list. To calculate overall fairness, we create a union of all recommendation lists within the history window and calculate the proportion of agent-specific protected items. We scale this proportion by dividing by \(\pi\) and truncate larger values at 1.0.
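A sketch of this metric as an agent might compute it over its history window is shown below; the data structures are illustrative.

```python
# Sketch of the group-proportional fairness metric: the proportion of an agent's
# protected items in the union of recommendation lists within the history window,
# scaled by the target proportion pi and truncated at 1.0.
def proportional_fairness(history_lists, protected_items, pi):
    """history_lists: recommendation lists (lists of item ids) in the current window."""
    window_union = set().union(*history_lists) if history_lists else set()
    if not window_union:
        return 0.0
    proportion = len(window_union & set(protected_items)) / len(window_union)
    return min(proportion / pi, 1.0)

# Two lists in the window, protected items {b, e, f}, target pi = 0.25.
score = proportional_fairness([["a", "b", "c"], ["c", "d", "e"]], {"b", "e", "f"}, pi=0.25)
```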
For the Synthetic data, we set the fairness target for both Agents 1 and 2 to be 25%. In Microlending, we set the target for the loan size agent to be 30% and for the country agent to 20%. Fairness agents were allocated using scores based on these target proportions.
When a fairness agent is allocated, the scores from the recommender system are adjusted such that each item among the protected items has its score augmented by a constant \(\delta\). For the Synthetic data, \(\delta\) was set at 0.1 for both agents. For Microlending, \(\delta\) was set at 0.3 for the country agent and 0.6 for the loan size agent to give more impact to the loan size agent, as it was more difficult to reach the fairness target for these items.
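The Rescoring adjustment can be sketched as follows; the weighting of \(\delta\) by the agent's allocation and the data structures are illustrative.

```python
# Sketch of the Rescoring adjustment: each allocated agent adds a (weight-scaled)
# increment delta to the recommender's scores for its protected items.
def rescore(rec_scores, allocated_agents):
    """rec_scores: dict item -> recommender score.
    allocated_agents: list of (weight, delta, protected_item_set) tuples."""
    adjusted = dict(rec_scores)
    for weight, delta, protected in allocated_agents:
        for item in adjusted:
            if item in protected:
                adjusted[item] += weight * delta
    return sorted(adjusted, key=adjusted.get, reverse=True)

# e.g. the country agent (delta = 0.3) allocated with weight 1.0.
ranking = rescore({"loan1": 0.9, "loan2": 0.7, "loan3": 0.5}, [(1.0, 0.3, {"loan3"})])
```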
## 6. Results
For the Microlending experiments, we generated 50 recommendations for each of the 4,005 users using biased matrix factorization as implemented in the LibRec recommendation library 9. We chose this recommendation algorithm as input
for these experiments as it has known issues with popularity bias [10] and therefore presents a challenging reranking problem: as noted above, any recommendation algorithm could be used. We generated similar recommendation lists for the synthetic data as noted above. In both experiments, we set the history window, over which agents consider fairness outcomes, to be 100 users.
In reporting results, we calculate fairness by normalizing the proportion of protected items for each agent across the whole experiment and averaging across the agents. (Note that this is different than the window-limited metric that agents have access to.) We use a proportional fairness measure in these experiments, following [22], where recommendation list exposure measured in this way was the key metric and this is also the measure used by the agents themselves. We use nDCG@10 throughout for recommendation accuracy. However, for the synthetic data sets, there is no ground truth test data against which to compute utility, so for these experiments we compute nDCG relative to the original recommendation lists.
Figure 2 summarizes the results from the experiments with the randomly sequenced users, plotting accuracy vs fairness. There are clear groupings of the choice mechanisms at different tradeoff points. In the Synthetic data, the order (from most accurate / least fair to least accurate / most fair) is (roughly) Ranked Pairs, Rescoring, Borda and Copeland, although Ranked Pairs is dominated by Rescoring / Weighted. For the Microlending data, the order is Rescoring, Borda, Ranked Pairs and Copeland with Borda dominated by Rescoring / Lottery.
Weighted allocation breaks this pattern in interaction with the two concordance-based mechanisms. The Weighted agents have individually lower weights and there is rarely any synergy between them, so they are effectively outvoted by the recommender. Although Ranked Pairs and Copeland are both concordance-based methods, they have different methods of handling ties (partial orders) and because our fairness agents produce partial orders, the tie-breaking method is significant. Copeland effectively scores a tie as 0.5 of a concordant pair, while Ranked Pairs breaks ties randomly and a much larger set of possible orderings can arise. Many of these orderings are ones in which the recommender's rankings dominate. The effect is not nearly as pronounced with the Borda and Rescoring mechanisms because the output from the multiple agents are combined in an additive way.
The two other allocation mechanisms, Lottery and Least Fair, are surprisingly similar in their outcomes. (And, in fact, identical and superimposed for the Copeland mechanism in the Microlending dataset.) We expected that Least Fair would have lower accuracy since it ignores user compatibility in assigning agents. But with a few exceptions, across the
Figure 2: Accuracy vs average normalized fairness. Fairness target is at 1.0; baseline accuracy is shown in the dashed line.
different conditions and datasets, we find that the Lottery mechanism is associated with lower accuracy and greater fairness. Interestingly, the loss of accuracy is largest for the Rescoring mechanism on the Kiva dataset and smallest for Copeland on the Kiva dataset, where the data points are superimposed. What we do see if we examine the individual agent scores is that Least Fair is associated with a greater difference in fairness results across agents, so the difference may have to do with a certain amount of starvation occurring in the experiments.
Across the two datasets, one striking difference is in the performance of Ranked Pairs. It is relatively ineffective (and Pareto-dominated) in the Synthetic data and very close to Copeland in the Kiva data. One possible reason again traces back to the differences in ranking processes. The Synthetic data is fairly simple in structure and lower in noise. In a higher entropy dataset, the differences in tie-breaking procedures do not have as much of an impact.
Figure 3 shows the distribution of each agent's fairness metric as computed in each time interval. The baseline (non-reranked) fairness values are shown as the dashed lines, and we see that the baseline fairness is greater for Agent 1. In most cases, the two agents are fairly close in average fairness across the time steps of the experiment, showing that the mechanisms are working together to equalize the agents' outcomes. Some conditions actually favor Agent 2, particularly Least Fair + Rescoring, Lottery + Copeland, and the low fairness result of Weighted + Copeland.10
Footnote 10: Because of the correlation between agent compatibilities in the Microlending data, there is very little difference between the fairness results across agents and so we do not include a similar plot for those experiments.
The second set of experiments are those in which the users were segmented into different regimes as described above. In these experiments, we focus on the ability of the different mechanisms to cope with changes in the relative abundance of compatible users. Because the data was generated in a different way, the Synthetic Sequenced data is not comparable to the original Synthetic data.
Figure 4 shows the results from these dynamic experiments. Recall that both agents' fairness targets are more difficult in the Synthetic Sequenced data set. The results here are more similar to what we saw with the Microlending data above. Ranked Pairs occupies the low accuracy / high fairness area and Rescoring, the high accuracy upper left. Note that fairness here is still an improvement over the baseline (at 0.65). The Least Fair methods all occupy more or less the same overlapping position in this corner, likely because they tend to have worse fairness results for Agent 0, as evidenced in the allocation plots below. We see that the Weighted allocation is more often a good option with this dataset, most likely because it allocates all the agents in every iteration and includes compatibility in its weighting.
Figure 3: Fairness metric distribution for each agent (Synthetic Data)
In Figure 4(b), we see the contrast between allocation mechanisms in the Synthetic Sequenced data, where there are strong differences between types of users over time. Both examples use Ranked Pairs as the choice mechanism. The Least Fair mechanism keeps trying to allocate Agent 1 because it is more difficult to achieve fairness for this agent, even though the initial set of users is not very compatible with this agent's fairness objective. Agent 0 is relatively starved as a result. The Weighted allocation includes both agents in its allocation for the first set of users, taking advantage of the opportunity presented by the compatible users in the initial segment. In the end, greater fairness is achieved with the Least Fair mechanism, but at a substantial cost in accuracy because the preferences of the first segment of users are generally ignored.
Figure 4: Accuracy vs average normalized fairness in segmented experiment. Fairness target is at 1.0; baseline accuracy is shown in the dashed line.
Figure 5: Cumulative allocation of fairness agents with different allocation mechanisms
## 7. Conclusion
In this paper, we explored combinations of allocation and choice mechanisms for integrating multiple fairness concerns into recommendation. Relative to RQ1, we find that there are consistent differences between combinations of mechanisms, placing them at different points along a fairness-accuracy frontier. Although the ranking is not completely consistent across datasets, Copeland is generally towards the bottom right, except when Weighted allocation is used and the agents have less impact. Rescoring occupies a lower fairness, higher accuracy position. Borda is in between. The Synthetic Sequenced data, in which protected items were more rare, looks more like the Microlending results, suggesting that the "difficulty" of the fairness problem impacts the relative efficacy of the different mechanisms. We will explore this phenomenon further in our future work.
Considering RQ3, allocation mechanisms have a smaller impact on this tradeoff in most cases, the exception being the Weighted allocation, which interacts with the pair-wise methods in a manner that greatly reduces the impact in both fairness and accuracy dimensions. Across experiments, Weighted, Least Fair and Lottery are generally ranked in that order along the high accuracy / high fairness tradeoff diagonal.
Our segmented experiments addressed RQ2. We see that in most cases, the Least Fair mechanism is unable to improve fairness beyond a certain point because it is blind to user-agent compatibility and, as a result, fails to take advantage of recommendation opportunities as completely as some of the other methods. We also saw that there were benefits to the Weighted mechanism that were not as apparent in the randomly ordered data. These findings will receive further investigation in our future work.
This paper represents the first evaluation of the fairness / accuracy tradeoff of multiple fairness objectives using a social choice framework and as such it is very preliminary. The phenomena found here need to be explored in much greater detail. We plan to vary the properties of the synthetic data and incorporate additional real datasets, explore a range of fairness targets, and investigate the dynamics of the system more fully. We also intend to compare our results with alternative algorithms for multigroup fairness-aware reranking, especially OFAIR (Krishnan et al., 2019).
There are three key areas that we think it is crucial to address in future research. The first is the issue of multiple fairness definitions. In this study, all of the fairness agents use the same fairness definition and metric. That is typical of fairness-aware recommendation research, although not typical of practical fairness settings (Krishnan et al., 2019). It will be important to explore how SCRUF operates in the presence of different fairness metrics and logics, including non-binary and continuous definitions and consumer-side fairness (Borda et al., 2019). A consumer-side fairness definition could use a fairness agent that is concerned with the cost of fairness, in terms of accuracy, to users. We can think of this as the difference between what the recommender would have returned without the fairness intervention and what actually was returned, although of course there is no great certainty that the recommender system always represents users' preferences well. Rank correlation or a similar measure can be used to compare these lists. The evaluation metric would focus on the variance of this statistic. If all users are experiencing relatively similar accuracy losses, the metric will have a low value.
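As a sketch of how such a consumer-side metric could be computed, one could correlate each user's delivered list with the recommender's original list and then look at the spread of that statistic across users. The helper below is illustrative only; the function names and the use of Kendall's tau from SciPy are assumptions, not part of the current system.

```python
import numpy as np
from scipy.stats import kendalltau

def rank_correlation(original, delivered):
    """Kendall tau between two ranked lists of item ids, over their common items."""
    common = [item for item in original if item in delivered]
    tau, _ = kendalltau([original.index(i) for i in common],
                        [delivered.index(i) for i in common])
    return tau

def consumer_cost_spread(original_lists, delivered_lists):
    """Variance across users of the original-vs-delivered rank correlation;
    a low value means the accuracy cost of fairness is spread evenly."""
    taus = [rank_correlation(o, d) for o, d in zip(original_lists, delivered_lists)]
    return float(np.var(taus))
```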
As we expand the scope of fairness considerations, we will inevitably find ourselves in a situation with a larger collection of agents than the \(2\) deployed in these experiments. While we do not expect that applications will need unbounded numbers of agents, our research suggests that between 5 and 10 will be needed in the Kiva context. Additional research is needed to examine the characteristics of larger agent collections. However, one of the key advantages of social choice mechanisms is that they are designed to handle multiple agents in interaction, so we expect that key findings for smaller numbers of agents will extend to larger groups. |
2309.07512 | Nonlinear delayed forcing drives a non-delayed Duffing oscillator | We study two coupled systems, one playing the role of the driver system and
the other one of the driven system. The driver system is a time-delayed
oscillator, and the driven or response system has a negligible delay. Since the
driver system plays the role of the only external forcing of the driven system,
we investigate its influence on the response system amplitude, frequency and
the conditions for which it triggers a resonance in the response system output.
It turns out that in some ranges of the coupling value, a stronger coupling
does not mean stronger synchronization, due to the onset of a
resonance. Moreover, coupling means an interchange of information between the
driver and the driven system. Thus, a built-in delay should be taken into
account. Therefore, we study whether a delayed-nonlinear oscillator can pass
along its delay to the entire coupled system and, as a consequence, to model
the lag in the interchange of information between the two coupled systems. | Mattia Coccolo, Miguel A. F. Sanjuán | 2023-09-14T08:29:47Z | http://arxiv.org/abs/2309.07512v1 | # Nonlinear delayed forcing drives a non-delayed Duffing oscillator
###### Abstract
We study two coupled systems, one playing the role of the driver system and the other one of the driven system. The driver system is a time-delayed oscillator, and the driven or response system has a negligible delay. Since the driver system plays the role of the only external forcing of the driven system, we investigate its influence on the response system amplitude and frequency, and the conditions for which it triggers a resonance in the response system output. It turns out that in some ranges of the coupling value, a stronger coupling does not mean stronger synchronization, due to the onset of a resonance. Moreover, coupling means an interchange of information between the driver and the driven system. Thus, a built-in delay should be taken into account. Therefore, we study whether a delayed-nonlinear oscillator can pass along its delay to the entire coupled system and, as a consequence, to model the lag in the interchange of information between the two coupled systems.
## I Introduction
Over the last years, an important research activity has been devoted to the dynamics of coupled and driven nonlinear oscillators. These systems exhibit complex and rich behaviors due to the interplay of nonlinearity, coupling, and external forcing. The results are of interest for several scientific disciplines such as biomedical sciences [1; 2; 3], where coupled oscillators are prevalent in biological systems, such as neurons. Studying their dynamics helps us understand phenomena like neuronal synchronization, leading to advancements in medical treatments and diagnostics. In some cases, coupled oscillators can synchronize their motion, where their frequencies and phases align [4; 5]. To emphasize the role of the previous ideas, we can mention that coupled chaotic oscillators can be employed to enhance communication security [6; 7]. On the other hand, they are crucial in understanding and controlling vibrations in mechanical systems [8]. Furthermore, they have applications in networked systems, ranging from improving the performance of communication networks to understanding the behavior of interconnected systems [9; 10]. Among the fields of interest, we can also mention electronics [11] and mechanical engineering [12].
Coupled and driven systems can be modeled as dynamical systems in which one of their parameters is the dynamical variable that comes from another dynamical system, through a coupling mechanism [13; 14; 15]. Usually, we refer to the source of the driving signal as _the driver system_, and to the driven system as _the response system_. Consequently, the driver system sends a signal to the response system, altering its behavior according to the received input. The synchronization of the dynamics of the response system with respect to the driver system [16] constitutes a relevant effect observed when an oscillator is driven by another oscillator. Two main cases can be distinguished. When the two oscillators are identical or nearly identical, identical synchronization [17] may be observed. However, when they are different, generalized synchronization [18; 19] is expected. The driver signal can be either periodic or aperiodic [20]. Among the various coupling mechanisms, such as the replacement method or the subsystem decomposition [13; 14; 15], we have chosen to use _the continuous control_ that was discussed in [20; 21]. The implementation of the coupling mechanism is carried out by introducing in the response system a square matrix, whose elements are constant, multiplied by the vector of the difference between the dynamical variable of the driver system and the dynamical variable of the response system.
Nevertheless, the study of the synchronization is not the main goal of this article. In fact, we study the case in which a delayed nonlinear oscillator is the only driver, through the coupling mechanism, of another nonlinear oscillator without delay. Therefore, we analyze the driver system as the only external forcing acting on the response system. We have chosen the continuous control [20; 21] as a coupling mechanism because we think that it better models the implementation of an external forcing into the system. As a matter of fact, the coupling matrix becomes constant, playing the role of the forcing amplitude, and the time delay determines the frequency of the forcing. In the other above-mentioned methods, one or more variables of the driver system are substituted directly into the response system, without the possibility of changing the strength of the coupling. This means that there is nothing playing the role of the forcing amplitude. Thus, with this implementation we investigate the typical effects of an external forcing, here affected by delay, on the oscillation amplitude and frequency of a given dynamical system. Moreover, at the right frequency and amplitude, the forcing generates the appearance of a resonance. Some applications of implementing delay in an external forcing or control are discussed in [22; 23; 24].
Although the synchronization of the two systems is not the main goal of this article, a subsidiary objective can be pondered as a consequence of it. In fact, through the coupling mechanism the driver system transfers to the response system some of its features and here we focus on the delay transmission. We want to emphasize that the synchronization achieved here cannot be identical because the space dimensions of the two systems are different, being one infinite dimensional for the delayed oscillator and the other one finite dimensional. There is a reason to study the conditions of the delay transmission. The coupling, as we wrote before, is an interchange of information from the driver system to the response system and the speed of this interchange is finite, so a built-in delay needs to be considered. Therefore, due to this intrinsic delay, the response system is affected showing delay-induced behaviors. Hence, we have decided to study the optimal parameter values that model this delay in the driving signal and transfer it from the driver system oscillator to the entire coupled system. The result is that, for coupling values that do not trigger the resonance, the response system acts with a similar delay-induced behavior as the driver system, without being a perfect copy. Also, the coupling constant can be used as a control parameter to determine how much delay-induced behavior we want the response system output to show.
The organization of the paper is as follows. In Sec. II, we define the model that we have
used. We identify the role of the coupling constant in the dynamics of the response system in Sec. III. In Sec. IV, we discuss the influence of the coupling constant and the driver system delay on the dynamics of the response system. In Sec. V, we analyze some particular values of the coupling constant as a function of the driver system delay. Finally, some concluding remarks appear in Sec. VI.
## II The model and the continuous control
The unidirectional coupling can be summarized in a simple way. We can define an autonomous nonlinear dynamical system as the driver system, with its dynamical state given by a vector \(\mathbf{x_{1}}\in\mathbb{R}^{n}\) of \(n\) scalar variables. The system dynamics is governed by a set of \(n\) nonlinear differential equations \(\dot{\mathbf{x_{1}}}=\mathbf{F}(\mathbf{x_{1}})\). Then, another nonlinear dynamical system is considered as the response system, whose dynamical state is given by a vector \(\mathbf{x_{2}}\in\mathbb{R}^{d}\). The differential equations of this second system are \(\dot{\mathbf{x_{2}}}=\mathbf{G}(\mathbf{x_{2}})\). When the unidirectional drive is established, the response system becomes \(\dot{\mathbf{x_{2}}}=\mathbf{G}(\mathbf{x_{1}},\mathbf{x_{2}})\). The continuous control scheme provides a simple form of unidirectional coupling:
\[\mathbf{G}(\mathbf{x_{1}},\mathbf{x_{2}})=\mathbf{G}(\mathbf{x_{2}})+\mathbf{ C}\cdot(\mathbf{x_{1}}-\mathbf{x_{2}}), \tag{1}\]
where \(\mathbf{C}\) is a square matrix of dimension \(n\) whose elements are constants. This matrix is multiplied by the vector of differences \((\mathbf{x_{1}}-\mathbf{x_{2}})\). The numerical values of the constants inside \(\mathbf{C}\) measure the strength of the coupling for each forcing signal, which may be constructed from one or more of the variables of the drive.
Here, we have decided to study the output of a Duffing oscillator, as the response system, when it is driven by a time-delayed Duffing oscillator, as the driver system, following:
\[Driver\rightarrow \frac{d^{2}x_{1}}{dt^{2}}+\mu\frac{dx_{1}}{dt}+\gamma x_{1}(t- \tau)+\alpha x_{1}(1-x_{1}^{2})=0 \tag{2}\] \[Response\rightarrow \frac{d^{2}x_{2}}{dt^{2}}+\mu\frac{dx_{2}}{dt}+\alpha x_{2}(1-x_{ 2}^{2})=C(x_{1}-x_{2}), \tag{3}\]
where we have fixed the parameters \(\mu=0.01\), \(\alpha=-1\) and \(\gamma=-0.5\). The parameter \(C\) is the coupling constant, which is the only nonzero element of the continuous control scheme coupling matrix [20; 21]; it measures the strength of the coupling for the forcing signal and plays the role of the amplitude of the external forcing, i.e., of the time-delayed Duffing oscillator. The dissipation \(\mu\) is kept small in order to better appreciate the effects of the variation of
the driver system delay \(\tau\) and of the coupling constant \(C\) on the dynamics of the response system. The history functions of the driver system are \(u_{0}=v_{0}=1\), and the initial conditions of the response system are \(x_{0}=y_{0}=0.5\). We expect that our conclusions are of general validity and not specific for the considered boundary conditions. The potentials
\[Driver\rightarrow \frac{\alpha x^{2}}{2}+\frac{\alpha x^{4}}{4}-\frac{\gamma x^{2}}{2} \tag{4}\] \[Response\rightarrow \frac{\alpha x^{2}}{2}+\frac{\alpha x^{4}}{4} \tag{5}\]
and the fixed points of both systems are shown in Fig. 1. The unstable fixed point \(x_{0}=0\) is the same for the two potentials, while the stable fixed points of the driver system are
\[x_{*}^{DS}=\pm\sqrt{\frac{\alpha+\gamma}{\alpha}}=\pm 1.225, \tag{6}\]
and the stable fixed points of the response system are
\[x_{*}^{RS}=\pm\sqrt{\frac{\alpha}{\alpha}}=\pm 1. \tag{7}\]
Moreover, following [26], we perform the linear stability analysis for the fixed points. The characteristic equation of the linearized system is
\[\lambda^{2}+\mu\lambda+\alpha(1+3(x_{*}^{DS})^{2})+\gamma e^{\lambda\tau}=0. \tag{8}\]
Then, we take \(\lambda=\rho+i\omega\) as the eigenvalue associated with the equilibrium points. The critical stability curve can be found by fixing \(\rho=0\). Hence, we substitute \(\lambda=i\omega\) in the last equation and separate the real and imaginary parts obtaining the equations
\[\omega^{2}-\alpha(1+3(x_{*}^{DS})^{2})=\gamma\cos\omega\tau \tag{9}\] \[\mu\omega=\gamma\sin\omega\tau. \tag{10}\]
After squaring and adding both equations we obtain
\[(\omega^{2}-\alpha(1+3(x_{*}^{DS})^{2}))^{2}+(\mu\omega)^{2}=\gamma^{2}. \tag{11}\]
Then, substituting the parameter values \(\alpha=-1,\gamma=-0.5,\mu=0.01\), we can find four solutions, among which one is \(\omega=2.0004\) giving \(\tau=1.5505\) as the solution of the equation
\[\tau=\frac{\arccos\left((\omega^{2}-\alpha(1+3(x_{*}^{DS})^{2}))/\gamma\right)}{\omega}. \tag{12}\]
The \(\tau\) value just computed is where the fixed points lose stability and is shown as the red asterisk in Fig. 2(a).
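As a numerical cross-check of this computation, Eq. (11) can be read as a quadratic equation in \(\omega^{2}\), whose positive roots are then inserted into Eq. (12). The short Python sketch below is an illustration, not part of the original analysis; note that it takes the linearized restoring coefficient to be \(\alpha(1-3(x_{*}^{DS})^{2})\), i.e., the derivative of \(\alpha x(1-x^{2})\) at the fixed point, an assumption under which the quoted values \(\omega\approx 2.000\) and \(\tau\approx 1.55\) are recovered up to rounding.

```python
import numpy as np

mu, alpha, gamma = 0.01, -1.0, -0.5
xs2 = (alpha + gamma) / alpha            # squared fixed point of the driver, Eq. (6)
k = alpha * (1.0 - 3.0 * xs2)            # linearized restoring coefficient (see lead-in)

# Eq. (11) as a quadratic in u = omega^2: (u - k)^2 + mu^2 * u - gamma^2 = 0
u_roots = np.roots([1.0, mu**2 - 2.0 * k, k**2 - gamma**2]).real  # both roots are real here
for u in u_roots[u_roots > 0]:
    omega = np.sqrt(u)
    tau_c = np.arccos((u - k) / gamma) / omega                    # Eq. (12)
    print(f"omega = {omega:.4f}, critical tau = {tau_c:.4f}")
```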
Then, as already reported in [25; 26], the unforced time-delayed Duffing oscillator undergoes various bifurcations while \(\tau\) changes. We show such behaviors in Figs. 2(a) and (b), where we plot the oscillation amplitude and a diagram showing the maxima and minima of the oscillation amplitudes, respectively. This diagram has been plotted by representing on the figure the maxima and minima of the last 5 periods of the oscillations for each \(\tau\) value. Four regions are discernible in the figures. The first one (**I**), for \(\tau<1.53\), in which the oscillations converge to the fixed point. The second one (**II**), \(1.53<\tau<2.35\), where the oscillations are sustained and confined to one of the wells. The third one (**III**), \(2.35<\tau<3.05\), where the amplitude has jumped to a value bigger than the width of the well, which means that the trajectories move from one well to the other, and the oscillations are also aperiodic. The last one (**IV**), for \(\tau>3.05\), where the trajectories are no longer confined to either of the wells and the oscillations are periodic. The result is a limit cycle in phase space that spans both wells. All the simulations of the manuscript have been carried out with the DDE tools of Matlab and checked with a fourth-order Runge-Kutta integrator for the non-delayed case, with an integration step of 0.01. These behaviors of the driver system are depicted in Fig. 3.
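For readers wishing to reproduce this kind of time series without Matlab's DDE tools, a minimal sketch is given below. It is not the integrator used for the results of this paper: the function name, the low-order semi-implicit Euler scheme and the default step size are illustrative assumptions, and the step size must be kept small for the output to be meaningful. The delayed term is read from a buffer of past values, with the constant history \((u_{0},v_{0})=(1,1)\) used for \(t<\tau\).

```python
import numpy as np

def simulate(tau=1.0, C=0.06, h=1e-3, t_end=300.0,
             mu=0.01, alpha=-1.0, gamma=-0.5,
             hist=(1.0, 1.0), x2_init=(0.5, 0.5)):
    """Semi-implicit Euler integration of Eqs. (2)-(3) of the coupled system."""
    n = int(round(t_end / h))
    d = int(round(tau / h))                      # delay expressed in steps
    x1 = np.empty(n + 1)
    v1 = np.empty(n + 1)
    x2 = np.empty(n + 1)
    v2 = np.empty(n + 1)
    x1[0], v1[0] = hist
    x2[0], v2[0] = x2_init
    for k in range(n):
        x1_del = hist[0] if k < d else x1[k - d]                     # x1(t - tau)
        a1 = -mu * v1[k] - gamma * x1_del - alpha * x1[k] * (1.0 - x1[k]**2)
        a2 = -mu * v2[k] - alpha * x2[k] * (1.0 - x2[k]**2) + C * (x1[k] - x2[k])
        v1[k + 1] = v1[k] + h * a1               # update velocities first,
        v2[k + 1] = v2[k] + h * a2               # then positions (semi-implicit Euler)
        x1[k + 1] = x1[k] + h * v1[k + 1]
        x2[k + 1] = x2[k] + h * v2[k + 1]
    return np.linspace(0.0, n * h, n + 1), x1, x2
```

For instance, `simulate(tau=1.0, C=1.66)` produces the time series for the resonant parameter values discussed in the following sections.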
Figure 1: The double-well potentials and stable fixed points of the Duffing oscillator (in red) and of the delayed-Duffing oscillator (in blue).
## III The role of the coupling constant
In Fig. 4(a), we show the \(x\)-time series of the two systems when \(C=0\), i.e., without coupling, and with \(\tau=1\) and \(\mu=0.01\). In the figure, we can see in blue the driver system \(x\)-time series oscillations, that from now on we call \(x_{1}\), while in red the response system without coupling, that we call \(x_{2}\). We can appreciate how the two oscillators tend to their specific fixed points independently one from the other. Once we have seen how the two oscillators behave independently, from this point on we switch on the coupling constant so that the time-delayed Duffing starts to drive the Duffing oscillator without delay. The effects are shown in Fig. 4, in which in black we represent the response system \(x-\)time series for \(C=0.06\), from now on \(x_{2C}\). The coupling value has been chosen for explanatory purposes. In fact, it is the value for which it is possible to appreciate that the oscillations of \(x_{2C}\) are slightly displaced up towards the fixed point of the driver system, although they have not yet jumped into the other well. In Fig. 4(b), we show (the curve in blue) the absolute value of the asymptotic distance between \(x_{1}\) and \(x_{2C}\), \(|x_{1}-x_{2C}|\), for \(t>200\). Also, it is shown that the mean distance between the \(x\)-time series of the coupled response system and the \(x\)-time series of the driver system, black line, \(<x_{1}-x_{2C}>=2.1339\), is smaller than the
Figure 2: The figures show the oscillations amplitude \(A_{x_{1}}\) (a) and the maxima and minima diagram (b) of the driver system. We can appreciate the oscillations amplitudes (a) and the oscillator behaviors (b) in all the \(\tau\) regions of the driver system. The history functions for the time-delayed Duffing oscillator are constant \((u_{0},v_{0})=(1,1)\). The red asterisk in panel (a) is the value of \(\tau\) predicted, through the Eqs.(8 - 12), by the stability analysis at which the fixed points undergo a change of stability.
Figure 3: The figure shows the oscillations of the driver system in the stable regions defined in Fig. 2. Panels (a) and (b) show the driver system \(x\) oscillations and the orbit in the phase space for \(\tau\in\) region I, respectively. Panels (c) and (d) for \(\tau\in\) region II. Panels (e) and (f) for \(\tau\in\) region III. Panels (g) and (h) for \(\tau\in\) region IV.
mean distance of the \(x\)-time series of the uncoupled response system and the \(x\)-time series of the driver system, red line, \(<x_{1}-x_{2}>=2.2031\). This measures the level of synchronization between the two oscillators, following the standard definition of synchronization [27],
\[\lim_{t\rightarrow\infty}|x_{1}(t)-x_{2C}(t)|\to 0, \tag{13}\]
stating that if the mean of the asymptotic distance between the solutions of the two oscillators goes to zero, the two oscillators are synchronized. From now on, when we state that one case is more synchronized than another, it means that this definition has been used. To obtain the mean of the asymptotic distance, we have computed the absolute value of the difference between the last third part of the \(x\)-series of the two systems and then its mean value.
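A possible implementation of this measure (illustrative only; the function name is an assumption) is simply:

```python
import numpy as np

def mean_asymptotic_distance(x1, x2):
    """Mean of |x1 - x2| over the last third of the series, as a proxy for Eq. (13)."""
    x1, x2 = np.asarray(x1), np.asarray(x2)
    tail = slice(2 * len(x1) // 3, None)
    return float(np.mean(np.abs(x1[tail] - x2[tail])))
```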
Then, we set larger values of \(C\) for \(\tau=1\), as shown in Fig. 5. We can see that for \(C=1\), Fig. 5(a), the response system \(x\)-time series jumps into the other well, and its mean distance from the driver system, \(<x_{1}-x_{2C}>=0.2528\), becomes significantly smaller than in Fig. 4(b). Certainly, the mean distance between \(x_{1}\) and \(x_{2}\) does not change. Contrary to expectations, when the \(C\) value increases to \(C=1.66\), Fig. 5(b), the response system \(x\)-time series asymptotic oscillations grow larger and the mean distance increases to \(<x_{1}-x_{2C}>=0.4733\). Then, if the coupling constant increases further to \(C=3\), Fig. 5(c), the asymptotic oscillation amplitude decreases and so does the mean distance, \(<x_{1}-x_{2C}>=0.1460\), as it is supposed to. In other words, comparing Figs. 5(a)-(c), the oscillation amplitude of the response system grows, reaches a maximum and decreases as a function of the coupling constant. This effect of the coupling constant is comparable to the effect of an external forcing amplitude that induces a resonance.
## IV The combined effect of the coupling constant and the delay
Now to start the analysis of the mentioned coupling constant effect and its interaction with the delay \(\tau\), we plot Fig. 6. Here, we can find the following gradient plots. In Fig. 6(a), the oscillations amplitude of the response system, \(A_{x_{2C}}\). In Fig. 6(b) the mean distance between the driver system and the response system asymptotic behaviors, \(<|x_{1}(t)-x_{2C}(t)|>,t>200\). In Fig. 6(c) the response system oscillation frequency, \(\omega_{2}\), for the driver system regions
with periodic oscillations. Throughout the manuscript, the frequencies of the response system and of the driver are calculated using the fast Fourier transform. In Fig. 6(d), the difference between the oscillation frequencies of the driver system and the response system, \(|\omega_{1}-\omega_{2}|\). All the gradient plots are shown as a function of the delay \(\tau\) and the coupling constant \(C\). We can see in Fig. 6(a) that the oscillation amplitude of the response system grows with the delay \(\tau\) of the driver system, similarly to the driver system itself, as shown in Fig. 2. On the other hand, the oscillation amplitude grows and changes over some range of the coupling constant \(C\) that varies in every region. Then, Fig. 6(b) shows the level of synchronization between the two oscillators. In fact, every point in the figure represents the mean distance of the asymptotic behaviors of the oscillators' \(x\)-series. Thus, the smaller the mean distance between the two oscillators' \(x\)-series, the higher the level of synchronization. Counterintuitively, it is not true throughout the panel that the larger the coupling constant, the better the synchronization. In fact, the synchronization is better in some regions, like region I for \(C=1\) and \(\tau=1\), than for the case \(C=1.66\) at the same \(\tau\) value. To better understand the previous figure, we analyze the panels region by region and we study a particular case for a fixed \(\tau\) and varying \(C\) in the interesting cases.
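As an aside, the frequency estimate used in these plots can be obtained along the following lines. The helper below is an illustrative sketch (its name and the removal of the mean before transforming are assumptions): it returns the angular frequency of the strongest peak of the Fourier spectrum of an asymptotic time series sampled with step \(h\).

```python
import numpy as np

def dominant_angular_frequency(x, h):
    """Angular frequency of the strongest non-zero FFT peak of a signal x
    sampled with time step h (the mean is removed before transforming)."""
    x = np.asarray(x) - np.mean(x)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=h)
    return 2.0 * np.pi * freqs[1 + np.argmax(spectrum[1:])]
```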
Figure 4: Panel (a) shows the driver system and the response system \(x\)-time series at fixed \(\tau=1\). In blue the driver, in red the Duffing with \(C=0\) and in black the response system with \(C=0.06\). Panel (b) shows the distance between the driver system and the response system \(x\)-time series \(|x_{1}(t)-x_{2C}(t)|\) (the blue oscillations) and the mean distances \(<x_{1}-x_{2}>\) and \(<x_{1}-x_{2C}>\), the red and black lines, respectively.
### The coupling constant effect
**In region I** the oscillation amplitudes, Fig. 6(a), are smaller with respect to the other regions, the exception being the zone of lower \(\tau\) values and \(0\lesssim C\lesssim 2\). Then, an area of relatively higher amplitude is visible in the middle of the region. In fact, we can appreciate a higher amplitude "tubular structure" that crosses the region along the \(\tau\) values, but only for certain \(C\) values. This peak resembles a resonance peak. As a matter of fact, the response system oscillations are small outside "the tube" but are much larger inside it, as a consequence of the driver system. As expected for a resonance, there is a dependence on the amplitude of the external forcing, in this case \(C\). However, there is also a dependence
Figure 5: The figure shows the effect of a growing coupling constant on the dynamics of the response system, \(C=1,C=1.66\) and \(C=3\) at fixed \(\tau=1\). The panels show the driver system \(x\)-time series, \(x_{1}\) in blue, and the response system \(x_{2C}\) in red. In the panels it can be appreciated that the oscillation amplitude of the response system is larger in the \(C=1.66\) case. Moreover, counterintuitively, the synchronization of the two systems is better at \(C=1\) than at \(C=1.66\).
on the frequency of the external forcing, here the delay \(\tau\). In Fig. 6(b), we can notice a large difference between the two oscillators' \(x\)-series for \(C\) values outside and inside the resonance area. Thus, smaller \(C\) values can synchronize the two oscillators better than larger \(C\) values located inside the resonance area, as in the already discussed example of \(\tau=1\) in Fig. 5.
Now, we study a particular case, for which we consider for a given value of \(\tau=1\) a range of \(C\) values shown in Fig. 6. This \(\tau\) value lies in the region I depicted in Fig. 2. Thus,
Figure 6: The panels show (a) the gradient of the amplitude of the response system oscillator \(A_{x_{2C}}\). Then, (b) the gradient of the mean distance between the driver system and the response system. Finally (c), the gradient of the response system frequency \(\omega_{2}\), and (d) the gradient of the difference between the frequency of the driver system and the response system, \(|\omega_{1}-\omega_{2}|\). The frequencies have been calculated using the fast Fourier transform. All figures show gradient plots in function of the coupling constant \(C\) and the time delay \(\tau\). In panels (c) and (d), only the regions II and IV are represented, because the driver system frequency in region I is zero and region III is aperiodic, making the comparison with the response system meaningless.
we plot in Fig. 7 the asymptotic oscillation amplitude and the asymptotic behavior of the response system oscillations as a maxima-minima diagram, all of this for the coupling constant values \(0.01<C<4\). In particular, Fig. 7(a) shows the amplitude \(A_{x_{2C}}\), in which we can recognize a bell-shaped curve reminiscent of a resonance, with its maximum at \(C=1.66\), while the red line is the amplitude \(A_{x_{2}}\simeq 0.57\) of the response system with \(C=0\). Finally, the black line is the amplitude of the driver system, \(A_{x_{1}}\simeq 0\), whose oscillations fall into the fixed point. This peak is a section of the peak spanning the \(\tau\) space seen in Fig. 6. The appearance of the peak is recognizable in the maxima-minima diagram, Fig. 7(b). These last figures have been portrayed by plotting the maxima and minima of the last 5 periods of the oscillations, to show the oscillators' asymptotic behaviors. In the figure, the driver system asymptotic behavior also appears as a straight black line, which is constant because the change of \(C\) does not affect it. Also, in Fig. 7(a) we can spot a little peak around \(C=0.444\) that matches a change in the maxima-minima diagram of the response system in Fig. 7(b). This little peak is related to the small filiform zone around \(C\thickapprox 0.444\) that crosses all of region I along the \(\tau\) axis in Fig. 6(a). All of this shows how the coupling constant introduces a perturbation into the response system that, in addition to driving it, also forces the system. This triggers a resonance between the driver system and the oscillations of the response system.
**In region II** we can see that a zone of higher amplitudes, Fig. 6(a), starts for values of \(C\thickapprox 1\) and is connected to the high amplitude area of region I. In fact, we can recognize that the resonance area of region I continues into region II. These amplitude maxima spread over more \(C\) values until, on the right of the figure, they reach a wide zone before merging with the aperiodic region III. Then, we can distinguish a variety of peaks in the response system amplitude, most of them for \(C\lesssim 1\). These peaks seem to be generated by an erratic behavior of the response system, as a function of the coupling values, in response to the driver system. We call them _adjustment peaks_, because the response system is adjusting its behavior to the driver system before reaching the higher amplitude trend of the higher \(C\) values. In this peak zone, a small variation in the coupling constant can be decisive for the response system to change its asymptotic behavior into the driver system well. The name adjustment peaks comes from the fact that the coupling constant \(C\) at the peaks cannot overrule the dynamics of the response system, but it is able to stretch the orbits to
a larger amplitude. On the other hand, a slightly bigger or smaller value of \(C\) can drive the response system into the driver system well, so that the response system amplitude oscillates as a function of \(C\). In regions I and II, the gradient plot of the oscillations difference \(<|x_{1}(t)-x_{2C}(t)|>\), Fig. 6(b), shows that to obtain a better synchronization of the two oscillators, values of \(C\) outside the resonance area should be chosen. In Figs. 6(c) and (d), we can appreciate that the response system oscillation frequency \(\omega_{2}\) grows and the difference between the two oscillators' frequencies, \(|\omega_{1}-\omega_{2}|\), decreases as \(C\) grows. However, the difference grows at \(\tau\thickapprox 1.5\) and \(C\thickapprox 4\), because we are close to the aperiodic region III and some fluctuation can start. This behavior is not visible in the difference in amplitudes, Fig. 6(b), so the effect is just on the frequency. Here, we repeat what we have done before and study a particular case, here \(\tau=2\). In fact, in Figs. 8(a) and (b) we can see how the amplitude of the response system varies as a function of the coupling constant \(C\). The results can be visualized in the maxima-minima diagram, as shown in Fig. 8(b). As before, the black straight lines, which are the driver asymptotic behaviors, have been plotted for comparison with the response system in Fig. 8(b), and the behavior of the driver system does not change for different values of \(C\). In this set of figures, we can distinguish a variety of peaks in the response system amplitude, most of them for \(C\leq 1\). These peaks are the adjustment peaks that we described before. The peak at \(C=1\) is different because it starts the general tendency that continues for bigger values of \(C\), i.e., the amplitude decreases in order to adjust to the amplitude of the driver system. However, for \(C>1\) some little peaks call our attention since they disrupt the general tendency of the curve, in particular the one for \(C=3\). This can be generated by a resonance between the external forcing (the driver system) and the system (the response system). Finally, in Fig. 8(c), we show the change in the frequency \(\omega_{2}\) of the response system as a function of \(C\). Here, we can appreciate that, initially, \(\omega_{2}=1.3684\); then its value oscillates until it becomes \(\omega_{2}=\omega_{1}=1.5881\), for \(C>1\). We have to take into account that the frequency plot has been analyzed only in this case and for \(\tau\) values in region IV, because just in these two regions the oscillations of the driver system are periodic.
**In region III** the \(\tau\) values are in the aperiodic region, defined in Fig. 2, so the response system behaviors become aperiodic when the coupling value exceeds \(C\thickapprox 0.06\), see Figs. 6(a) and 6(b).
**In region IV** we can find high oscillation amplitudes for both oscillators, see Fig. 6(a). Here, we can see that the high oscillation amplitude grows for \(C\gtrsim 0.06\). Besides, the driver system shows a minimum in the oscillation amplitude at \(\tau=3.68\), and the same behavior appears in the response system for almost all the \(C\) values. In Fig. 6(b) we can see how the difference between the oscillators' \(x\)-series is maximum for very small values of \(C\), but then it starts to oscillate, growing and shrinking when peaks in amplitude show up, and finally becomes small for \(C\thickapprox 4\). As a result, the response system behaviors adjust to the driver system behavior in a complicated way inside the chosen range of \(C\) values. In fact, there are a lot of fluctuations in the response system amplitude, Fig. 6(a), and in the mean difference \(<|x_{1}(t)-x_{2C}(t)|>\), Fig. 6(b), except for \(\tau\thickapprox 4\) and \(C\thickapprox 4\). Similar oscillating behaviors can be found in the frequency plots, Figs. 6(c) and (d). In this last statement we do not take into account values of \(\tau\) near region III, because the influence of the aperiodic oscillations is still noticeable.
### The \(\tau\) effect
We know that, in a time-delayed system, the variation of the delay \(\tau\) is responsible for the modification of the frequency of the delay-induced oscillations [25], when they exist. So, an interesting question is: how do the resonance peaks spotted in the previous figures behave when \(\tau\) varies? To answer this question, we study the oscillations of the response system while the external forcing frequency, the driver system delay, changes. Therefore, we analyze Fig. 6 along the \(\tau\) axis.
We start this second part with a coupling constant \(\mathbf{C}=\mathbf{1}\), at the beginning of the Fig. 7(a) resonance peak and, by extension, on the border of the tubular structure described in Fig. 6(a). This coupling value gives a particular peak in the amplitude plot for \(\tau=2\), Fig. 8(a). Now, by analyzing Fig. 9(a), we can see that the response system amplitudes in region II are high, and the driver system is forcing the response system to oscillate at large amplitudes between its own fixed point and that of the driver system. On the other hand, in region I the oscillation amplitude follows the driver. In fact, we can see that the \(\tau\) value for which there is a change in the stability of the response system corresponds with the value predicted by the stability analysis of the driver. Analyzing the frequencies, Fig. 9(b), we can see that there is a correspondence of the values in region II. It is worth mentioning a
downward spike in \(\omega_{1}\) at \(\tau\thickapprox 3.1\), which does not appear in all the \(\omega-\tau\) plots, or appears differently when it does. It seems that, so close to the chaotic region III, some of its influence remains. In fact, the value of the frequency is still variable for \(\tau\) values slightly larger than \(3.05\). It is important to mention that this is the coupling value for which the curves of the two frequencies are most similar in region II, except for some values. The most interesting exception is at \(\tau=2\), where the oscillation amplitude reaches a maximum; there it is also possible to spot a peak in the frequency \(\omega_{2}\), Fig. 9(b).
Continuing our analysis brings us to the next value, \(\mathbf{C=1.66}\), see Figs. 9(c) and (d). This is the value for which a maximum of the response system amplitude is reached in Fig. 7(a), and it falls inside the tubular structure of Fig. 6(a). In Fig. 9(c), we can appreciate that the oscillation amplitudes in region I, region II and region III are enhanced with respect to the previous value of \(C\). Contrary to expectations, a larger coupling value with respect to the former one does not give a better synchronization of the two systems, due to the presence of the resonance. In fact, in the above-mentioned regions, the response system oscillation amplitude, at this value of the coupling constant, reaches its maximum and the \(\omega_{1}\) and \(\omega_{2}\) curves in region II lose coherence, Fig. 9(d). In the regions in which the high resonance amplitudes are present, the two oscillators are no longer synchronized in either the oscillation amplitude or the frequency.
The last case is \(\mathbf{C=3}\), Figs. 9(e) and (f), related to the small peak in Fig. 8(a). Here, the coupling constant is big enough to fall outside the resonance tubular structure described in Fig. 6(a) and far beyond the values that produce the peak in Fig. 7. In fact, the oscillation amplitudes are smaller than in the previous case and, since the coupling constant is large in the range that we have used, the oscillation amplitude of the response system follows the amplitude of the driver system better than in all the other cases. Also, the \(\tau\) value at which the response system changes stability corresponds with that of the driver system, see Fig. 9(e). In this case, the driver system and the response system frequencies match almost as well as in the \(C=1\) case in region II, or better in region IV. Definitely, they match better than in the \(C=1.66\) case. So, again, we recognize how the oscillation amplitude and the difference in oscillations between the driver and the response system grow, reach a maximum and decrease as a function of the coupling constant; how they vary as a function of \(\tau\) has also been studied.
Figure 7: The panels show a slice of the previous Fig. 6 for varying values of \(C\) and fixed \(\tau=1\). In particular, (a) portrays the amplitude of the oscillators, \(A_{x_{2C}}\) being the amplitude of the response system with the nonzero coupling term, \(A_{x_{2}}\) the amplitude of the response system without the coupling term and \(A_{x_{1}}\) the amplitude of the driver system. Then, (b) shows the maxima and minima diagram of the asymptotic behavior for the driver (the black straight line) and the response system. It is interesting to observe the amplitude peaks in panel (a): the smaller one for \(C=0.444\) and the larger one for \(C=1.66\), which suggests a resonance induced by the coupling of the two oscillators. In panel (b), the maxima and minima of the driver system overlap since the oscillator goes to the fixed point. It is also shown that the oscillations of the response system tend to the driver system fixed point (straight black line) for values of \(C\) bigger than \(0.06\) and outside the resonance area.
Figure 8: The panels show (a) the amplitude of the oscillators, \(A_{x_{2C}}\) being the amplitude of the response system with the nonzero coupling term, \(A_{x_{2}}\) the amplitude of the response system without the coupling term and \(A_{x_{1}}\) the amplitude of the driver system. Then, (b) shows the maxima and minima diagram of the asymptotic behavior for the driver (black straight lines) and the response system, for varying values of \(C\) and fixed \(\tau=2\). Some interesting peaks appear in the amplitude plot, due to a resonance. Finally, panel (c) shows \(\omega_{1}\) of the driver system in blue, which does not change when \(C\) changes, and \(\omega_{2}\) of the response system in red, which for \(C=0\) is \(\omega_{2}=1.3684\) and, when the coupling constant is larger than \(C\simeq 1.2\), becomes \(\omega_{2}=\omega_{1}\).
Figure 9: Panels (a), (c) and (e) show the oscillation amplitude and panels (b), (d) and (f) the frequency in regions II and IV for three coupling values. The first one is before the peak in Fig. 7(a), i.e., \(C=1\). The second is the value that gives the top of the peak, \(C=1.66\). The third one is beyond the peak, \(C=3\). Interestingly, in panel (a) we can see that in region I for \(C=1\), the response system follows the driver better than in the second case, panel (c), although the second coupling constant value is larger than the first one. The resonance peaks already mentioned are recognizable in regions I, II and III for \(C=1.66\) and in region II for \(C=1\). Finally, it is interesting that in region II the frequencies follow \(\omega_{1}\) better than in all the other cases.
Figure 10: Panels (a) and (b) show, respectively for \(C=3\) and for \(C=4\), the mean of the maxima and minima of the last 5 periods of the driver system (solid line) and of the response system (the dots). Panels (c) and (d) show in black the oscillation amplitude of the driver and in blue that of the response system; the red line is the amplitude of the Duffing with \(C=0\). Panels (e) and (f) show the differences of the \(x\)-series of the driver and the Duffing in red, and of the driver and the response system in blue. For these coupling constant values the response system dynamics adapt to the driver system and the delay-induced oscillations are completely transmitted from the driver system to the response system. Hence, we have modeled the built-in delay of the synchronization just using the delay in the driver system.
## V Delay-induced oscillations in a non-delayed system
In the introduction we wrote about a subsidiary objective of this work. We use the delayed driver system as the only excitation of the driven system and we want to ascertain for which values of the coupling constant its features are better transferred. In particular, coupling means an interchange of information between the driver and the driven system, and the speed of this interchange is finite. Thus, an in-built delay should be taken into account. This is the reason to study whether a delayed nonlinear oscillator can pass along its delay to the entire coupled system. In previous sections, we have seen that the best candidates are values of \(C\) outside the resonance areas. This means that, through the coupling mechanism, the driver system can transfer some of its delay-induced oscillations to the response system in a complicated way that depends on the \(\tau\) region. Thus, the response system starts to behave as a delayed oscillator. To visualize this effect we can focus on Fig. 9(e). Then, to analyze the phenomenon more deeply, we plot Fig. 10. In panels (a) and (b), we show the mean of the maxima-minima diagram of the last 5 periods of the orbit as a function of \(\tau\). The lighter line in the background is the mean of the maxima-minima diagram of the driver system. In Figs. 10(c) and (d) we show the oscillation amplitude of the driver and the response system. In Figs. 10(e) and (f), the distance between the driver and the response system \(x\)-series is shown in blue, and the distance between the driver and the Duffing for \(C=0\) in red. In these panels we can appreciate a good agreement between the driver and the response system. Looking at those figures we can state that, for those values of \(C\), the delay is transmitted, although not perfectly, from the driver system to the response system. In Figs. 10(a) and (c), in region II the effect of the Fig. 8 peak at \(C=3\) is visible as a zone of higher amplitudes. In fact, slightly smaller or larger values of \(C\), outside the mentioned peak, can guarantee a good enough match with the driver system, as in the \(C=4\) case. So, we can say with confidence that it is possible for the driver to carry the synchronization built-in delay just with its own delay. As we saw, the coupling constant value has to fall outside the regions of resonance. If the coupling constant is large enough, i.e., \(C>3\), the synchronization of the delay-induced oscillations is not just localized to a particular \(\tau\) region or value, as we saw in the case of \(C=1\) and \(\tau\) region I, but is generalized to all the regions. For \(C>4\) this effect does not change drastically in comparison with the case \(C=4\). Finally, we also show here that we can use the coupling constant to control what degree of delay-induced
oscillations we want to transfer to the response system in order to control the in-built delay due to the coupling.
## VI Conclusions
Two coupled systems have been studied: a time-delayed Duffing oscillator as the driver system and a Duffing oscillator without delay as the response system. The driver system plays two roles, the first one as the external forcing of the response system and the second one as being responsible for bringing the coupling built-in delay into the response system.
As regards the first role, the driver system, behaving as an external forcing, can induce a resonance in the response system, as we have seen in the case of \(C=1.66\) in regions I, II and III. Also, in the case of \(C=1\) a resonance shows up, but just in region II. Finally, some other resonance peaks pop up in regions I and II, which can be interpreted, by analogy with a periodic external forcing, as peaks related to other harmonics. An interesting feature is the adjustment peaks, which give rise to fractal-like zones in the \(C-\tau\) gradient plots. In those zones, the amplitude of the response system is highly sensitive to the \(C\) value. In fact, for very close values of the coupling constant the response system can fall into the driver system well or have its orbits stretched by the influence of both the driver system and the response system fixed points.
On the other hand, when the coupling constant takes values outside the resonance area, the delay-induced behaviors are better transferred from the driver system to the response system. Thus, the response system behaves as a time-delayed system, accounting for the finite velocity of the transmission of information through the coupling. The best synchronization over all the \(\tau\) values is reached at \(C=4\), but there are specific cases, such as \(\tau\) regions or values, that reach a good synchronization for smaller \(C\) values. As has been shown, the difference between the asymptotic oscillations of the two systems changes over the parameter set in a complicated way. Finally, the coupling constant can be used as a control parameter that allows us to decide how much of the delay-induced oscillations we want to be transmitted from the driver to the response system, always remembering that identical synchronization is impossible, as explained before. This means that a perfect transmission of the delay from the driver system to the response system is unattainable.
To summarize, both roles unveiled interesting properties in the response system behavior.
First of all, the coupling mechanism, acting as an external forcing, can trigger a resonance in the driven system oscillations. Second, it is possible to model the delay due to the coupling interchange of information just with the delay of the driver. Also, we have shown that a previous study can give suggestions as to which values of the coupling constant we should use in a specific case. It is not always possible to apply the strongest coupling value. Also, we can decide, through the coupling constant, how much of the delay-induced oscillations are transferred from one oscillator to the other.
## VII Acknowledgment
This work has been supported by the Spanish State Research Agency (AEI) and the European Regional Development Fund (ERDF, EU) under Project No. PID2019-105554GB-I00 (MCIN/AEI/10.13039/501100011033).
|
2301.04639 | Experimental verification of the temperature coefficient of resistivity | We have created an experimental procedure for determining the temperature
coefficient of resistivity, $\alpha_R$, for introductory physics laboratories.
This method examines the relationship between temperature and resistivity to
establish $\alpha_R$ within 10% of the accepted value. | Robert D. Polak, Michael R. Harris, Kiet A. Nguyen, Anthony Kearns | 2022-11-21T20:58:53Z | http://arxiv.org/abs/2301.04639v1 | # Experimental verification of the temperature coefficient of resistivity
###### Abstract
We have created an experimental procedure for determining the temperature coefficient of resistivity, \(\alpha_{R}\), for introductory physics laboratories. As in the procedure from Henry [1], this method examines the relationship between temperature and resistivity to establish \(\alpha_{R}\) within 10% of the accepted value.
Electrical resistivity, \(\rho\), varies with temperature according to:
\[\rho=\rho_{o}(1+\alpha_{R}(T-T_{o})) \tag{1}\]
where \(\rho_{o}\) is the resistivity for a given temperature \(T_{o}\), \(T\) is the temperature of the material, and \(\alpha_{R}\) is the temperature coefficient of resistivity. For a wire of length, \(L\), and cross-sectional area, \(A\), the resistance of a wire, \(R\), is defined accordingly as
\[R=\rho\frac{L}{A}. \tag{2}\]
While resistance will increase as a product of both increased length and resistivity, the increase in length provides a negligible increase in resistance. This is evident from observing that the thermal coefficient of resistivity is approximately two orders of magnitude larger than the coefficient of thermal expansion. As such, the change in resistivity is primarily responsible for the increase in resistance. Hence, \(R\) will vary similarly with
\[R=R_{o}(1+\alpha_{R}(T-T_{o})) \tag{3}\]
where \(R_{o}\) is the resistance of the wire at a temperature \(T_{o}\).
By applying a current through the wire, its temperature will also vary as a result of Joule heating, and the resistance of the wire can be measured based on a given current, \(I\), and difference in voltage, \(\Delta V\), by
\[R=\frac{\Delta V}{I}. \tag{4}\]
Hence, by measuring the resistance as a function of temperature, we can determine \(\alpha_{R}\) by plotting \(R\) vs. \(T-T_{o}\) and performing a linear fit using Eq. (3).
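As an illustration of this fitting step, the short sketch below performs the least-squares fit with NumPy. The function name, the choice of the first reading as \(T_{o}\) and the use of `numpy.polyfit` are assumptions made for illustration, not part of the published procedure.

```python
import numpy as np

def fit_alpha_R(T, R):
    """Least-squares fit of Eq. (3): R = R_o * (1 + alpha_R * (T - T_o)).

    T, R : arrays of temperature readings and measured resistances (R = V / I).
    T_o is taken to be the first (lowest-current) temperature reading.
    Returns (R_o, alpha_R).
    """
    dT = np.asarray(T) - T[0]
    slope, R0 = np.polyfit(dT, np.asarray(R), 1)   # R = R0 + (R0 * alpha_R) * dT
    return R0, slope / R0
```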
To perform the experiment, we created a closed circuit (see Fig. 1) in which a carbon steel wire [2] is suspended under tension above a surface, as in most stringed instruments. Two digital
multimeters were used to record the voltage across and current through a \(0.016\) inch (\(0.406\) mm) diameter, \(40\)-cm long wire. Temperature measurements were taken using liquid crystal thermometers [3] placed in thermal contact with the wire by fastening them to the wire with an adhesive backing. Two thermometers were used, one ranging from \(14-31^{o}C\) and the other from \(32-49^{o}C\), to provide an overall temperature range of \(14-49^{o}C\). For the most accurate temperature readings, we found it is essential to avoid all contact with the thermometers during the experiment. To collect the data, we applied different currents to the wire, ranging from \(0.2A\) to \(1.0A\). We used a BK Precision 1787B power supply that allows for digital control of the current. We found that using initial steps of \(0.2A\), later reduced to \(0.1A\), created consistent temperature changes in the wire of \(2-3^{o}C\). The experiment proved much more difficult to complete using an analog power supply because of the difficulty in creating the precise changes in current needed to have a well-formed data set. After allowing around \(30\) seconds for the system to reach thermal equilibrium, the recorded temperature of each trial was given as the uppermost visible temperature reading of the liquid crystal thermometer, as seen in Fig. 2. We recorded the temperature of, current through and voltage across the wire, and calculated its resistance using Eq. (4).
By graphing the resistance of the wire as a function of \(T-T_{o}\), where \(T_{o}\) is the temperature of the wire with the lowest current applied, we can then apply a linear fit to the data with the y-intercept yielding \(R_{o}\) and slope giving \(R_{o}\alpha_{R}\), according to Eq. (3). Figure 4 shows example experimental results with the fit giving \(R_{o}=0.744\Omega\) and \(\alpha_{R}=0.0039K^{-1}\). This is within \(5\%\) of the accepted value of \(\alpha_{R}=0.0041K^{-1}\)[4]. Repeated experiments found these results to be reproducible with \(\alpha_{R}\) consistently measured within \(10\%\) of the accepted value.
To get the best results, we found that the resistance of the wire should be at least \(0.3\Omega\) to allow for accurate resistance measurements. Furthermore, the wires need to be thick enough to support the thermometers. As such, we achieved the best results when using steel as opposed to other materials with lower resistivity and tensile strength, such as copper and aluminum.
This experiment can be completed in less than \(2\) hours and uses equipment that is commonly present in a typical introductory physics lab, with the exception of relatively low-cost supplies such as music wire and liquid crystal thermometers. It also reinforces key ideas from introductory physics, such as conservation of energy (electrical energy becoming thermal energy), Ohm's Law, and the temperature dependence of resistance.
|
2308.16736 | Operator splitting for semi-explicit differential-algebraic equations
and port-Hamiltonian DAEs | Operator splitting methods allow to split the operator describing a complex
dynamical system into a sequence of simpler subsystems and treat each part
independently. In the modeling of dynamical problems, systems of (possibly
coupled) differential-algebraic equations (DAEs) arise. This motivates the
application of operator splittings which are aware of the various structural
forms of DAEs. Here, we present an approach for the splitting of coupled
index-1 DAE as well as for the splitting of port-Hamiltonian DAEs, taking
advantage of the energy-conservative and energy-dissipative parts. We provide
numerical examples illustrating our second-order convergence results. | Andreas Bartel, Malak Diab, Andreas Frommer, Michael Günther | 2023-08-31T13:53:24Z | http://arxiv.org/abs/2308.16736v1 | # Operator splitting for semi-explicit differential-algebraic equations and port-Hamiltonian DAEs
###### Abstract
Operator splitting methods allow one to split the operator describing a complex dynamical system into a sequence of simpler subsystems and to treat each part independently. In the modeling of dynamical problems, systems of (possibly coupled) differential-algebraic equations (DAEs) arise. This motivates the application of operator splittings which are aware of the various structural forms of DAEs. Here, we present an approach for the splitting of coupled index-1 DAEs as well as for the splitting of port-Hamiltonian DAEs, taking advantage of the energy-conservative and energy-dissipative parts. We provide numerical examples illustrating our second-order convergence results.
Keywords:operator splitting, differential algebraic equations, port-Hamiltonian systems
## 1 Introduction
Operator splitting [4] is a powerful numerical tool for solving dynamical systems. By splitting the problem into a sequence of subproblems, one can exploit the different structural or physical properties of each subsystem independently. The operator splitting method is often used for solving initial value problems in ordinary differential equations (ODEs). It combines numerical integrators for the subsystems into an efficient scheme for the overall problem. More precisely, for an ODE-IVP
\[x^{\prime}(t)=f(x,t)=f_{1}(x,t)+f_{2}(x,t),\quad x(t_{0})=x_{0},\]
the so-called Lie-Trotter splitting method sequentially solves the dynamical systems driven by \(f_{1}\) and \(f_{2}\) with step size \(h\), where the subsystems are connected via their initial conditions. Strang splitting [4] solves the ODE systems driven by \(f_{1}\), \(f_{2}\), and again \(f_{1}\) with time steps \(h/2\), \(h\), and \(h/2\), respectively.
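For concreteness, a minimal sketch (our own illustration, not code from the references) of the two compositions reads as follows, where \(\varphi_1\) and \(\varphi_2\) denote one-step integrators for the subsystems driven by \(f_1\) and \(f_2\):

```python
# Minimal sketch of Lie-Trotter and Strang compositions for x' = f1(x,t) + f2(x,t).
# phi1 and phi2 are one-step integrators for the subsystems driven by f1 and f2.
def lie_trotter_step(x, t, h, phi1, phi2):
    x = phi1(x, t, h)              # subsystem driven by f1 over a full step h
    x = phi2(x, t, h)              # then subsystem driven by f2 over h
    return x

def strang_step(x, t, h, phi1, phi2):
    x = phi1(x, t, h / 2)          # f1 over h/2
    x = phi2(x, t, h)              # f2 over h
    x = phi1(x, t + h / 2, h / 2)  # f1 over h/2
    return x
```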
The question of extending the method from ODEs to DAEs has been addressed for DAE systems in a decoupled form [5]. However, in a more general setting, the DAE model is not given in a decoupled structure. Decoupling the DAE system is possible in principle but is not preferable for the application
of the splitting schemes, since we want to preserve the structure of the system and exploit it for the subsystems. This paper examines splitting methods for general coupled index-1 DAEs. In addition, it considers DAE systems in the port-Hamiltonian framework.
## 2 Coupled Semi-Explicit Index-1 DAEs
Semi-explicit index-1 DAEs arise in particular in network modeling, where the coupling of two or more networks can occur via algebraic and differential variables. Here we focus on the coupling of two networks modeled by an index-1 coupled system of DAEs of the form
\[y_{1}^{\prime} =f_{1}(y_{1},y_{2},z_{1},z_{2}), \tag{1a}\] \[0 =g_{1}(y_{1},y_{2},z_{1},z_{2}),\] (1b) \[y_{2}^{\prime} =f_{2}(y_{1},y_{2},z_{1},z_{2}),\] (1c) \[0 =g_{2}(y_{1},y_{2},z_{1},z_{2}), \tag{1d}\]
with
\[\frac{\partial(g_{1},g_{2})}{\partial(z_{1},z_{2})}\]
regular in a neighborhood of the solution. System (1) equipped with initial values
\[y(0)=y_{0}=\begin{bmatrix}y_{1,0}\\ y_{2,0}\end{bmatrix}\in\mathbb{R}^{n_{y}},\quad z(0)=z_{0}=\begin{bmatrix}z_{1,0}\\ z_{2,0}\end{bmatrix}\in\mathbb{R}^{n_{z}} \tag{2}\]
is assumed to have a unique solution \(y=(y_{1},y_{2})^{\top}:[0,T]\rightarrow\mathbb{R}^{n_{y}}\), \(z=(z_{1},z_{2})^{\top}:[0,T]\rightarrow\mathbb{R}^{n_{z}}\) where \([0,T]=:\mathcal{I}\) is a finite interval. The functions \(f_{1}\), \(f_{2}\), \(g_{1}\) and \(g_{2}\) are supposed to be sufficiently differentiable in the neighbourhood of the solution. To apply a splitting technique, we introduce the following "doubled" (the algebraic constraints appear twice) decomposition of the DAE overall system (1)
\[\begin{pmatrix}y_{1}^{\prime}\\ 0\\ y_{2}^{\prime}\\ 0\end{pmatrix}=\begin{pmatrix}f_{1}\\ 2g_{1}\\ f_{2}\\ 2g_{2}\end{pmatrix}=\underbrace{\begin{pmatrix}f_{1}\\ g_{1}\\ 0\\ g_{2}\end{pmatrix}}_{\text{subsystem 1}}+\underbrace{\begin{pmatrix}0\\ g_{1}\\ f_{2}\\ g_{2}\end{pmatrix}}_{\text{subsystem 2}}. \tag{3}\]
Based on the index-1 condition for the overall system and using the implicit function theorem, there exists a continuously differentiable function \(\varphi:\mathbb{R}^{n_{y}}\rightarrow\mathbb{R}^{n_{z}}\) such that \((z_{1},z_{2})=\varphi(y_{1},y_{2})\) which consequently allows to reduce the subsystems in (3) to the ODEs
\[\begin{cases}y_{1}^{\prime}=f_{1}(y_{1},y_{2},\varphi_{1}(y_{1},y_{2}))\\ y_{2}^{\prime}=0\end{cases}\qquad\text{and}\qquad\begin{cases}y_{1}^{\prime}=0 \\ y_{2}^{\prime}=f_{2}(y_{1},y_{2},\varphi_{2}(y_{1},y_{2}))\end{cases}. \tag{4}\]
The structure of the systems (4) leads to an operator splitting method based on the doubled decomposition (3), where the subsystems are given by
\[y_{1}^{\prime} =f_{1}(y_{1},y_{2},z_{1},z_{2}), y_{1}^{\prime} =0, \tag{5}\] \[0 =g_{1}(y_{1},y_{2},z_{1},z_{2}), 0 =g_{1}(y_{1},y_{2},z_{1},z_{2}),\] \[y_{2}^{\prime} =0, y_{2}^{\prime} =f_{2}(y_{1},y_{2},z_{1},z_{2}),\] \[0 =g_{2}(y_{1},y_{2},z_{1},z_{2}), 0 =g_{2}(y_{1},y_{2},z_{1},z_{2}),\]
which in this case is equivalent to an ODE operator splitting [4], so analogous convergence results follow. We note that with this splitting the algebraic constraints are solved twice. In terms of efficiency, this is not a major limitation as long as the number of algebraic constraints is small compared to the number of differential variables.
Proposition 1: _Solving the two subsystems of (5) sequentially on subintervals \([t_{n},t_{n+1}]\) of \(\mathcal{I}\) with at least first order convergent time integration methods yields a first order convergent method on \(\mathcal{I}\). Furthermore, applying Strang splitting with at least second order time integration schemes yields a method of second order._
Proposition 1 follows directly from known convergence results for ODE operator splitting; see [4] and [5].
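The following sketch illustrates Proposition 1 for the doubled decomposition (3)/(5): each subsystem is advanced with one implicit Euler step (first order), keeping both algebraic constraints in both substeps. The right-hand sides \(f_1,f_2,g_1,g_2\) below are toy functions chosen only for illustration; they are not a model from this paper.

```python
# Lie-Trotter splitting for the doubled decomposition (5) of a coupled index-1 DAE.
# Toy right-hand sides (our choice, for illustration only); dg/dz is regular here.
import numpy as np
from scipy.optimize import fsolve

def f1(y1, y2, z1, z2): return -y1 + z2
def f2(y1, y2, z1, z2): return -2.0 * y2 + z1
def g1(y1, y2, z1, z2): return z1 - (y1 + y2)
def g2(y1, y2, z1, z2): return z2 - (y1 - y2)

def substep(which, y, z, h):
    """One implicit Euler step of subsystem `which` in (5); both constraints are kept."""
    y1n, y2n = y
    def res(u):
        y1, y2, z1, z2 = u
        if which == 1:                       # subsystem 1: advance y1, freeze y2
            r1 = (y1 - y1n) - h * f1(y1, y2, z1, z2)
            r2 = y2 - y2n
        else:                                # subsystem 2: freeze y1, advance y2
            r1 = y1 - y1n
            r2 = (y2 - y2n) - h * f2(y1, y2, z1, z2)
        return [r1, r2, g1(y1, y2, z1, z2), g2(y1, y2, z1, z2)]
    y1, y2, z1, z2 = fsolve(res, [*y, *z])
    return np.array([y1, y2]), np.array([z1, z2])

def lie_trotter(y0, z0, h, n_steps):
    y, z = np.array(y0, float), np.array(z0, float)
    for _ in range(n_steps):
        y, z = substep(1, y, z, h)
        y, z = substep(2, y, z, h)
    return y, z

# consistent initial values: g1 = g2 = 0 at (y0, z0)
y, z = lie_trotter(y0=[1.0, 0.5], z0=[1.5, 0.5], h=1e-3, n_steps=1000)
```

A Strang variant is obtained by composing the two substeps as half step, full step, half step, with a second-order integrator per subsystem.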
Remark 1: If we have only coupling via differential variables, i.e., \(\frac{\partial f_{1}}{\partial z_{2}}=\frac{\partial g_{1}}{\partial z_{2}}=0\), and \(\frac{\partial f_{2}}{\partial z_{1}}=\frac{\partial g_{2}}{\partial z_{1}}=0\) in system (1), then we can proceed with the splitting without considering the doubled decomposition. In this case, the subsystems in (5) are given as follows:
\[y_{1}^{\prime} =f_{1}(y_{1},y_{2},z_{1}), y_{1}^{\prime} =0, \tag{6}\] \[0 =g_{1}(y_{1},y_{2},z_{1}), y_{2}^{\prime} =f_{2}(y_{1},y_{2},z_{2}),\] \[y_{2}^{\prime} =0, 0 =g_{2}(y_{1},y_{2},z_{2}).\]
Remark 2: A particular coupling type of system (1), where we have a dedicated coupling equation, has the form
\[y_{1}^{\prime} =f_{1}(y_{1},y_{2},z_{1}), \tag{7a}\] \[0 =g_{1}(y_{1},y_{2},z_{1},u),\] (7b) \[y_{2}^{\prime} =f_{2}(y_{1},y_{2},z_{2}),\] (7c) \[0 =g_{2}(y_{1},y_{2},z_{2},u),\] (7d) \[0 =k(y_{1},y_{2},z_{1},z_{2}), \tag{7e}\]
where \(k\) is a coupling equation and \(u\) a dedicated Lagrangian coupling variable. Such coupled systems arise, for example, in circuit simulation or multibody dynamics [1], and an operator splitting scheme can be similarly applied by considering the decomposition
\[\begin{pmatrix}y_{1}^{\prime}\\ 0\\ y_{2}^{\prime}\\ 0\\ 0\end{pmatrix}=\begin{pmatrix}f_{1}\\ 2g_{1}\\ f_{2}\\ 2g_{2}\\ 2k\end{pmatrix}=\underbrace{\begin{pmatrix}f_{1}\\ g_{1}\\ 0\\ g_{2}\\ k\end{pmatrix}}_{\text{subsystem 1}}+\underbrace{\begin{pmatrix}0\\ g_{1}\\ f_{2}\\ g_{2}\\ k\end{pmatrix}}_{\text{subsystem 2}}. \tag{8}\]
## 3 Port-Hamiltonian DAEs
In this section we are interested in applying the operator splitting method to port-Hamiltonian systems (PHS) of differential-algebraic equations. An operator splitting for PHS-ODEs is considered in [6], based on the natural decomposition into the power-conserving part and the dissipative part. Unlike ODEs, DAEs involve a singular leading matrix and possibly hidden constraints. Different approaches such as index reduction and decoupling the system--after which the system structure might be lost--allow for the application of ODE numerical methods to DAEs. Our goal is to avoid decoupling, so that we can preserve the system's structure.
Definition 1: For \(t\in\mathcal{I}\) and a state variable \(x:\mathcal{I}\to\mathbb{R}^{n}\), a PHS with quadratic Hamiltonian \(H(x)=\frac{1}{2}x^{\top}Q^{\top}Ex\) is given by
\[Ex^{\prime}(t) =(J-R)Qx(t)+Bu(t), \tag{9a}\] \[y(t) =B^{\top}Qx(t) \tag{9b}\]
where \(E,Q,J\) and \(R\) are real \(n\times n\) matrices satisfying \(E^{\top}Q=Q^{\top}E\), \(R=R^{\top}\geq 0\), \(J=-J^{\top}\), \(B\) is an \(n\times m\) matrix, and \(Q\) is assumed to be symmetric positive definite. The functions \(u\) and \(y\) are referred to as the inputs and outputs, respectively. The matrix \(E\) need not have full rank.
A fundamental property for the solution of (9) is the dissipativity inequality
\[\frac{d}{dt}H(x(t))=\partial_{x}H(x(t))x^{\prime}(t)=-x^{\top}Q^{\top}RQx+y^{ \top}(t)u(t)\leq y^{\top}(t)u(t). \tag{10}\]
The dissipativity inequality (10) can be also written in integral form
\[H(x(t_{0}+h))-H(x(t_{0}))\leq\int_{t_{0}}^{t_{0}+h}y(s)^{\top}u(s)\,ds. \tag{11}\]
For the splitting of a system of the form (9), we note that the regularity of the matrix pencils \(\{E,J\}\) and \(\{E,R\}\) is not guaranteed in applications. As an alternative, we adapt the idea of \(\varepsilon\)-embedding methods [3] and apply the splitting method to the perturbed system
\[E_{\varepsilon}x^{\prime}(t) =(J-R)Qx(t)+Bu(t), \tag{12a}\] \[y(t) =B^{\top}Qx(t) \tag{12b}\]
with the perturbed flow matrix \(E_{\varepsilon}=E+\varepsilon I\) representing a regularization of \(E\). The system of equations (12) is a PHS-ODE with the Hamiltonian \(H_{\varepsilon}(x)=\frac{1}{2}x^{\top}Q^{\top}E_{\varepsilon}x\). Denoting by the pair \((x_{\varepsilon},y_{\varepsilon})\) the solution of system (12), we have
\[H(x_{\varepsilon}(t_{0} +h))-H(x_{\varepsilon}(t_{0}))=\Big{(}H(x_{\varepsilon}(t_{0}+h)) -H_{\varepsilon}(x_{\varepsilon}(t_{0}+h))\Big{)}\] \[\quad+\Big{(}H_{\varepsilon}(x_{\varepsilon}(t_{0}+h))-H_{ \varepsilon}(x_{\varepsilon}(t_{0}))\Big{)}+\Big{(}H_{\varepsilon}(x_{ \varepsilon}(t_{0}))-H(x_{\varepsilon}(t_{0}))\Big{)}\] \[\leq-\frac{\varepsilon}{2}x_{\varepsilon}(t_{0}+h)^{\top}Qx_{ \varepsilon}(t_{0}+h)+\int_{t_{0}}^{t_{0}+h}\!\!\!y_{\varepsilon}(s)^{\top}u(s )\,ds+\frac{\varepsilon}{2}x_{\varepsilon}(t_{0})^{\top}Qx_{\varepsilon}(t_{ 0})\] \[\leq\int_{t_{0}}^{t_{0}+h}y_{\varepsilon}(s)^{\top}u(s)\,ds+ \frac{\varepsilon}{2}\Big{|}\|x_{\varepsilon}(t_{0}+h)\|_{Q}-\|x_{ \varepsilon}(t_{0})\|_{Q}\Big{|},\]
where \(\|\cdot\|_{Q}\) is the \(Q\)-norm. We point out that the second term on the right-hand side, which is of order \(\varepsilon\cdot h\), is negligible as \(\varepsilon\to 0\). The solution \(x_{\varepsilon}\) of the perturbed system (12) converges to that of system (9), see [3], and thus \(y_{\varepsilon}\) converges to \(y\) for \(\varepsilon\to 0\). Furthermore, the integrand \(y_{\varepsilon}^{\top}u\) remains bounded, so by the dominated convergence theorem \(\int_{0}^{h}y_{\varepsilon}(s)^{\top}u(s)\,ds\) converges to \(\int_{0}^{h}y(s)^{\top}u(s)\,ds\) for \(\varepsilon\to 0\). Hence the dissipation inequality (10) is satisfied in the limit and is violated less and less as \(\varepsilon\to 0\).
The splitting of the port-Hamiltonian system (12) is performed by splitting the right-hand side into an energy-preserving part \(f_{1}(x,t):=JQx(t)\) and a dissipative part \(f_{2}(x,t):=-RQx(t)+Bu(t)\). The Strang splitting method yields the approximate value \(w_{\varepsilon}(h/2)\) and consequently offers a second-order approximation for the exact solution, summarized as follows:
\[z_{\varepsilon}^{\prime}(t) =f_{1}(z_{\varepsilon},t),\ \ y_{z,\varepsilon}=B^{\top}Qz_{\varepsilon}(t),\ \ z_{\varepsilon}(0)=x_{0},\ \ t\in[0,\tfrac{h}{2}],\] \[v_{\varepsilon}^{\prime}(t) =f_{2}(v_{\varepsilon},t),\ \ y_{v,\varepsilon}=B^{\top}Qv_{\varepsilon}(t),\ \ v_{\varepsilon}(0)=z_{\varepsilon}(\tfrac{h}{2}),\ \ t\in[0,h],\] \[w_{\varepsilon}^{\prime}(t) =f_{1}(w_{\varepsilon},t),\ \ y_{w,\varepsilon}=B^{\top}Qw_{\varepsilon}(t),\ \ w_{\varepsilon}(0)=v_{\varepsilon}(h),\ \ t\in[0,\tfrac{h}{2}].\]
If the port-Hamiltonian system (12) is solved numerically using, for instance, Strang splitting, provided that the used numerical integration method is at least of second order, then the dissipativity inequality is preserved on a discrete level [6]. For the exact solution it is given by
\[H_{\varepsilon}(w_{\varepsilon}(\tfrac{h}{2}))-H_{\varepsilon}(x _{0}) =\big{(}H_{\varepsilon}(w_{\varepsilon}(\tfrac{h}{2}))-H_{ \varepsilon}(w_{\varepsilon}(0))\big{)}+\big{(}H_{\varepsilon}(v_{\varepsilon }(h))-H_{\varepsilon}(v_{\varepsilon}(0))\big{)}\] \[\quad+\big{(}H_{\varepsilon}(z_{\varepsilon}(\tfrac{h}{2}))-H_{ \varepsilon}(x_{0})\big{)}\] \[\leq\int_{0}^{h}y_{v,\varepsilon}(s)^{\top}u(s)\,ds.\]
Note that only the second step in the Strang splitting method contributes to the inequality, the first and third being energy-conserving.
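A minimal sketch of the resulting scheme is given below; it assumes constant coefficient matrices as in (12) and advances each substep with the implicit midpoint rule, which is second order, so the Strang composition retains second order. The routine is our own illustration, not code from an existing package.

```python
# Strang splitting for the regularized PHS-ODE (12): E_eps x' = (J - R) Q x + B u(t),
# with conservative part f1 = J Q x and dissipative part f2 = -R Q x + B u(t).
import numpy as np

def _implicit_midpoint(Eeps, A, b, x, dt):
    # one implicit midpoint step for Eeps x' = A x + b (A, b constant over the step)
    return np.linalg.solve(Eeps - 0.5 * dt * A, (Eeps + 0.5 * dt * A) @ x + dt * b)

def strang_phs_step(x, t, h, E, J, R, Q, B, u, eps=1e-5):
    Eeps = E + eps * np.eye(x.size)                  # regularization E_eps = E + eps*I
    zero = np.zeros(x.size)
    x = _implicit_midpoint(Eeps, J @ Q, zero, x, h / 2)           # conservative half step
    x = _implicit_midpoint(Eeps, -R @ Q, B @ u(t + h / 2), x, h)  # dissipative full step
    x = _implicit_midpoint(Eeps, J @ Q, zero, x, h / 2)           # conservative half step
    return x

def hamiltonian(x, Q, E, eps=1e-5):
    # H_eps(x) = 1/2 x^T Q^T E_eps x, useful to monitor the dissipativity inequality
    return 0.5 * x @ Q.T @ (E + eps * np.eye(x.size)) @ x
```

Monitoring `hamiltonian` along the discrete trajectory allows one to check that it increases at most by the supplied energy, in line with the discussion above.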
## 4 Numerical Results
We show numerical results demonstrating the order of convergence for the operator splitting method considered in the previous sections.
Example 1: _(Coupled LC Oscillator)._ We consider the coupled, linear circuit in Figure 1 consisting of two resistors \(R_{1}\), \(R_{2}>0\), two capacitors \(C_{1}\), \(C_{2}>0\) and two inductors \(L_{1}\), \(L_{2}>0\). By \(u_{1},u_{2},u_{3}\) and \(u_{4}\) we denote the node potentials, \(j_{L,1},j_{L,2}\) are the currents through \(L_{1}\) and \(L_{2}\) respectively, and \(j_{co}\) is a coupling current. The circuit is modeled by the following two subsystems.
subsystem 1: subsystem 2:
\[C_{1}\frac{d}{dt}u_{1}-\frac{1}{R_{1}}(u_{2}-u_{1}) =0, -\frac{1}{R_{2}}(u_{4}-u_{3})+j_{L,2}-j_{co} =0,\] \[\frac{1}{R_{1}}(u_{2}-u_{1})+j_{L,1}+j_{co} =0, L_{2}\frac{d}{dt}j_{L,2}-u_{3} =0,\] \[L_{1}\frac{d}{dt}j_{L,1}-u_{2} =0, C_{2}\frac{d}{dt}u_{4}+\frac{1}{R_{2}}(u_{4}-u_{3}) =0,\] \[u_{2}-u_{3} =0.\]
Both subsystems, as well as the overall coupled system, are of index one. We compute a reference solution for the overall system using the midpoint rule with time step \(h=10^{-7}\). A comparison is then made by sequentially solving subsystems 1 and 2 using the Lie-Trotter and Strang splitting approaches. The time integration of the subsystems is performed with the midpoint rule as a second-order scheme, with time steps \(h\) ranging between \(10^{-5}\) and \(10^{-7}\). The results in Figure 2 verify that the orders of convergence of the Lie-Trotter and Strang splitting approaches are indeed 1 and 2, respectively.
Example 2: For \(x^{\top}=(u_{1},u_{2},u_{3},j_{1},u_{4},u_{5})\) the electric circuit in Figure 3 is modeled by
\[Ex^{\prime}=(J-R)x+Bi(t)\;\;\text{with}\;E=\text{diag}(0,C_{1},0,L_{1},0,C_{2}).\]
Figure 1: Subsystems of coupled oscillator circuit via coupling equation.
Using \(R_{1}=R_{2}=R_{3}=R_{4}=0.5,R_{5}=5,C_{1}=C_{2}=5\cdot 10^{-4},L=20\) and \(i(t)=\sin(2\pi\cdot 50t)\cdot\sin(2\pi\cdot 500t)\), we compute a reference solution for the model equations describing the circuit in Fig. 3 using the midpoint rule and a step size of \(h=10^{-8}\). In the first step, we compare the solution of the original system to the solution of the perturbed PHS-DAE problem (12) for small values of \(\varepsilon\). We solve using the implicit Euler method with \(h=10^{-5}\); the plots are shown in Fig. 4 for \(\varepsilon=10^{-4}\) and \(\varepsilon=10^{-5}\), respectively. We observe the convergence of the solution of the perturbed problem to the solution of the original problem as \(\varepsilon\) approaches zero. Similar to the previous example, we implement the Lie-Trotter and Strang splittings. For time integration we use the midpoint rule with \(h=10^{-5}\), and the convergence results are plotted in Fig. 5, illustrating again the first- and second-order convergence of the two splitting methods.
## Conclusions and Outlook
In this paper we introduced a "doubled" decomposition for semi-explicit DAEs, on which our operator splitting is based. This allows each smaller subproblem to be treated more effectively, with techniques that are efficient and that preserve the essential characteristics of the original problem. We additionally applied the operator splitting to PHS-DAEs after a perturbation that avoids non-regularity. The numerical tests we considered demonstrate second-order convergence for the Strang splitting approach. Further work will consider
Figure 3: Electric circuit for Example 2
Figure 2: Circuit 1: order of convergence for some of the state variables.
the non-regularity of the split DAE matrix pencils, index-2 DAEs and higher order cases.
|
2309.05305 | Fully-Connected Spatial-Temporal Graph for Multivariate Time-Series Data | Multivariate Time-Series (MTS) data is crucial in various application fields.
With its sequential and multi-source (multiple sensors) properties, MTS data
inherently exhibits Spatial-Temporal (ST) dependencies, involving temporal
correlations between timestamps and spatial correlations between sensors in
each timestamp. To effectively leverage this information, Graph Neural
Network-based methods (GNNs) have been widely adopted. However, existing
approaches separately capture spatial dependency and temporal dependency and
fail to capture the correlations between Different sEnsors at Different
Timestamps (DEDT). Overlooking such correlations hinders the comprehensive
modelling of ST dependencies within MTS data, thus restricting existing GNNs
from learning effective representations. To address this limitation, we propose
a novel method called Fully-Connected Spatial-Temporal Graph Neural Network
(FC-STGNN), including two key components namely FC graph construction and FC
graph convolution. For graph construction, we design a decay graph to connect
sensors across all timestamps based on their temporal distances, enabling us to
fully model the ST dependencies by considering the correlations between DEDT.
Further, we devise FC graph convolution with a moving-pooling GNN layer to
effectively capture the ST dependencies for learning effective representations.
Extensive experiments show the effectiveness of FC-STGNN on multiple MTS
datasets compared to SOTA methods. The code is available at
https://github.com/Frank-Wang-oss/FCSTGNN. | Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, Xiaoli Li, Lihua Xie, Zhenghua Chen | 2023-09-11T08:44:07Z | http://arxiv.org/abs/2309.05305v3 | # Fully-Connected Spatial-Temporal Graph for Multivariate Time Series Data
###### Abstract
Multivariate Time-Series (MTS) data is crucial in various application fields. With its sequential and multi-source (multiple sensors) properties, MTS data inherently exhibits Spatial-Temporal (ST) dependencies, involving temporal correlations between timestamps and spatial correlations between sensors in each timestamp. To effectively leverage this information, Graph Neural Network-based methods (GNNs) have been widely adopted. However, existing approaches separately capture spatial dependency and temporal dependency and fail to capture the correlations between Different sEnsors at Different Timestamps (DEDT). Overlooking such correlations hinders the comprehensive modelling of ST dependencies within MTS data, thus restricting existing GNNs from learning effective representations. To address this limitation, we propose a novel method called Fully-Connected Spatial-Temporal Graph Neural Network (FC-STGNN), including two key components namely FC graph construction and FC graph convolution. For graph construction, we design a decay graph to connect sensors across all timestamps based on their temporal distances, enabling us to fully model the ST dependencies by considering the correlations between DEDT. Further, we devise FC graph convolution with a moving-pooling GNN layer to effectively capture the ST dependencies for learning effective representations. Extensive experiments show the effectiveness of FC-STGNN on multiple MTS datasets compared to SOTA methods.
## 1 Introduction
Multivariate Time-Series (MTS) data has gained popularity owing to its extensive utilization in various real-world applications such as predictive maintenance and healthcare [13, 14]. Considering its sequential property together with multiple data sources, e.g., sensors, MTS data exhibits Spatial-Temporal (ST) dependencies, including temporal correlations between timestamps and spatial correlations between sensors in each timestamp. Traditional approaches mainly focus on capturing temporal dependencies by employing temporal encoders, disregarding spatial dependencies and thus limiting their ability to learn effective representations [12, 13]. To address this limitation, Graph Neural Network-based methods (GNNs) have emerged as popular solutions to exploit ST dependencies within MTS data [14, 15].
GNNs are always combined with temporal encoders to capture ST dependencies. The process begins with the construction of ST graphs, where separate graphs are constructed for each timestamp, representing the relationships between sensors over both time and space. To capture the ST dependencies, existing methods [14, 15] primarily adopt a two-step approach, incorporating GNNs and temporal encoders to capture the spatial dependency and temporal dependency separately. As shown in Fig. 1, GNNs are initially employed to capture spatial dependencies between sensors at each timestamp, and then temporal encoders capture temporal dependencies for corresponding sensors across different timestamps (The order might be reversed).
These works have shown improved performance compared to conventional methods using temporal encoders alone. However, they process each graph independently, overlooking the correlations between Different sEnsors at Different Timestamps (DEDT), e.g., the correlation between \(x_{T}^{2}\) and \(x_{T-1}^{3}\) in Fig. 1. These correlations are crucial in
Figure 1: ST graphs are constructed from MTS data, creating separate graphs for each timestamp, to capture ST dependencies. In step 1, GNN captures the spatial dependency within each graph, e.g., [\(x_{T}^{1}\), \(x_{T}^{2}\), \(x_{T}^{3}\), \(x_{T}^{4}\)]. In step 2, temporal encoders capture temporal dependencies for the corresponding sensors across different timestamps, e.g., [\(x_{T-1}^{1}\), \(x_{T}^{1}\), \(x_{T+1}^{2}\)]. However, this approach overlooks the correlations between different sensors at different timestamps, e.g., \(x_{T-1}^{3}\) and \(x_{T}^{2}\), failing to model comprehensive ST dependencies.
modelling comprehensive ST dependencies within MTS data. For instance, we consider a machine health detection scenario where a temperature sensor is highly correlated with a fan speed sensor. In this case, not only are the two sensors at the same timestamp highly correlated, but the past temperature would also influence the future fan speed, resulting in the correlations between DEDT. Due to limitations in graph construction and graph convolution, existing methods fail to effectively capture the correlations between DEDT, restricting their ability to model the comprehensive ST dependencies within MTS data.
To solve the above limitation, we propose a novel method called Fully-Connected Spatial-Temporal Graph Neural Network (FC-STGNN), which consists of two key components: FC graph construction and FC graph convolution, together to capture the comprehensive ST dependencies within MTS data. For graph construction, we introduce an FC graph to establish full connections between all sensors across all timestamps, enabling us to fully model the ST dependencies within MTS data by additionally considering the correlations between DEDT. The process begins by segmenting each MTS sample into multiple patches, each corresponding to a timestamp, and then encoding the signals of each sensor as sensor features. The sensors across all patches are fully connected through dot-product computations. To improve the FC graph, we design a decay matrix by considering the temporal distances between these patches, assigning larger correlations to closer patches. This design ensures that the temporally close sensors exhibit stronger correlations compared to those that are temporally distant.
We then design FC graph convolution to effectively capture the ST dependencies within the FC graph. While a naive approach would directly perform graph convolution across the entire graph by considering all sensors across all patches, we recognize that this may fail to capture local temporal patterns within MTS data, similar to how Convolutional Neural Networks (CNNs) adopt local convolution to capture local patterns within images instead of directly stacking all pixels. Additionally, using all sensors across all patches introduces unnecessary computational costs. To address this, we propose a moving-pooling GNN layer, which adopts moving windows with a specific size to slide along patches. Within each window, graph convolution is performed to update node features through edge propagation. Subsequently, a temporal pooling operation is applied to obtain high-level sensor features. After multiple parallel layers of moving-pooling GNN, we acquire the updated sensor features, which are then stacked and mapped to obtain final representations.
In summary, our contributions are three folds. First, we propose a fully-connected ST graph to explicitly model the correlations between sensors across all timestamps. By designing a temporal distance-based decay matrix, we improve the constructed graph, effectively modelling the comprehensive ST dependencies within MTS data. Second, we propose a moving-pooling GNN layer to effectively capture the ST dependencies from the constructed graph for learning effective representations. It introduces a moving window to consider local ST dependencies, followed by a temporal pooling operation to extract high-level features. Third, we conduct extensive experiments to show the effectiveness of our method for effectively modelling and capturing the complex ST dependencies within MTS data.
## 2 Related Work
Conventional methods for MTS data. Due to the inherent sequential nature of MTS data, traditional methods primarily focused on capturing temporal correlations between timestamps. This is often achieved by leveraging temporal encoders such as CNNs, Long Short-Term Memory (LSTM), and Transformers. Due to their popularity in computer vision, 1D-CNNs were applied first [14, 15, 16, 17, 18, 19]. These models employed 1D-CNN as encoders to extract temporal features, which were then used for downstream tasks. Additionally, some investigations explored 2D-CNN models, treating MTS data as two-dimensional images [16]. LSTM-based models are another branch for capturing temporal dependency in MTS data, owing to their ability to capture long-term dependencies [15, 17, 18]. More recently, due to their powerful attention mechanism, Transformers [16] have become popular, and extensive Transformer-based works have been developed to maximize their potential to capture temporal correlations within MTS data [15, 16, 17].
While these methodologies have greatly advanced MTS analysis, they overlooked the spatial dependency within MTS data which originates from its multi-source nature, i.e., signals are collected from multiple sensors. The dependency represents the spatial correlations between these sensors, which play important roles in fully modelling MTS data. For instance, in a scenario involving machine status detection, a temperature sensor's readings would correlate with those of a fan speed sensor. Overlooking the spatial dependency restricts the ability to fully model MTS data, resulting in limited performance when learning effective representations.
GNN for MTS dataIn recent years, a growing number of researchers have recognized the significance of incorporating spatial dependencies into the learning of MTS data representations [15]. To achieve that, a common approach is to leverage GNN, generally involving the combination of GNN with other temporal encoders, such as 1D-CNN, to capture the spatial dependency and temporal dependency respectively [16, 17, 18, 19, 15, 14, 13]. For example, HierCorrPool [16] designed sequential graphs and adopted CNN to capture temporal dependency within these graphs. Subsequently, GNNs were utilized to capture the spatial dependencies between sensors within each graph. GraphSleepNet [16] also introduced sequential graphs and designed a CNN-GNN encoder to capture ST dependencies within MTS data for sleep stage classification. HAGCN [17] employed LSTM to extract temporal features, which were then used to construct graphs that were further processed by GNN. These researchers have made significant
contributions by leveraging GNN to capture spatial dependencies within MTS data. However, as previously discussed, their approaches suffer from limitations in graph construction and graph convolution, preventing them from explicitly considering the correlations between DEDT. This limitation hinders their ability to comprehensively model ST dependencies within MTS data, ultimately impacting their performance in learning effective representations.
To address the limitation in existing approaches and comprehensively model ST dependencies within MTS data, we introduce FC-STGNN, a novel framework designed to enhance representation learning for MTS data.
## 3 Methodology
### Problem Formulation
Given a dataset \(\mathcal{D}\) consisting of \(n\) labelled MTS samples \(\{X_{j},y_{j}\}_{j=1}^{n}\), each sample \(X_{j}\in\mathbb{R}^{N\times L}\) is collected from \(N\) sensors with \(L\) timestamps. Our objective is to learn an effective encoder \(\mathcal{F}\) capable of fully capturing the underlying spatial-temporal dependencies within MTS data. This approach can help extract effective representations \(h_{j}=\mathcal{F}(X_{j})\in\mathbb{R}^{d}\) from \(X_{j}\), enabling us to perform well in diverse downstream tasks, such as machine remaining useful life prediction, human activity recognition, and so on. For simplicity, the subscript \(j\) is removed, and we denote an MTS sample as \(X\).
### Overall Structure
Fig. 2 shows the overall structure of FC-STGNN, which aims to fully capture the ST dependencies within MTS data. Given an MTS sample, we first segment the signals of each sensor into multiple patches, each corresponding to a timestamp. Each patch is then processed by an encoder to learn sensor-level features. Subsequently, we employ positional encoding to integrate positional information into the sensor features across different patches. Next, we propose FC graph construction to achieve comprehensive interconnections between sensors across patches, realized by calculating the dot product of sensors. To enhance these connections, we introduce a decay matrix by considering temporal distances between patches. Next, a moving-pooling GNN is then proposed to fully capture the ST dependencies within the FC graph. We design moving windows which traverse along patches and then apply GNN within each window. After updating sensor features by capturing the comprehensive ST dependencies within each window, a temporal pooling operation is employed to learn high-level sensor features. By using multiple parallel layers of FC graph construction and convolution to capture ST dependencies from different perspectives, we concatenate the features, followed by an output layer to obtain the final representations for downstream tasks. Further details are provided in subsequent sections.
### FC Graph Construction
Graph Construction. Given an MTS sample \(X\in\mathbb{R}^{N\times L}\), we segment the signals of each sensor into multiple patches
Figure 2: Overall structure of FC-STGNN. Beginning with an MTS sample, each sensor’s signals are segmented into multiple patches, as shown in the example with three patches (each containing four sensors). Sensor-level features are then learned through an encoder within each patch. Then, the features from different patches are further encoded with positional encoding, followed by FC graph construction and convolution. (1) FC graph construction: This involves fully connecting the sensors across patches by calculating their dot products, enabling the additional connections of DEDT. To refine the full connections of sensors across patches, a decay matrix is introduced by considering their temporal distances. (Note: Due to space constraints, only one sensor exhibits fully-connected weights in this example). (2) FC graph convolution: Moving windows with specific sizes traverse along patches (e.g., two in this example). Graph convolution is then applied to the FC graph within each window. Following the update of each sensor’s features by capturing the comprehensive ST dependencies within each window, a temporal pooling operation is employed to learn high-level sensor features for each window. After multiple parallel layers, we concatenate the features, followed by an output layer to obtain final representations for downstream tasks.
by considering the local temporal patterns within MTS data [23]. Using patch size \(f\), we create \(\{X_{t}\}_{t=1}^{\hat{L}}\) from \(X\), where \(t\) is the patch index representing a timestamp, and each \(X_{t}\in\mathbb{R}^{N\times f}\). \(\hat{L}\) denotes the number of segmented patches, calculated as \(\hat{L}=[\frac{L}{f}]\), where \([\cdot]\) represents the truncation operation. Each \(X_{t}\) contains segmented signals from \(N\) sensors, i.e., \(X_{t}=\{x_{t,i}\}_{i=1}^{N}\), where \(x_{t,i}\in\mathbb{R}^{f}\).
Subsequently, we employ an encoder \(f_{c}(\cdot|W_{c})\) to process the segmented signals within each window. Notably, the encoder operates at the sensor-level to learn sensor-level features, i.e., \(x^{\prime}_{t,i}=f_{c}(x_{t,i}|W_{c})\). Moreover, to maintain relative positional information of corresponding sensors across patches, we adopt positional encoding as inspired by [14]. Specifically, for the \(i\)-th sensor \(\{x^{\prime}_{t,i}\}_{t=1}^{L}\), positional encoding, as shown in Eq. (1), is introduced into sensor features, e.g., \(z_{t,i}=f_{p}(t)+x^{\prime}_{t,i}\) representing the sensor features enhanced by positional encoding. Here, \(m\) represents the \(m\)-th feature of sensor features.
\[\vec{p_{t}}^{(m)}=f_{p}(t)^{(m)}:=\begin{cases}sin(\omega_{k}\cdot t)&\text{ if }m=2k,\\ cos(\omega_{k}\cdot t)&\text{if }m=2k+1.\end{cases} \tag{1}\]
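For illustration, Eq. (1) can be implemented as below; the frequency choice \(\omega_{k}=1/10000^{2k/d}\) follows the usual Transformer convention and is our assumption, since the concrete frequencies are not spelled out here.

```python
# Minimal sketch of the sinusoidal positional encoding in Eq. (1).
# The frequencies omega_k = 1 / 10000**(2k/d) are an assumption (standard Transformer choice).
import numpy as np

def positional_encoding(num_patches, d):
    pe = np.zeros((num_patches, d))
    t = np.arange(num_patches)[:, None]          # patch index
    k = np.arange(0, d, 2)[None, :]              # even feature indices m = 2k
    omega = 1.0 / (10000.0 ** (k / d))
    pe[:, 0::2] = np.sin(t * omega)              # m = 2k   -> sin(omega_k * t)
    pe[:, 1::2] = np.cos(t * omega)[:, :pe[:, 1::2].shape[1]]  # m = 2k+1 -> cos(omega_k * t)
    return pe  # added to sensor features: z[t, i] = pe[t] + x_prime[t, i]
```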
With the learned sensor features across multiple patches, we can then proceed to construct an FC graph that interconnects all sensors across these patches by additionally considering the correlations between DEDT. For graph construction, we assume that correlated sensors should exhibit similar properties, making their features close within the feature space. This enables us to adopt similarity to represent the correlation between sensors, with greater similarity reflecting a higher correlation. In this case, we employ a simple yet effective metric, the dot product, to quantify the similarity between two sensors, defined as \(e_{tr,ij}=g_{s}(z_{t,i})(g_{s}(z_{r,j}))^{T}\), where \(t,r\in[1,\hat{L}]\) and \(i,j\in[1,N]\). Here, the function \(g_{s}(z)=zW_{s}\) is employed to enhance the expressive capacity, drawing inspiration from the attention computation in [14], where \(W_{s}\) denotes the learnable weights. Further, the softmax function restricts the correlations within [0,1]. Finally, we derive the FC graph \(\mathcal{G}=(Z,E)\), where \(Z=\{\{z_{t,i}\}_{i=1}^{N}\}_{t=1}^{\hat{L}}\), and \(E=\{\{e_{tr,ij}\}_{i,j=1}^{N}\}_{t,r=1}^{\hat{L}}\). \(E\) is denoted as the adjacency matrix of the FC graph, whose elements represent the correlations between sensors among all patches. The graph \(\mathcal{G}\) encompasses not only temporal correlations between timestamps and spatial correlations in each timestamp, but also additionally includes the correlations between DEDT, enabling us to model the comprehensive ST dependencies within MTS data.
Decay Matrix. The FC graph \(\mathcal{G}\) is constructed based on sensor similarity across patches only, without accounting for temporal distances between sensors across these patches. However, it is intuitive that sensors at more distant timestamps should show weaker correlations compared to those at closer timestamps. Motivated by this, we devise a decay matrix that incorporates temporal distances between sensors, aiming to enhance the precision of the FC graph \(\mathcal{G}\).
We provide Fig. 3 for visual clarification. The left represents the adjacency matrix of a graph involving three patches, each containing four sensors. The dimension of this adjacency matrix is \(E\in\mathbb{R}^{(3\times 4)\times(3\times 4)}\). In this matrix, each row presents a sensor's connections with other sensors across all patches. We take the first row as an example, which represents the connectivity of the first sensor \(z_{T-1,1}\) of the \((T-1)\)-th patch. The first four columns represent its connections with sensors within the same patch. As these sensors occur simultaneously, they should exhibit stronger correlations than those in other patches. The subsequent four columns represent the connections of \(z_{T-1,1}\) with sensors from the \(T\)-th patch. As these sensors are in different patches, their correlations with \(z_{T-1,1}\) should be decayed, measured by a decay rate \(\delta\). The final four columns represent the connections of \(z_{T-1,1}\) with sensors from the \((T+1)\)-th patch. As the temporal gap expands, correlations naturally decline further, measured by \(\delta^{2}\). Drawing from these discussions, we formulate the decay matrix \(C=\{\{c_{tr,ij}\}_{i,j=1}^{N}\}_{t,r=1}^{\hat{L}}\), where each element \(c_{tr,ij}=\delta^{|t-r|}\). This matrix is employed to enhance the correlations between sensors across patches, yielding \(e_{tr,ij}=e_{tr,ij}\cdot c_{tr,ij}\). This approach ensures that temporally close sensors exhibit stronger correlations than temporally distant ones.
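A compact sketch of the construction is given below; it is our own illustration of the description above (dot products of projected sensor features, softmax normalization, and the decay \(\delta^{|t-r|}\)), not the authors' released code.

```python
# Minimal sketch of FC graph construction with the decay matrix.
import torch

def fc_graph(Z, Ws, delta=0.7):
    """Z: (L_hat, N, d) sensor features per patch; Ws: (d, d) projection weights.
    Returns the (L_hat*N, L_hat*N) adjacency over all sensors of all patches."""
    L_hat, N, d = Z.shape
    G = (Z @ Ws).reshape(L_hat * N, d)                 # g_s(z) = z W_s for every node
    E = torch.softmax(G @ G.T, dim=-1)                 # dot-product correlations in [0, 1]
    t = torch.arange(L_hat).repeat_interleave(N)       # patch index of each node
    C = torch.pow(delta, (t[:, None] - t[None, :]).abs().float())  # decay c_{tr,ij}
    return E * C

Z = torch.randn(3, 4, 8)       # toy input: 3 patches, 4 sensors, feature dimension 8
E = fc_graph(Z, torch.randn(8, 8))
print(E.shape)                 # torch.Size([12, 12])
```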
### FC Graph Convolution
Utilizing the constructed FC graph, the next step is to capture the ST dependencies within MTS data for representation learning. A straightforward approach would involve applying graph convolution across the entire graph. Nevertheless, this approach might fail to effectively capture the local ST dependencies within MTS data. This is similar to the rationale behind CNNs employing local convolution to capture local information from images. Furthermore, directly utilizing the entire graph could lead to extra computation costs. To solve these limitations, we propose a moving-pooling GNN, including a moving window to capture local ST dependencies and temporal pooling to extract high-level features.
We begin by utilizing a moving window of a specific size \(M\) that traverses along the patches. The window moves by a stride of \(s\) patches in each movement. Fig. 4 provides a visual illustration, featuring an FC graph with four patches, each containing four sensors. In this example, a size-two window moves with stride one, resulting in three windows. Here, each window contains two patches, each containing four sensors. Then, a GNN is applied within each window.
Specifically, following previous works [23, 24], we employ a Message Passing Neural Network (MPNN), a variant of GNN, to capture
Figure 3: Decay matrix to improve the adjacency matrix.
ST dependencies of the graph within each window. Specifically, MPNN involves propagation and updating stages. During the propagation stage, the information from neighboring nodes is propagated into the central node. Given a central node \(z_{t,i}^{l}\) of the \(w\)-th window in the \(l\)-th layer, it has a set of neighboring nodes \(\{\{z_{r,j}^{l}\}_{j=1}^{N}\}_{r=w-\frac{M}{2}}^{w+\frac{M}{2}}\) across \(M\) patches in the same window. The central node has correlations with its neighbors as \(\{\{e_{tr,ij}^{l}\}_{j=1}^{N}\}_{r=w-\frac{M}{2}}^{w+\frac{M}{2}}\). After the propagation stage, we obtain the propagated features \(h_{t,i}^{l}=\sum_{r=w-\frac{M}{2}}^{w+\frac{M}{2}}\sum_{j=1}^{N}z_{r,j}^{l}e_{ tr,ij}^{l}\). Then, the updating stage adopts a non-linear function to update the propagated sensor features, i.e., \(z_{t,i}^{l+1}=f_{g}(h_{t,i}^{l}|W_{g})\). Overall, MPNN propagates the information of sensors based on the correlations between all sensors across \(M\) patches, enabling us to fully capture the comprehensive ST dependencies within the window to update sensor features. The updating stage introduces non-linear functions to update sensor features, further enhancing the ability to learn effective representations.
After updating sensor features by capturing ST dependencies, a temporal pooling operation is employed to extract high-level features for each window, drawing inspiration from the pooling operation in CNNs. Given the updated sensor features \(\{z_{t,i}^{l+1}\}_{t=w-\frac{M}{2}}^{w+\frac{M}{2}}\) for the \(i\)-th sensor across \(M\) patches, we perform temporal pooling using an average pooling strategy, yielding sensor features \(z_{w,i}^{l+1}=\sum_{t=w-\frac{M}{2}}^{w+\frac{M}{2}}z_{t,i}^{l+1}/M\) for the \(w\)-th window. Subsequently, by stacking the sensors across all windows as depicted in Fig. 2, we create a high-level FC graph serving as input for the subsequent layer. Note that we only adopt one layer in this study, thus directly utilizing the obtained sensor features from each window for output purposes.
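The following sketch illustrates one moving-pooling layer on the FC graph; the window slicing, single propagation/update step, and average pooling follow the description above, while the concrete shapes and the ReLU update are our assumptions.

```python
# Minimal sketch of one moving-pooling GNN layer (window size M, stride s).
import torch

def moving_pooling_gnn(Z, E, Wg, M=2, s=1):
    """Z: (L_hat, N, d) sensor features; E: (L_hat*N, L_hat*N) FC adjacency;
    Wg: (d, d) update weights. Returns (num_windows, N, d) pooled features."""
    L_hat, N, d = Z.shape
    pooled_windows = []
    for start in range(0, L_hat - M + 1, s):
        idx = torch.arange(start * N, (start + M) * N)   # nodes of the M patches in this window
        Zw = Z[start:start + M].reshape(M * N, d)
        Ew = E[idx][:, idx]                              # block of the FC adjacency
        H = Ew @ Zw                                      # propagation: h_i = sum_j e_ij z_j
        Zw_new = torch.relu(H @ Wg)                      # update with a non-linear map
        pooled_windows.append(Zw_new.reshape(M, N, d).mean(dim=0))  # temporal pooling
    return torch.stack(pooled_windows)
```

For an FC graph with four patches and \(M=2\), \(s=1\), this yields the three windows of Fig. 4; stacking the outputs of several such parallel layers and mapping them with an MLP gives the final representations, as in Fig. 2.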
Inspired by the multi-branch concept introduced in previous research [23], we also integrate multiple parallel layers of graph construction and convolution. This approach allows us to initialize the model with diverse weights, enabling training to capture ST dependencies from various comprehensive viewpoints and obtain the best possible solution. Stacking all sensor features from these multiple layers, we employ a straightforward output layer, i.e., MLP, to transform the stacked features into representations. These representations can be leveraged for downstream tasks.
## 4 Experimental Results
Datasets. We examine our method on three different downstream tasks: Remaining Useful Life (RUL) prediction, Human Activity Recognition (HAR), and Sleep Stage Classification (SSC). Specifically, we utilize C-MAPSS [24] for RUL prediction, UCI-HAR [19] for HAR, and ISRUC-S3 [10] for SSC, following the previous work [25]. For C-MAPSS which includes four sub-datasets, we adopt the pre-defined train-test splits. The training dataset is further divided into 80% and 20% for training and validation. For HAR and ISRUC, we randomly split them into 60%, 20%, and 20% for training, validating, and testing. The details of these datasets can be found in our appendix.
Evaluation. To evaluate the performance of RUL prediction, we adopt RMSE and the Score function, following previous works [13, 25]. Lower values of these indicators refer to better model performance. For the evaluation of HAR and SSC, we adopt Accuracy (Accu.) and Macro-averaged F1-Score (MF1) in accordance with prior studies [1, 25]. Larger values of these indicators refer to better performance. Besides, to reduce the effect of random initialization, we run all experiments ten times and take the average results for comparisons.
Implementation Details. All methods are run on an NVIDIA GeForce RTX 3080Ti and implemented in PyTorch 1.9. We set the batch size as 100, choose ADAM as the optimizer with a learning rate of 1e-3, and train the model for 40 epochs. More details can be found in our appendix.
### Comparisons with State-of-the-Art
We compare our method with SOTA methods, encompassing conventional methods like AConvLSTM [26], DAGN [11], Transformer-based approaches such as InFormer [23] and AutoFormer [24], as well as GNN-based methods including GCN [11], HAGCN [12], HierCorrPool [25], and MAGNN [13]. All methods are re-implemented based on their original configurations, with the exception of GNN-based methods, where we replace their encoders with the same encoders used in our approach for fair comparison.
Table 1 presents the comparison results, showing the remarkable effectiveness of our FC-STGNN. As shown in the table, our method exhibits large improvements across a majority of cases in comparison to both conventional temporal encoder-based and GNN-based methods. For instance, our method shows improvements of 7.6% and 3.4% in FD001 and FD003 of C-MAPSS, respectively, over the second-best results regarding RMSE. Similar improvements can be observed in UCI-HAR and ISRUC-S3, where our method outperforms the second-best methods by 1.02% and 1.56% regarding accuracy, respectively. These advancements underline the necessity of fully capturing spatial-temporal dependencies within MTS data, enabling us to achieve superior overall performance compared to SOTA methods.
Figure 4: Three windows obtained by moving along patches.
### Ablation Study
We conducted an ablation study to assess the effectiveness of our proposed modules. In the first variant 'w/o FC GC2', we excluded the usage of our FC Graph Construction and Graph Convolution approach. Instead, we followed the conventional methods [14, 22] to separately construct and convolve graphs for each patch. The second variant 'w/o M&P' involved incorporating FC graph construction but omitted the moving window and temporal pooling, so local ST dependencies cannot be captured. Furthermore, we obtained the third variant 'w/o pooling' by introducing the moving window while excluding the temporal pooling operation that is designed for high-level features. Lastly, the 'w/o decay' variant refrained from using the designed decay matrix to enhance the constructed FC graph. These variants are compared with the complete version.
Table 2 presents the ablation study results across three datasets. We take the RMSE results on FD001 of C-MAPSS as examples. Comparing against the 'w/o FC GC2' variant, we observe that our complete method achieves a 7.6% improvement, highlighting the necessity of the FC graph for effective feature learning through comprehensive modelling of ST dependencies within MTS data. With the introduction of FC graph construction, there is a noticeable performance boost of the 'w/o M&P' variant, and the gap with the complete version narrows, i.e., the gap is reduced to 4.5%. This outcome suggests that the FC graph contributes to representation learning, even without accounting for local ST dependencies within MTS data. Furthermore, by incorporating the moving window approach, we witness further performance improvements of the 'w/o pooling' variant due to its effectiveness in capturing local ST dependencies, narrowing the gap to 3.4%. With the inclusion of the temporal pooling operation, high-level sensor features are obtained, which helps to eliminate redundant features and thus further enhance the performance. Finally, when the decay matrix is excluded, there is a 4.3% decrease in performance, emphasising the necessity of employing the decay matrix to refine the constructed FC graph.
The above observations hold true across other sub-datasets of C-MAPSS, UCI-HAR, and ISRUC-S3 as well. These results underline the importance of modelling the comprehensive ST dependencies within MTS data, which in turn allows for the learning of more effective representations. This comprehensive modelling leads to superior overall performance in various downstream tasks.
### Sensitivity Analysis
In this section, we conduct sensitivity analysis for No. of parallel layers, patch size, moving window size, and decay rate. Typical results are reported, and additional results can be found in our appendix.
No. of Parallel Layers. In our approach, we employ multiple parallel layers of FC graph construction and graph convolution, allowing us to capture the spatial-temporal dependencies within MTS data from diverse perspectives. To assess the impact of varying the number of layers, we obtain the results in Fig. 5. It can be observed that incorporating additional parallel layers leads to enhanced performance, affirming the efficacy of employing multiple layers to model ST dependencies. For instance, in all cases, the model with 2 layers outperforms the single-layer counterpart. Additionally, in specific cases of ISRUC-S3, introducing 3 layers contributes to better performance compared to using fewer layers. However, the performance gains start diminishing or even reversing when many layers are introduced due to overfitting. Thus, too many layers are unnecessary.
Patch Size Analysis. We segment each MTS sample into multiple patches for FC graph construction, which makes the patch size a parameter \(f\) influencing the constructed FC graph. To evaluate its impact, we conducted the patch size analysis. Notably, since C-MAPSS samples have relatively short time lengths, e.g., 30 timestamps for FD001, we opted for smaller patch sizes within [2, 4, 6, 8, 10] for sample segmentation. For the samples in ISRUC-S3, which have larger time lengths, i.e., 300, we explored patch sizes within [10, 15, 30, 60, 75, 100, 150].
\begin{table}
\begin{tabular}{l|c c c c c c c c c|c c c} \hline \hline \multirow{3}{*}{Models} & \multicolumn{8}{c}{C-MAPSS} & \multicolumn{2}{c|}{UCI-HAR} & \multicolumn{2}{c}{ISRUC-S3} \\ & \multicolumn{2}{c}{FD001} & \multicolumn{2}{c}{FD002} & \multicolumn{2}{c}{FD003} & \multicolumn{2}{c|}{FD004} & \multicolumn{2}{c|}{UCI-HAR} & \multicolumn{2}{c}{ISRUC-S3} \\ & RMSE & Score & RMSE & Score & RMSE & Score & RMSE & Score & Accu & MF1 & Accu & MF1 \\ \hline AConvLSTM & 13.10 & 286 & 13.11 & 737 & 12.13 & 276 & 14.64 & 1011 & 86.06 & 85.75 & 72.93 & 69.52 \\ DAGN & 16.11 & 595 & 16.43 & 1242 & 18.05 & 1216 & 19.04 & 2321 & 89.02 & 88.94 & 55.35 & 50.51 \\ InFormer & 13.13 & 263 & 13.20 & 715 & 12.58 & 228 & 14.16 & 1023 & 90.23 & 90.23 & 72.15 & 68.67 \\ AutoFormer & 23.04 & 1063 & 16.51 & 1248 & 25.40 & 2034 & 20.31 & 2291 & 56.70 & 54.41 & 43.75 & 37.88 \\ GCN & 12.58 & 237 & 13.78 & 849 & 11.92 & 218 & 14.44 & 967 & 94.79 & 94.82 & 79.62 & 77.57 \\ HAGCN & 13.10 & 263 & 14.92 & 1086 & 13.46 & 327 & 14.66 & 880 & 80.79 & 81.08 & 66.59 & 60.20 \\ HierCorPool & 12.64 & 227 & 13.23 & **709** & 12.30 & 220 & 13.86 & 854 & 93.81 & 93.79 & 29.31 & 6.25 \\ MAGNN & 12.63 & 246 & 13.09 & 714 & 12.15 & 253 & 14.30 & 978 & 90.91 & 90.79 & 68.13 & 64.31 \\ \hline Ours & **11.62** & **203** & **13.04** & 738 & **11.52** & **198** & **13.62** & **816** & **95.81** & **95.82** & **80.87** & **78.79** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons with state-of-the-arts across various tasks
Figure 5: Sensitivity analysis for No. of parallel layers.
Fig. 6 presents the results. For C-MAPSS, where sample lengths are short, we find that relatively small patch sizes yield better performance. For instance, considering the RMSE of FD001, the optimal performance is achieved when the patch size is set to 6. Similar trends can be found across various sub-datasets, where the best performance is generally obtained when the patch sizes are set to 4 or 6. Conversely, for datasets characterized by larger time lengths, employing relatively larger patch sizes leads to improved performance. For example, ISRUC-S3 samples exhibit enhanced performance with patch sizes around 75. These observations emphasize the nuanced relationship between patch size and performance, which is influenced by the characteristics of a specific dataset.
Moving Window Size Analysis. We utilize moving windows with a designated size \(M\), which traverse along the patches with stride \(s\), to capture the local ST dependencies within MTS data. To evaluate their effects, we consider window sizes \(M\) of [1, 2, 3, 4], and stride sizes \(s\) of [1, 2].
Fig. 7 shows the analysis results. We consider the RMSE on C-MAPSS as examples. We find that a larger \(M\) can help to obtain better performance. For instance, the variant with \(M=2\) outperforms those with \(M=1\), which represents the variant without considering the correlations between DEDT. The improvements highlight the importance of considering these correlations through our FC graph. Additionally, further increasing \(M\) does not consistently yield additional benefits. In fact, performance may decrease when \(M\) becomes too large, e.g., \(M=4\) for FD001. This is because a larger \(M\) includes more patches within each window for graph convolution, potentially causing local ST dependencies to be poorly captured. Meanwhile, similar trends can be found when \(s=2\). Notably, the performance of \(s=2\) is generally poorer when \(M\) is smaller, as a small \(M\) combined with a large \(s\) loses information when moving the windows. Overall, these findings suggest that a window size of \(M=2\) and stride size \(s=1\) are optimal for achieving the best performance.
Decay Rate Analysis. We employ the decay matrix to enhance our FC graph for more accurately representing the correlations between DEDT. The choice of decay rate \(\delta\) is crucial in this process and necessitates evaluation. In this regard, we consider \(\delta\) values within [0.1, 0.3, 0.5, 0.7, 0.9, 1], where \(\delta=1\) represents the variant without using the decay matrix. From the results in Fig. 8, we find that the variants with relatively larger \(\delta\) yield better performance, such as \(\delta=0.7\) and \(\delta=0.9\). When \(\delta\) is exceedingly small, e.g., 0.1, the performance experiences a significant drop, as the correlations between DEDT are overly distorted. Thus, setting \(\delta\) to 0.7 or 0.9 proves effective in achieving good performance for our model.
## 5 Conclusion
To model the comprehensive Spatial-Temporal (ST) dependencies within MTS data, we design a novel method named Fully-Connected Spatial-Temporal Graph Neural Network (FC-STGNN). The method includes two essential modules, FC graph construction and FC graph convolution. For graph construction, we design an FC graph to connect sensors among all timestamps by additionally considering the correlations between DEDT, enabling comprehensive modelling of ST dependencies within MTS data. Next, FC graph convolution is designed, with a moving-pooling
GNN by leveraging a moving window and temporal pooling to capture the local ST dependencies and then learn high-level features. Our method is evaluated through extensive experiments, emphasizing its capacity to effectively model the comprehensive ST dependencies within MTS data.
|
2303.17982 | Adaptive stabilized finite elements via residual minimization onto
bubble enrichments | The Adaptive Stabilized Finite Element method (AS-FEM) developed in Calo et.
al. combines the idea of the residual minimization method with the inf-sup
stability offered by the discontinuous Galerkin (dG) frameworks. As a result,
the discretizations deliver stabilized approximations and residual
representatives in the dG space that can drive automatic adaptivity. We
generalize AS FEM by considering continuous test spaces; thus, we propose a
residual minimization method on a stable Continuous Interior Penalty (CIP)
formulation that considers a C0-conforming trial FEM space and a test space
based on the enrichment of the trial space by bubble functions. In our
numerical experiments, the test space choice results in a significant reduction
of the total degrees of freedom compared to the dG test spaces of Calo et. al.
that converge at the same rate. Moreover, as trial and test spaces are
C0-conforming, implementing a full dG data structure is unnecessary,
simplifying the method's implementation considerably and making it appealing
for industrial applications, see Labanda et. al. | José G. Hasbani, Paulina Sepúlveda, Ignacio Muga, Victor M. Calo, Sergio Rojas | 2023-03-31T11:34:05Z | http://arxiv.org/abs/2303.17982v1 | # Adaptive stabilized finite elements via residual minimization onto bubble enrichments
###### Abstract
The Adaptive Stabilized Finite Element method (AS-FEM) developed in [16] combines the idea of the residual minimization method with the inf-sup stability offered by the discontinuous Galerkin (dG) frameworks. As a result, the discretizations deliver stabilized approximations and residual representatives in the dG space that can drive automatic adaptivity. We generalize AS-FEM by considering continuous test spaces; thus, we propose a residual minimization method on a stable Continuous Interior Penalty (CIP) formulation that considers a \(C^{0}\)-conforming trial FEM space and a test space based on the enrichment of the trial space by bubble functions. In our numerical experiments, the test space choice results in a significant reduction of the total degrees of freedom compared to the dG test spaces of [16] that converge at the same rate. Moreover, as trial and test spaces are \(C^{0}\)-conforming, implementing a full dG data structure is unnecessary, simplifying the method's implementation considerably and making it appealing for industrial applications, see [42].
keywords: adaptivity, stabilized finite elements, residual minimization, Continuous Galerkin, Continuous Interior Penalty +
Footnote †: journal: CAMWA
###### Contents
* 1 Introduction
* 2 Model problem.
* 3 Notation, discrete spaces and interpolation results.
* 3.1 Discrete spaces
* 3.2 Interpolation results
* 4 Residual minimization method onto bubble enriched test spaces
* 4.1 Preliminaries
* 4.2 Continuous interior penalty method onto bubble enriched continuous space.
* 4.3 Residual minimization
* 5 Numerical experiments.
* 5.1 Advection-reaction problem.
* 5.2 Goal-oriented adaptivity for advection-reaction problems
* 6 Contributions and future work.
## 1 Introduction
The continuous Galerkin (cG) finite element methods (FEM) for advection-dominated reaction-type problems might suffer from instabilities. When the hyperbolic character of the problem becomes predominant, interior and outflow layers form, causing large local gradients in the solution. The cG-FEM is unstable in this regime, and various stabilized methods have been proposed. Techniques in the framework of cG-FEM are available in the literature, including Streamline-Upwind Petrov-Galerkin (SUPG) method [39], residual-free bubbles [8], and sub-viscosity models for advection-diffusion problems [35]. The relationships between the aforementioned strategies are also well understood in almost all cases. For example, in [7], a relation between stabilized finite element methods and the Galerkin method employing bubble functions was established for the advection-diffusion problems. In that work, the authors showed that bubble functions help stabilize the advective operator without using up-winding or any other numerical strategy. In particular, for the advection-diffusion problem, the Galerkin method employing piecewise linear functions enriched with bubble functions was shown to be equivalent to the SUPG method in the diffusive limit. Applications of this type of enrichment include stabilization of Stokes flow [3] and stabilization of Galerkin approximation using artificial viscosity [35]. Hughes generalized the stabilized methods construction into a unified framework in the variational multiscale framework [36; 37]; these ideas were extended to other applications such as turbulence modeling [6; 38]. Although those strategies are now reaching maturity, those methods have some drawbacks in certain complex flow regimes. For instance, the SUPG stabilization becomes non-symmetric and does not allow lumped mass, the residual free bubbles method adds additional degrees of freedom to the system, and the projection methods introduce hierarchical meshes for the projection on the subgrid viscosity model [35].
Interior penalty methods using continuous functions were originally introduced in [4; 29] for different problems, namely, the biharmonic operator and second-order elliptic and parabolic problems. These methods penalize the flux jump of the discrete solution at mesh interfaces. Thus, the method of [29] keeps the benefits of continuous finite element methods, which were standard for elliptic problems, while managing the difficulties these methods encounter when the hyperbolic character of diffusion-advection problems becomes dominant in the advection limit. However, the robustness of the error estimate in the advection-dominated regime was not analyzed until [13], in which the analysis was based on linear finite elements. Recently, a generalized hp-convergence analysis for a high-order Continuous Interior Penalty (CIP) finite element method was presented in [12], applied to advection-reaction and diffusion-advection-reaction problems.
Alternatively, the Continuous Interior Penalty (CIP) method, introduced in [30] for advection-diffusion-reaction problems, adds an \(L^{2}\) penalization of the flux jumps (i.e., gradient jumps for uniform coefficients) over the interior mesh edges/facets to stabilize a continuous approximation (see [11; 12; 13; 14]). In [11] and [12], the authors developed an hp-convergence analysis for high-order CIP methods for advection-reaction and advection-diffusion problems. This stabilization does not depend on the diffusion coefficient and covers the case of pure advection. Moreover, Burman [10] introduced a formulation relating the stabilized continuous and discontinuous Galerkin frameworks with the CIP formulation. In that work, the author showed that both frameworks can be condensed into a single formulation, presented a robust a-posteriori error estimate for advection-reaction problems to guide adaptivity, and compared the low-order CIP and dG approximations using an adaptive approach, showing that the CIP method achieves optimal convergence in the \(L^{2}\) norm.
More recently, the Adaptive Stabilized Finite Element Method (AS-FEM) was introduced (see [16]). This method formulates residual minimization problems within a dG mathematical framework. Namely, a discrete approximation of the solution in a continuous trial space is constructed by minimizing the residual in the dual norm of a dG test space with inf-sup stability. The residual minimization problem is equivalent to a saddle-point problem that inherits dG's inf-sup stability. There are some similarities with the Discontinuous Petrov-Galerkin (DPG) method as both minimize the residual in a non-standard norm (see, e.g., [15; 18; 23; 24; 25; 45; 46; 47; 50]). However, AS-FEM builds on non-conforming dG formulations, where stronger norms than those used in DPG may be chosen when the test space contains continuous functions. Application examples include diffusive-advective-reactive problems [20], incompressible Stokes flows [41; 43], continuation analysis of compaction banding in geomaterials [19] and weak constraint enforcement for advection-dominated diffusion problems [21]. The method has been
successfully applied to several nonlinear problems, such as dynamic fracture propagation [42], mineral deposition [48], and the method of lines for Bratu's equation [33]. In [49], the authors extend AS-FEM to goal-oriented adaptivity (GoA); they describe a general theory for problems with well-posed formulations and provide error estimates to guide the GoA for advection-diffusion-reaction problems. In addition, they define a discrete adjoint system as a saddle-point problem where the discontinuous conforming space across element interfaces restricts the solution. The same dG inf-sup arguments guarantee the well-posedness of this adjoint saddle-point problem. Solving the primal and the adjoint problem requires the solution of a single saddle-point problem with two right-hand sides. Moreover, the authors proposed two alternative stable discrete problems that can measure the discrete adjoint error of the problem; a strategy similar to a recent DPG theory [40] in which the adjoint problem is solved using the original saddle-point formulation with a different right-hand side.
The method proposed in [16] and its extension to GoA in [49] contain a general theory formulated in a dG framework; this framework can also use continuous test spaces (see [42] for a demonstration of this idea). Herein, we analyze the extension of AS-FEM based on the stable CIP formulation for an advection-reaction problem proposed in [12; 13]. We consider a bubble enrichment of the continuous trial space as a test space to exploit the power of the residual minimization method, which delivers an error representative that is robust and reliable for guiding adaptive mesh refinement. This space is sufficient to guarantee a distance to the trial space and to obtain a residual representative to drive adaptivity.
We test the method's performance in a challenging advection-reaction problem. We choose this model problem since the elliptic character of the diffusion term in the advection-diffusion equation has some smoothing properties on the transport problem. Nonetheless, extending the results obtained in this work to advection-diffusion-reaction problems is straightforward. Additionally, we present a new result on a priori error estimates for the residual minimization method using continuous test spaces, which relies on an orthogonality argument for boundedness of the discrete bilinear form and coercivity that proves quasi-optimal convergence for advection-dominated problems. Finally, we perform adaptive numerical experiments using energy-based and GoA strategies [16; 49]. In the energy-based examples, we evaluate the performance of the residual error estimate obtained by the residual minimization method against the a-posteriori error estimate of [10] regarding the relative error in the \(L^{2}\)-norm and the resulting adapted meshes.
The remainder of this paper is structured as follows: Section 2 introduces the advection-reaction model problem. In Section 3, the notation and the discrete problem settings are presented. In Section 4, we describe the CIP formulation in the context of bubble-enriched continuous spaces, its main properties, and the residual minimization method, and we state the main result of this work. In Section 5, we present numerical experiments to show the method's performance and compare the results with other methods found in the literature. Finally, Section 6 summarizes the main contributions of this work and points to possible future research directions.
## 2 Model problem.
Let \(\Omega\subset\mathbb{R}^{d}\) ( \(d=2,3\)) be an open bounded connected set, with Lipschitz boundary \(\partial\Omega\), and outward normal vector \(\mathbf{n}\). Consider an advection field \(\mathbf{b}\in[L^{\infty}(\Omega)]^{d}\) such that \(\nabla\cdot\mathbf{b}\in L^{\infty}(\Omega)\), and a reaction coefficient \(\mu\in L^{\infty}(\Omega)\). Let us define the following graph space:
\[W:=\left\{w\in L^{2}(\Omega):\mathbf{b}\cdot\nabla w\in L^{2}( \Omega)\right\}, \tag{1}\]
equipped with the graph norm \(\|w\|_{W}^{2}=\|w\|_{L^{2}(\Omega)}^{2}+\|\mathbf{b}\cdot\nabla w\|_{L^{2}( \Omega)}^{2}\). The advection field \(\mathbf{b}\) partitions the boundary \(\partial\Omega\) into inflow, characteristic, and outflow parts, having the following expressions when \(\mathbf{b}\) is continuous1:
Footnote 1: These boundaries may also be defined for the general case \(\mathbf{b}\in[L^{\infty}(\Omega)]^{d}\) and \(\nabla\cdot\mathbf{b}\in L^{\infty}(\Omega)\); see [9; Section 2].
\[\partial\Omega_{-} :=\left\{x\in\partial\Omega:\mathbf{b}(x)\cdot\mathbf{n}(x)<0\right\}, \tag{2a}\] \[\partial\Omega_{0} :=\left\{x\in\partial\Omega:\mathbf{b}(x)\cdot\mathbf{n}(x)=0\right\},\] (2b) \[\partial\Omega_{+} :=\left\{x\in\partial\Omega:\mathbf{b}(x)\cdot\mathbf{n}(x)>0\right\}. \tag{2c}\]
Given \(f\in L^{2}(\Omega)\), we consider the advection-reaction model problem that seeks \(u\in W\) such that:
\[\left\{\begin{array}{rl}\mathbf{b}\cdot\nabla u+\mu\,u=f,&\text{in }\Omega\\ u=0,&\text{on }\partial\Omega_{-}.\end{array}\right. \tag{3}\]
We point out that traces of \(W\) are well-defined2, implying that the linear space \(W_{0,-}:=\{w\in W:w|_{\partial\Omega_{-}}=0\}\) is also well-defined. Moreover, equation (3) translates into finding \(u\in W_{0,-}\) such that \(Au=f\), where \(A:W_{0,-}\to L^{2}(\Omega)\) (defined as \(A\,w:=\mathbf{b}\cdot\nabla w+\mu w\), for all \(w\in W_{0,-}\)) is a linear isomorphism provided
Footnote 2: See, e.g., [27, Section 2.1.3] and [34] for an extension.
\[\mu-\frac{1}{2}\nabla\cdot\mathbf{b}\geq\mu_{0}>0,\quad\text{ \emph{a.e.} in }\Omega, \tag{4}\]
holds for some positive constant \(\mu_{0}\) (see [31, Proposition 5.9] ). When \(\mu=0\) and \(\nabla\cdot\mathbf{b}=0\), then \(A\) still is a linear isomorphism provided that \(\mathbf{b}\) is an \(\Omega\)-filling field (see [31, Remark 5.10]).
**Remark 1** (Non-homogeneous inflow boundary condition).: _The trace operator is linear, continuous, and surjective from \(W\) onto the space_
\[L^{2}(|\boldsymbol{b}\cdot\boldsymbol{n}|;\partial\Omega):=\left\{w\text{ measurable in }\partial\Omega:\int_{\partial\Omega}|\boldsymbol{b}\cdot\boldsymbol{n}|w^{2}< +\infty\right\},\]
_which allows us to consider non-homogeneous inflow boundary conditions in (3), whenever the inflow data belongs to \(L^{2}(|\boldsymbol{b}\cdot\boldsymbol{n}|;\partial\Omega)\) (see, e.g., [27, Lemma 2.11])._
## 3 Notation, discrete spaces and interpolation results.
Let \(\Omega_{h}\) be a quasi-uniform simplicial mesh of \(\Omega\), defined as a collection of finitely many open connected elements \(T\subset\Omega\) with Lipschitz boundaries, such that \(\overline{\Omega}\) is the union of the closures of all mesh elements \(T\) in \(\Omega_{h}\). We denote by \(\mathcal{F}_{h}^{0}\) the collection of all interior element boundaries (edges/faces); by \(\mathcal{F}_{h}^{\partial}\) the collection of those element boundaries that belong to \(\partial\Omega\); and set \(\mathcal{F}_{h}:=\mathcal{F}_{h}^{0}\bigcup\mathcal{F}_{h}^{\partial}\). Given \(T\in\Omega_{h}\), let us denote the diameter of \(T\) by \(h_{T}>0\), and let \(h:=\max\limits_{T\in\Omega_{h}}h_{T}\). Analogously, for \(e\in\mathcal{F}_{h}\), let \(h_{e}>0\) be the diameter of \(e\).
For a given set \(D\subset\mathbb{R}^{d}\), consider the Sobolev space \(H^{s}(D)\) of order \(s\geq 0\), and denote by \(\|\cdot\|_{s,D}\) its well-known norm (see, e.g., [1]). By convention, set \(L^{2}(D)=H^{0}(D)\) and abbreviate the \(L^{2}(D)\) inner product by \((\cdot,\cdot)_{D}\) and its respective norm by \(\|\cdot\|_{D}\).
We define the _broken_ Sobolev space \(H^{s}(\Omega_{h})\) as follows
\[H^{s}(\Omega_{h}):=\{v\in L^{2}(\Omega):v|_{T}\in H^{s}(T)\text{ for all }T\in\Omega_{h}\},\]
Figure 1: Notation for an interface
with its corresponding broken norm \(\|\cdot\|_{s,\Omega_{h}}^{2}:=\sum\limits_{T\in\Omega_{h}}\|(\cdot)_{T}\|_{s,T}^{2}\). When \(s>\frac{1}{2}\), traces over the edges of elements are well-defined. Thus, for \(e\subset\mathcal{F}_{h}^{0}\), we define the jump of a function \(v\in H^{s}(\Omega_{h})\) across \(e\), as the following expression:
\[[\![v]\!]_{e}:=v\big|_{T^{+}}-v\big|_{T^{-}},\qquad\forall e\in\mathcal{F}_{h}^{0}, \tag{5}\]
where \(v\big{|}_{T^{+}}\) and \(v\big{|}_{T^{-}}\) are the traces over \(e\) related with a predefined normal \(\mathbf{n}_{e}\) (see Figure 1 for a reference of the interface notation).
### Discrete spaces
Let \(\mathbb{P}^{p}(T)\) be the space of polynomials of total degree \(\leq p\) over \(T\). We consider the following discrete spaces:
\[U_{h}^{p}:=\left\{v\in C^{0}(\Omega):v|_{T}\in\mathbb{P}^{p}(T),\,\forall T \in\Omega_{h}\right\}, \tag{6}\]
\[V_{h}^{p}:=\left\{v\in L^{2}(\Omega):v|_{T}\in\mathbb{P}^{p}(T),\,\forall T \in\Omega_{h}\right\}, \tag{7}\]
and define the space of local bubble functions of degree \(\leq k\), with \(k>d\) (see [3]), by
\[\mathbb{B}^{k}(\Omega_{h}):=\left\{v\in H_{0}^{1}(\Omega):v|_{T}\in\mathbb{P} ^{k}(T)\bigcap H_{0}^{1}(T),\,\forall T\in\Omega_{h}\right\}. \tag{8}\]
Finally, we define the bubble-enriched continuous function space, with \(k>\max\{p,d\}\), as
\[U_{h}^{p,k}:=U_{h}^{p}+\mathbb{B}^{k}(\Omega_{h}). \tag{9}\]
As an example, in the case of triangular elements and \(k=3\), a local bubble function over \(T\) can be defined as a cubic function spanned by \(\lambda_{1}\lambda_{2}\lambda_{3}\), where \(\lambda_{1},\lambda_{2},\lambda_{3}\) are the barycentric coordinates on \(T\), see [3; 42; 44].
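For illustration, the spaces (6)-(9) can be built with the legacy FEniCS/UFL interface (the library used in Section 5) roughly as follows; the element identifiers and the mesh below are assumptions of this sketch rather than the implementation used for the experiments.

```python
from dolfin import UnitSquareMesh, FiniteElement, FunctionSpace

mesh = UnitSquareMesh(8, 8)
cell = mesh.ufl_cell()
p, k = 1, 3                               # trial order p, bubble order k > max{p, d}

Pp  = FiniteElement("Lagrange", cell, p)  # C^0 element generating U_h^p
Bk  = FiniteElement("Bubble",   cell, k)  # element-wise bubbles B^k(Omega_h)
Upk = Pp + Bk                             # enriched element generating U_h^{p,k}

U_h = FunctionSpace(mesh, Pp)             # trial space
V_h = FunctionSpace(mesh, Upk)            # bubble-enriched test space
print(U_h.dim(), V_h.dim())
```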
**Remark 2** (Containment of discrete spaces).: _When \(k\leq p\), we have \(U_{h}^{p,k}=U_{h}^{p}\); alternatively, when \(k>p\), the following containment of discrete spaces holds_
\[U_{h}^{p}\subset U_{h}^{p,k}\subseteq U_{h}^{k}. \tag{10}\]
### Interpolation results
Our convergence analysis follows [12] for the \(hp\)-continuous interior penalty (CIP) method, which uses trace and inverse-trace inequalities, local interpolation results, and error estimates for the \(L^{2}(\Omega)\)-projection. Here is a summary of these results in two dimensions (i.e., \(d=2\)). We simplify notation by abbreviating the inequalities \(a\leq C\,b\) as \(a\lesssim b\) whenever the positive constant \(C\) is independent of the mesh and polynomial degree.
**Definition 1** (Admissible set).: _Given a simplex \(T\), let \(n_{p}:=\dim\mathbb{P}^{p}(T)\). A nonempty set of nodes \(\mathcal{A}=\{a_{i}\}_{1\leq i\leq n_{p}}\) of \(T\) is admissible if and only if \(\mathcal{A}\) is unisolvent in \(\mathbb{P}^{p}(T)\) and \(\mathcal{A}\cap e\) is unisolvent in \(\mathbb{P}^{p}(e)\), for all edges \(e\subset\partial T\)._
Given a unisolvent set of nodes \(\mathcal{A}\) of a simplex \(T\), define:
\[\mathbb{P}^{p}_{\mathcal{A}}(T):=\left\{v\in\mathbb{P}^{p}(T):v(a_{i})=0, \forall\,a_{i}\in\mathcal{A}\setminus\partial T\right\}.\]
**Assumption 1**.: _Let \(\hat{T}\) be the reference element; for \(\mathbb{P}^{p}(\hat{T})\), there exists an admissible set of nodes \(\hat{\mathcal{A}}\) of \(\hat{T}\), such that_
\[\|v\|_{\hat{T}}\lesssim p^{-1/2}\|v\|_{\partial\hat{T}},\qquad\forall\,v\in \mathbb{P}^{p}_{\mathcal{A}}(\hat{T}).\]
In [12; Section 5.1], the authors establish that Assumption 1 leads to the following inequalities.
**Lemma 1** (Trace and inverse trace inequalities on triangles).: _Under Assumption 1, the following inequalities hold for any \(T\in\Omega_{h}\):_
\[\|v\|_{\partial T}\lesssim\left(\frac{p^{2}}{h_{T}}\right)^{\frac{1}{2}}\|v \|_{T},\quad\forall\,v\in\mathbb{P}^{p}(T); \tag{11}\]
\[\|v\|_{T}\lesssim\left(\frac{h_{T}}{p}\right)^{\frac{1}{2}}\|v\|_{\partial T}, \quad\forall\,v\in\mathbb{P}^{p,0}(T), \tag{12}\]
_where \(\mathbb{P}^{p,0}(T)\) denotes the subspace of \(\mathbb{P}^{p}(T)\) spanned by those polynomials vanishing at all interior nodes of \(T\)._
**Definition 2** (Oswald Interpolation).: _Given \(T\in\Omega_{h}\), let \(\mathcal{A}_{T}\) be the image of the admissible set \(\hat{\mathcal{A}}\) (see Assumption 1) under an affine transformation mapping from \(\hat{T}\) onto \(T\). For each node \(a\in\mathcal{A}_{T}\), consider the patch of elements \(\Omega_{a}:=\{T\in\Omega_{h},a\in\overline{T}\}\). The Oswald interpolation operator \(I_{Os}:V_{h}^{p}\to U_{h}^{p}\) (see, e.g., [12, Section 5.5.2]), is defined locally by setting_
\[I_{Os}(v_{h})(a):=\frac{1}{|\Omega_{a}|}\sum_{T\in\Omega_{a}}v_{h}\big{|}_{T}(a )\,,\qquad\forall v_{h}\in V_{h}^{p}, \tag{13}\]
_where \(|\Omega_{a}|\) stands for the cardinality of \(\Omega_{a}\)._
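For \(p=1\) on triangles, (13) amounts to averaging the per-element nodal values at each shared vertex; the plain-Python sketch below (whose array layouts are assumptions of this illustration) makes that explicit.

```python
import numpy as np

def oswald_average(cells, elem_nodal_values):
    # cells: (n_cells, 3) vertex indices of each triangle
    # elem_nodal_values: (n_cells, 3) value of v_h|_T at the vertices of T
    n_vertices = cells.max() + 1
    acc = np.zeros(n_vertices)
    count = np.zeros(n_vertices)
    for T, vals in zip(cells, elem_nodal_values):
        acc[T] += vals        # accumulate the contribution of each element patch
        count[T] += 1
    return acc / count        # I_Os(v_h) evaluated at every vertex
```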
The following Lemma (proved in [12, Lemma 5.3]) establishes the interpolation error associated with \(I_{Os}\).
**Lemma 2** (Interpolation error).: _For all \(T\in\Omega_{h}\), the following estimate holds:_
\[\|v_{h}-I_{Os}v_{h}\|_{T}\lesssim\left(\frac{h_{T}}{p}\right)^{\frac{1}{2}}\sum_{e\in\mathcal{F}_{T}}\|[\![v_{h}]\!]\|_{e}\,,\qquad\forall v_{h}\in V_{h}^{p},\]
_where \(\mathcal{F}_{T}=\left\{e\in\mathcal{F}_{h}:e\cap\overline{T}\neq\emptyset\right\}\)._
The following Lemma (proved in [12, Lemma 5.4]) establishes an estimate for the \(L^{2}(\Omega)\)-projection error.

**Lemma 3** (Error estimate for the \(L^{2}(\Omega)\)-projection).: _Let \(\Pi_{h}:L^{2}(\Omega)\mapsto U_{h}^{p}\) be the \(L^{2}(\Omega)\)-orthogonal projector onto \(U_{h}^{p}\). For all \(u\in H^{s}(\Omega),s\geq 1\), the following estimates (14) and (15) hold:_
\[\|u-\Pi_{h}u\|_{\Omega}\lesssim p^{\frac{1}{4}}\left(\frac{h}{p}\right)^{r}\| u\|_{r,\Omega}, \tag{14}\]
\[\|\nabla\left(u-\Pi_{h}u\right)\|_{\Omega}\lesssim p^{\frac{5}{4}}\left(\frac{ h}{p}\right)^{r-1}\|u\|_{r,\Omega}, \tag{15}\]
_with \(r=\min(p+1,s)\)._
## 4 Residual minimization method onto bubble enriched test spaces
### Preliminaries
Assume that \(\nabla\cdot\mathbf{b}=0\) and \(\mu(x)>\mu_{0}>0\) (a.e. in \(\Omega\)); thus, fulfilling condition (4). Related to Problem (3), we consider the bilinear form \(a\colon H^{1}(\Omega)\times H^{1}(\Omega)\to\mathbb{R}\) defined by
\[a(v,w):=\left(\mu v,w\right)_{\Omega}-\left(v,\mathbf{b}\cdot\nabla w\right)_ {\Omega}+\left(\mathbf{b}\cdot\mathbf{n}\,v,w\right)_{\partial\Omega^{+}}, \quad\forall v,w\in H^{1}(\Omega), \tag{16}\]
and the \(hp\)-CIP bilinear form \(j_{h,k}:H^{s}(\Omega_{h})\times H^{s}(\Omega_{h})\to\mathbb{R}\) (with \(s>\frac{3}{2}\)) defined by
\[j_{h,k}(v,w):=\sum_{e\in\mathcal{F}_{h}^{0}}\gamma_{h,k}\big([\![\nabla v\cdot\mathbf{n}_{e}]\!],\,[\![\nabla w\cdot\mathbf{n}_{e}]\!]\big)_{e},\quad\forall v,w\in H^{s}(\Omega_{h}), \tag{17}\]

where \(\gamma_{h,k}:=\frac{h_{e}^{2}}{k^{\alpha}}\left\|\mathbf{b}\cdot\mathbf{n}_{e}\right\|_{L^{\infty}(e)}\) is the stabilization parameter (see [12, 14]). We determine the exponent \(\alpha\) using the \(hp\)-convergence analysis, and \(k\) is related to the particular polynomial order used in the discrete counterparts.
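For illustration, a hedged UFL/legacy-FEniCS sketch of the advection-reaction form (16) and the gradient-jump penalty (17) could read as follows; the advection field, the reaction coefficient, the value of \(\alpha\), and the use of the cell diameter as a stand-in for \(h_{e}\) are assumptions made only for this sketch, and the inflow boundary condition is omitted.

```python
from dolfin import *

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "Lagrange", 2)
u, v = TrialFunction(V), TestFunction(V)

b_vec = Constant((1.0, 0.5))        # placeholder advection field
mu    = Constant(0.1)               # reaction coefficient
f     = Constant(1.0)               # placeholder source
n     = FacetNormal(mesh)
h_e   = avg(CellDiameter(mesh))     # stand-in for the facet diameter h_e
k, alpha = 3, 3.5                   # alpha chosen arbitrarily for this sketch
gamma = h_e**2 / k**alpha * abs(dot(avg(b_vec), n('+')))

# advection-reaction form a(.,.) of (16); the outflow boundary is selected
# with a conditional on b.n > 0
a_form = mu*u*v*dx - u*dot(b_vec, grad(v))*dx \
         + conditional(gt(dot(b_vec, n), 0.0), 1.0, 0.0)*dot(b_vec, n)*u*v*ds
# gradient-jump penalty j_{h,k}(.,.) of (17) over interior facets
j_form = gamma*dot(jump(grad(u)), n('+'))*dot(jump(grad(v)), n('+'))*dS

b_h = a_form + j_form
l_h = f*v*dx
```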
We define the following norm for functions \(w\in H^{s}(\Omega_{h})\), with \(s>\frac{3}{2}\) (see (12, Eq. 8)) :
\[\|w\|_{h,k}^{2}:=\left\|\mu_{0}^{1/2}w\right\|_{\Omega}^{2}+\frac{1}{2}\left\||\mathbf{b}\cdot\mathbf{n}|^{1/2}\,w\right\|_{\partial\Omega}^{2}+j_{h,k}(w,w),\qquad\forall\ w\in H^{s}(\Omega_{h}). \tag{18}\]
**Lemma 4** (Coercivity).: _For \(s>\frac{3}{2}\), the bilinear form \(b_{h}(\cdot,\cdot):=a(\cdot,\cdot)+j_{h,k}(\cdot,\cdot)\) is coercive in \(H^{s}(\Omega_{h})\) with respect to the norm defined in (18)._
Proof.: Since \(\nabla\cdot\mathbf{b}=0\), observe that \(\left(w,\mathbf{b}\cdot\nabla w\right)_{\Omega}=\frac{1}{2}\big{(}\mathbf{b}\cdot \mathbf{n}w,w\big{)}_{\partial\Omega}\), for all \(w\in H^{s}(\Omega_{h})\). Hence,
\[a(w,w)=\big(\mu w,w\big)_{\Omega}-\frac{1}{2}\big(\mathbf{b}\cdot\mathbf{n}\,w,w\big)_{\partial\Omega}+\big(\mathbf{b}\cdot\mathbf{n}\,w,w\big)_{\partial\Omega^{+}}=\big(\mu w,w\big)_{\Omega}+\frac{1}{2}\left\||\mathbf{b}\cdot\mathbf{n}|^{1/2}w\right\|_{\partial\Omega}^{2}.\]
Thus, since \(\mu(x)>\mu_{0}\), we get
\[b_{h}(w,w)=a(w,w)+j_{h,k}(w,w)\geq\|w\|_{h,k}^{2},\quad\forall w\in H^{s}( \Omega_{h}).\]
### Continuous interior penalty method onto bubble enriched continuous space.
Consider the CIP formulation in the bubble enriched continuous spaces \(U_{h}^{p,k}\) as follows,
\[\left\{\begin{array}{l}\text{Find }\theta_{h}\in U_{h}^{p,k}\text{ such that:}\\ b_{h}(\theta_{h},\nu_{h})=l_{h}(\nu_{h}),\quad\forall\,\nu_{h}\in U_{h}^{p,k},\end{array}\right. \tag{19}\]
where \(b_{h}(\theta_{h},\nu_{h})\) is the sum of the \(a\) and \(j\) forms defined in (16) and (17), and \(l_{h}(\nu_{h})\) is \((f,\nu_{h})_{\Omega}\).
In the following, we assume that \(u\in H^{s}(\Omega)\), \(s>\frac{3}{2}\) solves (3) and \(\theta_{h}\in U_{h}^{p,k}\) solves (19). Thus, the formulation (19) satisfies the following properties,
**Lemma 5** (Coercivity).: _For all \(\nu_{h}\in U_{h}^{p,k}\) then,_
\[b_{h}(\nu_{h},\nu_{h})\gtrsim\|\!|\nu_{h}|\!|\!|_{k,h}^{2}. \tag{20}\]
Proof.: Follows from Lemma 4.
**Lemma 6** (Consistency).: _Let \(u\in H^{s}(\Omega),s>\frac{3}{2}\) solve (3) and let \(\theta_{h}\in U_{h}^{p,k}\) solve (19). Then, for all \(\nu_{h}\in U_{h}^{p,k}\),_
\[b_{h}(u-\theta_{h},\nu_{h})=0. \tag{21}\]
Proof.: Since \(u\in H^{s}(\Omega)\) with \(s>\frac{3}{2}\), the jump \([\![\nabla u\cdot\mathbf{n}_{e}]\!]=0\) for all \(e\in\mathcal{F}_{h}^{0}\), so that \(j_{h,k}(u,\nu_{h})=0\) for all \(\nu_{h}\in U_{h}^{p,k}\). Thus,

\[b_{h}(u-\theta_{h},\nu_{h})=b_{h}(u,\nu_{h})-b_{h}(\theta_{h},\nu_{h})=a(u,\nu_{h})-(f,\nu_{h})_{\Omega}=0.\]
### Residual minimization
Calo et al. [16] present the Adaptive Stabilized Finite Element Method (AS-FEM), which combines residual minimization with an inf-sup stable discretization (e.g., a discontinuous Galerkin (dG) formulation) to deliver a robust on-the-fly adaptive method. Consequently, this method delivers a stabilized approximation of the solution and a residual representative in the dG space that drives adaptivity with no further a-posteriori error analysis. In the following, we enrich the discretization space with bubbles, obtaining \(U_{h}^{p,k}\), see (9), and use it as a test space; this choice guarantees a distance between the trial and the test space, which is suitable for residual minimization and for adaptive mesh refinement guided by the built-in error representative. We adopt the CIP formulation (19), which is coercive-stable, and show a new result inspired by [12] for the a-priori error estimate of the residual minimization method that relies on boundedness and discrete coercivity properties of \(b_{h}(\cdot,\cdot)\). In the abstract setting of AS-FEM, we consider two real Hilbert spaces \(U,V\), and a conforming subspace \(U_{h}^{p}\) of either \(U\) or \(V\). In addition, we assume that the discrete variational formulation satisfies Lemmas 5 and 6. The main idea behind AS-FEM is to find \(u_{h}\in U_{h}^{p}\) using a well-posed discrete variational formulation (e.g., the discrete problem (19)) set in a richer discrete space, here chosen as \(U_{h}^{p,k}\), and to compute a residual representative of the error in the norm \(|\!|\!|\cdot|\!|\!|_{h,k}\) via residual minimization in the discrete dual space \(\left(U_{h}^{p,k}\right)^{*}\).
Let \(B_{h}:U_{h}^{p,k}\to\left(U_{h}^{p,k}\right)^{*}\) be the discrete operator induced by the bilinear form,

\[\langle B_{h}w_{h},\nu_{h}\rangle:=b_{h}(w_{h},\nu_{h}),\qquad\forall\,w_{h},\nu_{h}\in U_{h}^{p,k}, \tag{22}\]

and let \(R_{h}^{-1}\) be the inverse of the Riesz map \(R_{h}:U_{h}^{p,k}\to\left(U_{h}^{p,k}\right)^{*}\) defined by

\[\langle R_{h}\tau_{h},\nu_{h}\rangle_{\left(U_{h}^{p,k}\right)^{*}\times U_{h}^{p,k}}:=(\tau_{h},\nu_{h})_{U_{h}^{p,k}},\qquad\forall\,\tau_{h},\nu_{h}\in U_{h}^{p,k}. \tag{23}\]
In addition, the dual triple norm on \(\left(U_{h}^{p,k}\right)^{*}\) is defined as

\[|\!|\!|\phi|\!|\!|_{\left(U_{h}^{p,k}\right)^{*}}:=\sup_{\nu_{h}\in U_{h}^{p,k}\setminus\{0\}}\frac{\langle\phi,\nu_{h}\rangle}{|\!|\!|\nu_{h}|\!|\!|_{h,k}},\qquad\forall\,\phi\in\left(U_{h}^{p,k}\right)^{*}. \tag{24}\]

The residual minimization method seeks \(u_{h}\in U_{h}^{p}\) minimizing, in this dual norm, the residual of the stable formulation (19), that is,

\[u_{h}:=\arg\min_{w_{h}\in U_{h}^{p}}\frac{1}{2}\left\|R_{h}^{-1}\left(l_{h}-B_{h}w_{h}\right)\right\|_{U_{h}^{p,k}}^{2}, \tag{25}\]

which is equivalent to the following saddle-point problem: find \((\epsilon_{h},u_{h})\in U_{h}^{p,k}\times U_{h}^{p}\) such that

\[\left\{\begin{array}{rcll}(\epsilon_{h},\nu_{h})_{U_{h}^{p,k}}+b_{h}(u_{h},\nu_{h})&=&l_{h}(\nu_{h}),&\forall\,\nu_{h}\in U_{h}^{p,k},\\ b_{h}(w_{h},\epsilon_{h})&=&0,&\forall\,w_{h}\in U_{h}^{p},\end{array}\right. \tag{26}\]

where \(\epsilon_{h}\in U_{h}^{p,k}\) is the residual representative that drives the adaptive refinement.

**Theorem 1** (A-priori error estimate).: _Let \(u\in H^{s}(\Omega)\), \(s>\frac{3}{2}\), solve (3), let \((\epsilon_{h},u_{h})\in U_{h}^{p,k}\times U_{h}^{p}\) solve (26), and assume that the bilinear form satisfies the boundedness property_

\[\sup_{\nu_{h}\in U_{h}^{p,k}\setminus\{0\}}\frac{b_{h}(\eta,\nu_{h})}{|\!|\!|\nu_{h}|\!|\!|_{h,k}}\lesssim|\!|\!|\eta|\!|\!|_{h,k,\theta},\qquad\forall\,\eta\in\left(U_{h}^{p}\right)^{\perp}. \tag{27a}\]

_Then,_

\[|\!|\!|u-u_{h}|\!|\!|_{h,k}\lesssim|\!|\!|u-\Pi_{h}u|\!|\!|_{h,k}+|\!|\!|u-\Pi_{h}u|\!|\!|_{h,k,\theta}. \tag{28}\]
Proof.: We use the triangle inequality and the inf-sup condition implied by the coercivity of Lemma 5:

\[|\!|\!|u-u_{h}|\!|\!|_{h,k}\leq|\!|\!|u-\Pi_{h}u|\!|\!|_{h,k}+|\!|\!|\Pi_{h}u-u_{h}|\!|\!|_{h,k} \tag{29}\]
\[\leq|\!|\!|u-\Pi_{h}u|\!|\!|_{h,k}+\sup_{v_{h}\in U_{h}^{p}\setminus\{0\}}\frac{b_{h}\left(u_{h}-\Pi_{h}u,v_{h}\right)}{|\!|\!|v_{h}|\!|\!|_{h,k}}. \tag{30}\]

We now bound the last term of (30). Let \(\theta_{h}\) solve equation (19); the consistency property (21) establishes:
\[l_{h}(v_{h})=b_{h}(\theta_{h},v_{h})=b_{h}(u,v_{h})\qquad\forall v_{h}\in U^{p,k}. \tag{31}\]
Since \(u_{h}\) solves (26), the first equation in (26) and (31) imply
\[b(u_{h}-\Pi_{h}u,v_{h})=b(u-\Pi_{h}u,v_{h})-(\epsilon_{h},v_{h})_{h,k}.\]
Then,
\[\sup_{v_{h}\in U_{h}^{p}\setminus\{0\}}\frac{b_{h}\left(u_{h}-\Pi_{h}u,v_{h}\right)}{|\!|\!|v_{h}|\!|\!|_{h,k}}\leq\sup_{v_{h}\in U_{h}^{p}\setminus\{0\}}\frac{b_{h}(u-\Pi_{h}u,v_{h})}{|\!|\!|v_{h}|\!|\!|_{h,k}}+\sup_{v_{h}\in U_{h}^{p}\setminus\{0\}}\frac{(\epsilon_{h},v_{h})_{h,k}}{|\!|\!|v_{h}|\!|\!|_{h,k}}\lesssim|\!|\!|u-\Pi_{h}u|\!|\!|_{h,k,\theta}+|\!|\!|\epsilon_{h}|\!|\!|_{h,k},\]

using property (27a) for the first term and the Cauchy-Schwarz inequality for the second.
Moreover,
\[|\!|\!|\epsilon_{h}|\!|\!|_{h,k}^{2}=l_{h}(\epsilon_{h})-b_{h}(u_{h},\epsilon_{h})=b_{h}(u-u_{h},\epsilon_{h})=b_{h}(u-\Pi_{h}u,\epsilon_{h})\lesssim|\!|\!|u-\Pi_{h}u|\!|\!|_{h,k,\theta}\,|\!|\!|\epsilon_{h}|\!|\!|_{h,k},\]

using the first equation in (26), identity (31), the second equation in (26) (since \(\Pi_{h}u-u_{h}\in U_{h}^{p}\)), and property (27a). Hence \(|\!|\!|\epsilon_{h}|\!|\!|_{h,k}\lesssim|\!|\!|u-\Pi_{h}u|\!|\!|_{h,k,\theta}\), which combined with (30) yields (28).

The boundedness property (27a) indeed holds in the present setting.

**Proposition 1** (Boundedness).: _The bilinear form \(b_{h}(\cdot,\cdot)\), with the bubble-enriched test space \(U_{h}^{p,k}\), satisfies the boundedness property (27a)._
Proof.: See Appendix A.
**Assumption 3** (Saturation).: _Let \(u\in U\) be the solution of (3) and \(\theta_{h}\in U_{h}^{p,k}\) be the discrete solution of (19) and let \(u_{h}\in U_{h}^{p}\) be the solution of the saddle-point problem (26). There exists a constant \(C_{s}\in[0,1)\), uniform with respect of the mesh size such that,_
\[\left|\kern-1.075pt\left|\kern-1.075pt\left|u-\theta_{h}\right|\kern-1.075pt \right|\kern-1.075pt\right|_{h,k}\leq C_{s}\left|\kern-1.075pt\left|u-u_{h} \right|\kern-1.075pt\right|\kern-1.075pt\right|_{h,k}.\]
This assumption states that the discrete solution \(\theta_{h}\) is closer than \(u_{h}\) to the exact solution \(u\) with respect to the norm \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right| \kern-1.075pt\right|\kern-1.075pt\right|\kern-1.075pt\right|_{h,k}\). This is a relevant assumption as \(U_{h}^{p}\subset U_{h}^{p,k}\). However, this assumption does not necessarily hold in the pre-asymptotic regime (see [16]) or if \(U_{h}^{p,k}\) is not rich enough. Additionally, we assume that the error estimate in (28) is at least quasi-optimal in the following sense:
**Assumption 4** (Optimality and quasi-optimality).: _If the analytical solution \(u\) is sufficiently regular, the quantities \(\left|\kern-1.075pt\left|\kern-1.075pt\left|u-u_{h}\right|\kern-1.075pt\right| \kern-1.075pt\right|_{h,k}\) and \(\left|\kern-1.075pt\left|\kern-1.075pt\left|u-\Pi_{h}^{p}u_{h}\right|\kern-1.075pt \right|\kern-1.075pt\right|_{h,k,\theta}\) follow the same convergence rate as \(h\to 0^{+}\). If the norms \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right| \kern-1.075pt\right|\kern-1.075pt\right|_{h,k}\) and \(\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right| \kern-1.075pt\right|_{h,k,\theta}\) are equal, then the error estimate (28) is optimal, otherwise, it is quasi-optimal._
The robustness of the residual representative and the a posteriori error estimate is given by the following result:
**Proposition 2** (Robustness of the residual representative and a posteriori error estimates).: _Considering the same assumptions of Theorem 1 and the triple norm in \(U_{h}^{p,k}\) defined in (18), the following bounds for \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\varepsilon_{h}\right|\kern-1.075pt \right|\kern-1.075pt\right|_{h,k}\) hold:_
\[\left|\kern-1.075pt\left|\kern-1.075pt\left|\theta_{h}-u_{h}\right|\kern-1.075pt \right|\kern-1.075pt\right|_{h,k}\lesssim\left|\kern-1.075pt\left|\kern-1.075pt \left|\varepsilon_{h}\right|\kern-1.075pt\right|\kern-1.075pt\right|_{h,k} \lesssim\left|\kern-1.075pt\left|\kern-1.075pt\left|u-\Pi_{h}^{p}u\right| \kern-1.075pt\right|\kern-1.075pt\right|_{h,k,\theta}, \tag{33}\]
_where, \(\theta_{h}\) is the solution of the discrete problem (19), \(\Pi_{h}^{p}:L^{2}(\Omega)\mapsto U_{h}^{p,k}\) is the \(L^{2}(\Omega)\)-orthogonal projection onto \(U_{h}^{p,k}\) and \(u\) is the exact solution of the continuous problem (3). Moreover, if the solution satisfies Assumption 3, the following efficiency error estimate holds:_
\[\left|\kern-1.075pt\left|\varepsilon_{h}\right|\kern-1.075pt\right|\kern-1.075pt \right|_{h,k}\lesssim\left|\kern-1.075pt\left|\kern-1.075pt\left|u-u_{h} \right|\kern-1.075pt\right|\kern-1.075pt\right|_{h,k}. \tag{34}\]
_Additionally, if Assumption 4 is satisfied, \(\left|\kern-1.075pt\left|\theta_{h}-u_{h}\right|\kern-1.075pt\right|\kern-1.075pt \right|_{h,k}\) and \(\left|\kern-1.075pt\left|\varepsilon_{h}\right|\kern-1.075pt\right|\kern-1.075pt \right|\kern-1.075pt\right|_{h,k}\) have the same convergence rate as \(h\to 0^{+}\)._
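Before turning to the experiments, the following legacy-FEniCS sketch illustrates how a saddle-point system with the structure of (26) can be assembled: the residual representative is sought in the enriched space and the stabilized solution in the \(C^{0}\) trial space. The bilinear form and the inner product are left as trivial placeholders, so the snippet only shows the block structure and the single direct solve, not the actual CIP discretization.

```python
from dolfin import *

mesh = UnitSquareMesh(16, 16)
cell = mesh.ufl_cell()
p, k = 1, 3
Upk = FiniteElement("Lagrange", cell, p) + FiniteElement("Bubble", cell, k)
Up  = FiniteElement("Lagrange", cell, p)
W   = FunctionSpace(mesh, MixedElement([Upk, Up]))   # (residual, solution) pair

(eps, u) = TrialFunctions(W)
(v,  w)  = TestFunctions(W)

def b_h(r, s):               # placeholder for the CIP bilinear form b_h(.,.)
    return r*s*dx
def inner_hk(r, s):          # placeholder inner product inducing the discrete norm
    return r*s*dx
f = Constant(1.0)            # placeholder datum

lhs = inner_hk(eps, v) + b_h(u, v) + b_h(w, eps)     # the two equations of (26)
rhs = f*v*dx

sol = Function(W)
solve(lhs == rhs, sol)       # one direct LU solve, as in the experiments
eps_h, u_h = sol.split()
```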
## 5 Numerical experiments.
In this section, we consider several numerical experiments in the context of the advection-reaction equation to demonstrate the performance of the residual minimization method considering bubble-enriched test spaces. The method can be extended to advection-diffusion-reaction problems, modifying the formulation and the discrete norm (18) to the one defined in [12]. In our first numerical example, we compare the performance of the residual-based error estimate to guide adaptivity against the a posteriori estimate proposed in [10] considering the same initial problem setting and element marking strategy. As a second numerical example, we compare the results for the goal-oriented adaptivity (GoA) strategy proposed in [49] using a dG framework against those obtained using the CIP formulation and residual minimization using a bubble-enriched test space. Although in [49], the results were obtained using a different formulation, the comparison using continuous test spaces is still fair since we calculate the relative error of a quantity of interest (QoI).
In all our numerical experiments, we use a standard adaptive procedure that considers an iterative process in which each iteration consists of the following four steps:
\[\text{SOLVE}\rightarrow\text{ESTIMATE}\rightarrow\text{MARK}\rightarrow\text{REFINE}. \tag{35}\]
In addition, we adopt the Dörfler bulk-chasing criterion (see [28]), where elements are marked when their local error estimate is above a fraction of the total estimated error. Following [16; 49], we adopt a 50% error fraction for the energy-based adaptivity and a 20% error fraction for the GoA strategy. We also employ a bisection-type refinement criterion [5] for the adaptive solver. Finally, we use the FEniCS [2] Python library to implement each of the numerical examples, and we use a direct LU solver for the resulting linear systems.
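One common realization of the bulk-chasing marking step, independent of the FEM backend, is sketched below; the use of squared indicators and the toy data are assumptions of this illustration.

```python
import numpy as np

def dorfler_mark(eta, fraction=0.5):
    # eta: per-element error indicators; mark the smallest set of elements
    # whose squared indicators sum to `fraction` of the total squared error
    order = np.argsort(eta**2)[::-1]                      # largest contributions first
    cumulative = np.cumsum(eta[order]**2)
    n_marked = np.searchsorted(cumulative, fraction * cumulative[-1]) + 1
    marked = np.zeros(len(eta), dtype=bool)
    marked[order[:n_marked]] = True
    return marked                                         # boolean mask of marked cells

eta = np.array([0.9, 0.1, 0.4, 0.05, 0.7])
print(dorfler_mark(eta, fraction=0.5))
```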
### Advection-reaction problem
We consider the advection-reaction problem (3) over the unit square domain \(\Omega=[0,1]^{2}\subset\mathbb{R}^{2}\). Following [10], we take the reaction parameter \(\mu=0.1\), the velocity field
\[\mathbf{b}(x_{1},\,x_{2})=\left(\frac{x_{2}+1}{\sqrt{x_{1}^{2}+(x_{2}+1)^{2}}}, \frac{-x_{1}}{\sqrt{x_{1}^{2}+(x_{2}+1)^{2}}}\right)^{T}\]
and \(g(x_{1},x_{2})\) so that the exact solution reads as,
\[u(x_{1},x_{2})=e^{\mu\sqrt{x_{1}^{2}+(x_{2}+1)^{2}}\arcsin\left(\frac{x_{2}+1} {\sqrt{x_{1}^{2}+(x_{2}+1)^{2}}}\right)}\arctan\left(\frac{\sqrt{x_{1}^{2}+(x_{ 2}+1)^{2}}-1.5}{\delta}\right)\]
where \(\delta\) is a parameter that controls the stiffness of the internal boundary layer (see Figure 2). In this example, we set \(\delta=0.01\) to obtain a smooth solution to assess the expected convergence rates. In addition, we apply the adaptive strategy from (35) to solve this problem using the a-posteriori residual estimate of [10] and the residual minimization error estimate to guide the adaptivity from the same initial mesh (see Figure 3(a)).

Figures 3(b) and 3(c) compare the refined meshes resulting from residual-based and a-posteriori estimates at similar numbers of degrees of freedom (DoFs). The a-posteriori estimate focuses the mesh refinement at the outflow boundary and the interior layer, with less emphasis at the inflow boundary. The residual-based estimator concentrates the refinement at the inflow boundary and progressively resolves the interior layer.
Figure 3: Initial & refined meshes for similar total degrees-of-freedom (DoF) number using residual-based & a-posterior estimator from [10]
Figure 2: 2D advection-reaction problem (exact solution)
Figure 4 shows the comparison of the \(L^{2}\)-relative error between the a posteriori error estimate proposed in [10] (red curve) and the residual-based estimate (blue curve). For the residual minimization scheme, we choose a piecewise continuous trial space of order \(p=1\) and its enrichment with bubbles of order \(k=p+2\) as a test space. As for the a-posteriori error estimation strategy, we use conforming piecewise continuous trial and test space of order \(p=1\)[10]. We plot all quantities against the total number of degrees of freedom (DoFs), and the triangle shows the slope DoFs\({}^{-1}\), which is the optimal convergence in the \(L^{2}\)-norm for smooth problems [10]. The convergence rate of the CIP formulation that uses the residual minimization strategy over a bubble-enriched test space is slightly better than the one obtained from applying the CIP method with the a-posteriori error estimation strategy. In addition, note, in particular, the superconvergence of the \(L^{2}\)-error using CIP and residual minimization method when the mesh enters the asymptotic range. This is meaningful since the residual error estimate is defined in the energy norm. Although the convergence rate is improved in the \(L^{2}\)-norm the total number of DoFs using residual minimization is greater than the DoFs using the strategy in [10]. This fact is because residual minimization solves a full saddle-point system that gives a discrete solution and a residual estimate in the bubble-enriched test space.
Next, we study the validity of the saturation assumption (see Assumption 3) by monitoring, in the triple norm \(|\!|\!|\cdot|\!|\!|_{h,k}\), the ratio between the errors of the discrete solutions \(\theta_{h}\) and \(u_{h}\) along the adaptive refinements.
### Goal-oriented adaptivity for advection-reaction problems
Following [49], we perform Goal-Oriented Adaptivity (GoA). The key insight behind this theory is to consider an adjoint continuous formulation and an adjoint problem to the saddle-point formulation (26). Thus, we approximate a quantity of interest (QoI) \(q(u)\), where \(q:U\mapsto\mathbb{R}\) is a bounded linear functional and \(u\in U\) is the exact solution of the continuous problem (3). The GoA strategy of [49] solves an additional continuous adjoint problem, which is equivalent to solving the following saddle-point problem as the adjoint formulation of problem (26):
\[\left\{\begin{array}{rcl}\text{Find }(\nu_{h}^{*},w_{h}^{*})\in U_{h}^{p,k} \times U_{h}^{p},&\text{such that:}\\ (\nu_{h}^{*},\nu_{h})_{U^{p,k}}+b_{h}(w_{h}^{*},\nu_{h})&=&0,&\forall\,\nu_{h} \in U_{h}^{p,k},\\ b_{h}(w_{h},\nu_{h}^{*})&=&q(w_{h}),&\forall\,w_{h}\in U_{h}^{p},\end{array}\right. \tag{37}\]
where \(\nu_{h}^{*}\in U_{h}^{p,k}\) approximates the discrete solution of the continuous adjoint problem (see [49]), i.e., the adjoint counterpart of the solution \(u_{h}\in U_{h}^{p}\) of (26), while \(w_{h}^{*}\in U_{h}^{p}\) is an additional variable that constrains the solution dimension. Moreover, as the direct saddle-point problem (26) is well-posed, the adjoint saddle-point problem (37) is also well-posed; the two problems share the same left-hand side. Thus, we approximate the error in the QoI following [49], by solving an auxiliary discrete problem that has a unique solution and is well-posed,
\[\left\{\begin{array}{rcl}\text{Find }\varepsilon_{h}^{*}\in U_{h}^{p,k} \text{ such that:}\\ (\varepsilon_{h}^{*},\nu_{h})=q(\nu_{h})-b_{h}(\nu_{h},\nu_{h}^{*}),&\forall \,\nu_{h}\in U_{h}^{p,k}\end{array}\right. \tag{38}\]
We adopt the following adjoint-residual-based estimator for the QoI [49],
\[\mathbb{E}_{T}^{2}(\varepsilon_{h},\varepsilon_{h}^{*}):=\left| \left(\varepsilon_{h},\varepsilon_{h}^{*}\right)_{U_{h}^{p,k}}\right| \tag{39}\]
which solves (26) and (37) along with the adjoint residual problem (38). Then, we mark each element with its local upper bound \(|\!|\!|\varepsilon_{h}|\!|\!|_{T,h,k}\,|\!|\!|\varepsilon_{h}^{*}|\!|\!|_{T,h,k}\), where \(|\!|\!|\cdot|\!|\!|_{T,h,k}\) denotes the restriction of the norm (18) to the element \(T\).
In addition, the QoI has the following form,
\[q(u)=\frac{1}{|\Omega_{0}|}\int_{\Omega_{0}}ud\Omega_{0}, \tag{41}\]
where \(u\) is the exact solution (40) and \(\Omega_{0}=(0.7,0.8)\times(0.3,0.5)\) is a subdomain of the physical domain \(\Omega\). As the starting point for the adaptive procedure, we consider the \(\Omega_{0}\)-conforming mesh of Figure 5(a). Figure 6 displays the resulting adapted meshes using a CIP formulation with bubble-enriched test space and a dG framework at a similar total number of DoFs. These figures show that the resulting adapted meshes are similar, and both methodologies adjust the refinement process consistently with the nature of the physical problem. Next, we compare error plots against the expected optimal convergence slope \(\text{DoF}^{-(p+\frac{1}{2})}\), where \(p\) is the polynomial order of the trial space (see [32]). Since the error is measured in a QoI, we directly compare the results obtained in [49] using a dG framework against those of the CIP method with residual minimization onto a bubble-enriched test space. We evaluate the
Figure 6: Initial & refined meshes for similar total degrees-of-freedom DoF number using CIP formulation with a bubble enriched test space (\(p=1\), \(k=3\)) & a dG framework (\(p=1\), \(\Delta p=0\))
Figure 7: Relative error (\(|q(u-u_{h})|/|q(u)|\) in the quantity of interest (QoI) using residual minimization with a CIP formulation with bubble-enriched test space (solid lines) and a dG framework (dashed lines)
numerical results considering two polynomial orders for the trial space (namely \(p=1,2\)), with the same polynomial order for the dG test space (i.e., \(\Delta p=0\)) and \(k=p+2\) for the bubble-enriched test space. Figure 7 shows the convergence of the relative error in the QoI for the CIP formulation with residual minimization onto bubble-enriched test spaces (solid lines) and the dG framework proposed in [49] (dashed lines). For low-order approximations, the CIP formulation with residual minimization shows superconvergence compared to the residual minimization using a dG test space, which achieves the optimal convergence rate. For higher-order polynomial spaces, CIP has a performance similar to the dG framework.
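For completeness, a hedged legacy-FEniCS sketch of how the quantity of interest (41) can be assembled over the marked subdomain \(\Omega_{0}\) is given below; the mesh, the space, and the placeholder solution are illustrative assumptions.

```python
from dolfin import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "Lagrange", 1)
u_h = interpolate(Expression("x[0]*x[1]", degree=2), V)   # stand-in for the solution

class Omega0(SubDomain):                                   # (0.7,0.8) x (0.3,0.5)
    def inside(self, x, on_boundary):
        return 0.7 <= x[0] <= 0.8 and 0.3 <= x[1] <= 0.5

markers = MeshFunction("size_t", mesh, mesh.topology().dim(), 0)
Omega0().mark(markers, 1)
dx_sub = Measure("dx", domain=mesh, subdomain_data=markers)

area = assemble(Constant(1.0)*dx_sub(1))
q_u  = assemble(u_h*dx_sub(1)) / area                      # q(u_h) of (41)
print(q_u)
```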
## 6 Contributions and future work.
This paper extends AS-FEM to use the \(hp\)-CIP finite element method with an enriched test space. We use a stable formulation (namely, the CIP formulation) in the trial space and enlarge the test space via bubble enrichment of the trial space to estimate a residual representative to guide adaptivity, see [42]. This is meaningful as, in this case, the formulation is already stable in the trial space. Since we choose the coercive stable CIP formulation, we derive a new a-priori error estimate result for the residual minimization method using continuous bubble-enriched test spaces, which relies on an orthogonality argument for the boundedness of the discrete bilinear form that proves quasi-optimal convergence for advection-reaction problems. We also confirm numerically that the residual estimate is robust regarding the energy norm and competitive with the a-posteriori error estimate available in the literature. Moreover, we show that using the CIP formulation and residual minimization onto bubble-enriched test spaces improves the convergence rates in the \(L^{2}\)-norm. However, the residual estimator is calculated using the energy norm. The numerical results for goal-oriented adaptivity suggest that applying residual minimization and a coercive stable formulation using bubble-enriched continuous test spaces improves convergence rates for the error in the quantity of interest when the polynomial order for the trial space is low. Additionally, the method maintains the optimal convergence rates for high-order spaces. Further research will explore the performance of this method applied to other challenging problems. For example, we will study the extension of this method to non-linear hyperbolic equations that describe many engineering problems, such as solving the shallow water equations and coupled fluid and solid mechanics equations for assessing the long-term stability of tailing dams.
#### Acknowledgments
This publication was made possible in part by the support of Vista Energy Company, which promoted the exploration of new numerical techniques for simulating industry-related problems. Additionally, this publication was made possible in part by the Professorial Chair in Computational Geoscience at Curtin University. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 777778 (MATHROCKS).
## Appendix A Proof of Proposition 1.
We separate the proof of Proposition 1 into two parts: Lemma 7 and Theorem 2, the latter giving the a-priori error estimate for the bubble-enriched test space.
**Lemma 7** (Boundedness).: _Consider \(k=p+1>d=2\). If \(\eta\in\left(U_{h}^{p}\right)^{\perp}:=\{z\in L^{2}(\Omega):(z,\nu_{h})=0,\ \forall\nu_{h}\in U_{h}^{p}\}\), then the following result holds:_
\[\sup_{\nu_{h}\in U_{h}^{p,k}}\frac{b_{h}(\eta,\nu_{h})}{\|\nu_{h}\|_{k,h}}\lesssim|\!|\!|\eta|\!|\!|_{k,h,\#},\] (A.1)
_where the norm_
\[|\!|\!|\eta|\!|\!|_{k,h,\#}:=\|\eta\|_{k,h}+\left(k^{2}\,h^{\frac{1}{2}}p^{1/2}+k^{\alpha/2}p^{-1/2}\right)|\eta|_{h}\]
_and \(\left|\cdot\right|_{h}\) is the semi-norm_
\[\left|\eta\right|_{h}:=\left(\sum_{T\in\Omega_{h}}h_{T}^{-1}\beta_{T,\infty} \|\eta\|_{T}^{2}\right)^{1/2}.\]
Proof.: The only term to estimate in (19) is \(\left(\eta,\mathbf{b}\cdot\nabla\nu_{h}\right)_{\Omega}\). Let \(\mathbf{b}_{h}\) be the \(L^{2}\)-orthogonal projection of \(\mathbf{b}\) onto \(V_{h}^{0}\). Then,
\[\left(\eta,\mathbf{b}\cdot\nabla\nu_{h}\right)_{\Omega}=\left(\eta,(\mathbf{b} -\mathbf{b}_{h})\cdot\nabla\nu_{h}\right)_{\Omega}+\left(\eta,\mathbf{b}_{h} \nabla\nu_{h}\right)_{\Omega}.\] (A.2)
Since \(\mathbf{b}\in\left[W^{1,\infty}(\Omega)\right]^{2}\), and \(\nu_{h}\in U_{h}^{p,k}\subset U_{h}^{k}\) we have the following inverse inequalities (see e.g., [17]):
\[\|\mathbf{b}-\mathbf{b}_{h}\|_{[L^{\infty}(T)]^{d}} \lesssim h_{T}\|\mathbf{b}\|_{[W^{1,\infty}(T)]^{d}},\] (A.3) \[\|\nabla\nu_{h}\|_{T} \lesssim k^{2}h_{T}^{-1}\|\nu_{h}\|_{T},\text{ for all }T\in\Omega_{h}.\] (A.4)
Thus, we can bound the first term in the right-hand side of (A.2) as follows:
\[\left(\eta,(\mathbf{b}-\mathbf{b}_{h})\cdot\nabla\nu_{h}\right)_{\Omega} \lesssim\left(\sum_{T\in\Omega_{h}}h_{T}^{-1}\beta_{T,\infty}\|\eta\|_{T}^{2}\right)^{\frac{1}{2}}\left(\sum_{T\in\Omega_{h}}h_{T}^{3}\beta_{T,\infty}^{-1}\|\mathbf{b}\|_{W^{1,\infty}(T)}^{2}\|\nabla\nu_{h}\|_{T}^{2}\right)^{\frac{1}{2}}\quad\text{(by (A.3) and Cauchy--Schwarz)}\] \[\lesssim k^{2}h^{\frac{1}{2}}|\eta|_{h}\|\nu_{h}\|_{\Omega}\] (by (32) and (A.4)).
The second term in the right-hand side of (A.2) is estimated using the fact that \(\eta\in\left(U_{h}^{p}\right)^{\perp}\), the Cauchy--Schwarz inequality and the interpolation estimate of Lemma 2. Let \(\phi_{h}:=\mathbf{b}_{h}\cdot\nabla\nu_{h}\). We call \(U_{g,h}:=\left\{I_{Os}(\phi_{h}):\phi_{h}=\mathbf{b}_{h}\cdot\nabla\nu_{h},\quad\nu_{h}\in U_{h}^{k,p}\right\}.\) Note that \(I_{Os}\phi_{h}\) may not be in \(U_{h}^{p,k}\), but it is contained in \(U_{h}^{p}\) since \(k=p+1\). Hence,
\[\left(\eta,\phi_{h}\right)_{\Omega} =\left(\eta,\phi_{h}-I_{Os}\phi_{h}\right)_{\Omega}\] \[\leq\left(\sum_{T\in\Omega_{h}}h_{T}^{-1}\beta_{T,\infty}\|\eta\| _{T}^{2}\right)^{\frac{1}{2}}\left(\sum_{T\in\Omega_{h}}h_{T}\beta_{T,\infty} ^{-1}\|\phi-I_{Os}\phi_{h}\|_{T}^{2}\right)^{\frac{1}{2}}\] \[\lesssim|\eta|_{h}\left(\sum_{T\in\Omega_{h}}\sum_{e\in\mathcal{ F}_{T}}h_{T}^{2}p^{-1}\beta_{T,\infty}^{-1}\|[\phi_{h}]\|_{e}^{2}\right)^{ \frac{1}{2}}.\]
Since
\[\left\|[\phi_{h}]\right\|_{e}=\|\mathbf{b}_{h}\cdot\nabla\nu_{h}\|_{e} \lesssim\|(\mathbf{b}-\mathbf{b}_{h})\cdot\nabla\nu_{h}\|_{e}+\|[\mathbf{b} \cdot\nabla\nu_{h}]\|_{e},\] (A.5)
and
\[\|[(\mathbf{b}-\mathbf{b}_{h})\cdot\nabla\nu_{h}]\|_{e} \lesssim\sum_{e\in T}h_{T}\|\mathbf{b}\|_{[W^{1,\infty}(T)]^{d}}\|\nabla\nu_{h}\|_{e}\] (A.6) \[\lesssim\sum_{e\in T}h_{T}\|\mathbf{b}\|_{[W^{1,\infty}(T)]^{d}}\left(\frac{p^{2}}{h_{T}}\right)^{1/2}\|\nabla\nu_{h}\|_{T}\] (Estimate (11)) (A.7) \[\lesssim\sum_{e\in T}h_{T}\|\mathbf{b}\|_{[W^{1,\infty}(T)]^{d}}\left(\frac{p^{2}}{h_{T}}\right)^{1/2}\frac{k^{2}}{h_{T}}\|\nu_{h}\|_{T}.\] (Estimate (A.4))
Collecting the above estimates and Lemma 5 yields
\[\left(\eta,\phi_{h}\right)_{\Omega}\lesssim|\eta|_{h}\big{(}k^{2}\,h^{\frac{1}{2}} p^{1/2}+k^{\alpha/2}p^{-1/2}\big{)}\|\nu_{h}\|_{k,h}.\]
This completes the proof.
**Lemma 8**.: _Consider the continuous \(L^{2}(\Omega)\)-projection operator \(\Pi_{h}:L^{2}(\Omega)\to U_{h}^{p}\). Then, for all \(w\in H^{s}(\Omega)\), \(s\geq 1\), we have the following \(hp\)-approximation properties_
\[|w-\Pi_{h}w|_{h} \lesssim p^{-\frac{1}{4}}\left(\frac{h}{p}\right)^{q-1/2}\|w\|_{q, \Omega},\] (A.12) \[\|w-\Pi_{h}w\|_{h,k} \lesssim\left(p^{3/4}+p^{11/4}k^{-\alpha/2}\right)\left(\frac{h}{ p}\right)^{q-1/2}\|w\|_{q,\Omega}\] (A.13)
_with \(q=\min\{p+1,s\}\)._
Proof.: Let \(w\in H^{s}(\Omega),s\geq 1\). The proof of (A.12) follows from [12, Lemma 5.6].
The proof of estimate (A.13) is as follows. By definition,
\[\|w-\Pi_{h}w\|_{h,k}^{2}:=\big{\|}\mu_{0}^{1/2}(w-\Pi_{h}w)\big{\|}_{\Omega}^ {2}+\frac{1}{2}\big{\|}|\mathbf{b}\cdot\mathbf{n}|^{1/2}(w-\Pi_{h}w)\big{\|}_{ \partial\Omega}^{2}+j_{h,k}(w-\Pi_{h}w,w-\Pi_{h}w).\] (A.14)
The estimates (14) and (15) of Lemma 3 imply
\[\|w-\Pi_{h}w\|_{\partial\Omega}\leq p^{3/4}\left(\frac{h}{p}\right)^{q-1/2}\| w\|_{q,\Omega}.\]
Thus, the second term of (A.14) is bounded. To bound the last term of (A.14), we will use the well-known estimate for the orthogonal \(L^{2}\) projection \(\Pi_{h}^{*}:L^{2}(\Omega)\to V_{h}^{p}\) under the same conditions of [12, Lemma 5.4],
\[\|w-\Pi_{h}^{*}w\|_{\partial T}\lesssim p^{1/4}\left(\frac{h}{p}\right)^{q-1/ 2}\|w\|_{q,T},\qquad\forall\,w\in\mathbb{P}^{p}(T)\] (A.15)
and
\[\|\nabla(w-\Pi_{h}w)\|_{\partial T} \leq\|\nabla(w-\Pi_{h}^{*}w)\|_{\partial T}+\|\nabla(\Pi_{h}w- \Pi_{h}^{*}w)\|_{\partial T}\] \[\lesssim\|\nabla(w-\Pi_{h}^{*}w)\|_{\partial T}+\left(\frac{p^{2 }}{h_{T}}\right)^{3/2}\|\Pi_{h}w-\Pi_{h}^{*}w\|_{T}\] (estimates ( 11 ) and ( A.4 )) \[\lesssim\|\nabla(w-\Pi_{h}^{*}w)\|_{\partial T}+\left(\frac{p^{2 }}{h_{T}}\right)^{3/2}\|\Pi_{h}^{*}w-I_{Os}(\Pi_{h}^{*}w)\|_{T}\] \[\lesssim\|\nabla(w-\Pi_{h}^{*}w)\|_{\partial T}+\left(\frac{p^{2 }}{h_{T}}\right)^{3/2}\left(\frac{h_{T}}{p}\right)^{1/2}\sum_{e\in\mathcal{F}_{ T}}\|\Pi_{h}^{*}w-w|_{e}\|_{e}\] \[\lesssim\|\nabla(w-\Pi_{h}^{*}w)\|_{\partial T}+\left(\frac{p^{2 }}{h_{T}}\right)^{3/2}\left(\frac{h_{T}}{p}\right)^{1/2}p^{1/4}\left(\frac{h_{ T}}{p}\right)^{q-1/2}\|w\|_{q,T}\] (estimate (A.15) \[\lesssim\|\nabla(w-\Pi_{h}^{*}w)\|_{\partial T}+p^{7/4}\left( \frac{h_{T}}{p}\right)^{q-3/2}\|w\|_{q,T}\] \[\lesssim p^{7/4}\left(\frac{h}{p}\right)^{q-3/2}\|w\|_{q,T}.\]
Thus, the last term of (A.14) is controlled as follows
\[j_{h,k}(w-\Pi_{h}w,w-\Pi_{h}w) \lesssim\sum_{e\in\mathcal{F}}\frac{h_{e}^{2}}{k^{\alpha}}\beta_{\infty,e}\|[\nabla(w-\Pi_{h}w)\cdot\mathbf{n}]\|_{e}^{2}\] \[\lesssim\sum_{T\in\Omega_{h}}\sum_{e\in\mathcal{F}(T)}\frac{h_{T}^{2}}{k^{\alpha}}\beta_{\infty,e}\bigg{(}p^{7/2}\left(\frac{h_{T}}{p}\right)^{2q-3}\|w\|_{q,\Omega}^{2}\bigg{)}\] \[\lesssim p^{11/2}k^{-\alpha}\left(\frac{h}{p}\right)^{2q-1}\|w\|_{q,\Omega}^{2}.\]
**Theorem 2** (A priori error estimate onto bubble enrichment).: _Let \(u\in H^{s}(\Omega)\) with \(s>\frac{3}{2}\), solve (3) and let \(u_{h}\in U_{h}^{p}\) solve (26). Take \(\alpha=\frac{7}{2}\), then under Assumption 1 and using Lemma 8, the following estimate holds_
\[\|u-u_{h}\|_{k,h}\lesssim(p+p^{\frac{5}{4}}h^{\frac{1}{2}})\left(\frac{h}{p} \right)^{q-\frac{1}{2}}\|u\|_{q,\Omega},\] (A.16)
_with \(q=\min\{p+1,s\}\). Moreover, if \(h\leq p^{-\frac{1}{2}}\) then,_
\[\|u-u_{h}\|_{k,h}\lesssim p\left(\frac{h}{p}\right)^{q-\frac{1}{2}}\|u\|_{q, \Omega}.\] (A.17)
Proof.: When \(\alpha=7/2\), following estimate (A.13) in Lemma 8, we have
\[\|u-\Pi_{h}u\|_{k,h}\lesssim\left(p+p^{5/4}h^{1/2}\right)\left(\frac{h}{p} \right)^{q-1/2}\|u\|_{q,\Omega}.\]
Moreover, following the proof of Theorem 1 we have
\[\|u_{h}-\Pi_{h}u\|_{h,k}\lesssim|\!|\!|u-\Pi_{h}u|\!|\!|_{k,h,\#}.\]
Thus,
\[\|u-u_{h}\|_{k,h} \lesssim\|u-\Pi_{h}u\|_{h,k}+\|\Pi_{h}u-u_{h}\|_{k,h}\] \[\lesssim\|u-\Pi_{h}u\|_{h,k}+\left(k^{2}\,h^{\frac{1}{2}}p^{1/2}+k^{7/4}p^{-1/2}\right)\big{|}u-\Pi_{h}u\big{|}_{h}\] \[\lesssim\|u-\Pi_{h}u\|_{h,k}+\left(k^{2}\,h^{\frac{1}{2}}p^{1/2}+k^{7/4}p^{-1/2}\right)p^{-\frac{1}{4}}\left(\frac{h}{p}\right)^{q-1/2}\|u\|_{q,\Omega}\quad(\text{using estimate (A.12)})\] \[\lesssim\left(p^{3/4}+p^{11/4}k^{-7/4}+k^{2}p^{1/4}\,h^{\frac{1}{2}}+k^{7/4}p^{-3/4}\right)\left(\frac{h}{p}\right)^{q-1/2}\|u\|_{q,\Omega}\quad(\text{using estimate (A.13)})\] \[\lesssim\left(p+p^{5/4}h^{\frac{1}{2}}\right)\left(\frac{h}{p}\right)^{q-1/2}\|u\|_{q,\Omega}\quad(\text{overestimating})\]
with \(q=\min(p+1,s)\), so inequality (A.16) is satisfied. Moreover, if \(h\leq p^{-1/2}\), then \(p^{5/4}h^{\frac{1}{2}}\leq p\), and we obtain
\[\|u-u_{h}\|_{k,h}\lesssim p\left(\frac{h}{p}\right)^{q-1/2}\|u\|_{q,\Omega},\]
with \(q=\min(p+1,s)\). |
2309.13567 | MentaLLaMA: Interpretable Mental Health Analysis on Social Media with
Large Language Models | With the development of web technology, social media texts are becoming a
rich source for automatic mental health analysis. As traditional discriminative
methods bear the problem of low interpretability, the recent large language
models have been explored for interpretable mental health analysis on social
media, which aims to provide detailed explanations along with predictions. The
results show that ChatGPT can generate approaching-human explanations for its
correct classifications. However, LLMs still achieve unsatisfactory
classification performance in a zero-shot/few-shot manner. Domain-specific
finetuning is an effective solution, but faces 2 challenges: 1) lack of
high-quality training data. 2) no open-source LLMs for interpretable mental
health analysis were released to lower the finetuning cost. To alleviate these
problems, we build the first multi-task and multi-source interpretable mental
health instruction (IMHI) dataset on social media, with 105K data samples. The
raw social media data are collected from 10 existing sources covering 8 mental
health analysis tasks. We use expert-written few-shot prompts and collected
labels to prompt ChatGPT and obtain explanations from its responses. To ensure
the reliability of the explanations, we perform strict automatic and human
evaluations on the correctness, consistency, and quality of generated data.
Based on the IMHI dataset and LLaMA2 foundation models, we train MentalLLaMA,
the first open-source LLM series for interpretable mental health analysis with
instruction-following capability. We also evaluate the performance of
MentalLLaMA on the IMHI evaluation benchmark with 10 test sets, where their
correctness for making predictions and the quality of explanations are
examined. The results show that MentalLLaMA approaches state-of-the-art
discriminative methods in correctness and generates high-quality explanations. | Kailai Yang, Tianlin Zhang, Ziyan Kuang, Qianqian Xie, Jimin Huang, Sophia Ananiadou | 2023-09-24T06:46:08Z | http://arxiv.org/abs/2309.13567v3 | # MentalLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models
###### Abstract.
As an integral part of people's daily lives, social media is becoming a rich source for automatic mental health analysis. As traditional discriminative methods bear the problem of low interpretability, the recent large language models (LLMs) have been explored for interpretable mental health analysis on social media, which aims to provide detailed explanations along with predictions. The results show that LLMs still achieve unsatisfactory classification performance in a zero-shot/few-shot manner, which significantly affects the quality of the generated explanations. Domain-specific finetuning is an effective solution, but faces two critical challenges: 1) lack of high-quality training data. 2) no open-source foundation LLMs. To alleviate these problems, we formally model interpretable mental health analysis as text generation tasks, and build the first multi-task and multi-source interpretable mental health instruction (IMHI) dataset with 105K data samples to support LLM instruction tuning. The raw social media data are collected from 10 existing sources covering 8 mental health analysis tasks. We prompt ChatGPT with expert-designed prompts to obtain explanations. To ensure the reliability of the explanations, we perform strict automatic and human evaluations on the correctness, consistency, and quality of generated data. Based on the IMHI dataset and LLaMA2 foundation models, we train MentalLaMA, the first open-source instruction-following LLM series for interpretable mental health analysis on social media. We evaluate MentalLaMA on the IMHI benchmark, the first holistic evaluation benchmark for interpretable mental health analysis. The results show that MentalLaMA approaches state-of-the-art discriminative methods in correctness and generates ChatGPT-level explanations. MentalLaMA models also show strong generalizability to unseen tasks. The project is available at [https://github.com/SteveKGSYang/MentalLaMA](https://github.com/SteveKGSYang/MentalLaMA).
## 1. Introduction
generated explanations (Zhang et al., 2017). An effective solution is to fine-tune foundation LLMs with task-specific data to align with the target domain (Liu et al., 2017; Zhang et al., 2018). However, there are two key challenges in improving LLMs for interpretable mental health analysis with fine-tuning. Firstly, fine-tuning LLMs requires high-quality supervised training data. In mental health analysis on social media, though a few datasets include short extracted causal text spans (Song et al., 2016; Wang et al., 2017), there is still a lack of open-source data that provides detailed and reliable explanations for detection results. This is mainly due to the sensitive research topic (Liu et al., 2017; Zhang et al., 2018) and the high cost of writing explanations by domain experts. Secondly, prompting or fine-tuning closed-source LLMs such as ChatGPT can be expensive3, time-consuming, and carry huge carbon emissions4, while no open-source LLMs for interpretable mental health analysis have been released for public use. The lack of resources and high costs hinder progress in related research.
Footnote 2: [https://openai.com/pricing](https://openai.com/pricing)
Footnote 3: [https://www.cutter.com/article/environmental-impact-large-language-models](https://www.cutter.com/article/environmental-impact-large-language-models)
To bridge these gaps, we formally model interpretable mental health analysis as text generation tasks, and build the first multi-task and multi-source interpretable mental health instruction (IMHI) dataset with 105K data samples to support LLM instruction tuning (Zheng et al., 2018) and evaluation. Firstly, we collect training data from 10 existing data sources covering 8 mental health analysis tasks. The collected data includes social media posts and their corresponding annotated labels. Secondly, we obtain a detailed explanation for each annotated label. Inspired by the success of self-instruct (Zheng et al., 2018), we use expert-written few-shot prompts and the collected labels to prompt ChatGPT and obtain explanations from its responses. To further ensure the quality of the explanations, we perform comprehensive automatic evaluations on all collected data, where the correctness of the predictions, consistency between labels and explanations, and quality of the explanations are evaluated. We also perform human evaluations for a subset of the collected data with a carefully designed annotation scheme from domain experts. Thirdly, we transform all collected social media posts, the labels, and the explanations into instruction-based query-answer pairs in a rule-based manner, which are used to build the IMHI training data and the IMHI evaluation benchmark, the first holistic evaluation benchmark for interpretable mental health analysis tasks.
Based on the IMHI dataset, we propose MentaLLaMA, the first open-source LLM series for interpretable mental health analysis with instruction-following capability. MentaLLaMA models are trained based on the LLaMA2 foundation models (Zhang et al., 2018). Specifically, we fine-tune 3 MentalLaMA models: MentaLLaMA-7B, MentaLLaMA-chat-7B, and MentaLLaMA-chat-13B. Some examples of MentaLLaMA's strong capabilities are presented in Figure 1. We also comprehensively evaluate the performance of MentaLLaMA models on the IMHI evaluation benchmark. We investigate the correctness of MentaLLaMA in making mental health classifications. The results show that MentaLLaMA-chat-13B surpasses or approaches SOTA discriminative methods (Zheng et al., 2018) on 7 out of 10 test sets in correctness. We also evaluate the quality of generated explanations. The results show that MentaLLaMA can generate ChatGPT-level explanations. Its generation quality benefits from instruction tuning, reinforcement learning from human feedback (RLHF) (Zhang et al., 2018), and increasing model sizes. MentaLLaMA models also show strong generalizability to unseen tasks.
We summarize our contributions as follows: 1) We formalize the interpretable mental health analysis tasks and build the IMHI dataset, the first multi-task and multi-source instruction-tuning dataset for interpretable mental health analysis on social media. 2) We propose MentaLLaMA, the first open-source instruction-following LLM for interpretable mental health analysis. MentaLLaMA can perform mental health analysis on social media data and generate high-quality explanations for its predictions. 3) We introduce the first holistic evaluation benchmark with 19K test samples, which covers 8 tasks and 10 test sets. Results and analysis on this benchmark demonstrate the superiority of MentaLLaMA.
## 2. Task Formalization
Based on preliminary explorations (Zhang et al., 2017; Zhang et al., 2017), we formally define the interpretable mental health analysis task in this section. Unlike previous discriminative settings, we model mental health analysis as generation tasks, where a generative model, such as an autoregressive language model \(P_{\phi}(y|x)\) parameterized by pre-trained weights \(\phi\), is set as the foundation. The model is adapted to simultaneously
Figure 1. Some examples of MentaLLaMA’s capabilities in diverse mental health analysis tasks.
solve \(N\) mental health analysis tasks, such as mental health detection and cause detection. Each task \(t\) is represented by a subset of \(N_{t}\) training context-target pairs: \(\mathcal{D}_{t}=\{(q_{i}^{t},r_{i}^{t})\}_{i=1,...,N_{t}}\), where \(q\) is a token sequence containing the target post and the query, and \(r\) is another sequence consisting of the answer to the query (e.g. the classification result) and a rationale for the decision making conveyed in natural language. All subsets are merged as the training dataset: \(\mathcal{D}=\cup_{t=1,...,N}\mathcal{D}_{t}\). The model is optimized on these data to improve the correctness of predictions and the quality of rationales by maximizing the conditional language modeling objective:
\[\max_{\phi}\sum_{(q,r)\in\mathcal{D}}\sum_{j=1}^{|r|}\log\left(P_{\phi}(r_{j}\mid q,r_{<j})\right) \tag{1}\]
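As a concrete illustration of objective (1), the following PyTorch-style sketch computes the loss for a single query-response pair, assuming a HuggingFace-style causal language model whose forward call exposes `.logits`; the query positions are masked with the ignore index so that only response tokens contribute. This is a minimal sketch, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def response_only_lm_loss(model, query_ids, response_ids):
    """Negative log-likelihood of the response r given the query q, as in Eq. (1):
    query tokens are masked out so the loss is accumulated only over response positions."""
    input_ids = torch.cat([query_ids, response_ids], dim=-1)   # shape (1, |q| + |r|)
    labels = input_ids.clone()
    labels[:, : query_ids.size(-1)] = -100                     # ignore the query part
    logits = model(input_ids).logits                           # (1, L, vocab_size)
    # predict token j from tokens < j
    return F.cross_entropy(
        logits[:, :-1, :].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )
```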
## 3. IMHI Dataset
This section introduces the construction process of the IMHI dataset. The process mainly involves 4 procedures: raw data collection, explanation generation via ChatGPT, AIGC evaluation for the generated explanations, and instruction construction.
### Raw Data Collection
The raw data is collected from 10 existing mental health analysis datasets from multiple social media data sources, including Reddit, Twitter, and Short Message Service (SMS) texts. These datasets are also annotated with high-quality labels, which are important resources for explanation generation and AIGC evaluation. More statistics of the collected raw data are shown below and in Table 1.
**Binary mental health detection**. This task aims to detect symptoms of one mental health condition, where each social media post is annotated with a binary label. We select two datasets for depression symptom detection: Depression_Reddit (DR) (Noh et al., 2017) and CLPsych15 (CLP) (Cheng et al., 2017). We also utilize Dreaddit (Shen et al., 2017), a dataset for stress detection, and a loneliness symptom detection dataset.
**Multi-class mental health detection**. This task aims to identify symptoms of one mental health condition from a given list of multiple mental health conditions, which are normally modeled as a multi-class single-label classification task. We select T-SID (Krishnan et al., 2017) and SWMH (Krishnan et al., 2017) datasets for this task, including symptoms of depression, PTSD, anxiety, etc.
**Mental health cause/factor detection**. With a post showing a mental health condition, this task aims to assign a label to the post for a possible cause/factor leading to the mental health condition from a given causes/factors list. Common causes include social relationships, medication, work pressure, etc. We select a stress-cause detection dataset SAD (Shen et al., 2017) and a depression/suicide cause detection dataset CAMS (Krishnan et al., 2017).
**Mental risk/wellness factors detection**. This task dives deep into the social or mental factors behind mental health conditions and aims to identify psychological risk/wellness factors from social media posts, which is also modeled as a classification task to detect the existence of certain factors. We select IRF (Krishnan et al., 2017), an annotated dataset for interpersonal risk factors of mental disturbance. Another dataset called MultiWD (Shen et al., 2017) is also collected, which is developed for analyzing mental wellness dimensions from psychological models.
### Explanation Generation with ChatGPT
Though rich data sources with high-quality classification annotations are available, it lacks open-source data that provides detailed and reliable explanations for the annotations. Therefore, we leverage ChatGPT to generate explanations for the collected samples, which is proven a reliable LM in interpretable mental health analysis (Shen et al., 2017). Firstly, we ask the domain experts to manually write 1 query instruction and 35 explanation examples for each of the tasks in 10 collected datasets. The expert-written explanations lead to a golden explanation set \(\mathcal{G}\) with 350 samples. To facilitate model training and evaluation, all expert-written explanations are based on the following template:
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
**Data** & **Task** & **Raw(train/val/test)** & **Instruction(train/val/test)** & **Source** & **Annotation** & **Labels/Aspects** \\ \hline DR & depression detection & 1,000/43/045 & 1,003/400/405 & Reddit & weak supervision & Yes, No \\ Dreaddit & stress detection & 2,837/300/414 & 2,837/300/414 & Reddit & human annotation & Yes, No \\ CLP & depression detection & 46/10/70/290 & 456/10/7629 & Reddit & human annotation & Yes, No \\ SWMH & mental disorders detection & 34,828/20/15/01.82 & 34,282/28/295/10.882 & Reddit & weak supervision & Suicide, Anxiety, Bipolar disorder, Depression, None \\ T-SID & mental disorders detection & 3,071/767/759 & 3,071/767/759 & Twitter & weak supervision & None, Suicide, Depression, PTSD \\ SAD & stress cause detection & 5,547/614/684 & 5,547/616/684 & SMS & human annotation & School, Finance, Family, Social Relation, \\ & & & & & & Work, Health, Emotion, Decision, Others \\ CAMS & depression/suicide cause detection & 2,807/320/452 & 2,207/320/452 & Reddit & human annotation & Bias, Job, Medication, Relationship, \\ loneliness & loneliness detection & 2,463/527/531 & 2,463/527/531 & Reddit & human annotation & Attention, None \\ MultiWD & Wellness dimensions detection & 2,842/259/353 & 15,744/150/602/441 & Reddit & human annotation & Spiritual, Physical, Intellectual, Social, \\ IRF & interpersonal risk factors detection & 1,971/493/1,859 & 3,047/852/113 & Reddit & human annotation & Vocational, Emotional \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of the collected data. “Raw” and “Instruction” denote the split sample numbers for the raw data and converted instruction data in the IMHI dataset. “Annotation” denotes the reliability of the annotated labels in the raw data.
Figure 2. Three components shown in the figure are concatenated to construct the prompts. In the DR example, the key information is marked in blue.
where _[label]_ and _[explanation]_ denote the classification label and the corresponding explanation content. Secondly, for each dataset, we randomly sample 2 explanations from \(\mathcal{G}\) for each class, and include them as few-shot examples in the prompt. To further enhance the generation quality, we include supervised labels from the raw datasets. Thirdly, we utilize task-specific instruction, few-shot expert-written examples, and the assigned label for the target post to construct the prompt. An example of the constructed prompt for the dataset DR is shown in Figure 2, and the prompts for other datasets are presented in Appendix B.3.
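For illustration, a minimal sketch of this prompt assembly is given below; the instruction text, field names, and wording are hypothetical placeholders rather than the exact prompts of Figure 2 or Appendix B.3 (only the "Reasoning" separator follows the template described above).

```python
def build_with_label_prompt(task_instruction, fewshot_examples, post, assigned_label):
    """Concatenate the task instruction, expert-written few-shot examples, and the
    target post together with its annotated label, in the spirit of Figure 2."""
    parts = [task_instruction]
    for ex in fewshot_examples:   # each ex: {"post": ..., "label": ..., "explanation": ...}
        parts.append(f"Post: {ex['post']}\n{ex['label']} Reasoning: {ex['explanation']}")
    parts.append(
        f"Post: {post}\nAssigned label: {assigned_label}\n"
        "Explain why this label applies, following the template above."
    )
    return "\n\n".join(parts)
```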
### Explanation Evaluation
We perform comprehensive evaluations on the ChatGPT-generated explanations to ensure their quality. Due to the large quantity of generated explanations (105K), we perform holistic automatic evaluations on all collected data and select a subset for human evaluation.
#### 3.3.1. Automatic Evaluation
In automatic evaluation, we believe three criteria are crucial to guarantee the quality of the generated explanations: 1) **Correctness**: the explanations should make correct label predictions in the corresponding mental health analysis task. 2) **Consistency**: the explanations should provide clues and analyses that are consistent with their predicted labels (Srivastava et al., 2017). 3) **Quality**: from the perspective of psychology, the generated explanations should provide supportive evidence with high quality in aspects such as reliability, professionality, etc (Srivastava et al., 2017). Based on the above definitions, we design automatic evaluation methods for each of these criteria as follows:
**Correctness**. During the explanation generation process, we combine the annotated labels from each collected dataset into the prompts to supervise ChatGPT in generating correct explanations. An appropriate assumption is that a classification result that is agreed upon by both the dataset annotations and ChatGPT can be considered correct. However, we notice that ChatGPT can sometimes express disagreement with the assigned label in its response. Examples of such disagreements are shown in Appendix B.2. These disagreements are possibly due to the subjectivity of some tasks and the weakly-supervised annotation processes (as shown in Table 1) of some datasets. In these cases, we ask the domain experts to manually check the prompts and responses to modify/rewrite the classification and explanations. We present the agreement percentages between dataset annotations and ChatGPT for each collected dataset in Figure 3(a). According to the results, 7 out of 10 datasets have agreement percentages above 90%, showing the high correctness of most generated responses. T-SID dataset has an agreement percentage below 70% because it has weakly-supervised labels obtained by the clustering of subreddits in Reddit (Reddit, 2018). loneliness and IRF datasets also have percentages below 80%, as they are built on relatively subjective tasks loneliness detection, and interpersonal risk factors identification.
**Consistency**. As all ChatGPT generations follow the template specified in Sec. 3.2, consistency evaluates whether the evidence in _[explanation]_ supports _[label]_ in each response. Specifically, we split _[explanation]_ and _[label]_ contents via the "Reasoning" symbol in each response, and use the _[explanation]_ set in the ChatGPT responses for the training split of each raw dataset to train a classifier based on MentalBERT (Krishnam et al., 2017). For the _i_-th split explanation _[explanation]_, we have:
\[[label]_{i}^{p}=\mathit{MentalBERT}([\mathit{explanation}]_{i}) \tag{2}\]
where _[label]_\({}_{i}^{p}\) is then supervised by the _i_-th split label _[label]_\({}_{i}\). The intuition behind this method is that the training split pairs with higher consistency are expected to supervise a more precise classifier for identifying the supported label given the explanation. To evaluate the precision of the trained classifiers, we test them on both the ChatGPT responses for the test split of each raw dataset, and the expert-written golden explanation set \(\mathcal{G}\). The classification performance is presented in Figure 3(b). According to the results, all classifiers achieve weighted F1 scores of over 93.5% on the responses for test splits, which shows a highly stable distribution in consistency for ChatGPT-generated explanations. Test results on the golden explanation set show that the classifiers achieve over 94% on 9 of 10 datasets, with 4 datasets achieving 100% performance. These results show that the classifiers can identify the correct explanation/label pairs with very high accuracy, which proves the high consistency of the training data (ChatGPT responses on training splits of the raw datasets). However, the performance on SAD is relatively low (86.6%). A possible reason is that explanations for some labels (e.g. 'School' and 'Work', 'Family' and 'Social Relation'), as shown in Table 1, can have similar semantics, which can be difficult to distinguish. With the above evidence, we conclude that
Figure 3. Automatic evaluation results on ChatGPT-generated data.
ChatGPT-generated explanations have high consistency with the assigned labels.
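A sketch of this consistency check is shown below, assuming the MentalBERT checkpoint id `mental/mental-bert-base-uncased` on the HuggingFace Hub and a pre-tokenized `train_dataset` with `input_ids`, `attention_mask`, and `labels` fields; the hyperparameter values are illustrative only, not the ones used in the paper.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

MODEL = "mental/mental-bert-base-uncased"   # assumed MentalBERT checkpoint id

def train_consistency_classifier(train_dataset, num_labels, output_dir="consistency-clf"):
    """Fine-tune a MentalBERT classifier that maps an [explanation] to its [label]."""
    model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=num_labels)
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3,
                             per_device_train_batch_size=16, learning_rate=2e-5)
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()
    return model
```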
**Quality**. With careful human evaluations, Yang et al. (Yang et al., 2018) show that ChatGPT can generate approaching-human explanations in a zero-shot manner in terms of fluency, reliability, etc. Therefore, we set the zero-shot explanations of ChatGPT as the benchmark to evaluate the generation quality of our data. Specifically, based on our designed prompts (we refer to as with-label prompts) in Sec. 3.2, we remove the assigned labels to obtain the few-shot prompts, and remove the assigned labels and the few-shot expert-written examples to obtain the zero-shot prompts. We separately use the zero-shot prompts, few-shot prompts, and with-label prompts to probe ChatGPT for the 350 posts in the golden explanation set \(\mathcal{G}\). Setting expert-written explanations in \(\mathcal{G}\) as the golden standard, we utilize BART-score (Yang et al., 2018) to automatically evaluate the quality of the responses to the three kinds of prompts, as BART-score is proven most correlated with human evaluations compared to other popular automatic metrics in interpretable mental health analysis (Yang et al., 2018). The evaluation results are shown in Figure 3(c). According to the results, few-shot outputs show significant improvement over zero-shot outputs on most datasets, which proves the effectiveness of expert-written few-shot examples in enhancing the quality of the explanations. However, with-label outputs show limited improvements over few-shot outputs on most datasets, indicating that supervised labels do not significantly benefit the explanation generation process. Overall, the generated explanations from with-label prompts further outperform zero-shot explanations, which are proven to approach human performance. The above evidence proves that the explanations in the IMHI dataset are of high quality.
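The BART-score used above can be approximated as the average token log-likelihood that a seq2seq BART model assigns to the reference explanation given the generated one. The sketch below assumes the `facebook/bart-large-cnn` checkpoint; the official BARTScore implementation differs in some details (checkpoint choice, scoring direction, batching).

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()

@torch.no_grad()
def bart_score(generated: str, reference: str) -> float:
    """Average log-likelihood of the reference explanation given the generated one."""
    src = tok(generated, return_tensors="pt", truncation=True)
    tgt = tok(reference, return_tensors="pt", truncation=True)
    out = bart(input_ids=src.input_ids, attention_mask=src.attention_mask,
               labels=tgt.input_ids)
    return -out.loss.item()   # higher (closer to zero) means better agreement
```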
#### 3.3.2. Human Evaluation
We randomly select 200 explanations generated from the raw datasets to perform human evaluations. The annotation scheme is developed based on previous protocols for similar tasks (Yang et al., 2018; Yang et al., 2018), and further modified for interpretable mental health analysis with collaborative efforts from 2 domain experts. Specifically, we assess the explanations in 4 aspects: 1) **Consistency**: The text should build from sentence to sentence to a coherent body of information about mental health that supports the classification results. 2) **Reliability**: The trustworthiness of the evidence to support the classification results in the generated explanations. 3) **Professionality**: It measures the rationality of the evidence in generated explanations from the perspective of psychology. 4) **Overall**: The general effectiveness of the generated explanation. Each aspect is divided into 4 standards rated from 0 to 3, where higher scores denote more satisfactory performance. More details of the annotation scheme are presented in Appendix A. During annotation, each sample is rated by 3 domain experts (Ph.D. students majoring in Quantitative Psychology) on all aspects. We aggregate all annotations by averaging the scores of each sample and present the results in Figure 4. According to the results, most explanations are assigned consistency scores over 2.5, which shows that these data are consistent with the classification results, and completely fluent, coherent, and error-free. Most samples also obtain over 2.0 scores on reliability, proving that they provide mostly reliable information with non-critical misinformation or wrong reasoning. Finally, the evaluation results on professionality indicate that most explanations can provide multiple pieces of evidence that are supportive from the perspective of psychology. Overall, the human evaluations show that ChatGPT can generate explanations that have good overall performance, which is consistent with previous analysis (Yang et al., 2018) and the automatic evaluation results.
### Instruction Construction
We construct the IMHI dataset based on all posts from the raw datasets and the corresponding evaluated ChatGPT-generated explanations. We simplify the instructions introduced in Sec. 3.2 to adapt to less powerful LLMs and construct the questions in a rule-based manner, where the prompt templates are presented in Table 2. The evaluated ChatGPT-generated explanations are directly used as the responses to these questions. We mix the question-response pairs from the training split of all raw datasets and randomize the order to build the training split of the IMHI dataset, which consists of 72,095 samples. To facilitate the best model selection, we build a validation set, which is developed from the valid split of each raw dataset using the same method, with 14,346 samples.
Due to the poor instruction following ability of some baseline models, we also convert the IMHI data into a completion-based form using another set of templates, which are presented in Appendix B.1. We refer to this dataset as IMHI-completion.
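As a minimal sketch of the rule-based construction described above (function and variable names are hypothetical), each question is instantiated from its Table 2 template and paired with the evaluated ChatGPT explanation as the response:

```python
# Two templates following Table 2; the remaining datasets use the same pattern.
TEMPLATES = {
    "DR": "Post: {post} Question: Does the poster suffer from depression?",
    "Dreaddit": "Post: {post} Question: Does the poster suffer from stress?",
}

def build_instruction_pair(dataset, post, chatgpt_explanation):
    """Pair a templated query with the evaluated ChatGPT explanation as the response."""
    return {"query": TEMPLATES[dataset].format(post=post),
            "response": chatgpt_explanation}
```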
\begin{table}
\begin{tabular}{l l} \hline \hline
**Dataset** & **Template** \\ \hline DR & Post: [_Post_] Question: Does the poster suffer from depression? \\ \hline Dreaddit & Post: [_Post_] Question: Does the poster suffer from stress? \\ \hline CLP & Post: [_Post_] Question: Does the poster suffer from depression? \\ \hline SWMH & Post: [_Post_] Question: What mental disorder symptoms does this post show? \\ \hline T-SID & Post: [_Post_] Question: What mental disorder does this post show? \\ \hline SAD & Post: [_Post_] Question: What stress cause does this post show? \\ \hline CAMS & Post: [_Post_] Question: What mental disorder cause does this post show? \\ \hline loneliness & Post: [_Post_] Question: Does the poster suffer from loneliness? \\ \hline MultiWD & Post: [_Post_] Question: Does the [_Aspect_] mental wellness dimension exist in the post? \\ \hline IRF & Post: [_Post_] Question: Does the post show risks of [_Aspect_]? \\ \hline \hline \end{tabular}
\end{table}
Table 2. Templates for constructing prompts for IMHI dataset. _[Post]_ denotes the target post. _[Aspect]_ denotes the detection aspects (shown in Table 1).
Figure 4. Distributions of human evaluation scores on ChatGPT-generated explanations. Orange lines and green dots denote the median and average numbers.
## 4. MentaLLaMA Training
Based on the IMHI dataset, we finetune the LLaMA2 (Luo et al., 2019) models to build our MentalLLaMA models. Firstly, we build a MentalLLaMA-7B by training LLaMA2-7B on the IMHI training set for 10 epochs, and select the best model based on the validation results on the IMHI validation set. We set the batch size to 32 and a gradient accumulation step of 8, which leads to an actual batch size of 256. The model is trained based on the AdamW optimizer (Kingmae et al., 2014), and we set a max learning rate of 1e-5 with a warm-up ratio of 3%. The max model input length is set to 2048. We also utilize Flash-Attention (Beng et al., 2017) to accelerate the training process. Secondly, we build MentalLaMA-chat-7B and MentalLaMA-chat-13B models by training on LLaMA2-chat-7B and LLaMA2-chat-13B, which are optimized with instruction tuning (Kumar et al., 2019), and the first open-source LLMs tuned with reinforcement learning from human feedback (RLHF) (Luo et al., 2019). The training process is on the same IMHI dataset with the same experimental settings. Thirdly, to enable fair comparisons with the baseline models that are fine-tuned in a completion-based manner, we train another LLaMA2-7B model on the IMHI-completion dataset. All models are trained on 4 Nvidia Tesla A100 GPUs, each with 80GB of memory.
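For reference, a hedged sketch of how the stated hyperparameters could be expressed with HuggingFace-style training arguments is given below; the authors' actual training script, precision settings, and Flash-Attention integration may differ.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mentallama-7b",
    num_train_epochs=10,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=8,    # effective batch size 256
    learning_rate=1e-5,               # max learning rate
    warmup_ratio=0.03,                # 3% warm-up
    optim="adamw_torch",              # AdamW optimizer
    evaluation_strategy="epoch",      # select the best model on the IMHI validation set
    save_strategy="epoch",
    load_best_model_at_end=True,
    bf16=True,                        # assumed mixed-precision setting
)
```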
## 5. IMHI Evaluation Benchmark
We build the IMHI evaluation benchmark for interpretable mental health analysis on the test splits of the collected datasets. As data from each dataset requires a different evaluation metric setting, we split the test data into 10 subsets based on the data sources. The statistics of the evaluation benchmark are presented in Table 1.
Following the evaluation criteria of AIGC introduced in Sec. 3.3.1, the benchmark evaluates 2 key aspects of the model responses: correctness of the predictions and quality of the explanations. We model the evaluation of correctness as a classification task and compute the weighted F1 scores based on the predictions of the output and the assigned labels in the references. A key challenge of this method is that some models, especially the instruction-tuned ones, do not respond in a unified template as in Sec. 3.2. These irregular responses make rule-based determinations of the predicted labels difficult. To solve this problem, we utilize the MentalBERT-based classifiers, which are used for evaluating the consistency of the IMHI dataset (introduced in Sec. 3.3.1), to assign a prediction label to each response. The classifiers are expected to accurately assign the labels based on the responses because they are proven to perform well in the IMHI test set and the golden explanation set, as shown in Figure 3(b). For evaluating the explanation quality, we follow the same methods as in Sec. 3.3.1, where BART-score (Shi et al., 2017) is used to evaluate the model outputs.
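A short sketch of the correctness metric is given below, assuming `label_classifier` wraps the MentalBERT-based classifier described in Sec. 3.3.1:

```python
from sklearn.metrics import f1_score

def correctness_score(responses, gold_labels, label_classifier):
    """Map each free-form response to a label with the MentalBERT-based classifier,
    then report the weighted F1 against the reference labels."""
    predicted = [label_classifier(r) for r in responses]
    return f1_score(gold_labels, predicted, average="weighted")
```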
## 6. Experiments and Analysis
### Baseline Models
We select the following strong and representative baseline models to compare with our MentalLaMA models:
**Discriminative methods.** As mental health analysis is previously modeled as text classification tasks, we select classification models as baseline models, where most recent methods finetune discriminative PLMs such as BERT (Chen et al., 2017) and RoBERTa (Rao et al., 2018) on the target dataset. We also include SOTA methods MentalBERT and MentalRoBERTa (Kumar et al., 2019), which pre-train a language model from scratch on large-scale data in the mental health domain and further finetunes on the target datasets. As all these models cannot generate texts, we only use these models in comparisons of correctness.
**Zero-shot/few-shot methods.** With the recent advancement in foundation LLMs, zero-shot and few-shot solutions have become effective and cost-efficient. We select the 7B and 13B versions of the open-source LLM LLaMA (Luo et al., 2019) to perform zero-shot prompting on the benchmark data. We also perform zero-shot and few-shot prompting on the close-source LLM ChatGPT and GPT-4 (Xu et al., 2019).
**Completion-based fine-tuning methods.** To evaluate the parameter efficiency of our models, we also finetune generative PLMs with smaller sizes under the same training settings. We select the SOTA generative PLMs BART-large (Rao et al., 2018) and T5-large (Luo et al., 2019). Since these PLMs do not possess strong instruction-following ability (Kumar et al., 2019), we finetune them on the IMHI-completion dataset. To enable a fair comparison, we also train a LLaMA2-7B model on the same dataset.
### IMHI Test Results
#### 6.2.1. Correctness
The evaluation results of correctness are presented in Table 3. In discriminative methods, MentalBERT and MentalRoBERTa still achieve SOTA performance on 8 out of 10 test sets. Considering the small sizes of these models, we conclude that fine-tuning domain-specific PLMs remains the most efficient method for discriminative mental health analysis. However, the key limitation of these methods is the poor interpretability of their decisions. In comparisons of zero-shot methods, LLaMA-13B\({}_{ZS}\) does not show advantages over LLaMA-7B\({}_{ZS}\) on most test sets, but ChatGPT\({}_{ZS}\) significantly outperforms both LLaMA models on all 10 datasets. These results are possibly due to the emergent ability (Zhu et al., 2019) of LLMs, where the mental health analysis ability is weak in smaller models (7B, 13B LLaMA models), but rapidly improves in larger models (175B ChatGPT model). In addition, ChatGPT\({}_{FS}\) and GPT-4\({}_{FS}\) further outperforms ChatGPT\({}_{ZS}\) on all test sets. These observations are consistent with previous works (Chen et al., 2017), where in-context learning from expert-written examples can calibrate LLMs' decision boundaries for subjective tasks. However, GPT-4 does not show apparent advantages over ChatGPT on most datasets. In completion-based fine-tuning methods, we surprisingly find that T5 or BART outperforms LLaMA2-7B on most test sets with only 15% in model size. A possible reason is that training LLaMA2 on the unnatural IMHI-completion dataset cannot trigger its ability well. To further evaluate this hypothesis, we train MentalLaMA-7B with the IMHI dataset. As shown, MentalLaMA-7B outperforms the completion-based LLaMA2-7B on 8 out of 10 test sets, showing domain-specific instruction tuning as more efficient than completion-based fine-tuning in improving the correctness of LLaMA2. Experiments on LLaMA2-chat further prove this conclusion, as MentalLaMA-chat-7B and MentalLaMA-chat-13B outperform MentalLaMA-7B on 9 out of 10 test sets. Based on LLaMA2, LLaMA2-chat models are enhanced with high-quality instruction tuning (Kumar et al., 2019), which allows them to better follow the mental health-related questions. Notably, MentalLaMA-chat-13B surpasses or bears a less than 5% gap to MentalRoBERTa in 7 out of 10 test sets, showing its approaching SOTA ability in achieving correctness in mental health analysis.
#### 6.2.2. Quality
We present the BART-score evaluation results to evaluate the quality of the generations. In completion-based methods presented in Figure 5(a), LLaMA2-7B greatly outperforms LLaMA2-7B\({}_{ZS}\) on all 10 test sets, showing the effectiveness of completion-based finetuning in improving the quality of the explanations. T5 and BART models generate explanations that have similar scores, showing their close ability in interpretability. LLaMA2-7B outperforms BART-large on 9 out of 10 test sets, but to a limited scale, where only 2 test sets (MultiWD and IRF) improve over 0.2 in BART-score. These results further prove that completion-based finetuning for LLaMA2 is inefficient. Based on the above observations, we recommend utilizing BART-large to build a completion-based interpretable mental health analysis model, which is both capable and cost-efficient.
In instruction tuning methods presented in Figure 5(b), MentaLLaMA greatly outperforms zero-shot results on LLaMA2-7B on all 10 test sets, showing the effectiveness of instruction tuning in improving the quality of the explanations. MentalLaMA-chat-7B also significantly outperforms MentalLaMA-7B, with improvements on all 10 test sets and gains of over 0.2 on 6 test sets. These results prove that the instruction tuning and RLHF [33] enhancements on LLaMA2-chat models also improve their ability to generate high-quality explanations compared to the vanilla LLaMA2 models. In
\begin{table}
\begin{tabular}{l c|c c c c c c c c c} \hline \hline
**Model** & **Param.** & **CAMS** & **CLP** & **DR** & **Dreaddit** & **IRF** & **loneliness** & **MultiWD** & **SAD** & **SWMH** & **T-SID** \\ \hline \multicolumn{10}{c}{**Discriminative methods**} \\ BERT-base & 110M & 34.92 & 62.75 & 90.90 & 78.26 & 72.30 & 83.92 & **76.69** & 62.72 & 70.76 & 88.51 \\ RoBERTa-base & 110M & 36.54 & 66.07 & 95.11 & 80.56 & 71.35 & 83.95 & – & 67.53 & 72.03 & 88.76 \\ MentalBERT & 110M & 39.73 & 62.63 & **94.62** & 80.04 & **76.73** & 82.97 & 76.19 & 67.34 & 71.11 & 88.61 \\ MentalRoBERTa & 110M & **47.62** & **69.71** & 94.23 & **81.76** & – & **85.33** & – & **68.44** & **72.16** & **89.01** \\ \hline \multicolumn{10}{c}{**Zero-shot/few-shot methods**} \\ LLaMA-7B\({}_{ZS}\) & 7B & 16.34 & 36.26 & 58.91 & 53.51 & 38.02 & 58.32 & 40.1 & 11.04 & 37.33 & 25.55 \\ LLaMA-13B\({}_{ZS}\) & 13B & 14.64 & 39.29 & 54.07 & 36.28 & 38.89 & 55.48 & 53.65 & 13.2 & 40.5 & 25.27 \\ ChatGPT\({}_{ZS}\) & 175B & 33.85 & 56.31 & 82.41 & 71.79 & 41.33 & 58.40 & 62.72 & 54.05 & 49.32 & 33.30 \\ ChatGPT\({}_{FS}\) & 175B & 44.46 & 61.63 & 84.22 & 75.38 & 43.31 & 58.78 & 64.93 & 63.56 & 60.19 & 43.95 \\ GPT-4\({}_{FS}\) & 1.76T & 42.37 & **62.0** & 82.0 & 78.18 & 51.75 & 72.85 & 62.58 & 55.68 & 62.94 & 40.48 \\ \multicolumn{10}{c}{**Completion-based fine-tuning methods**} \\ T5-Large & 770M & 40.2 & 48.6 & 84.9 & 77.7 & 74.0 & 80.8 & 76.4 & 58.1 & 70.0 & 77.1 \\ BART-Large & 406M & 43.8 & 50.3 & **84.6** & **80.0** & 76.2 & 83.3 & **77.2** & 59.6 & 71.5 & **77.9** \\ LLaMA2-7B & 7B & 30.47 & 51.17 & 84.94 & 61.59 & 73.5 & 81.25 & 65.52 & 49.6 & 63.08 & 68.93 \\ \multicolumn{10}{c}{**Instruction-tuning methods**} \\ MentalLaMA-7B & 7B & 32.52 & 59.86 & 76.14 & 71.65 & 67.53 & 83.52 & 68.44 & 49.93 & 72.51 & 72.64 \\ MentalLaMA-chat-7B & 7B & 44.8 & 51.84 & 83.95 & 62.2 & 72.88 & 83.71 & 75.79 & 62.18 & **75.58** & 77.74 \\ MentalLaMA-chat-13B & 13B & **45.52** & 52.61 & **85.68** & 75.79 & **76.49** & **85.1** & 75.11 & **63.62** & 71.7 & 75.31 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Evaluation results of correctness on the IMHI test set. All results are weighted F1 scores. “Param.” denotes the number of parameters for each model. In zero-shot/few-shot Methods, “ZS” denotes zero-shot methods, and “FS” denotes few-shot methods. The best values in discriminative and interpretable mental health analysis methods are highlighted in bold.
Figure 5. BART-score evaluation results on the IMHI test set and expert-written golden set.
addition, MentalLaMA-chat-13B further advances the quality of the explanations, which outperforms MentalLaMA-chat-7B by over 0.2 on 8 out of 10 test sets. These results show that LLaMA-chat can efficiently leverage the expansion of model size to enhance its interpretability. We believe the RLHF training allows larger models to use their increasing capabilities to generate explanations that are more aligned with human preferences.
We also compare the generation quality of MentalLaMA on the expert-written golden set \(\mathcal{G}\) to the few-shot results on ChatGPT and GPT-4. According to the results in Figure 5(c), the MentalLaMA models achieve comparable performance to ChatGPT and GPT-4 on most test sets with much smaller model sizes, showing the effectiveness of IMHI instruction tuning and the outstanding explanation generation quality of MentalLaMA models. We also notice that GPT-4 does not show significant improvement in generation quality over ChatGPT. ChatGPT has comparable model performance in correctness and quality to GPT-4 but with much lower inference costs, which is more appropriate for obtaining large-scale responses for building the IMHI dataset.
### Generalizability
In addition to their strong generation ability, LLMs have also been shown to generalize well to unseen tasks (Deng et al., 2017; Chen et al., 2018). To evaluate the generalizability of MentalLaMA, we exclude the data of the following tasks from the IMHI training set: stress detection (Dreaddit), mental disorder detection from Twitter (T-SID), depression/suicide cause detection (CAMS), and interpersonal risk factors detection (IRF), to build a new training set, IMHI-general. We re-finetune the T5, BART, and MentalLaMA-chat models on IMHI-general and evaluate them on the test sets of the 4 unseen tasks.
The BART-score test results are shown in Figure 5(d). According to the results, the MentalLaMA-chat models significantly outperform T5 and BART on Dreaddit and CAMS, showing that they can generate higher-quality explanations for new tasks in fundamental mental health condition/cause detection. MentalLaMA's superior performance on IRF also indicates a deeper understanding of the high-level mental health factors behind mental health conditions. Even with all Twitter data excluded from the training set, the MentalLaMA-chat models still achieve better scores on the Twitter-derived test set T-SID, showing that MentalLaMA generalizes well to new data sources with different characteristics. In addition, MentalLaMA-chat-13B further improves the explanation quality compared to MentalLaMA-chat-7B, indicating the benefit of model size expansion for interpretable mental health analysis on new tasks. Overall, this analysis shows that MentalLaMA generalizes better to unseen tasks than other generative PLMs.
## 7. Related Work
### Mental Health Analysis on Social Media
In mental health analysis, traditional methods mostly make predictions in a discriminative manner. Effective methods mostly fine-tune pre-trained language models (PLMs), such as BERT (Devlin et al., 2017) and RoBERTa (Romero et al., 2017), on a small target set (Liu et al., 2017; Wang et al., 2017) usually for one mental health condition. To further enhance the PLM representations, some works pre-train language models from scratch with large-scale mental health-related social media data, which usually produce better post representations than general PLMs. Representative works include MentalBERT (Liu et al., 2017), MentalXLNet (Liu et al., 2017), etc.
Though the above black-box models achieve impressive classification performance, some works have explored interpretable mental health analysis. Some incorporate metaphor concept mappings as extra features to provide clues about model decisions (Liu et al., 2017). Others introduced PHQ-9 questionnaire information to assist the predictions (Romero et al., 2017; Wang et al., 2017). Commonsense knowledge graphs were also leveraged to increase the transparency of PLMs (Liu et al., 2017; Wang et al., 2017). The recent advancements in large language models take a leap forward for interpretable mental health analysis. Some works (Bahdan et al., 2017; Wang et al., 2017; Wang et al., 2017) comprehensively evaluated the performance of general foundation LLMs on various mental health analysis tasks. Xu et al. (Xu et al., 2017) offered a first look at the explanation generation ability of LLMs, and Yang et al. (Yang et al., 2017) holistically evaluated ChatGPT's explanation generation ability with careful human evaluation.
### Open-source Large Language Models
Though LLMs such as ChatGPT and GPT-4 (Wang et al., 2017) achieve outstanding general performance, their closed-source availability hinders the development of the research community. Therefore, many efforts have been made to democratize LLMs, such as the LLaMA series (Yang et al., 2017) developed by Meta AI. Based on LLaMA, many works tried to replicate ChatGPT-like instruction-following ability by training on large-scale instruction-tuning data (Wang et al., 2017). Representative general instruction-following LLMs include the Alpaca5 and the Vicuna6 model series. Domain-specific instruction tuning also improves LLM performance in certain domains, such as MedAlpaca (Madjaca et al., 2018) in the biomedical domain and the Pixiu models (Pixiu, 2018) in the finance domain. In addition, the LLaMA-chat models (Yang et al., 2017) are the first open-source LLMs enhanced with reinforcement learning from human feedback (RLHF) (Yang et al., 2017), which significantly aligns model responses with human preferences.
Footnote 5: [https://crfim.stanford.edu/2023/03/13/alpaca.html](https://crfim.stanford.edu/2023/03/13/alpaca.html)
Footnote 6: [https://imsys.org/blog/2023-03-30-vicuna/](https://imsys.org/blog/2023-03-30-vicuna/)
## 8. Conclusion
This paper proposes the first multi-task and multi-source interpretable mental health instruction dataset on social media, with 105K samples. We leverage ChatGPT to generate high-quality explanations for all raw data to build the training data, and we perform strict automatic and human evaluations to ensure the reliability of the generated data. Based on the IMHI dataset, we train MentalLaMA, the first open-source LLM series for interpretable mental health analysis with instruction-following capability. Evaluations on the IMHI benchmark show that MentalLaMA approaches SOTA discriminative methods in correctness and generates ChatGPT-level explanations. MentalLaMA also shows high generalizability to unseen tasks.
In future work, we will explore fine-tuning MentalLaMA on data from other modalities, such as emojis and mental health questionnaire tables, to further expand its capabilities.
## 9. Ethical Considerations
The raw datasets collected to build our IMHI dataset are from public social media platforms. We strictly follow the privacy protocols (Krishrishnan et al., 2017) and ethical principles (Bahdan et al., 2015) to protect user privacy and guarantee that anonymity is properly applied in all mental health-related texts. In addition, to minimize misuse, all examples provided in our paper are paraphrased and obfuscated utilizing the moderate disguising scheme (Bahdan et al., 2015).
Although experiments on MentalLaMA show promising performance, we stress that all predicted results and generated explanations should only be used for non-clinical research. Help-seekers should seek help from professional psychiatrists or clinical practitioners. In addition, recent studies have indicated that LLMs can introduce potential bias, such as gender gaps (Krishnan et al., 2017). Meanwhile, incorrect prediction results, inappropriate explanations, and over-generalization also illustrate the potential risks of current LLMs. Therefore, there are still many challenges in applying LLMs to real-scenario mental health monitoring systems.
###### Acknowledgements.
This work is supported by the computational shared facility at the University of Manchester and the University of Manchester President's Doctoral Scholar award. This work is supported by the project JPNP20006 from New Energy and Industrial Technology Development Organization (NEDO).
|
2309.13573 | The second multi-channel multi-party meeting transcription challenge
(M2MeT) 2.0): A benchmark for speaker-attributed ASR | With the success of the first Multi-channel Multi-party Meeting Transcription
challenge (M2MeT), the second M2MeT challenge (M2MeT 2.0) held in ASRU2023
particularly aims to tackle the complex task of \emph{speaker-attributed ASR
(SA-ASR)}, which directly addresses the practical and challenging problem of
``who spoke what at when" at typical meeting scenario. We particularly
established two sub-tracks. The fixed training condition sub-track, where the
training data is constrained to predetermined datasets, but participants can
use any open-source pre-trained model. The open training condition sub-track,
which allows for the use of all available data and models without limitation.
In addition, we release a new 10-hour test set for challenge ranking. This
paper provides an overview of the dataset, track settings, results, and
analysis of submitted systems, as a benchmark to show the current state of
speaker-attributed ASR. | Yuhao Liang, Mohan Shi, Fan Yu, Yangze Li, Shiliang Zhang, Zhihao Du, Qian Chen, Lei Xie, Yanmin Qian, Jian Wu, Zhuo Chen, Kong Aik Lee, Zhijie Yan, Hui Bu | 2023-09-24T07:51:52Z | http://arxiv.org/abs/2309.13573v2 | The Second Multi-Channel Multi-Party Meeting Transcription Challenge (M2MeT 2.0): A Benchmark for Speaker-Attributed ASR
###### Abstract
With the success of the first Multi-channel Multi-party Meeting Transcription challenge (M2MeT), the second M2MeT challenge (M2MeT 2.0) held in ASRU2023 particularly aims to tackle the complex task of _speaker-attributed ASR (SA-ASR)_, which directly addresses the practical and challenging problem of "who spoke what at when" at typical meeting scenario. We particularly established two sub-tracks. The fixed training condition sub-track, where the training data is constrained to predetermined datasets, but participants can use any open-source pre-trained model. The open training condition sub-track, which allows for the use of all available data and models without limitation. In addition, we release a new 10-hour test set for challenge ranking. This paper provides an overview of the dataset, track settings, results, and analysis of submitted systems, as a benchmark to show the current state of speaker-attributed ASR.
Yuhao Liang\({}^{1,2}\), Mohan Shi\({}^{3}\), Fan Yu\({}^{2}\), Yangze Li\({}^{1}\), Shiliang Zhang\({}^{2}\), Zhihao Du\({}^{2}\), Qian Chen\({}^{2}\),
Lei Xie\({}^{1}\), Yanmin Qian\({}^{4}\), Jian Wu, Zhuo Chen, Kong Aik Lee\({}^{5,6}\), Zhijie Yan\({}^{2}\), Hui Bu\({}^{7}\)\({}^{1}\)Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science,
Northwestern Polytechnical University, China
\({}^{2}\)Speech Lab of DAMO Academy, Alibaba Group, China,
\({}^{3}\)NERC-SLIP, University of Science and Technology of China (USTC), China
\({}^{4}\)SpeechLab, Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
\({}^{5}\)ICT Cluster, Singapore Institute of Technology, Singapore
\({}^{6}\)Institute for Infocomm Research, A*STAR, Singapore
\({}^{7}\)Beijing Shell Shell Technology Co., Ltd., Beijing, China
**Index Terms:** M2MeT 2.0, Alimeeting, Meeting Transcription, Multi-speaker ASR, Speaker-attributed ASR
## 1 Introduction
Despite years of research, meeting transcription accuracy still faces significant challenges, including but not limited to overlapping speech, an unknown number of speakers, far-field attenuated speech signals, noise, reverberation, and other factors that can degrade transcription performance. The ICASSP2022 Multi-Channel Multi-Party Meeting Transcription (M2MeT) challenge [1, 2] has played a crucial role in the development of Mandarin meeting transcription technology by addressing the challenge of speech overlap in actual meetings. The challenge consists of two distinct tasks: speaker diarization and multi-speaker automatic speech recognition (ASR). The former involves identifying _who spoke when_ in the meeting, while the latter aims to transcribe speech from multiple speakers. In the second M2MeT challenge (M2MeT 2.0), these two tasks are merged into a single speaker-attributed task.
The M2MeT 2.0 challenge presents two key differences from its predecessor, the first M2MeT. First, the evaluation metric in the first M2MeT challenge was speaker-independent, meaning that transcription could be determined, but the corresponding speaker was not identified. To overcome this limitation and advance current multi-talker ASR systems, the M2MeT 2.0 challenge introduces the _speaker-attributed ASR (SA-ASR)_ task. This task not only transcribes the speech but also assigns speaker labels to each transcription. Specifically, we introduce the concatenated minimum permutation character error rate (cpCER) metric to evaluate the performance of the submitted systems. The cpCER is proposed for Mandarin particularly, which is defined similarly to the concatenated minimum permutation word error rate (cpWER) [3]. Second, unlike other related challenges such as Computational Hearing in Multisource Environments (CHIME) [3], Multimodal Information Based Speech Processing (MISP) [4], and M2MeT [1, 2], the M2MeT 2.0 challenge offers participants the freedom to utilize any open-source pre-trained model, which is typically prohibited in these challenges. This flexibility aims to explore the feasible industrial application of academic research proposed in previous studies, utilizing various open-source pre-trained models trained on a large amount of data for the SA-ASR task.
The M2MeT 2.0 challenge consists of two sub-tracks: 1) The fixed training condition track is designed to enable reproducible research in this field by providing a fixed set of training data, open-source pre-trained models, and evaluation criteria. 2) The open training condition track aims to benchmark state-of-the-art performance in speaker-attributed ASR by allowing participants to use their own data and training techniques.
## 2 Related Works
The SA-ASR task [5] involves identifying multiple speakers and transcribing overlapped speech within a single session. One common approach to address this cocktail party challenge is to use speaker diarization to identify the active regions of different speakers. Then, a single-talker ASR system with a speaker separation module can be used to transcribe speech from the known active regions. Alternatively, an end-to-end multi-talker ASR system can be used to transcribe speech and assign speaker labels simultaneously based on corresponding speaker information provided by the diarization system.
Speaker diarization techniques that follow conventional clustering-based approaches usually include two main steps: speaker embedding extraction and clustering. These approaches begin by transforming the input audio stream into a speaker-specific representation, followed by a clustering process like Variational Bayesian HMM clustering (VBx) [6] that groups the regions of each speaker into separate clusters. Clustering-based methods typically assign a single speaker label to each frame, making it challenging for them to handle speech overlap. With the rapid development of deep learning, End-to-End speaker diarization methods like end-to-end neural diarization (EEND) [7] are proposed, leveraging a single neural network to replace the modular cluster-based system. Inspired by the target speaker extraction [8, 9, 10, 11], target speaker voice activity detection (TS-VAD) [12, 13, 14] has been proposed, which can estimate the activity level of each speaker in the presence of overlapping speech, providing a promising solution for speaker diarization.
Single-talker ASR has been extensively investigated in recent years, and various architectures, such as Conformer [15], Branchformer [16], and Paraformer [17], have been proposed to push the limit of the accuracy of ASR.
Meanwhile, there has been a growing interest in multi-talker ASR, which is designed to transcribe speech containing several speakers. One recent approach, called Speaker-Attributed Transformer(SA-Transformer) [18], generates token-level speaker labels during the decoding phase and can directly produce speaker-attributed transcription only using the clustered speaker profile. Moreover, TS-ASR [19] is also a promising approach that utilizes speaker embedding for target speaker extraction (TSE) and achieves good performance when TSE and ASR are jointly trained. By extracting the speaker embedding, TS-ASR can identify the target speaker's voice and enhance the accuracy of the overall ASR system.
The field of rich transcription with speech overlap has undergone extensive research, with advancements facilitated by numerous challenges and open-source datasets [4, 20, 21, 22]. Table 1 outlines the primary datasets utilized in this scenario.
WSJ0-2mix [23] and Libri2Mix [24] datasets involve mixing pairs of utterances from different speakers at random SNRs, making them primarily used for speech separation tasks where there is full overlap. On the other hand, AMI [25], LibriCSS [26], and CHIME-6 [3] datasets are recorded in real rooms. However, the AMI dataset's fixed speaker count of four and poor recording quality limit its practical applications. LibriCSS, similar to Libri2Mix, utilizes the Librispeech corpus to produce speech mixes. However, due to the fixed intonation and pace of reading in Librispeech, there remains a disparity between LibriCSS and real meeting scenarios. CHIME-6 dataset is designed for conference or indoor conversation transcription tasks and accounts for overlapped speech.
Although these datasets have significantly contributed to progress in transcribing overlapping speech, they are limited to English. The language barrier poses a challenge in achieving comparable results for non-English languages, such as Mandarin. To overcome this challenge, the AISHELL-4 [27] and AliMeeting [1] datasets have been developed specifically for Mandarin meeting transcription. AISHELL-4 has a lower overlap ratio, while AliMeeting contains intense discussions. Additionally, AliMeeting records the near-field signal of each participant using a headset microphone, ensuring that only the participant's speech is transcribed.
## 3 Dataset and Tracks
AliMeeting [1, 2], AISHELL-4 [28], and CN-Celeb [29] corpus are adopted as our training data, which is the same as the first M2MeT challenge. The AliMeeting dataset is a collection of multi-talker conversation recordings in a meeting setting, comprising a total of 118.75 hours of speech data. The dataset is split into 104.75 hours for training (_Train_), 4 hours for evaluation (_Eval_), and 10 hours for testing (_Test_). The AliMeeting corpus includes both far-field overlapped audios and corresponding near-field audios, which exclusively record and transcribe single-speaker speech. To evaluate the submitted systems, an additional 10 hours of audio data,
| Dataset | Hours | #SPK | Devices | OR (%) |
| --- | --- | --- | --- | --- |
| WSJ0-2mix [23] | 43 | 129 | Simu | Full |
| Libri2Mix [24] | 292 | 1252 | Simu | Full |
| AMI [25] | 100 | 190 | Headset mic, 8-ch mic array | <10 |
| LibriCSS [26] | 10 | 40 | 7-ch mic array | 0–40 |
| CHIME-6 [3] | 35 | 26 | 4-ch mic array | 40 |
| AISHELL-4 [27] | 120 | 61 | 8-ch mic array | 19 |
| AliMeeting [1] | 129 | 481 | Headset mic, 8-ch mic array | 42 |
Table 1: Datasets available in the literature in multi-talker speech transcription (OR: overlap rate)
called _Test-2023_, is incorporated specifically for testing purposes. _Test-2023_ comprises 10 sessions that were recorded in 5 different rooms and include 58 speakers. It is crucial to highlight that none of the speakers in _Test-2023_ overlap with those in the AliMeeting corpus.
This challenge introduces speaker-attributed ASR, which poses the unique challenge of transcribing speech from multiple speakers and simultaneously assigning a speaker label to each transcription. Figure 1 illustrates the difference between the speaker-attributed ASR task and the multi-speaker ASR task. The speaker-attributed ASR task groups transcriptions from the same speaker together, while the multi-speaker ASR task combines overlapping sentences spoken by different speakers. Sub-track 1 restricts participants to the constrained datasets, while sub-track 2 allows the use of any dataset, including private ones. Open-source pre-trained models may be used in both sub-tracks. The accuracy of a speaker-attributed ASR system is evaluated using the cpCER. The calculation of cpCER involves three steps. The first step is to concatenate the reference and hypothesis transcriptions from each speaker in chronological order, producing one concatenated transcription per speaker within a session. Next, the character error rate (CER) is calculated between the concatenated reference and hypothesis transcriptions, and this process is repeated for all possible speaker permutations. Finally, the permutation with the lowest CER is selected as the cpCER for that session.
To provide a clear and concise description of the cpCER computation, we illustrate it in Algorithm 1. To run the algorithm, it is necessary to provide both the ground truth and hypothesis transcriptions for a given session, which are arranged in chronological order. In cases where the length of \(Y\) and \(H\) is not equal, we employ padding with blank transcriptions to ensure that both sets have the same length.
```
input : Ground truth {Y_1, Y_2, ..., Y_S}; hypotheses of the different speakers {H_1, H_2, ..., H_Ŝ}.
        S is the oracle speaker number and Ŝ is the predicted speaker number.
output: The cpCER of the given session.

Y ← {Y_1, Y_2, ..., Y_S}
H ← {H_1, H_2, ..., H_Ŝ}
mindistance ← INFINITY
foreach permutation of H do
    distance ← 0
    for i ← 1 to max(S, Ŝ) do
        distance ← distance + editdistance(H[i], Y[i])
    end for
    if distance < mindistance then
        mindistance ← distance
    end if
end for
totaltoken ← Σ_{i=1}^{S} length(Y[i])
cpCER ← mindistance / totaltoken × 100%
```
**Algorithm 1** Computation of cpCER
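A small Python sketch of Algorithm 1 is shown below. It assumes the per-speaker reference and hypothesis transcriptions have already been concatenated in chronological order; the exhaustive search over permutations is factorial in the number of speakers, which is acceptable for typical meeting sessions but is meant only as an illustration.

```python
# Minimal sketch of Algorithm 1 (cpCER); the example transcriptions are placeholders.
from itertools import permutations

def edit_distance(a: str, b: str) -> int:
    # Character-level Levenshtein distance by dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cpcer(refs, hyps):
    # Pad the shorter side with empty transcriptions so both sets have equal length.
    n = max(len(refs), len(hyps))
    refs = refs + [""] * (n - len(refs))
    hyps = hyps + [""] * (n - len(hyps))
    best = min(sum(edit_distance(h, r) for h, r in zip(perm, refs))
               for perm in permutations(hyps))
    total = sum(len(r) for r in refs)
    return best / total * 100.0

print(cpcer(["今天天气不错", "我们开始开会"], ["我们开始开会吧", "今天天气不错"]))
```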
## 4 System Description
### Baseline system
We release an E2E SA-Transformer baseline built on FunASR [30] toolkit for easy and reproducible research. The model architecture is illustrated in Figure 2. It comprises an ASR block and a speaker block to carry out ASR and token-level speaker identification. The ASR block is represented as
\[H^{asr} =\text{AsrEncoder}(X), \tag{1}\] \[o_{n} =\text{AsrDecoder}(y_{[1:n-1]},H^{asr},\bar{d}_{n}). \tag{2}\]
In the ASR block, the AsrEncoder converts the given acoustic feature \(X\) into a series of hidden embeddings \(H^{asr}\). The AsrDecoder then produces the output distribution \(o_{n}\) step-by-step. To generate each token, the AsrDecoder takes in the history token \(y_{[1:n-1]}\), the hidden embeddings \(H^{asr}\), and the weighted speaker profile \(\bar{d}_{n}\). Compared to the other encoder-decoder-based ASR models, our model differs in its use of the weighted speaker profile \(\bar{d}_{n}\), computed in the speaker block. This profile is employed to bias the transcription towards a specific speaker, which enhances the model's ability to identify individual speakers within a session. The posterior probability of token \(i\) at the \(n\)-th decoding step is formulated as
\[Pr(y_{n}=i|y_{[1:n-1]},s_{[1:n]},X,D)=o_{n,i}. \tag{3}\]
On the other hand, the speaker block is denoted as
\[H^{spk} =\text{SpeakerEncoder}(x), \tag{4}\] \[q_{n} =\text{SpeakerDecoder}(y_{[1:n-1]},H^{spk},H^{asr}),\] (5) \[\beta_{n,k} =\frac{\exp(\cos(q_{n},d_{k}))}{\sum_{j}^{K}\exp(\cos(q_{n},d_{j} ))},\] (6) \[\bar{d}_{n} =\sum_{k=1}^{K}\beta_{n,k}d_{k}. \tag{7}\]
Figure 1: The main difference between the speaker-attributed ASR task (M2MeT 2.0) and the multi-speaker ASR task (M2MeT).
The speaker encoder takes the input acoustic feature \(X\) and produces the speaker embedding \(H^{spk}\), which has the same shape as the ASR embedding \(H^{asr}\) and represents the speaker's unique characteristics. During each decoding step \(n\), the SpeakerDecoder uses \(y_{1:n-1}\), \(H^{spk}\), and \(H^{asr}\) to generate a speaker query \(q_{n}\). Based on this query, a cosine distance-based attention weight \(\beta_{n,k}\) is calculated for every profile \(d_{k}\) in \(D\). These weights can be viewed as posterior probabilities for the \(n\)-th token being attributed to the \(k\)-th speaker, taking into account all previous estimations, as well as relevant information from the input \(X\) and the speaker profiles in \(D\). The \(\beta_{n,k}\) can be represented as
\[Pr(s_{n}=k|y_{[1:n-1]},s_{[1:n-1]},X,D)=\beta_{n,k}. \tag{8}\]
To extract speaker embeddings and initialize the SpeakerEncoder, we leveraged a pre-trained x-vector extractor from ModelScope trained on CN-Celeb. To train our E2E SA-ASR system, we employed a two-stage training strategy. In the first stage of training the E2E SA-ASR system, we trained a standard Conformer for the ASR task, which was utilized in the second stage to initialize the ASR block. In this second stage, both the ASR and speaker losses were incorporated to fine-tune the model. By using Eqs. 3 and 8, the joint posterior of token \(Y\) and speaker \(S\), optimized to be maximized during training, is represented as
\[\begin{split} Pr(Y,S|X,D)=&\prod_{n=1}^{N}Pr(y_{n}|y _{[1:n-1]},s_{[1:n]},X,D)\\ \times Pr(s_{n}|y_{[1:n-1]},s_{[1:n-1]},X,D).\end{split} \tag{9}\]
During the training phase, speaker embeddings were extracted from audio solely containing one speaker to generate speaker profiles, utilizing the oracle time stamp. However, during the decoding phase, when the oracle speaker label was absent, we turned to spectral clustering for providing the speaker profile.
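For concreteness, a small numpy sketch of the profile weighting in Eqs. (6)-(7) is given below: the speaker query is compared with every enrolled profile by cosine similarity, the similarities are normalized by a softmax, and the weighted profile fed back to the ASR decoder is their convex combination. Shapes and values are illustrative placeholders.

```python
# Sketch of the cosine-similarity attention over speaker profiles, Eqs. (6)-(7).
import numpy as np

def weighted_profile(q_n: np.ndarray, D: np.ndarray):
    # D has shape (K, dim): one profile d_k per speaker in the session.
    cos = D @ q_n / (np.linalg.norm(D, axis=1) * np.linalg.norm(q_n) + 1e-8)
    beta = np.exp(cos) / np.exp(cos).sum()   # Eq. (6): softmax over cosine scores
    d_bar = beta @ D                          # Eq. (7): weighted speaker profile
    return beta, d_bar

rng = np.random.default_rng(0)
beta, d_bar = weighted_profile(rng.normal(size=256), rng.normal(size=(4, 256)))
print(beta.sum(), d_bar.shape)                # beta sums to 1, d_bar keeps dim 256
```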
### Official modular system
In addition to the E2E SA-Transformer baseline, we also develop an official modular system based on pre-trained models that serves as a strong alternative. With the goal of encouraging research and development in the field of speaker-attributed ASR, we are dedicated to releasing this system in the near future. The modular baseline is illustrated in Figure 3. Our Front-End process leverages two methods to enhance the input audio: weighted prediction error (WPE), a well-known dereverberation technique, and guided source separation (GSS), which utilizes prior segmentation information to assist the separation process. To obtain speaker-labeled segments for GSS, we employ a speaker diarization module. The input long-form audio is first processed by a VBx diarization system to obtain an initial diarization output. Using this output, an x-vector extractor generates a speaker profile for each session from non-overlapping speech segments. The speaker profiles are then utilized by a pre-trained
## 5 Experimental Setup
In this section, we describe the settings of the SA-Transformer baseline and of the modular system based on the pre-trained models.
Figure 3: The official modular system based on open-source pre-trained models.
Figure 2: The SA-Transformer [18] baseline system used in the M2MeT 2.0 challenge.
### SA-Transformer baseline
The ASR Encoder is composed of 12 Conformer layers, each equipped with 4-head multi-head attention (MHA). The MHA and feed-forward network (FFN) dimensions are set to 256 and 2048, respectively. The ASR Decoder is made up of 6 transformer layers. The Speaker Encoder is initialized with a pre-trained ResNet x-vector extractor, but with an additional linear layer that transforms the embedding dimension into 256. The Speaker Decoder comprises 3 transformer layers. The speaker loss weight is set to 0.5, while the CTC loss weight is set to 0.3.
### Official modular system
We have implemented VBx diarization in our modular speaker-attributed system, following the workflow from the first M2MeT challenge baseline. To extract x-vectors, we use the same extractor as in SA-Transformer. When generating speaker profiles, we discard segments shorter than 0.5 seconds. Our system utilizes the pre-trained SOND and Paraformer models, which are both open-sourced and can be accessed on ModelScope. The training set of AliMeeting is used to fine-tune the pre-trained Paraformer model for 50 epochs with a learning rate of 0.00005.
## 6 Results and Analysis
In this section, we provide a comprehensive analysis of the various systems submitted to this challenge. The major techniques used and the evaluation results of each team are shown in Table 2. A total of 30 teams registered for this challenge and 8 of them submitted their results. Almost all teams, except C17, use only the constrained data to build their systems, so they submit the same result to the two sub-tracks. While we initially released an end-to-end SA-ASR baseline, most participants opted for a modular system approach due to the long training time and limited performance of end-to-end approaches with restricted data. Moreover, with the availability of open-source pre-trained models, developing a modular system is more straightforward and yields satisfactory results. Consequently, we examine the different modules, data augmentation techniques, and post-processing methods to ascertain their effectiveness in improving overall system performance.
### Speaker modules
The speaker modules play a crucial role in the speaker-attributed ASR system by providing time-stamp and speaker profiles for subsequent ASR module. The input audio is typically divided into shorter segments, and then a speaker embedding extractor, such as ResNet [32], or CAM++ [33], is employed to encode the audio into an embedding that captures the speaker characteristics. The speaker embedding is used to identify the speaker and corresponding speech segment throughout the ASR process.
The performance of the diarization model is typically measured by the diarization error rate (DER), which is impacted by both speaker activation region prediction and speaker identification accuracy. However, in the context of speaker-attribute ASR tasks, the speaker activation region is not always necessary. As a result, we present the results of speaker counting in Table 3. Teams X27 and M42 both employ TS-VAD with different speaker embedding extractors. By utilizing the clustering result as initialization to further enhance diarization accuracy, team X27 achieves DER of 3.64% on the _Test_ set. Team C17 and V29 opted for spectral clustering and VBx clustering, respectively. Due to the low overlap ratio in the _Test-2023_ set, cluster-based diarization methods can also perform well on speaker counting. Team C31 and the official modular system use the SOND model and achieved an accuracy of 80% and 100% in speaker counting, respectively. The main difference is that the official modular system uses the VBx to split out the single-talker speech part which can produce more accurate speaker profiles. Benefiting from the accuracy of speaker counting, the official modular system achieves DER of 1.51% on the _Test-2023_ set.
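As an illustration of how clustering-based systems arrive at a speaker count, the sketch below estimates the number of speakers from segment-level x-vectors with a simple spectral eigengap criterion. This is not the exact recipe of any submitted system, and the embeddings are random placeholders.

```python
# Hedged sketch: speaker counting from x-vectors via the spectral eigengap.
import numpy as np

def count_speakers(xvecs: np.ndarray, max_spk: int = 8) -> int:
    X = xvecs / np.linalg.norm(xvecs, axis=1, keepdims=True)
    A = np.clip(X @ X.T, 0.0, 1.0)                       # cosine affinity matrix
    d = A.sum(axis=1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))     # normalized Laplacian
    evals = np.sort(np.linalg.eigvalsh(L))
    gaps = np.diff(evals[: max_spk + 1])                 # gaps between eigenvalues
    return int(np.argmax(gaps)) + 1                      # largest gap -> #speakers

rng = np.random.default_rng(1)
centers = rng.normal(size=(3, 64)) * 5.0                 # three toy speakers
emb = np.vstack([c + rng.normal(size=(20, 64)) for c in centers])
print(count_speakers(emb))                               # expected to print 3
```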
### Front-end
Out of all the participating teams, only the top 2 teams use the front-end process. The winning team, X27, implements the weighted prediction error (WPE) and weighted delay-and-sum acoustic beamforming (BeamformIt) techniques, which have proven effective in recognizing far-field audio. Team M42 utilizes a Conformer-based Metric GAN (CMGAN) [34] to separate multi-talker segments into single-talker segments and compared the effectiveness of separate training and joint training strategies. Joint training of the speech separation module and ASR module results in a significant 4.2% reduction in the character error rate (cpCER) on the _Test-2023_ setting (26.60% \(\rightarrow\) 22.40%). The official modular system adopts WPE and GSS. GSS is proven to be an effective method in various challenges when the time stamp can be easily obtained. In the official modular system, it results in a 1.80% cpCER reduction compared to the process of WPE and BeamformIt (11.98% \(\rightarrow\) 10.18%).
### ASR module
Multiple ASR models are explored by the participants, including U2++ [35], Paraformer [17], MFCCA [36], and SA-Transformer [18]. In their experiment, Team M42 utilizes the U2++ and Paraformer models and assessed their performance by comparing the cpCER obtained after joint training with the speaker separation module. The results indicate that Paraformer outperforms U2++ with a lower cpCER of 20.67% as compared to 22.40%. Notably, the top four teams processed the audio into single talker segments, which could be effectively handled using a pre-trained standard ASR model.
Team X27 [37] successfully transforms a single-talker ASR model into a target speaker model [38] by incorporating the target speaker's embedding into a fully connected layer and injecting it with an element-wise product between the encoder layers. This approach enabled the model to accurately recognize the target speaker's speech with high precision.
Team C31 implements the frame-level diarization with SOT (FD-SOT) [19] multi-talker ASR system to mitigate the issue of overlap in the ASR module. This multi-talker ASR system combines diarization and ASR results by adhering to the first-in first-out property of SOT. By splitting the ASR result using speaker change symbols and assigning speaker labels using diarization results, FD-SOT ensures one-to-one correspondence. Furthermore, the MFCCA multi-channel solution is adopted as their ASR model, which effectively leverages fine-grained channel-wise information at each time step. It should be noted that MFCCA achieves state-of-the-art level performance with limited data from AliMeeting. However, the conversion of multi-talker ASR transcripts to speaker-attributed ASR transcripts remains a challenging task. The official modular system adopts Paraformer as its ASR model. We use the AliMeeting dataset to fine-tune the pre-trained model and achieve the cpCER of 8.84%, while before fine-tuning, the cpCER is 10.18%.
### Data augmentation and post-processing
Data augmentation methods are only adopted by the top 2 teams. One such team, X27, uses the non-overlapping segments in Aishell-4 and CN-Celeb as simulation data to train their TS-VAD model. They have also applied speed perturbation to the training data of their TS-ASR model. Furthermore, Dover-lap is employed to merge the diarization result from different channels and ROVER is also employed as the final system fusion method, which results in a 0.24% cpCER reduction on the Test set, reducing it from 17.08% to 16.84%. Team M42 effectively employs a wide range of data augmentation methods, such as speed perturbation, adding noise and reverb, pitch shifting, data simulation, audio codec, and SpecAugment [39]. They have also developed U2++ and Paraformer based speaker-attributed systems, and utilize a system fusion method that depends on whether the audio has undergone speech separation. They discover that the conformer model performs better on audio that goes through speech separation while the paraformer model is more effective on audio without speech separation. Therefore, the processed audio is taken as the input of U2++, while the Paraformer takes other inputs to produce the fused hypothesis. This approach results in 2.03% cpCER reduction on the _Test-2023_ set.
## 7 Conclusion
This paper provides an overview of the outcomes of the M2MeT 2.0 challenge, with a focus on the techniques used by the top-ranking teams. Given the limited training data and system-building period, leveraging open-source pre-trained models to construct a modular system is an effective approach. For speaker modules, TS-VAD and SOND are potent methods, but accurate speaker profiles are necessary for optimal performance. Front-end processing methods such as WPE, beamformIt, CMGAN, and GSS are beneficial for transcribing far-field data, although CMGAN may have a negative effect on ASR accuracy for non-overlapped speech. In the ASR module, SOT-based methods underperform due to limited training data, while single-talker ASR models trained on a large amount of data perform well when speech is well-preprocessed in a modular system. Data simulation is less important than that in the first M2MeT challenge, as pre-trained ASR models provide sufficient initialization for fine-tuning with a small amount of data. The best-performing system achieves 8.84% cpCER given the limited training data of the challenge.
|
2303.17996 | Definition of tolerances and corrector strengths for the orbit control
of the High-Energy Booster ring of the future electro-positron collider | After the discovery of the Higgs boson at the LHC, particle physics community
is exploring and proposing next accelerators, to address the remaining open
questions on the underlying mechanisms and constituents of the present
universe. One of the studied possibilities is FCC (Future Circular Collider), a
100 km long collider at CERN. The feasibility study of this future proposed
accelerator implies the definition of tolerances on magnets imperfections and
of the strategies of correction in order to guarantee the target performances
of the High Energy Booster ring. The efficiency of the correction scheme, used
to control the orbit, directly bounds the corrector needs and magnet
tolerances. Analytic formulae give a first estimation of the average rms value
of the required linear correctors' strengths and of the allowed magnets
misalignments and field quality along the entire ring. The distribution of the
correctors along the ring is simulated in order to verify the quality of the
residual orbit after the proposed correction strategy and compared with the
analytical predictions. First specifications of the orbit correctors strength
and tolerances for the alignment of the main elements of the ring are
presented. The limits of the studied correction scheme and method are also
discussed. | Barbara Dalena, Tatiana Da Silva, Antoine Chance, Adnan Ghribi | 2023-03-31T12:17:02Z | http://arxiv.org/abs/2303.17996v1 | Definition of tolerances and corrector strengths for the orbit control of the high-energy BOOSTER ring of the future electron-positron collider1
###### Abstract
After the discovery of the Higgs boson at the LHC, particle physics community is exploring and proposing next accelerators, to address the remaining open questions on the underlying mechanisms and constituents of the present universe. One of the studied possibilities is FCC (Future Circular Collider), a 100-km-long collider at CERN [1]. The feasibility study of this future proposed accelerator implies the definition of tolerances on magnets imperfections and of the strategies of correction in order to guarantee the target performances of the High Energy Booster ring. The efficiency of the correction scheme, used to control the orbit, directly bounds the corrector needs and magnet tolerances. Analytic formulae give a first estimation of the average RMS values of the required dipole correctors' strengths and of the allowed magnets misalignments and field quality along the entire ring. The distribution of the correctors along the ring is simulated, in order to verify the quality of the residual orbit after the proposed correction strategy and to compare it with the analytical predictions. First specifications of the orbit correctors strength and tolerances for the alignment of the main elements of the ring are presented. The limits of the studied correction scheme and method are also discussed.
## 1 Introduction
The aim of the present study is, first, to determine the tolerances for the misalignment of the elements of the High Energy Booster (HEB) of the lepton version of the Future Circular Collider (FCC-ee), and then to estimate the correctors' strength needed to correct for closed-orbit perturbations caused by machine errors. The types of errors considered are:
* random dipole field error and random dipole roll ;
* quadrupole alignment errors ;
* beam profile monitors (BPM) alignment and reading errors ;
* sextupole alignment errors.
The expected Root Mean Square (RMS) values of the residual orbit and corrector strengths in presence of the considered errors have been calculated according to the following formulae [2, 3].
\[x_{\rm rms} = \frac{\pi}{\sqrt{2}\sin\pi Q_{x}}\frac{\bar{\beta}}{\sqrt{N_{d}} }\left(\frac{\Delta B}{B}\right)_{\rm rms}+ \tag{1}\] \[\frac{\sqrt{N_{q}}}{\sqrt{2}\sin\pi Q_{x}\cos\mu/2}(\Delta q_{x} )_{\rm rms}+\] \[\frac{\sqrt{1/2}}{[1+\sin(\mu/2)]}(\Delta\sigma_{x})_{\rm rms}\] \[y_{\rm rms} = \frac{\pi}{\sqrt{2}\sin\pi Q_{y}}\frac{\bar{\beta}}{\sqrt{N_{d}} }(\Delta\theta)_{\rm rms}+\] (2) \[\frac{\sqrt{N_{q}}}{\sqrt{2}\sin\pi Q_{y}\cos\mu/2}(\Delta q_{y} )_{\rm rms}+\] \[\frac{\sqrt{1/2}}{[1+\sin(\mu/2)]}(\Delta\sigma_{y})_{\rm rms}\]
\[(\delta_{x})_{\rm rms}= \left\{\frac{\beta}{\bar{\beta}_{\rm rms}}\left(n\left[\frac{2\pi }{N_{d}}\left(\frac{\Delta B}{B}\right)_{\rm rms}\right]^{2}+2(\Delta q_{x} )_{\rm rms}^{2}K_{q}^{2}L_{q}^{2}\right)+\right. \tag{3}\] \[\left.\frac{1+2\cos^{2}(\mu)}{2(L_{\rm coll}/2)^{2}[1+\sin(\mu/2) ]^{2}}(\Delta\sigma_{y})_{\rm rms}^{2}\right\}^{1/2}\] \[(\delta_{y})_{\rm rms}= \left\{\frac{\bar{\beta}}{\bar{\beta}_{\rm rms}}\left(n\left[ \frac{L_{d}}{\rho}\left(\Delta\theta\right)_{\rm rms}\right]^{2}+2(\Delta q_ {y})_{\rm rms}^{2}K_{q}^{2}L_{q}^{2}\right)+\right.\] (4) \[\left.\frac{1+2\cos^{2}(\mu)}{2(L_{\rm coll}/2)^{2}[1+\sin(\mu/2) ]^{2}}(\Delta\sigma_{y})_{\rm rms}^{2}\right\}^{1/2}\]
where \(N_{d}\) is the number of dipoles, \(Q_{x,y}\) the horizontal/vertical tunes, \(\bar{\beta}=\frac{L_{\rm coll}}{\sin(\mu)}\) the mean betatron function, \(L_{\rm coll}\) the cell length, \(\mu\) the phase advance per cell, and \(N_{q}\) the number of quadrupoles. For each of 100 different error configurations of the lattice model we compute the corresponding analytical value. We compare the maximum among the analytical RMS estimates with the distribution of 100 numerical results, after applying the correction strategy described in the next section.
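As an example of how these formulae are used, the sketch below evaluates Eq. (1) numerically for the horizontal plane; all parameter values are placeholders chosen for illustration, not the actual HEB lattice parameters.

```python
# Hedged numerical sketch of Eq. (1); parameter values are placeholders.
import numpy as np

def x_rms(Qx, mu, L_cell, N_d, N_q, dB_over_B, dq_rms, dbpm_rms):
    beta_bar = L_cell / np.sin(mu)                       # mean betatron function
    dipole = (np.pi / (np.sqrt(2) * np.sin(np.pi * Qx))
              * beta_bar / np.sqrt(N_d) * dB_over_B)
    quad = (np.sqrt(N_q)
            / (np.sqrt(2) * np.sin(np.pi * Qx) * np.cos(mu / 2)) * dq_rms)
    bpm = np.sqrt(0.5) / (1 + np.sin(mu / 2)) * dbpm_rms
    return dipole + quad + bpm

print(x_rms(Qx=100.25, mu=np.pi / 2, L_cell=50.0, N_d=1000, N_q=500,
            dB_over_B=1e-4, dq_rms=150e-6, dbpm_rms=150e-6))   # metres
```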
## 2 Correction strategy
Before correcting the orbit, we first need to define and choose the correctors and BPMs to use. If a quadrupole is focussing in the horizontal plane the neighbouring BPM will read in horizontal and the following corrector will correct in horizontal. Reciprocally, if the quadrupole is focussing in the vertical plane, the BPM and corrector will act in vertical. The range of correctors and BPMs selected for each arc is
defined in order to have the same number of correctors and BPMs used in each plane. In the \(x\) plane: from the first BPM and the two correctors before the beginning of the arc to the three BPMs and correctors after the end of the arc. In the \(y\) plane: from the first BPM and the two correctors before the beginning of the arc to the two BPMs and correctors after the end of the arc.
The same strategy has been used for the LHC during commissioning [5]: a first correction, Segment-by-Segment (SbS) _i.e._ in our case, arc by arc. Each segment of the machine is handled as a line, using the optical parameters at the first segment entry. The correction procedure is divided in two sections: one with the sextupoles turned off and the other with the sextupoles on. This decomposition sextupoles off/on of the orbit study is commonly done in accelerators such as SuperKEKB during commissioning [4]. We assume that all the correctors of the Booster will be individually powered.
**Procedure part 1: sextupoles off.** After turning off the sextupoles, we first do the SbS. This gives us a first orbit corrected to inject in a singular value decomposition (SVD) algorithm over the entire machine. The SVD is done with the MadX command _CORRECT_[6]. By doing an SbS before a SVD on all arcs, we can make more seeds converge than starting directly with all arcs together. After the SbS, two iterations of SVD are made on all arcs and in line, in order to reduce the residual orbit. This is sufficient to get an orbit reduced enough to find the closed orbit. Finally, multiple iterations of SVD in ring are made and the number of iterations vary for each seed. Iterations are made until the RMS of the orbit in each plane is below the analytical RMS, while the number of iterations is under a limit of maximum iterations (fixed to 15).
**Procedure part 2: sextupoles on.** The sextupoles are turned on and we do one iteration of SVD in ring. More iterations are not useful because the command _CORRECT_ does not give a better correction of the residual orbit.
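The core of each SVD iteration can be sketched as a least-squares solve with the orbit response matrix: the BPM readings are projected onto the singular vectors of the response matrix and the corrector kicks that cancel them are obtained from a (possibly truncated) pseudo-inverse. The snippet below is a generic numpy illustration, not the MadX _CORRECT_ implementation, and the response matrix and orbit are random placeholders.

```python
# Generic sketch of one SVD orbit-correction step (placeholder matrices).
import numpy as np

def svd_correction(R, orbit, n_sv=None):
    # R: (n_bpm, n_corr) orbit response matrix; orbit: measured BPM readings.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:                       # keep only the largest singular values
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv]
    kicks = -Vt.T @ ((U.T @ orbit) / s)        # least-squares corrector settings
    residual = orbit + R @ kicks
    return kicks, residual

rng = np.random.default_rng(2)
R = rng.normal(size=(120, 60))                 # 120 BPMs, 60 correctors (toy sizes)
u = rng.normal(scale=1e-3, size=120)           # measured orbit distortion
kicks, res = svd_correction(R, u, n_sv=50)
print(np.std(u), np.std(res))                  # RMS orbit before / after correction
```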
To illustrate the different steps of the correction procedure, Fig. 1 shows the orbit for seed 3 after each important step, with RMS random errors of 150 μm on the quadrupole offset, \(10^{-4}\) on the dipole relative field, 300 mrad on the main dipole roll, 150 μm on the BPM offset and 150 μm on the sextupole offset. We studied and tested different scenarios by adding the error types one by one. All tests were run on 100 seeds. Our starting point is the combination of the quadrupole offsets (designated as MQ), the dipole (designated as MB) relative field error and the main dipole roll error. The statistical study of this first error configuration revealed that all seeds converged up to an MQ offset of 150 μm. Following this method, we added and fixed the other elements' errors as reported in Table 1.
It is worth noticing that all the errors applied on the elements are randomly Gaussian distributed within \(\pm 3\) RMS.
## 3 Results and Discussion
Figures 2 and 3 show the orbit and correctors strength distribution with their respective RMS values for the 99 successful seeds at the end of the correction procedure. The following errors are included: relative dipole field error of \(10^{-3}\), main dipole roll of \(300\,\mathrm{mrad}\), BPMs offset of \(150\,\mathrm{\SIUnitSymbolMicro m}\), sextupole offset of \(150\,\mathrm{\SIUnitSymbolMicro m}\) and BPM reading error of \(50\,\mathrm{\SIUnitSymbolMicro m}\).
| Error type (Gaussian RMS) | Value [Unit] |
| --- | --- |
| MQ offset | 80, 90, 100, 120, 150, 200 μm |
| MB relative field error | \(10^{-4}\), \(10^{-3}\) |
| MB main dipole roll error | 300 mrad |
| BPM offset (with _MQ offset_ 150 μm) | 60, 80, 100, 150, 200 μm |
| MS offset (with _MQ, BPM offset_ 150 μm) | 60, 80, 100, 120, 150, 200 μm |
| BPM resolution (with _MQ, BPM, MS offset_ 150 μm) | 10 and 50 μm |
Table 1: Summary of the different error types and their values.
Figure 1: Example of the evolution of the orbit during three different steps of the correction procedure (seed=3, MQ offset (\(150\,\mathrm{\SIUnitSymbolMicro m}\)), MB field error (\(10^{-4}\)), main MB roll (\(300\,\mathrm{mrad}\)), BPM offset (\(150\,\mathrm{\SIUnitSymbolMicro m}\)), and MB offset (\(150\,\mathrm{\SIUnitSymbolMicro m}\)). Top panel shows residual orbit with errors and no correction. Middle panel shows residual orbit after last iteration of the SVD correction with sextupoles OFF. Bottom panel shows residual orbit after the SVD correction with sextupoles ON.
The dashed red lines on the distributions represent \(\pm 3\) times the RMS calculated analytically. For both the orbit and the correctors' strength, our analytical predictions fit the simulation data well for almost all of the seeds. The deviations can be explained by the combination of the different errors and the \(\beta\)-function, for which the procedure is less effective. The residual orbit amplitude in both planes is of the order of magnitude of the MQ offsets (which are the dominant errors, as expected), and the pattern of the succession of the arcs and insertions is visible (because we only applied the errors to the arcs, the residual orbit after correction in the insertions is expected to be almost zero). The RMS values for each of the successful seeds are distributed around the dashed red line representing one analytical RMS estimate; the blue dots correspond to the RMS after turning on the sextupoles and correcting the orbit once, always using the SVD algorithm. Apart from the MQ offset, the other main contributors to the residual orbit are the MS offset and the BPM resolution. Concerning the correctors' strength distribution (Fig. 3), for almost all the correctors the strength is within the limit of the three analytical RMS estimates (red dashed line). The RMS values for the 99 successful seeds are the same between the last iteration of the first part of the correction procedure (sextupoles off) and its end (see Sec. Correction Strategy). The analytical RMS (green dashed line) is well above the numerical results from the simulations. The values of the corresponding residual orbit and corrector specifications are reported in Table 2.
## 4 Conclusions
We have computed a first specification for the misalignment tolerances of the main High Energy Booster elements (of about 150 μm) and for the dipole corrector strengths (of about 20 mT\(\cdot\)m), considering only the orbit corrections. These specifications do not include the additional strength that will be required at the extraction energy to compensate the orbit deviation due to the energy loss per turn from synchrotron radiation (tapering). Moreover, these values need to be confirmed by the full emittance tuning (i.e. adding the \(\beta\)-beating, dispersion and coupling corrections). Finally, a further optimization of these specifications and of the correction strategy can be attempted, for example by means of machine learning techniques.
| | | Plane | 3×RMS (Analytic) | 3×RMS (Seeds)\({}^{a}\) |
| --- | --- | --- | --- | --- |
| Residual orbit | [μm] | x | 188 | 174 |
| | | y | 192 | 180 |
| Corrector strength | [mT·m] | x | 16 | 12 |
| | | y | 16 | 12 |

\({}^{a}\) Mean of RMS of all seeds.
Table 2: Residual Orbit and Corrector strengths for 150 \(\upmu\)m RMS Gaussian distributed random offset values of quadrupoles, sextupoles, and BPMs, 50 \(\upmu\)m RMS Gaussian distributed random precision of BPM reading, 10\({}^{-3}\) RMS Gaussian distributed random relative dipole magnetic field error and 300 mrad RMS Gaussian distributed roll of dipoles.
Figure 3: Distribution of the correctors strength and the corresponding RMS values for the 99 successful seeds.
Figure 2: Distribution of the residual orbits and the corresponding RMS values for the 99 successful machine configurations and errors described in the text.
## Acknowledgements
The authors would like to thank I. Agapov, R. Tomas and M. Hostettler for useful discussions.
|
2309.03760 | Superconductivity in twisted bismuth bilayers | First-principles calculations for two twisted bismuth bilayers, each with 120
atoms, were studied by means of the electronic density of states and
vibrational density of states. Metallic character at the Fermi level was found
for the non-rotated sample as well as for each sample rotated 0.5{\deg},
1.0{\deg}, 1.5{\deg}, 2.0{\deg}, 2.5{\deg}, 3.0{\deg}, 4.0{\deg}, 5.0{\deg},
6.0{\deg}, 7.0{\deg}, 8.0{\deg} and 10{\deg} with respect to the static
bilayer. Assuming that the superconductivity is BCS-type and the invariance of
the Cooper pairing potential, we predict a maximum superconducting temperature
T_c ~ 1.8 K for a magic angle of 0.5{\deg} degrees between the two bilayers,
increasing the superconducting transition temperature from the experimentally
measured value of 0.53 mK for the Wyckoff structure of crystalline bismuth. | Isaías Rodríguez, Renela María Valladares, David Hinojosa-Romero, Alexander Valladares, Ariel Alberto Valladares | 2023-09-07T15:02:38Z | http://arxiv.org/abs/2309.03760v2 | **Superconductivity in twisted bismuth bilayers**
## Abstract
First-principles calculations for two twisted bismuth bilayers, each with 120 atoms, were studied by means of the electronic density of states and vibrational density of states. Metallic character at the Fermi level was found for the non-rotated sample as well as for each sample rotated 0.5\({}^{\circ}\), 1.0\({}^{\circ}\), 1.5\({}^{\circ}\), 2.0\({}^{\circ}\), 2.5\({}^{\circ}\), 3.0\({}^{\circ}\), 4.0\({}^{\circ}\), 5.0\({}^{\circ}\), 6.0\({}^{\circ}\), 7.0\({}^{\circ}\), 8.0\({}^{\circ}\) and 10\({}^{\circ}\) with respect to the static bilayer. Assuming that the superconductivity is BCS-type and the invariance of the Cooper pairing potential, we predict a maximum superconducting temperature \(T_{c}\)\(\sim\) 1.8 K for a magic angle of 0.5\({}^{\circ}\) degrees between the two bilayers, increasing the superconducting transition temperature from the experimentally measured value of 0.53 mK for the Wyckoff structure of crystalline bismuth.
## Introduction
For more than 100 years, superconductivity has been present in scientific research, sometimes in the forefront, sometimes in the background, but the hope of finding high-temperature superconductors, the holy grail of technology, has prevailed over a century. This discovery would certainly revolutionize several fields of knowledge and technology. Up to the present, disappointment has been the result of multiple claims of having found superconductivity at room temperature. Investigations of materials at high pressures have given hope, once more, towards this discovery; however, some of these claims are either for very high pressures [1, 2, 3, 4], or for new materials that have not been reproduced worldwide [5, 6, 7, 8]. High-pressure experiments are very difficult to conduct since the pressures needed, of the order of gigapascals, can only be applied for very short times. One should ask whether pressures below atmospheric may be a different and promising way to approach the problem [9, 10, 11, 12]. Some known materials, like the Xenes [13, 14], have also been the subject of study in this quest.
Xenes are a class of two-dimensional (2D) single-element layered materials, primarily derived from group IV elements like carbon (graphene) [15], group V elements like phosphorus (phosphorene) [16], or group VI elements like tellurium (tellurene) [17]. These materials exhibit unique physical and chemical characteristics compared with the corresponding three-dimensional (3D) structures.
Xenes have sparked interest due to their tunable properties and potential applications in electronics and photonics [18]. Their electronic and lattice-dynamics properties can be engineered by various techniques, such as adjusting the lattice structure, doping the layers, or twisting the layers.
Bismuth xenes, also known as bismuthenes, have several 2D layered allotropes that have been predicted theoretically [19, 20], and at least two of these phases have been experimentally confirmed [21, 22]. The first one is an \(\alpha\)- phase also known as Bi(110) and consist of puckered orthorhombic structures similar to black phosphorous, while the \(\beta\)- phase also known as Bi(111) consists of a buckled honeycomb-like structure similar to silicene [20, 21, 23].
Bismuthene, like several other xenes, has been the subject of much interest and investigation as a topological insulator (TI) [24, 25, 26, 27]. However, the discovery of superconductivity in twisted bilayer graphene (TBG) [28, 29] opens up a new field of study for bismuthene.
Superconductivity in bismuth was discovered in amorphous bismuth (\(a\)-Bi) [30, 31, 32] with a critical transition temperature, \(T_{c}\), of \(\sim\) 6 K, while superconductivity in crystalline bismuth (\(x\)-Bi) was first predicted to have a \(T_{c}\) lower than 1.5 mK [33], and later measured with a \(T_{c}\) of 0.53 mK [34]. The other stable crystalline phases of bismuth at high pressure were also studied, finding a maximum \(T_{c}\) of \(\sim\) 8 K for the Bi-V phase [35, 36]. In bismuth bilayers, a possible \(T_{c}\) of \(\sim\) 2.6 K was found [37].
Here we report possible superconductivity in twisted bismuth bilayers (TBB): two bilayers, each about 5.1 nm long and 4.2 nm wide, twisted with respect to one another at angles of 0.5\({}^{\circ}\), 1\({}^{\circ}\), 1.5\({}^{\circ}\), 2\({}^{\circ}\), 2.5\({}^{\circ}\), 3\({}^{\circ}\), 4\({}^{\circ}\), 5\({}^{\circ}\), 6\({}^{\circ}\), 7\({}^{\circ}\), 8\({}^{\circ}\), 9\({}^{\circ}\) and 10\({}^{\circ}\), forming different Moiré patterns.
## Method
A supercell was constructed starting from the Bi-I (Wyckoff) structure and multiplying it by 10x8x2, obtaining a 960-atom structure (see Figure 1 A). Atoms were then removed to end up with two 120-atom bilayers, for a total of 240 atoms (Figure 1 B). To avoid possible self-interactions, and to maintain the structure of two infinite bilayers with the original interatomic structure, the periodic boundary conditions were removed, ending with the initial structure of two 120-atom bilayers (Figure 1 C). Finally, one of the two nanolayers of the initial structure was rotated around the perpendicular Z axis by 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, and 10.0 degrees, obtaining 13 different structures, each with a different Moiré pattern (Figure 1 D).
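The twisting step itself is a simple rigid rotation of one bilayer about the Z axis; a short numpy sketch is given below, where the atomic coordinates are random placeholders standing in for the 120-atom bilayer.

```python
# Sketch of the twist: rotate one bilayer about the Z axis through its centroid.
import numpy as np

def twist_layer(xyz: np.ndarray, angle_deg: float) -> np.ndarray:
    t = np.deg2rad(angle_deg)
    Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0,        0.0,       1.0]])
    center = xyz.mean(axis=0)
    return (xyz - center) @ Rz.T + center

layer = np.random.default_rng(3).uniform(0.0, 50.0, size=(120, 3))  # placeholder Å
structures = {a: twist_layer(layer, a)
              for a in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0,
                        5.0, 6.0, 7.0, 8.0, 9.0, 10.0)}             # 13 twist angles
```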
Then, the electronic density of states (_N(E)_) and the vibrational density of states (_F(\(\omega\))_) were calculated using DMol\({}^{3}\), a first-principles simulation code included in the Dassault Systemes Materials Studio Suite [38, 39]. To obtain the _N(E)_, a single-point energy calculation was performed using a double numerical plus polarization functions (dnp) basis set; the density functional semi-core pseudo-potential (dspp) was used for the core treatment; the Vosko-Wilk-Nusair exchange-correlation functional was considered within the local density approximation (LDA) [40]; and a fine integration grid with octupolar angular-momentum fitting functions was incorporated. The real-space cutoff radius for the orbitals was set to 6.0 Å.
The _F(\(\omega\))_ was calculated with the finite-displacement (frozen-phonon) method, using the same electronic parameters as the _N(E)_ calculation and a finite difference of 0.005 Å to obtain the Hessian matrix.
Then the Debye frequencies and Debye temperatures were calculated using the obtained _F(\(\omega\))_ and the method proposed by Grimvall [41]:
\[\omega_{D}=\exp\left(\frac{1}{3}+\frac{\int_{0}^{\omega_{max}}\ln(\omega)\,F(\omega)\,d\omega}{\int_{0}^{\omega_{max}}F(\omega)\,d\omega}\right),\]
\[\theta_{D}=\frac{\hbar\omega_{D}}{k_{B}}.\]
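A minimal numerical sketch of how these two expressions can be evaluated from a discretized _F(\(\omega\))_ is given below (the frequency grid, the unit handling, and the use of trapezoidal integration are assumptions of the example, not of the original analysis):

```python
import numpy as np

HBAR_OVER_KB = 7.6382e-12   # K*s, hbar / k_B
MEV_TO_RAD_S = 1.5193e12    # angular frequency (rad/s) per meV

def debye_from_vdos(omega_mev, F, omega_max_mev):
    """Debye frequency and temperature from a vibrational DOS via the
    logarithmic-moment formula of Grimvall given above.

    omega_mev     : frequency grid in meV (omega = 0 is excluded by the mask)
    F             : vibrational density of states sampled on that grid
    omega_max_mev : frequency cutoff (23 or 50 meV in this work)
    """
    mask = (omega_mev > 0) & (omega_mev <= omega_max_mev)
    w = omega_mev[mask] * MEV_TO_RAD_S        # convert grid to rad/s
    f = F[mask]
    log_moment = np.trapz(np.log(w) * f, w) / np.trapz(f, w)
    omega_D = np.exp(1.0 / 3.0 + log_moment)  # rad/s
    theta_D = HBAR_OVER_KB * omega_D          # K
    return omega_D, theta_D
```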
### Electronic Density of States
For the _N(E)_ determination, the DMol\({}^{3}\) analysis tool included in the Materials Studio suite was used, with energies in eV and an integration method with a smearing width of 0.1 eV. The number of points per eV was set to 100. The results were smoothed and analyzed by means of the OriginPro software using a 3-step Fast Fourier Transform (FFT) filter [42]. The results are given in states per eV per atom in Figure 2.
Figure 1: Construction of the Bismuth bilayers. A) 10x8x2 supercell of Bi-I (Wyckoff). B) Isolation of two bilayers of 240 atoms. C) Superior view of the two bilayers without periodic boundary conditions. D) 10\({}^{\circ}\) rotation of one of the layers with respect to the other; bond-sticks were omitted in D) in order to better appreciate the Moiré pattern.
The _N(E)_ is composed of two main bands. The first is bimodal, extends from around -14 eV to -8 eV, and includes the majority of the s-electron contributions, while the second band is located from around -4.7 eV to above the Fermi level and has contributions from the p-band and d-band electrons, with a small proportion of s-band electrons. For all studied samples, the density of states at the Fermi level (\(N(E_{F})\)) lies in a non-zero valley, which is characteristic of semimetals, as is the case for most of the phases of bismuth [36]. Both bands show a small decrease in height together with an increase in width. This is common in structures under external stress, as is the case for compressed bismuth [12], and may indicate the presence of internal forces in the studied samples, such as the mechanical deformation required for the fabrication of the TBG [28, 29].
The studied samples have a Van Hove singularity (VHS) in the _N(E)_ at the Fermi level (Figure 2). Such sudden increases in the _N(E)_ are also present in TBG [45, 46, 47] as well as in Moiré nanolayers [26], and they seem to be caused by an almost flat band near the \(\Gamma\) point. However, the presence of a VHS near the Fermi level has been detected neither in the Bi crystalline structure [33, 36] nor in the bismuth (111) bilayers [26, 48, 49, 50]. The VHS seems to be caused by the local density of states of the border atoms.
Figure 2: Comparison of the electronic density of states (_N(E)_) for twisted bismuth bilayers (TBB). Fermi Level is shown as the dashed vertical line. The electronic density of states is normalized per atom. Experimental results from Jezequel (1986) [43] and Jezequel (1997) [44] in green triangles. DFT calculations from Hinojosa-Romero in red [37].
To correctly represent the electronic density of states of a larger bismuthene bilayer [26, 37, 48, 49, 50], as well as the crystalline results [33, 36], we studied the partial electronic density of states of the backbone (center) of the bilayer, disregarding the first 4 atoms from the border on each side of the nanosheets; the partial electronic density of states can be seen in Figure 3. The partial eDoS of the backbone is also composed of two bands, but the VHS near the Fermi level is substantially decreased and completely disappears for several of the studied samples (Figure 3), in agreement with all the studies in the literature for bismuth and bismuthene. This seems to indicate that the VHS present in the total eDoS is mostly caused by border defects. A more detailed study is currently under way and will be reported elsewhere.
### Vibrational Density of States
For the _F(\(\omega\))_, the results were analyzed with the OriginPro software [42], using the normal modes calculated in meV with DMol\({}^{3}\). To obtain the _F(\(\omega\))_, a frequency count with a 0.1 meV bin width was used, and the resulting histograms were smoothed with a three-step FFT filter. The translational modes around 0 were removed. Finally, the _F(\(\omega\))_s were normalized to 3; these results can be seen in Figure 4.
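For illustration, the binning and normalization steps described above could be sketched as follows (the threshold used to discard the near-zero translational modes is an assumption of this example, and the FFT smoothing performed in OriginPro is not reproduced):

```python
import numpy as np

def vibrational_dos(mode_energies_mev, bin_width=0.1):
    """Build F(omega) from a list of normal-mode energies as described above:
    0.1 meV bins, translational modes near zero removed, area normalized to 3
    (i.e. three modes per atom)."""
    modes = np.asarray(mode_energies_mev, dtype=float)
    modes = modes[modes > 0.5]                      # drop translational/near-zero modes (threshold assumed)
    edges = np.arange(0.0, modes.max() + bin_width, bin_width)
    counts, edges = np.histogram(modes, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    F = counts.astype(float)
    F *= 3.0 / np.trapz(F, centers)                 # normalize the area under F(omega) to 3
    return centers, F
```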
Figure 3: Comparison of the partial electronic density of states (_N(E)_) for the backbone (center) of twisted bismuth bilayers (TBB). Fermi Level in vertical dashed line. The electronic density of states is normalized per atom in the nanolayers. Experimental results from Jezequel (1986) [43] and Jezequel (1997) [44] in green triangles. DFT calculations from Hinojosa-Romero in red [37].
The vibrational density of states (\(F(\omega)\)) is composed of two primary bands: the acoustic band is located from 0 to about 7 meV (56 cm\({}^{-1}\), 1.6 THz), while the optical band is located from around 9 meV (72 cm\({}^{-1}\), 2.2 THz) to around 15 meV (120 cm\({}^{-1}\), 3.6 THz) for the twist of 0.0\({}^{\circ}\). This is consistent with both experimental and theoretical results for the crystalline structure [33, 36] as well as for bismuthene (bismuth (111) bilayers) [51, 52, 53, 54, 37].
The forbidden band gap from around 7 meV to around 9 meV decreases as the bismuthene bilayers are twisted with respect to each other, and it completely disappears at the 4.0\({}^{\circ}\) rotation, although a minimum at 9 meV is still present for all the studied rotations. The presence of a bimodality in the \(F(\omega)\) seems to be a universal characteristic of all 2D materials [55], and this may be due to the existence of layers, as demonstrated by our group for 3D materials that have layered structures.
Also, at 2.0\({}^{\circ}\) of rotation, isolated phonon modes begin to appear above 16 meV. These isolated modes increase in frequency and intensity up to 42 meV for the 10\({}^{\circ}\) rotation, although their contribution is small (less than 0.5% of the modes for the 2.5\({}^{\circ}\) sample, and less than 1.8% of the modes for the 10\({}^{\circ}\) sample). The relatively high energy of these modes has an impact on related quantities like the Debye frequency and temperature, as will be seen in the next section. The existence of these high-energy modes was predicted by Chowdhury [56], and they seem to be related to border defects in the bismuthene bilayers.
Figure 4: Comparison of the vibrational densities of states (\(F(\omega)\)), normalized to three modes per atom.
### Superconductivity
Although there seems to be no agreement on the kind of superconductivity responsible for the magic angle in TBG [28, 29], conventional superconductivity has been used with considerable success for the bismuth crystalline phases [33, 34, 36], as well as for bismuthene bilayers [37].
In order to calculate the superconducting transition temperature, the approach developed by Mata-Valladares [33] is used. The superconducting transition temperature of crystalline bismuth is calculated with the BCS equation:
\[T_{c}^{\ \alpha}=1.13\,\theta_{D}^{\ \alpha}\exp\left(-\frac{1}{N(E_{F})^{\alpha}V_{0}}\right).\]
Analogously, the superconducting transition temperature for the twisted bismuth bilayers is calculated with the BCS equation:
\[T_{c}^{\ \beta}=1.13\,\theta_{D}^{\ \beta}\exp\left(-\frac{1}{N(E_{F})^{\beta}V_{0}}\right),\]
where we have assumed that the electron-phonon coupling potential \(V_{0}\) is the same for both materials, the crystalline and the bilayered structures. If we now take the ratio \(T_{c}^{\ \beta}/T_{c}^{\ \alpha}\) and solve for \(T_{c}^{\ \beta}\), the following equation is obtained:
\[T_{c}^{\ \beta}=\left(T_{c}^{\ \alpha}\right)^{1/\varepsilon}\,\delta\,\left(1.13\,\theta_{D}^{\ \alpha}\right)^{(\varepsilon-1)/\varepsilon},\]
where:
\[N(E_{F})^{\beta}=\varepsilon\ N(E_{F})^{\alpha}\ \text{and}\ \theta_{D}^{\ \beta}=\ \delta\ \theta_{D}^{\ \alpha}.\]
\(N(E_{F})^{\beta}\) was calculated directly from Figure 3 and can be seen in Figure 5 A), while \(\theta_{D}^{\ \beta}\) was calculated using Grimvall's approach [41] and Figure 4, and the results are presented in Figure 5 B). The results for a frequency cutoff of \(\omega_{max}=23\) meV are shown as red dots, while the results for \(\omega_{max}=50\) meV are shown as black squares. The two cutoffs yield exactly the same \(\theta_{D}^{\ \beta}\) up to a twisting angle of \(3.0^{\circ}\) and start to differ from \(4.0^{\circ}\) up to \(10.0^{\circ}\). This difference is caused by the high-energy isolated modes in the _F(\(\omega\))_. Although these modes represent less than 2% of the total modes calculated, the difference in the Debye temperature can be as large as 6 K for the 9.0\({}^{\circ}\) rotation.
Figure 5: A) Electronic density of states at the Fermi level for the twisted bismuth bilayers and B) Debye temperatures for the twisted bismuth bilayers for two different frequency cut-offs.
Then, using the experimental results for crystalline bismuth, \(N(E_{F})^{\alpha}=0.15\) states eV\({}^{-1}\) atom\({}^{-1}\) and \(\theta_{D}^{\ \alpha}=134.2\) K, we calculated the superconducting transition temperatures. The results can be seen in Table 1 and in the corresponding Figure 6.
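As an illustration, the ratio formula above can be evaluated directly from these reference values and the tabulated \(\varepsilon\) and \(\delta\); a minimal sketch is given below (the rounded inputs reproduce the tabulated \(T_{c}\) only approximately, since the published values were obtained from the unrounded \(N(E_{F})\) and \(\theta_{D}\)):

```python
def tc_bilayer(tc_alpha, theta_d_alpha, eps, delta):
    """Superconducting Tc of the twisted bilayer from the ratio of the two BCS
    expressions above, assuming a common electron-phonon coupling V0.

    tc_alpha      : Tc of crystalline Bi-I in K
    theta_d_alpha : Debye temperature of crystalline Bi-I in K
    eps   = N(E_F)^beta / N(E_F)^alpha
    delta = theta_D^beta / theta_D^alpha
    """
    return tc_alpha ** (1.0 / eps) * delta * (1.13 * theta_d_alpha) ** ((eps - 1.0) / eps)

# Illustrative evaluation with the crystalline reference values and the rounded
# parameters of the 0.5-degree sample; the result is of the order of the
# tabulated 1.85 K but not identical, because of the rounding of the inputs.
print(tc_bilayer(tc_alpha=0.0015, theta_d_alpha=134.2, eps=2.88, delta=0.75))
```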
The difference in the superconducting transition temperatures is less than 2.7%, whether or not we consider the high-energy vibrational modes above 23 meV. It also appears that the superconductivity is mostly driven by the electronic density of states.
The largest \(T_{c}\) found was 1.85 K for a rotation of 0.5\({}^{\circ}\) between layers, as can be seen in Figure 6. A more detailed study should be conducted around this angle to find a possible higher maximum or to corroborate whether the maximum \(T_{c}\) is indeed 1.85 K. The minimum \(T_{c}\) was 0.35 K for a rotation of about 6.0\({}^{\circ}\). For angles above 7.0\({}^{\circ}\) the \(T_{c}\) begins to increase again, so rotation angles above 10\({}^{\circ}\) may have a higher \(T_{c}\). A more extensive study should be conducted to test this surmise.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} \hline Structure & \(N(E_{F})\) & \(\theta_{D}\) (K) & \(\theta_{D}\) (K) & \(\varepsilon\) & \(\delta\) & \(\delta\) & \(T_{c}\) (K) & \(T_{c}\) (K) \\ & (states eV\({}^{-1}\) atom\({}^{-1}\)) & (23 meV) & (50 meV) & & (23 meV) & (50 meV) & (23 meV) & (50 meV) \\ \hline Bi-I & 0.15 & 134.2 & 134.2 & - & - & - & 0.0015 & 0.0015 \\ \hline TBB 0.0\({}^{\circ}\) & 0.42 & 99.88 & 99.88 & 2.94 & 0.74 & 0.74 & 1.58 & 1.58 \\ TBB 0.5\({}^{\circ}\) & 0.44 & 100.50 & 100.50 & 2.88 & 0.75 & 0.75 & 1.85 & 1.85 \\ TBB 1.0\({}^{\circ}\) & 0.43 & 101.69 & 101.69 & 2.79 & 0.76 & 0.76 & 1.76 & 1.76 \\ TBB 1.5\({}^{\circ}\) & 0.42 & 102.54 & 102.54 & 2.67 & 0.76 & 0.76 & 1.57 & 1.57 \\ TBB 2.0\({}^{\circ}\) & 0.40 & 103.00 & 103.00 & 2.55 & 0.77 & 0.77 & 1.32 & 1.32 \\ TBB 2.5\({}^{\circ}\) & 0.38 & 102.96 & 102.96 & 2.44 & 0.77 & 0.77 & 1.08 & 1.08 \\ TBB 3.0\({}^{\circ}\) & 0.37 & 102.97 & 102.97 & 2.21 & 0.77 & 0.77 & 0.88 & 0.88 \\ TBB 4.0\({}^{\circ}\) & 0.33 & 102.64 & 103.48 & 2.10 & 0.76 & 0.77 & 0.55 & 0.55 \\ TBB 5.0\({}^{\circ}\) & 0.31 & 101.14 & 103.04 & 2.03 & 0.75 & 0.77 & 0.41 & 0.41 \\ TBB 6.0\({}^{\circ}\) & 0.30 & 99.58 & 102.65 & 2.06 & 0.74 & 0.76 & 0.34 & 0.35 \\ TBB 7.0\({}^{\circ}\) & 0.31 & 98.67 & 102.34 & 2.14 & 0.74 & 0.76 & 0.36 & 0.37 \\ TBB 8.0\({}^{\circ}\) & 0.32 & 98.20 & 102.45 & 2.31 & 0.73 & 0.77 & 0.44 & 0.45 \\ TBB 9.0\({}^{\circ}\) & 0.35 & 97.52 & 101.89 & 2.35 & 0.73 & 0.76 & 0.64 & 0.67 \\ TBB 10.0\({}^{\circ}\) & 0.36 & 98.76 & 104.00 & 2.83 & 0.74 & 0.77 & 0.72 & 0.76 \\ \hline \end{tabular}
\end{table}
Table 1: Values of the electronic density of states at the Fermi level (\(N(E_{F})\)), the Debye temperatures \(\theta_{D}\) for the two frequency cutoffs, the parameters \(\varepsilon\) and \(\delta\) of the Mata-Valladares approach, and the resulting superconducting transition temperatures (\(T_{c}\)).
## Conclusions
Twisted bismuth bilayers show several interesting properties, such as the presence of Van Hove singularities near the Fermi level, located at the border of the nanolayers, as well as superconductivity for all angles studied, with a maximum \(T_{c}\) of 1.85 K for a rotation of 0.5\({}^{\circ}\) between layers.
The superconductivity seems to be of the conventional type, and the superconducting transition temperature appears to be electronically driven, since calculating the Debye temperature with two different frequency cutoffs has a negligible impact on the superconducting transition temperature.
## Acknowledgements
I.R. thanks CONAHCYT for his postdoctoral fellowship. D.H.-R. acknowledges CONAHCYT for supporting their graduate studies. A.A.V., R.M.V. and A.V. thank DGAPA-UNAM (PAPIIT) for continued financial support to carry out research projects under Grant No. IN116520. Maria Teresa Vazquez and Oralia Jimenez provided the information requested. Alberto Lopez\({}^{+}\) and Alejandro Pompa assisted with the technical support and maintenance of the computing unit at IIM-UNAM. Simulations were partially carried out at the Computing Center of DGTIC-UNAM through project LANCAD-UNAM-DGTIC-131.
## Author contributions
I.R. and A.A.V. conceived this research and designed it with the participation of R.M.V., A.V., and D.H.-R. All the simulations were done by I.R. All authors discussed and analyzed the results. A.A.V. wrote the first draft and the other authors enriched the manuscript. All authors gave their consent for the publication of this manuscript.

Figure 6: Superconducting critical temperatures for twisted bismuth bilayers.
The authors declare no conflict of interest in this work.
|
2308.00140 | Pressure sensitivity in non-local flow behaviour of dense hydrogel
particle suspensions | Slowly sheared particulate media like sand and suspensions flow
heterogeneously as they yield via narrow shear bands where most of the strain
is accumulated. Understanding shear band localization from microscopics is
still a major challenge. One class of so-called non-local theories identified
that the width of the shearing zone should depend on the stress field. We
explicitly test this picture by using a uniquely stress-sensitive suspension
while probing its flow behavior in a classic geometry in which shear bands can
be well-tuned: the Split-Bottom Shear Cell (SBSC). The stress-sensitive
suspension is composed of mildly polydisperse soft, slippery hydrogel spheres
submersed in water. We measure their flow profiles and rheology while
controlling the confinement stress via hydrostatic effects and compression. We
determine the average angular velocity profiles in the quasi-static flow regime
using Magnetic Resonance Imaging based particle image velocimetry (MRI-PIV) and
discrete element method (DEM) simulations. We explicitly match a
pressure-sensitive non-local granular fluidity (NGF) model to observed flow
behavior. We find that shear bands for this type of suspension become extremely
broad under the low confining stresses from the almost density-matched fluid
particle mixture, while collapsing to a narrow shear zone under finite,
externally imposed compression levels. The DEM and NGF results match the
observations quantitatively, confirming the conjectured pressure sensitivity
for suspensions and its role in NGF. Our results indicate that pressure
sensitivity should be part of non-local flow rules to describe slow flows of
granular media. | Zohreh Farmani, Nazanin Ghods, Harkirat Singh, Jing Wang, Ralf Stannarius, Stefan Radl, David L. Henann, Joshua A. Dijksman | 2023-07-31T20:16:43Z | http://arxiv.org/abs/2308.00140v1 | # Pressure sensitivity in non-local flow behaviour of dense hydrogel particle suspensions
###### Abstract
Slowly sheared particulate media like sand and suspensions flow heterogeneously as they yield via shear bands where most of the strain is accumulated. Understanding shear band localization from microscopics is still a major challenge. One class of so-called non-local theories identified that the width of the shearing zone should depend on the stress field. We explicitly test this picture by using a uniquely stress-sensitive suspension while probing its flow behavior in a classic geometry in which shear bands can be well-tuned: the Split-Bottom Shear Cell (SBSC). The stress-sensitive suspension is composed of mildly polydisperse soft, slippery hydrogel spheres submersed in water. We measure their flow profiles and rheology while controlling the confinement stress via hydrostatic effects and compression. We determine the average angular velocity profiles in the quasi-static flow regime using Magnetic Resonance Imaging based particle image velocimetry (MRI-PIV) and discrete element method (DEM) simulations. We explicitly match a pressure-sensitive non-local granular fluidity (NGF) model to observed flow behavior. We find that shear bands for this type of suspension become extremely broad under the low confining stresses from the almost density-matched fluid particle mixture, while collapsing to a narrow shear zone under finite, externally imposed compression levels. The DEM and NGF results match the observations quantitatively, confirming the conjectured pressure sensitivity for suspensions and its role in the NGF model. Our results indicate that pressure sensitivity should be part of non-local flow rules to describe slow flows of granular media.
## I Main article
Granular materials are dense arrangements of particles that collectively can display solid- or liquid-like behavior. They are crucial ingredients in applications as diverse as geomechanics [1], food [4], battery assemblies [14], pharmaceuticals [29], and ceramics [11; 6]. Predicting how granular materials flow is thus important but challenging. Most studies on the flow behavior of granular materials have focused on simplified systems composed of rigid, dry granular materials [15; 24; 25; 16], where the effects of particle deformation, fluid lubrication, and interstitial fluids are negligible for the particle dynamics. Much has been learned about the physics of such rigid, dry granular materials over the past several decades, yet it remains to be seen how the results from these works generalize or connect to other interesting "granular systems". In particular, the influence of strong stress gradients and a wide variety of particle friction coefficients on the flow behavior has not been explored. This lack of understanding is a major roadblock to understanding the generality of certain successful flow modeling approaches for the broader collection of particulate matter, such as emulsions and foams [5]. Here, we use experiments and numerical simulations to show that dense, non-Brownian, nearly frictionless, soft particle suspensions can be effectively modeled with a common class of non-local flow models, in which stress sensitivity is built into the non-local length scale governing the fluidization of the material. This flow modeling framework is applicable to extremely slow flows, where contacts are long-lived and dominate the material response.
This so-called "quasi-static" flow regime shows two separate features: (1) rate independence of the driving stress and (2) broad shear bands, often modeled as non-local effects [21; 2]. Non-locality is conjectured to come from the idea that in particle systems with non-uniform steady flow, stress fluctuations are induced by distant rearrangements, meaning that flow in one region can fluidize distant regions. Even though such non-locality had been used before to model plasticity in other materials [5; 19], it took some time to take non-locality into account in models for granular flows [26; 3; 21]. Non-locality is modeled by using either soil plasticity models [7] or non-local granular fluidity (NGF) models [18; 19]. The NGF model has shown success in quantitative predictions of quasi-static, dense granular flow in certain flow geometries for stiff frictional glass beads [17; 20]. However, how generally applicable is this model to other dense granular systems?
To test the applicability of non-local modeling approaches to such challenging granular systems, we probe the flow dynamics of a dense suspension of soft, frictionless hydrogel particles. Such a suspension can be
made by submersing almost buoyancy matched, swelled hydrogel spheres in water, reducing the hydrostatic particle pressure gradient dramatically, which makes the suspension very sensitive to external pressure effects. Additionally, hydrogel suspensions are composed of soft, frictionless spheres. To create a stringent test for the continuum modeling, we perform experiments in the well-known split-bottom shear cell (SBSC) geometry, which creates non-trivial azimuthally symmetric two-dimensional flow fields. This geometry produces wide shear bands away from the sidewalls of the container [9]. We use Magnetic Resonance Imaging (MRI) as a tomographic technique to characterize the shape of the shear bands. MRI applied to hydrogels provides us with structural information from the bulk of the particle system at sub-mm resolution [28]. MRI can be combined with Particle Image Velocimetry (PIV) to obtain flow profiles. The experimentally obtained profiles are compared to DEM simulations to confirm stress dependence. We then test the capability of the NGF model in predicting the experimentally and numerically observed bulk flows.
_Experiments and Analysis --_ To study the shear behavior of dense collections of soft, frictionless hydrogel particles using MRI, we use 2-3 mm hydrogel spheres that are swelled in water and fully submersed thereafter in water. Detailed information on the materials and methods used in the MRI measurements can be found in the Supplementary Information (SI) and elsewhere [28]. We designed and used a SBSC to shear/compress the hydrogel suspension with a constant rotation rate (\(\omega_{0}\)) and added confining pressure (\(P\)). In the shear cell, a layer of material of depth \(H\) is driven by the rotation of an inner disc of radius \(R_{s}\), as shown in Fig. 1a. This geometry is well studied [12; 30; 13] and produces robust and wide shear zones for dry granular flows and suspensions [10]. The ratio of the filling height to the radius of the rotating bottom disk, \(H/R_{s}\), controls the geometry of the shear zone. As an extra parameter, we use the confining pressure \(P\) exerted by the top plate on the granular phase of the suspension. To extract velocity profiles, the 3D tomograms from MRI were divided into rings with constant depth \(z\) below the free surface and constant distance \(r\) from the central axis. We used a cross-correlation method on said domain to extract the displacement field [31], which may be used to calculate the angular velocity (\(\omega\)) at certain \(z\) and \(r\). The data are averaged over 5 shear steps for each \(H\), removing an initial transient of 2 shear steps. The angular velocity imposed by the moving boundary disk \(\omega_{0}\) can be independently measured from the same cross-correlation analysis of displacements of tracer hydrogel beads glued to the underside of the disk, which are also imaged in the same tomograms at each shear step. To gain deeper insight on how the micro-properties of the submersed hydrogel suspension can affect the flow behavior, the experiments are replicated by discrete element method-based simulations (DEM). More details on DEM methods can be found in the SI. The non-local granular fluidity (NGF) model has shown success in quantitatively predicting granular rheology in arbitrary geometries [17; 21; 23], especially in the split-bottom flow configuration [17; 22]. However, in previous studies on split-bottom flow, the model has primarily been tested against surface flow field data; here, we enable
Figure 1: (a) Design of the SBSC for compression-shear measurements. Here, we use a porous plate to apply compression to the packing. We set the size of the holes in this plate to less than a particle diameter so that the particles cannot move out. We also keep the gap between the central shaft and the porous plate less than a particle diameter to prevent particles from leaving the confined volume. Non-dimensional, steady-state angular velocity fields for dense hydrogel suspensions in a SBSC with no confining pressure \(P=0\) and filling heights of \(H=15\), 24, and 50 mm from (b) MRI measurements, (c) DEM simulation, and (d) NGF continuum modeling.
comparison of NGF model predictions with experimental bulk flow measurements. We can thus use this test bed to assess the capability of the NGF model in predicting bulk flows of soft hydrogels both without and with confining pressure. A summary of the NGF modeling approach can be found in the SI, including the process utilized to determine the parameters of the NGF model for a dense system of soft hydrogel particles.
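As a schematic illustration of the cross-correlation analysis used above to extract displacements from the ring-shaped \((r,z)\) domains, the angular displacement of a single ring between two shear steps could be estimated as sketched below (this simplified one-dimensional circular cross-correlation stands in for the full MRI-PIV pipeline, whose windowing and averaging details are described in the SI):

```python
import numpy as np

def angular_step(ring_t0, ring_t1, dtheta):
    """Estimate the angular displacement of one (r, z) ring between two shear
    steps from the circular cross-correlation of its azimuthal intensity
    profiles.

    ring_t0, ring_t1 : 1D image-intensity profiles sampled along the azimuth
    dtheta           : azimuthal sampling interval in radians
    """
    a = ring_t0 - ring_t0.mean()
    b = ring_t1 - ring_t1.mean()
    # circular cross-correlation via FFT; the peak marks the azimuthal shift
    corr = np.fft.irfft(np.fft.rfft(b) * np.conj(np.fft.rfft(a)), n=a.size)
    shift = int(np.argmax(corr))
    if shift > a.size // 2:              # map to a signed displacement
        shift -= a.size
    return shift * dtheta                # angular displacement of this ring

# omega(r, z) then follows by dividing by the time per shear step and
# normalizing by the driving rate omega_0 measured from the tracer beads.
```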
_Results --_ MRI measurements provide time-averaged azimuthal velocity fields \(v_{\theta}(r,z)\) with high spatial resolution by PIV, with \(z\) the distance from the top surface. These profiles are then used to calculate the angular velocity fields \(\omega(r,z)=v_{\theta}(r,z)/r\). Non-dimensionalizing \(\omega(r,z)\) by the angular velocity of the bottom plate \(\omega_{0}\) gives fields that vary between zero and one. We determine the average angular velocity via a standard autocorrelation analysis of azimuthal image intensity profiles in consecutive image stacks. Figure 1b shows the angular velocity fields in a dense hydrogel suspension from the MRI measurements for three filling heights. Similarly, we use DEM and NGF methods to produce flow profiles in the same domains, as shown in Figs. 1c and d. In Figs. 1b-d, we observe the shear band marked as a white region between the moving zone (red) and static zone (blue). The shear band, in general, is broader than the typical shear band in frictional particles for all \(H\)[10]. A narrower shear band is observed at low \(H\). The shear band reaches the surface up to \(H\approx 0.59R_{s}\), but then, as \(H\) is increased further, the system moves towards a dome structure. It is noteworthy to observe that a bulk-confined dome does not appear, and even at \(H\approx R_{s}\), there are small displacements close to the surface, although with a reduced rate. We confirm observing wide shear bands using DEM (Fig. 1c) and NGF modeling (Fig. 1d).
To go beyond a qualitative spatial comparison of the flow fields generated by different methods, Fig. 2a illustrates the quantitative angular velocity profiles extracted from the mid-plane \(z=H/2\) (Fig. 2a) and at a constant radial position \(r=0.8R_{s}\) (Fig. 2b) of the 2D profiles of Fig. 1. We observe exceptionally wide shear bands which do not reach \(\omega/\omega_{0}=1\) along the mid-plane, even in shallow layers at the lowest height of \(H=15\,\mathrm{mm}\). By increasing \(H\), the shear band becomes broader and moves inwards from a vertical to a horizontal position. From the shallow layer to the deep layer, \(\omega/\omega_{0}\) at \(z=H/2\) and as \(r\to 0\) decreases from \(\approx 0.8\) to \(0.3\); however, it never reaches \(0\). Figure 2 also illustrates that the predictions of the DEM and NGF model are consistent with the experimental data for four filling heights \(H/R_{s}\). DEM can predict the shallow layers well, while the NGF model works well for deeper layers of \(H/R_{s}=0.95,1.05\). However, there is \(\approx 0.15\) difference in the ratio \(\omega/\omega_{0}\) predicted by the NGF model in shallow layers of \(H/R_{s}=0.42,0.59\) as \(r\to 0\) (Fig. 2a). When we look at vertical profiles at a constant radius (Fig. 2b), DEM matches the experiments well, and while NGF model predictions match experiments for deep layers, some discrepancy remains for shallow layers.
_Confining pressure effect --_ We observed exceptionally wide shear bands for unconfined, dense hydrogel suspensions, where the only relevant pressure scale comes from the weight of the particles. This buoyancy-compensated stress field is on the order of \(1\,\mathrm{Pa}\) or less [8]. This weak intrinsic pressure scale allows us to further test the pressure sensitivity of the flow profiles. We investigate a confined flow structure where we add a stress boundary condition \(P\) to the top surface to observe how this affects the shear band structure, via MRI, DEM and NGF methods.
We performed sets of MRI measurements for filling heights of \(H=50,31\), and \(20\,\mathrm{mm}\). We focus now on the results from the \(H=50\,\mathrm{mm}\) case, noting that for this case the fixed compression plate least affects the flow structure, as the surface velocity at this \(H/R_{s}\) is already small. The general trends observed are shown in Fig. 3a. After performing step-wise shear without compressing the packing, we do see the shear band evolve to the surface. However, by applying compression and then performing step-wise shear, the dome becomes much thinner, and flow is more confined to the rotating plate. Experimental flow profiles for deep layers of \(H=50\,\mathrm{mm}\) from zero to \(93.5\,\mathrm{Pa}\) applied pressure are shown in Fig. 3b. DEM results with the same top boundary stress show the same trends as shown in Fig. 3c. Non-local modeling results are shown in Fig. 3d, in which the compressive normal traction \(P\) was applied to the top surface. The flow profiles for \(H=20\) and \(31\,\mathrm{mm}\) show qualitatively consistent behavior and can be found in the SI. Our results suggest that adding a small pressure of \(\approx 20\,\mathrm{Pa}\) is enough to significantly affect the flow profile. By increasing the confining pressure up to \(\approx 93.5\,\mathrm{Pa}\), the dome becomes thinner and thinner. In the experiments, the dome reaches the point where only one layer of particles moves with the speed of the rotating disk \(\omega_{0}\). This shows a significant effect of the pressure on the width of the shear bands in the SBSC in quasi-static flows, and the decrease in shear-band width with increasing confining pressure is captured by both
Figure 2: Quantitative comparisons between experimental measurements (circles), DEM simulations (dotted lines), and NGF model predictions (solid lines) for flow in the SBSC with \(H/R_{s}\)=0.42, 0.59, 0.95, and 1.05. (a) Mid-plane (\(z=H/2\)) angular velocity comparison for four different \(H/R_{s}\) ratios. (b) Corresponding angular velocity comparisons as a function of \(z\) at \(r/R_{s}=0.8\).
the DEM and NGF modeling.
_Discussion --_ To understand how local stresses can set the width of the shear band, one needs to consider several mechanisms. The hydrogel particle-fluid mixture is nearly density-matched; the density of the hydrogel is a few percent higher than that of water so the buoyancy almost balances gravity. As the hydrostatic pressure becomes smaller, the transmission of shear stresses is also reduced in the vertical direction. One could therefore expect shear bands to become narrower. However, low contact friction suspensions are known to still have a finite effective friction coefficient, due to anisotropy effects [27]. The effective static yield value \(\mu_{s}\) required for the NGF model is therefore bound to stay much larger than the contact friction coefficient. It is therefore not immediately obvious which microscopic mechanism sets the width of the shear band. The additional stress provided by the confinement is likely forcing the local ratio of shear to normal stress to be much lower than \(\mu_{s}\) in all of the SBSC except the region just above the rotating plate and hence further away from the divergence of the cooperativity length \(\xi\) in the NGF model, reducing the shear band width significantly.
_Conclusion --_ We have performed experiments as well as discrete element method and continuum-based modeling on the flow behavior of near-density matched, soft frictionless suspensions. We have shown that in the rate-independent regime, NGF modeling can still capture both the experimental observations and the predictions of DEM, extending the validity of non-local modeling into materials in which the existence of propagation of fluidization is not immediately obvious. We have additionally confirmed that the pressure sensitivity of the materials is as predicted in the DEM and NGF modeling approaches, at least for the flow structure in the rate-independent limit. Our work suggests that non-local models generally require stress-dependent closure equations that determine the length scale of these gradient-based models.
_Acknowledgements --_ ZF, NG, SR, JW, RS, JAD acknowledge funding received from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement CALIPER No. 812638. Support from NAWI Graz by providing access to its HPC computing resource dcluster.tugraz.at is acknowledged by TU Graz researchers. The authors acknowledge Thomas Gerlach and Oliver Speck for providing the MRI machine time, support and technical discussions.
|
2309.06618 | Multi-dimensional Fusion and Consistency for Semi-supervised Medical
Image Segmentation | In this paper, we introduce a novel semi-supervised learning framework
tailored for medical image segmentation. Central to our approach is the
innovative Multi-scale Text-aware ViT-CNN Fusion scheme. This scheme adeptly
combines the strengths of both ViTs and CNNs, capitalizing on the unique
advantages of both architectures as well as the complementary information in
vision-language modalities. Further enriching our framework, we propose the
Multi-Axis Consistency framework for generating robust pseudo labels, thereby
enhancing the semisupervised learning process. Our extensive experiments on
several widelyused datasets unequivocally demonstrate the efficacy of our
approach. | Yixing Lu, Zhaoxin Fan, Min Xu | 2023-09-12T22:21:14Z | http://arxiv.org/abs/2309.06618v3 | # Multi-dimensional Fusion and Consistency for Semi-supervised Medical Image Segmentation
###### Abstract
In this paper, we introduce a novel semi-supervised learning framework tailored for medical image segmentation. Central to our approach is the innovative Multi-scale Text-aware ViT-CNN Fusion scheme. This scheme adeptly combines the strengths of both ViTs and CNNs, capitalizing on the unique advantages of both architectures as well as the complementary information in vision-language modalities. Further enriching our framework, we propose the Multi-Axis Consistency framework for generating robust pseudo labels, thereby enhancing the semi-supervised learning process. Our extensive experiments on several widely-used datasets unequivocally demonstrate the efficacy of our approach.
Keywords: Medical image segmentation, Semi-supervised learning, ViT-CNN fusion, Multi-axis consistency.
## 1 Introduction
Medical image segmentation is a pivotal and intricate process within the realm of intelligent diagnosis, entailing the extraction of regions of interest within medical imagery. This task is of paramount importance for enabling precise diagnosis and tailored treatment. Over recent years, Convolutional Neural Networks (CNNs) [15, 12, 4] and Vision Transformers (ViTs) [5, 32, 11], both endowed with a U-shaped architecture, have witnessed significant advances in the domain of medical image segmentation.
Medical image segmentation literature mainly employs pretrained Convolutional Neural Networks (CNNs) [15] or Transformers [5]. The benefits of using both CNNs and Vision Transformers (ViTs) haven't been thoroughly explored. Interestingly, CNNs and ViTs seem to complement each other for medical image understanding. CNNs excel in local feature recognition [3], while ViTs are superior in comprehending long-range dependencies [14, 7]. For medical image segmentation, combining these strengths is crucial to understanding the organ and its interrelations with others. This raises the question: _Can we fuse the strengths of CNNs and ViTs into a single framework for medical image segmentation?_ An additional noteworthy observation is that both Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) tend to
necessitate extensive quantities of annotated data for effective training. While this may be manageable in the realm of natural image segmentation, it poses a formidable hurdle in the context of medical image segmentation, where the process of annotation is both laborious and costly. As a result, a second question emerges: _Is it feasible to reduce the model's dependency on annotated medical image-mask pairs without undermining the performance of the ViT/CNN?_
In response to the aforementioned questions, we introduce a novel semi-supervised learning framework for medical image segmentation in this study. We first propose a simple yet efficacious Multi-scale ViT-CNN Fusion scheme. The underlying principle of this approach is that the integration of ViT and CNN features can equip the model with the ability to capture both intricate local details and extensive global long-range dependency information. Additionally, given that both the CNN and ViT are pretrained on large-scale networks, they can retain both abstract natural features and domain-specific medical features during fusion, thereby further enhancing the segmentation task. Moreover, inspired by vision-language models [30, 24, 36, 39], and considering the relative ease of obtaining text descriptions for medical images, we introduce a text-aware language enhancement mode to further enrich the learned features. This effectively addresses the first question. Subsequently, we incorporate a Multi-Axis Consistency framework in our study to extend our approach to scenarios where annotated labels are limited. Within this framework, we unify and formulate multiple consistency regularizations under a single framework, dubbed Multi-AXIs COnsistency (MaxiCo). This framework combines intra-model, inter-model, and temporal consistency regularizations to generate robust probability-aware pseudo-labels, thereby enabling the use of a large corpus of unlabeled data for semi-supervised network training. Furthermore, we design a voting mechanism within this module, whereby each intermediate output can contribute to the final pseudo-label. This mechanism further enhances the trustworthiness of pseudo-labels and bolsters the final model's performance, therefore providing a satisfactory answer to the second question.
To deliver a comprehensive demonstration of the efficacy of our proposed method, we have undertaken extensive experimentation using the MoNuSeg [18] dataset and QaTa-COV19 [9] dataset. The empirical results obtained from these experiments substantiate that our method establishes a new benchmark in fully-supervised settings, outperforming existing state-of-the-art methodologies. Moreover, within semi-supervised scenarios, our strategy shows remarkable superiority over other leading-edge techniques.
Our contribution can be summarized as: 1) We pioneer a semi-supervised framework that harnesses the power of textual information to support fused ViT-CNN networks for medical image segmentation, representing a unique approach to this problem. 2) We propose a novel Multi-scale Text-aware ViT-CNN Fusion methodology that adroitly amalgamates CNNs and ViTs to boost segmentation accuracy. 3) We introduce a novel Multi-Axis Consistency Learning module that
capitalizes on consistency regularizations to generate reliable pseudo-labels for semi-supervised learning, effectively addressing the issue of data scarcity.
## 2 Related Work
**Transformers in Medical Image Segmentation.** The success of Vision Transformer (ViT) [10] in various computer vision tasks has led to its integration into medical image segmentation [41, 11, 38, 42, 22]. Some studies use transformers for image representation [14], while others propose hybrid encoders combining transformers and convolutional neural networks (CNNs) [7]. Cao et al. [5] proposed a pure transformer network replacing convolutional layers with Swin Transformer blocks [23] and the traditional Unet skip-connection with a transformer-based channel-attention module [34]. However, these methods are typically trained in a fully-supervised manner, which may be impractical due to the scarcity of annotated medical data. To address this, we introduce a semi-supervised approach that fuses ViT-CNN networks for medical image segmentation, aiming to overcome the challenge of limited annotated data.
**Semi-Supervised Medical Image Segmentation.** In light of the challenge posed by the dearth of annotated data in medical image segmentation, semi-supervised learning has come to the fore as a promising solution [40, 33, 37, 35]. Predominant strategies for semi-supervised learning encompass pseudo labeling [8], deep co-training [43], and entropy minimization [13]. In our work, we adopt a consistency learning framework to generate pseudo labels for unlabeled images. Several versions of consistency regularization exist in the literature, including temporal consistency [19], model-level consistency [25], and pyramid consistency [26]. However, most of these methods depend on only a single type of regularization, which limits the power of the model. In contrast, our approach amalgamates multiple consistency regularizations and employs a voting mechanism to produce more robust pseudo labels, which leads to better performance.
**Vision-Language Fusion for Dense Predictions.** In recent years, the fusion of vision and language in large-scale pretraining has garnered significant attention. The CLIP model [29] showcases this, demonstrating impressive transfer learning across multiple datasets. Building on this, researchers have explored fine-tuning the CLIP model for dense prediction tasks [30, 20, 36], framing it as a pixel-text matching problem. Vision-language models also enable zero-shot inference, bridging the gap between seen and unseen categories [39]. Some research has further explored the potential of visual prompts and their interpolation with text prompts during training [24]. In this paper, we use textual information to enhance ViT-CNN training for medical image segmentation, showcasing a novel application of vision-language fine-tuning.
## 3 Methodology
### Overview
Given an image \(I\in\mathcal{X}\), where \(\mathcal{X}\) represents the space of all possible medical images, and a text input \(T\in\mathcal{T}\), where \(\mathcal{T}\) is the space of all possible text inputs (e.g., medical notes or labels), the task of medical image segmentation in our study is to learn a mapping function \(F_{\theta}:\mathcal{X}\times\mathcal{T}\to\mathcal{Y}\):

\[F_{\theta}:\mathcal{X}\times\mathcal{T}\to\mathcal{Y} \tag{1}\]

Where \(\mathcal{Y}\) represents the segmentation masks corresponding to the input medical image, and \(\theta\) denotes the parameters of our model. The goal is to train the model such that the mapping function \(F_{\theta}\) can accurately predict the segmentation mask \(y\in\mathcal{Y}\) for any given input image and text \((I,T)\).
To address the first challenge, our Multi-scale Text-aware ViT-CNN Fusion scheme integrates a pretrained ViT and CNN, incorporating text features for increased prediction accuracy. We perform vision-language pretraining to obtain vision and text features, aligning them to formulate ViT features. These are then fused with CNN features at various resolutions, enabling the efficient use of local and global features.
To facilitate semi-supervised training, we introduce a Multi-Axis Consistency framework to generate pseudo labels, leveraging inter-model, multi-scale intra-model, and temporal consistency. Our network makes multiple predictions in a single pass, generating probabilistic pseudo labels via a voting mechanism, supporting semi-supervised training.
### Multi-scale Text-aware ViT-CNN Fusion
In this section, we present a novel architectural design named Multi-scale Text-aware ViT-CNN Fusion, as depicted in Fig. 1. This scheme is primarily composed of three major components: _Dense Vision-Language Alignment module_, _Multi-scale ViT-CNN Fusion module_, and a _Supervised Loss function_ for joint training.
Figure 1: The illustration of Multi-scale Text-aware ViT-CNN Fusion.
The Dense Vision-Language Alignment module is responsible for aligning the vision and text features into a common embedding space. By performing this alignment, we can effectively exploit the complementary information from both modalities to enhance the feature representations. The second component, the Multi-scale ViT-CNN Fusion module, facilitates the fusion of the features extracted by the ViT and the CNN. This fusion is carried out at multiple scales, allowing the model to capture abstract features, domain-specific features, local details, and global long-range dependencies at different resolutions. Finally, the Multi-scale Supervision Loss function supervises the predictions at every scale: by optimizing this loss, the network learns to predict segmentation masks in a multi-scale and progressive manner. Next, we introduce them in detail.
**Dense Vision-Language Alignment module.** In our work, we incorporate both image and text information as inputs for segmentation. The inclusion of text allows us to capture the strengths of transformers more effectively and bolsters the fusion of Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs). This approach leverages contextual cues from text to enhance segmentation precision. To align the image and text features, we adopt a progressive approach. Visual features are extracted from a sequence of layers \((3,6,9,12)\) of a pretrained visual encoder, forming a set \(L=\{x_{1},x_{2},...,x_{\ell-1},x_{\ell}\}\), where \(x_{1}\) and \(x_{\ell}\) represent the features from the shallowest and deepest layers, respectively. We obtain text embeddings, denoted by \(y\), from a pretrained clinical text encoder.
We use transformer layers with skip connections to compactly represent visual features, as shown in the equation below:
\[X_{i}=\begin{cases}\textsf{TransLayer}_{i}(W_{i}^{x}x_{i}),&i=\ell\\ \textsf{TransLayer}_{i}(W_{i}^{x}x_{i}+X_{i+1}),&i\in L-\{\ell\}\end{cases} \tag{2}\]
Here, \(W_{i}^{x}\) is a linear layer for dimension reduction, and \(\textsf{TransLayer}_{i}\) denotes a Transformer layer. We reduce the dimension of the text embeddings and transfer them to different layers using simple MLP blocks:
\[Y_{i}=\begin{cases}W_{i}^{y}y,&i=\ell\\ W_{i}^{y}Y_{i+1},&i\in L-\{\ell\}\end{cases} \tag{3}\]
Here, the \(W_{i}^{y}\) in the first case (\(i=\ell\)) signifies a linear layer for dimension reduction, while in the remaining cases it denotes an MLP block that transfers the text embeddings across layers.
\[Z_{i}=W_{i}^{X}X_{i}\odot W_{i}^{Y}Y_{i},\quad i\in L \tag{4}\]
In the equation above, \(\odot\) represents element-wise multiplication, and \(W_{i}^{X}\) and \(W_{i}^{Y}\) are tensor reshape operations for \(X_{i}\) and \(Y_{i}\), respectively.
**Multi-scale ViT-CNN Fusion module.** ViTs and CNNs each have their unique strengths in image analysis tasks. ViTs excel in capturing global dependencies, while CNNs are particularly adept at extracting local features. However,
when dealing with complex tasks such as medical image segmentation, a combination of these two can be beneficial, leveraging global contextual understanding and local feature extraction. Addressing this, we propose a dense fusion of ViT and CNN features at different resolutions. This approach is designed to enhance local interactions and preserve global knowledge. Our fusion method follows two guiding principles: 1) it should improve model performance, and 2) it should maintain the robustness of each individual feature to avoid over-dependence on either.
We begin with a non-parametric fusion method, where the fusion parameter \(\beta\) is uniformly sampled from \([0,1]\). A Unet CNN processes the input \(I\in\mathbb{R}^{H\times W\times 3}\), projecting it initially to \(C_{1}\) and then applying \((N-1)\) down/up-sampling operations to yield multi-scale features \(F_{j}^{CNN}\in\mathbb{R}^{H_{j}\times W_{j}\times C_{j}}\) at \(N\) different resolutions (\(N=4\) in our case).
ViT features \(Z_{i}\) are projected to match the size of the corresponding CNN features \(F_{j}^{CNN}\), resulting in the ViT features \(F_{j}^{ViT}\). These are then fused with the CNN features as follows:
\[F_{j}=\beta F_{j}^{CNN}+(1-\beta)F_{j}^{ViT},\quad j=1,2,...,N \tag{5}\]
\(\beta\) is sampled from \([r_{1},r_{2}](0\leq r_{1}<r_{2}\leq 1)\), and \(F_{j}\) is the fused feature map at level \(j\).
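A minimal sketch of this non-parametric fusion is shown below (the behaviour at evaluation time, where a fixed mid-point weight is used, is an assumption of the example, since Eq. (5) only specifies the sampling during training):

```python
import torch

def random_fuse(f_cnn, f_vit, r1=0.0, r2=1.0, training=True):
    """Non-parametric ViT-CNN fusion of Eq. (5): a blending weight beta is drawn
    uniformly from [r1, r2] at every training step; at inference a fixed
    mid-point weight is one natural choice."""
    if training:
        beta = torch.empty(1, device=f_cnn.device).uniform_(r1, r2)
    else:
        beta = torch.tensor(0.5 * (r1 + r2), device=f_cnn.device)
    return beta * f_cnn + (1.0 - beta) * f_vit
```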
Beyond non-parametric fusion, we explore parametric fusion, employing a channel attention mechanism [1] at each scale. This mechanism is defined as:
\[\begin{split}&\hat{X}=W\cdot\textsf{Attention}(\hat{Q},\hat{K}, \hat{V})\\ &\textsf{Attention}(\hat{Q},\hat{K},\hat{V})=\hat{V}\cdot \textsf{Softmax}(\frac{\hat{K}\cdot\hat{Q}}{\alpha})\end{split} \tag{6}\]
Here, \(\hat{X}\in\mathbb{R}^{H\times W\times C}\) denotes the output feature map; \(\hat{Q}\in\mathbb{R}^{HW\times C},\hat{K}\in\mathbb{R}^{C\times HW},\hat{V}\in \mathbb{R}^{HW\times C}\) are tensors reshaped from \(Q,K,V\), respectively; \(W\) is a \(1\times 1\) convolution for output projection; \(\alpha\) is a learnable parameter to control the magnitude of \(\hat{K}\cdot\hat{Q}\). The definition of \(Q,K,V\) is \(Q,K,V=W^{Q}X,W^{K}X,W^{V}X\) and \(X=\textsf{LayerNorm}([F^{CNN},F^{VIT}])\), where \([\cdot,\cdot]\) denotes feature concatenation and \(W^{(\cdot)}\) denotes point-wise \(1\times 1\) convolutions.
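The channel-attention fusion of Eq. (6) could be sketched in PyTorch as follows (the normalization layer used as a stand-in for LayerNorm over channels and the softmax axis are assumptions of the example):

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Sketch of the parametric fusion of Eq. (6): a channel-attention block
    applied to the concatenated ViT and CNN feature maps at one scale."""

    def __init__(self, channels):
        super().__init__()
        self.norm = nn.GroupNorm(1, 2 * channels)          # stand-in for LayerNorm over channels
        self.to_q = nn.Conv2d(2 * channels, channels, 1)   # point-wise 1x1 convolutions W^Q, W^K, W^V
        self.to_k = nn.Conv2d(2 * channels, channels, 1)
        self.to_v = nn.Conv2d(2 * channels, channels, 1)
        self.proj = nn.Conv2d(channels, channels, 1)       # output projection W
        self.alpha = nn.Parameter(torch.ones(1))           # learnable magnitude control

    def forward(self, f_cnn, f_vit):
        b, c, h, w = f_cnn.shape
        x = self.norm(torch.cat([f_cnn, f_vit], dim=1))
        q = self.to_q(x).flatten(2)                        # (B, C, HW)
        k = self.to_k(x).flatten(2)
        v = self.to_v(x).flatten(2)
        attn = torch.softmax(k @ q.transpose(1, 2) / self.alpha, dim=-1)   # (B, C, C), K.Q / alpha
        out = (v.transpose(1, 2) @ attn).transpose(1, 2)   # V . Softmax(...), back to (B, C, HW)
        return self.proj(out.reshape(b, c, h, w))
```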
**Multi-scale Supervision Loss Function.** In our proposed network architecture that combines ViT and CNN in a multi-scale manner, multiple predictions are generated in a single forward pass. However, relying exclusively on the final output for training can lead to convergence issues. To address this, we propose an end-to-end optimization of multiple predictions, which we term the Multi-scale Supervision Loss function.
We denote the network's final prediction as \(P\) and its multi-scale predictions as \(S=\{Q_{1},Q_{2},...,Q_{|S|}\}\), where \(Q_{s}\) represents the prediction at the \(s\)-th scale. For simplicity, we exclude the final prediction \(P\) from the set \(S\). The CNN branch prediction is represented by \(R\). Then, we utilize the multi-scale predictions \(S\) together with \(P\) and the CNN output \(R\) to compute the loss. The Multi-scale Supervision Loss function is formulated as follows:
\[\mathcal{L}_{ms}=\alpha_{1}\mathcal{L}(P,T)+\alpha_{2}\mathcal{L}(R,T)+\alpha_{3 }\frac{1}{|S|}\sum_{s=1}^{|S|}\mathcal{L}(Q_{s},T) \tag{7}\]
Here, \(\mathcal{L}\) refers to the average of Dice loss and Cross Entropy loss. \(T\) denotes the ground truth label, and \(|S|\) is the cardinality of set \(S\). The weights \(\alpha_{1},\alpha_{2}\), and \(\alpha_{3}\) are used to balance each term in the loss function, and are set to \(\alpha_{1}=\alpha_{2}=1\) and \(\alpha_{3}=0.6\) for all our experiments.
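A sketch of Eq. (7) for the binary-segmentation case is given below (the binary Dice/cross-entropy formulation and the bilinear upsampling of the side outputs to the target resolution are implementation assumptions of the example):

```python
import torch
import torch.nn.functional as F

def dice_ce(pred, target, eps=1e-6):
    """Average of Dice loss and cross-entropy for binary masks; pred holds
    logits of shape (B, 1, H, W) and target is a {0,1} mask of the same shape."""
    prob = torch.sigmoid(pred)
    inter = (prob * target).sum(dim=(1, 2, 3))
    dice = 1.0 - (2 * inter + eps) / (prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) + eps)
    ce = F.binary_cross_entropy_with_logits(pred, target.float(), reduction="none").mean(dim=(1, 2, 3))
    return 0.5 * (dice + ce).mean()

def multiscale_supervised_loss(P, R, S, target, a1=1.0, a2=1.0, a3=0.6):
    """Eq. (7): final prediction P, CNN-branch prediction R, and the list S of
    intermediate multi-scale predictions (upsampled here to the target size)."""
    loss = a1 * dice_ce(P, target) + a2 * dice_ce(R, target)
    side = [dice_ce(F.interpolate(q, size=target.shape[-2:], mode="bilinear",
                                  align_corners=False), target) for q in S]
    return loss + a3 * sum(side) / len(S)
```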
### Multi-Axis Consistency Framework
In our pursuit to accomplish semi-supervised learning, we present a novel Multi-Axis Consistency framework as illustrated in Fig. 2. This all-encompassing framework is made up of three main components: the _Multi-Axis Consistency Soft-Hard Label Generation Module, the Multi-Axis Consistency Voting Mechanism, and the Multi-scale Unsupervised Loss Function_. The Soft-Hard Label Generation Module generates robust labels, taking into consideration intra-model and inter-model consistency, as well as temporal consistency. The Consistency Voting Mechanism selects the most probable predictions across different models and scales, thereby enhancing the robustness and accuracy of the learning process. The Multi-scale Unsupervised Loss Function provides a metric for model optimization in scenarios where ground truth labels are absent, promoting the extraction of valuable features from unlabeled data. Next, we introduce them in detail.
**Multi-Axis Consistency Soft-Hard Label Generation Module.** This innovative module is designed based on the Coordinate Systems we established to model different consistency paradigms, as depicted in Fig. 2 (a). We represent the input as \(x\) and the consistency condition as \(\theta=[m,s,t]^{T}\), indicating that the output is generated by model \(m\), scale \(s\), and training iteration \(t\). The module's objective is to minimize the distance between two outputs under consistency
Figure 2: The illustration of Multi-Axis Consistency Framework.
regularization from multiple axes. The module achieves this by applying an augmentation \(\sigma\) to the input \(x\) and generating a modified input \(\hat{x}\) while ensuring a small consistency relaxation \(\epsilon\). This process is expressed as follows:
\[\begin{split}\min&\|f(x,\theta)-f(\hat{x},\theta+ \epsilon)\|\\ s.t.\hat{x}&=\sigma(x),\|\epsilon\|\to 0 \end{split} \tag{8}\]
Then, the module generates robust labels by predicting multiple segmentation maps \(P_{\theta}\in\mathbb{R}^{H\times W\times K},\theta\in\Theta\), where \(K\) denotes the number of segmentation classes. These are the soft labels, which the module then converts into binary hard labels using a threshold. Next, we describe the whole process in our Multi-Axis Consistency Voting Mechanism.
**Multi-Axis Consistency Voting Mechanism.** The Voting Mechanism is implemented based on the Semi-Supervised Learning strategy, illustrated in Fig. 2 (b). This mechanism samples a subset of the predicted segmentation maps within the consistency relaxation and utilizes them to generate a probabilistic pseudo-label. It leverages the outputs from the Vision Transformer (ViT), Convolutional Neural Network (CNN), and multi-scale outputs to collaboratively vote for the most probable pseudo-label. The pseudo-label includes the probability of each pixel belonging to a specific class.
To achieve this, the mechanism first predicts multiple segmentation maps, which can be considered as "soft labels" indicating the probability of each class. These soft labels are then converted into binary "hard labels" using a threshold of 0.5, as shown in the first part of the equation.
\[M_{\theta}(h,w,k)=\left\{\begin{aligned} & 1\text{ for }P_{\theta}(h,w,k)\geq 0.5\\ & 0\text{ for }P_{\theta}(h,w,k)<0.5\end{aligned}\right. \tag{9}\]
Here, \(M_{\theta}(h,w,k)\) denotes the binary hard label of pixel \((h,w)\) for class \(k\) under condition \(\theta\) and \(P_{\theta}(h,w,k)\) represents the soft label corresponding to the same. The final pseudo-label \(M_{pseu}\) is then generated by taking the average of these binary hard labels across all conditions \(\theta\in\Theta\), as expressed in the second part of the equation.
\[M_{pseu}=\frac{1}{|\Theta|}\sum_{\theta\in\Theta}M_{\theta} \tag{10}\]
In this equation, \(M_{pseu}\) represents the final probabilistic pseudo-label and \(|\Theta|\) denotes the cardinality of set \(\Theta\). In this way, the Multi-Axis Consistency Voting Mechanism generates robust probabilistic pseudo-labels that reflect the consensus among different models (ViT, CNN) and multi-scale outputs, embodying the concept of multi-axis consistency.
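The thresholding and voting of Eqs. (9)-(10) amount to the following short routine (which predictions are passed in, e.g. whether earlier-iteration outputs are stored, is left to the caller):

```python
import torch

@torch.no_grad()
def vote_pseudo_label(prob_maps, threshold=0.5):
    """Eqs. (9)-(10): threshold every contributing soft prediction into a hard
    mask and average the hard masks into a probabilistic pseudo-label.

    prob_maps: iterable of soft predictions P_theta, each of shape (B, K, H, W)
               with values in [0, 1] (ViT/CNN branches, multiple scales and,
               if stored, earlier training iterations).
    """
    hard = [(p >= threshold).float() for p in prob_maps]
    return torch.stack(hard, dim=0).mean(dim=0)   # M_pseu, values in [0, 1]
```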
**Multi-scale Unsupervised Loss Function.** The unsupervised loss function is incorporated in our semi-supervised training via an unsupervised loss term,
as is shown in Fig. 2 (c). For unlabeled images, the function first generates pseudo-labels according to multiple network outputs. Multiple outputs from the current training iteration as well as previous iterations all contribute to the generation of the pseudo-label. The function aims to minimize the distance between contributors from the current training iteration to the pseudo-label:
\[\mathcal{L}_{unsup}=\frac{1}{|\Theta|}\sum_{\theta\in\Theta}\mathcal{L}(P_{ \theta},M_{pseu}) \tag{11}\]
Here, \(\mathcal{L}\) represents the average of Dice loss and Cross-Entropy loss. The final loss for semi-supervised learning \(\mathcal{L}_{final}\) is represented by the weighted sum of \(L_{sup}\) and \(L_{unsup}\), as shown below:
\[\mathcal{L}_{final}=\mathcal{L}_{sup}+\lambda\mathcal{L}_{unsup} \tag{12}\]
Here, \(\lambda\) is a weight factor, defined by a time-dependent Gaussian warming-up function to control the balance between the supervised loss and unsupervised loss.
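The following sketch puts Eqs. (11)-(12) together, assuming that \(\mathcal{L}\) is the average of soft Dice and cross-entropy computed on probability maps and that the time-dependent warm-up takes the common \(\exp(-5(1-t/T)^{2})\) Gaussian ramp; the ramp shape and \(\lambda_{\max}\) are assumptions, since the text only states that \(\lambda\) follows a time-dependent Gaussian warming-up function.

```python
import numpy as np

def dice_ce(p, target, eps=1e-6):
    # average of soft Dice and cross-entropy between probability maps of shape (H, W, K)
    ce = -np.mean(np.sum(target * np.log(p + eps), axis=-1))
    dice = 1.0 - (2.0 * np.sum(p * target) + eps) / (np.sum(p) + np.sum(target) + eps)
    return 0.5 * (dice + ce)

def final_loss(sup_loss, prob_maps, m_pseu, step, warmup_steps, lam_max=1.0):
    # Eq. (11): mean distance of the current predictions to the pseudo-label
    unsup = float(np.mean([dice_ce(p, m_pseu) for p in prob_maps]))
    # assumed Gaussian warm-up schedule for lambda in Eq. (12)
    ramp = min(step / warmup_steps, 1.0)
    lam = lam_max * np.exp(-5.0 * (1.0 - ramp) ** 2)
    return sup_loss + lam * unsup
```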
## 4 Experiments
### Experiment Setup
We use pretrained vision and language transformers, which remain frozen during training. We use a ViT pretrained on the ROCO dataset [28] via DINO [6] as our vision backbone, and Clinical BERT [2] as our language backbone. We adopt U-Net [31] as the CNN branch. We set the batch size to 4 and the initial learning rate to \(10^{-3}\), using the Adam optimizer [17] with cosine annealing cyclic schedule. Data augmentation includes random flips and \(90^{\circ}\) rotations. All experiments are conducted on an A5000 GPU. We use Dice and mIoU metrics as our evaluation metrics. The experiments are conducted on MoNuSeg [18] and QaTa-COV19 [9] datasets. The MoNuSeg dataset includes images of tissue from various patients with tumors and approximately 22,000 nuclear boundary annotations across 30 training images and 14 test images. And the QaTa-COV19 dataset includes 9258 annotated COVID-19 chest radiographs. The text annotations for both datasets are derived from [22].
### Quantitative Results
In this section, we conduct our main experiments on the MoNuSeg dataset under both fully-supervised and semi-supervised settings. We also include fully-supervised results on the QaTa-COV19 dataset.
**Results on Fully-Supervised Setting.** We compare our methodology to state-of-the-art methods in a fully-supervised setting; these methods include Unet [31], Unet++ [44], AttUnet [27], nnUnet [16], MedT [32], TransUnet [7], GTUnet [21], Swin-Unet [5], and UCTransNet [34]. The results of our approach
and the other state-of-the-art methods in a fully-supervised learning setting are presented in Table 1. Notably, our method demonstrates a significant improvement over existing approaches. With the employment of parametric ViT-CNN fusion, our method achieves results that are not only comparable with TransUnet [7] on the MoNuSeg dataset but also surpasses it under specific conditions. More notably, our approach exhibits superior performance under non-parametric feature fusion, namely, random fusion with a uniformly sampled \(\beta\) during training. In this case, our method sets a new benchmark on the MoNuSeg dataset, outperforming all other state-of-the-art methods. This remarkable performance demonstrates the robustness of the random fusion strategy, where both the ViT and CNN branches can learn strong representations.
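A sketch of the non-parametric random fusion mentioned above, assuming the ViT and CNN feature maps have already been projected to a common shape; the convex-combination form and the uniform resampling of \(\beta\) at every training iteration follow the description in the text, while assigning \(\beta\) to the ViT branch is an arbitrary illustrative choice.

```python
import numpy as np

def random_fusion(vit_feat, cnn_feat, training=True, beta=0.5, rng=None):
    """Fuse two aligned feature maps of identical shape, e.g. (C, H, W)."""
    rng = rng or np.random.default_rng()
    b = rng.uniform(0.0, 1.0) if training else beta  # beta ~ U(0, 1) during training
    return b * vit_feat + (1.0 - b) * cnn_feat
```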
**Results on Semi-Supervised Setting.** In Table 2, we present our results under a Semi-Supervised setting. These results are achieved by using 25% and 50% labels for model evaluation, under our proposed Multi-Axis Consistency framework. The performance of our method stands out in several ways: 1) Most notably, our method delivers a comparable result to TransUnet in a fully-supervised setting even with only 25% labels. Moreover, when we possess 50% labels, the result improves significantly. This clearly showcases the potential of our proposed method. It illustrates how our method can effectively reduce the reliance on labeled data by learning from limited data and large-scale unlabeled data, thereby alleviating the cost of labels. 2) Our method with ViT-CNN random fusion and parametric channel attention consistently produces strong results across all semi-supervised settings. While the version with ViT-CNN random fusion outperforms the version with parametric channel attention by a small margin (less than 0.5%) when using 50% and 100% labels, the results are fairly comparable. This highlights the advantages of our multi-scale ViT-CNN fusion,
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{MoNuSeg} & \multicolumn{2}{c}{QaTa-COV19} \\ \cline{2-5} & Dice (\%) & mIoU (\%) & Dice (\%) & mIoU (\%) \\ \hline Unet & 76.45 & 62.86 & 79.02 & 69.46 \\ Unet++ & 77.01 & 63.04 & 79.62 & 70.25 \\ AttUnet & 76.67 & 63.74 & 79.31 & 70.04 \\ nnUnet & 80.06 & 66.87 & 80.42 & 70.81 \\ MedT & 77.46 & 63.37 & 77.47 & 67.51 \\ TransUnet & 78.53 & 65.05 & 78.63 & 69.13 \\ GTUnet & 79.26 & 65.94 & 79.17 & 69.65 \\ Swin-Unet & 77.69 & 63.77 & 78.07 & 68.34 \\ UCTransNet & 79.87 & 66.68 & 79.15 & 69.60 \\ \hline Ours+PF & 79.91 & 66.74 & **82.29** & **72.87** \\ Ours+NPF & **80.60** & **67.66** & 82.03 & 72.80 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results under Fully-supervised Learning and Comparison with the state-of-the-arts. “PF” and “NPF” represent Parametric fusion and Non-Parametric Fusion, respectively. Different values of \(\beta\) during inference are also included.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Setting} & \multirow{2}{*}{Labels (\%)} & \multicolumn{2}{c}{MoNuSeg} \\ \cline{3-4} & & Dice (\%) & mIoU (\%) \\ \hline \multirow{3}{*}{PF} & 25 & 78.59 & 64.99 \\ & 50 & 78.85 & 65.36 \\ & 100 & 79.91 & 66.74 \\ \hline \multirow{3}{*}{NPF} & 25 & 78.47 & 64.88 \\ & 50 & 79.26 & 65.94 \\ & 100 & 80.16 & 67.06 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results under Semi-Supervised Setting. “PF” and “NPF” represent Parametric fusion and Non-Parametric Fusion, respectively.
underscoring its ability to capture both global and local interactions and retain pre-trained knowledge. All these findings reinforce the effectiveness and efficiency of our method in Semi-Supervised settings, demonstrating that it is a promising approach for future research and applications.
### Qualitative Comparisons
Fig. 3 provides a visual comparison between our method and our baseline method [24]. Both sets of results are obtained under a fully-supervised training setting. To facilitate a clearer distinction between the two, we have highlighted specific areas in each sample within a red box. Upon close examination, it's evident that our proposed method offers superior results with respect to the precision of boundary delineation and the accuracy of shape representation. These improvements are most apparent within the highlighted regions, where our method's predictions exhibit finer detail and higher fidelity to the original structures. A key factor contributing to this enhanced performance is the introduction of our multi-scale text-aware ViT-CNN fusion. This innovative approach significantly improves local feature extraction within the target medical domain, allowing for more accurate and detailed segmentation results. This clearly demonstrates the advantage of our method over traditional approaches, and its potential for providing superior outcomes in complex medical image analysis tasks.
### Ablation Study
**Ablation Studies on Mutli-scale Text-aware ViT-CNN Fusion.** To evaluate our ViT-CNN Fusion, we employed the state-of-the-art vision-language transformer dense finetuning method as our baseline, which didn't perform well in
Figure 3: Qualitative comparison. Left: Visualization results on MoNuSeg dataset. Right: Visualization results on QaTa-COV19 dataset.
medical image segmentation due to over-reliance on the pretrained backbone and lack of multi-scale dense features. To counter these issues, we propose a multi-scale text-aware ViT-CNN fusion for optimized pretrained transformers. An ablation study was conducted to ascertain the contribution of each component. This analysis involved sequentially introducing multi-scale architecture, dense vision-text alignment, ViT-CNN fusion, and a joint training loss function. Table 3a shows the results, with significant Dice gains for each module: 8.27% for the multi-scale architecture, 2.17% for the vision-text alignment, 0.67% for the ViT-CNN fusion, and 1.02% for the joint training loss. The data underscores the effectiveness of each part of our method, particularly the substantial role of ViT-CNN Fusion in improving medical image segmentation tasks.
**Ablation Studies on Multi-Axis Consistency.** Our research introduces Multi-Axis Consistency, an innovative framework for generating robust pseudo labels in semi-supervised learning, by integrating different consistency regularization types. Table 3b displays our results: each consistency regularization type improves semi-supervised performance compared to a supervised-only setting, highlighting their importance in semi-supervised learning. Notably, peak performance is achieved when all three types are combined, demonstrating the effectiveness of the Multi-Axis Consistency framework. This comprehensive approach leads to superior performance in semi-supervised learning, marking a significant advancement in generating pseudo labels and improving model performance.
**In-depth Discussion on Multi-scale ViT-CNN Fusion.** In this section, we address two key questions experimentally: 1) Why does ViT-CNN fusion work in semi-supervised settings? Our results (Table 3a and Fig. 4) demonstrate this module's effectiveness in fully-supervised learning. Fig. 5 further shows that ViT-CNN fusion is crucial in semi-supervised settings, with performance increasing as \(\beta\) decreases from 0.8 to 0.2. This suggests that both Transformer and CNN branches can independently perform well in such settings. 2) Why is multi-scale fusion important? We conducted ablation studies on fusion levels using different approaches: Non-Parametric Random Fusion and Parametric Channel
\begin{table}
\end{table}
Table 3: Ablation studies on proposed modules.
attention. Fig. 6 shows that increased feature fusion levels improve model performance, underscoring the importance of multi-scale dense features in medical image segmentation and the effectiveness of our proposed multi-scale ViT-CNN fusion method.
## 5 Conclusion
In this paper, we propose a novel semi-supervised learning framework for medical image segmentation. A Text-aware ViT-CNN Fusion scheme is proposed to take advantage of both pretrained ViTs and CNNs while extracting both abstract features and medical domain-specific features. Besides, a novel Multi-Axis Consistency framework is proposed to vote for pseudo labels and encourage semi-supervised training. Experiments on several widely used datasets have demonstrated the effectiveness of our method.
|
2309.11402 | Spatio-Temporal Weighted Regression Model with Fractional-Colored Noise:
Parameter estimation and consistency | Geographical and Temporal Weighted Regression (GTWR) model is an important
local technique for exploring spatial heterogeneity in data relationships, as
well as temporal dependence due to its high fitting capacity when it comes to
real data. In this article, we consider a GTWR model driven by a
spatio-temporal noise, colored in space and fractional in time. Concerning the
covariates, we consider that they are correlated, taking into account two
interaction types between covariates, weak and strong interaction. Under these
assumptions, Weighted Least Squares Estimator (WLS) is obtained, as well as its
rate of convergence. In order to evidence the good performance of the estimator
studied, it is provided a simulation study of four different scenarios, where
it is observed that the residuals oscillate with small variation around zero.
The STARMA package of the R software allows obtaining a variant of the $R^{2}$
coefficient, with values very close to 1, which means that most of the
variability is explained by the model. | Héctor Araya, Lisandro Fermín, Silfrido Gómez, Tania Roa, Soledad Torres | 2023-09-20T15:28:17Z | http://arxiv.org/abs/2309.11402v1 | Spatio - Temporal Weighted Regression Model with Fractional-Colored Noise: Parameter estimation and consistency
###### Abstract
Geographical and Temporal Weighted Regression (GTWR) model is an important local technique for exploring spatial heterogeneity in data relationships, as well as temporal dependence, owing to its high fitting capacity on real data. In this article, we consider a GTWR model driven by a spatio-temporal noise, colored in space and fractional in time. Concerning the covariates, we consider that they are correlated, taking into account two interaction types between covariates: weak and strong interaction. Under these assumptions, the Weighted Least Squares Estimator (WLS) is obtained, as well as its rate of convergence. To demonstrate the good performance of the estimator, a simulation study of four different scenarios is provided, in which the residuals are observed to oscillate with small variation around zero. The STARMA package of the R software allows obtaining a variant of the \(R^{2}\) coefficient, with values very close to 1, which means that most of the variability is explained by the model.
**Keywords:** Geographically and Temporally Weighted Regression; Fractional Colored Noise; Consistency.
**MSC:** Primary 62M30, Secondary 62M10.
## 1 Introduction
Spatio-temporal weighted regression models have been widely used to analyze and visualize geo-referenced information in many research areas. Some examples can be found in the exploration of spatio-temporal patterns of human behavior [3, 10], the modeling of housing price variation as a function of georeferencing [7], criminal activities [2, 13], disease outbreaks [19], and methods for analyzing and visualizing data in space and time [1, 6, 17]. Within geospatial statistics, these models have allowed a deeper analysis of environmental variables such as the temperature at certain locations and soil moisture, among others, through satellite images captured over the earth's surface at different moments in time which, combined with strategically located temperature sensors, allow modeling the spatio-temporal behavior of the ground surface temperature. Such is the case of the work done
by the authors in [14], where they propose a new algorithm based on a geographically and temporally weighted regression model for the spatial downscaling of the radiometric spectrum of moderate-resolution images from 1000 to 100 meters, in data related to ground surface temperature. It is worth mentioning that the use and implementation of these spatio-temporal weighted regression models is largely due to their high fitting capacity with respect to real data, both globally and locally. Furthermore, spatio-temporal weighted regression models are recommended for georeferenced data whenever the data present heterogeneity and non-stationarity at the spatio-temporal level. For example, the authors in [18] compare economic growth between regions in India using two different models, a global spatio-temporal regression model and a spatio-temporal weighted regression model; through the results obtained, they show a better fit by the spatio-temporal weighted regression model than that obtained with the global model.
To study these models, it is necessary to understand the complexity of the spatio-temporal covariance structure between the explanatory variables and the behavior of the error considered within the model. Thus, inspired by the geographically and temporally weighted regression model proposed by the authors in [9], we state the regression model:
\[Y_{i}=\beta_{0}(z_{i})+\sum_{j=1}^{p}\beta_{j}(z_{i})X_{i,j}+ \epsilon_{i},\quad i=1,\ldots,n\,;\quad p\in\mathbb{N} \tag{1}\]
where \(z_{i}=(t_{i},u_{i})\) denotes the coordinates of the observation point \(z_{i}\), in space \(u_{i}\in\mathbb{R}^{d}\) at time \(t_{i}\in\mathbb{R}^{+}\), \(\beta_{0}(z_{i})\) denotes the value of the intercept, \(\beta_{j}(z_{i})\) denotes the parameter associated with the \(j\)-th covariate \(X_{j}\) at point \(z_{i}\), and \(\epsilon_{i}\) is the colored fractional noise at point \(z_{i}\), defined in [11]; i.e., \(\epsilon=(\epsilon_{i})_{i=1,\ldots,n}\) is a Gaussian noise that behaves like a fractional Brownian motion (fBm) in time and has white or colored spatial covariance. Intuitively, these characteristics of \(\epsilon_{i}\) describe the level of irregularity or variability of the available spatio-temporal information relative to what we want to estimate.
In this work, our main result proves the strong consistency of the spatio-temporal weighted least squares estimator (WLSE), under certain Hölder-type regularity conditions on the continuity of the spatio-temporal trajectories described by the covariates. This estimator is expressed as:
\[\hat{\beta}(z_{i}) = (X^{T}\mathcal{W}(z_{i})X)^{-1}X^{T}\mathcal{W}(z_{i})(X\beta(z_{ i})+\epsilon),\]
where \(z_{i}\) denotes the \(i\)-th spatio-temporal observation point, \(X\) is an \(n\times(p+1)\) matrix corresponding to the covariate entries, \(Y\) is the \(n\)-dimensional vector of spatio-temporal observations, \(\mathcal{W}(z_{i})\) is a positive definite symmetric \(n\times n\) matrix, known as the weights matrix, and \(\epsilon=(\epsilon_{i})_{i=1:n}\) has the associated covariance function:
\[\mathbb{E}(\epsilon_{i}\epsilon_{i^{\prime}}) = \frac{1}{2}\left(\int_{t_{i}^{-}}^{t_{i}^{+}}\int_{t_{i^{\prime}}^{-}}^{t_{i^{\prime}}^{+}}2H(2H-1)|t-t^{\prime}|^{2H-2}dt^{\prime}dt\right) \tag{2}\] \[\times \left(\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathds{1}_{V(u_{i})}(u)\,\gamma_{a,d}\|u-v\|^{-d+\alpha}\,\mathds{1}_{V(u_{i^{\prime}})}(v)\,dudv\right).\]
The above expression (2) is derived from the work [20], where the Riesz kernel function of order \(\alpha\), given by \(\gamma_{a,d}\|u-v\|^{-d+\alpha}\), is considered as the spatial covariance of the noise. Our main result is the convergence in \(L^{2}\) and in probability of the spatio-temporal weighted least squares estimator. Finally, we perform a simulation study of the scenarios that can be considered for estimation under the regularity conditions assigned to the Hurst index \(H\) chosen for the spatial and temporal covariance. The cases considered are accompanied by a graphical display of the stability of the Mean Squared Error (MSE) of the spatio-temporal weighted least squares estimator in each situation.
The present work is organized as follows. Section 2 presents the weighted regression model considered in this work, the fractional colored noise, and the spatio-temporal point measure of the noise over the observations \(z_{i}\). We also derive the explicit form of the spatio-temporal colored fractional noise covariance function, and define the correlation type between the explanatory variables along with the assumptions to be considered; the spatio-temporal weighted least squares estimator of the proposed model is then shown. In Section 3, the convergence in quadratic norm of the weighted least squares estimator is proven, via an auxiliary lemma that establishes the convergence of the least squares estimator in probability. Section 4 presents the results and simulation work performed for the different scenarios considered, and finally Section 5 includes an appendix showing the details of the proof of the auxiliary lemma used in the paper's main result.
## 2 The model
### Weighted regression model
The geographically and temporally weighted regression (GTWR) model is a spatio-temporal varying coefficient regression approach for exploring spatial nonstationarity and temporal dependence of a regression relationship for spatio-temporal data. The GTWR model can be expressed as follows:
\[Y_{i}=\beta_{0}(z_{i})+\sum_{j=1}^{p}\beta_{j}(z_{i})X_{i,j}+\epsilon_{i},\quad i =1,\ldots,n\,;\quad p\in\mathbb{N} \tag{3}\]
where \(z_{i}=(t_{i},u_{i})\) denotes the coordinates of the observation point \(z_{i}\), in space \(u_{i}\in\mathbb{R}^{d}\) at time \(t_{i}\in\mathbb{R}^{+}\), \(\beta_{0}(z_{i})\) indicates the intercept value, \(\beta_{j}(z_{i})\) indicates the parameter associated with the \(j_{th}\) covariate \(X_{j}\) at point \(z_{i}\), and \(\epsilon_{i}=\Delta W^{H}(z_{i})\) is the fractional colored noise at \(z_{i}\); i.e. \(\epsilon=(\epsilon_{i})_{i=1,\ldots,n}\) is a Gaussian noise which behaves like fractional Brownian motion (fBm) in time and has white or colored spatial covariance in space.
**Assumption M1**.: _The noise \(\epsilon=(\epsilon_{i})_{i=1,\ldots,m}\) is independent of the covariates \((X_{1},\ldots,X_{p})\), where \(X_{j}\in\mathbb{R}^{n}\) for every \(j=1,\ldots,p\)._
### Fractional-Colored Noise
We begin by describing the spatial covariance of the noise. Let us recall the framework from [20]. Let \(\mu\) be a non-negative tempered measure on \(\mathbb{R}^{d}\), i.e. a non-negative measure which satisfies the following condition
**Assumption N1**.: \(\int_{\mathbb{R}^{d}}(1+\|\xi\|^{2})^{-\ell}\mu(d\xi)<\infty,\quad\text{ for some }\quad\ell>0.\)__
Let \(f:\mathbb{R}^{d}\to\mathbb{R}^{+}\) be the Fourier transform of \(\mu\) in \(S(\mathbb{R}^{d})\) (the Schwartz space of rapidly decreasing \(C^{\infty}\) functions on \(\mathbb{R}^{d}\); see [21, 22] for details), i.e.
\[\int_{\mathbb{R}^{d}}f(u)\varphi(u)du=\int_{\mathbb{R}^{d}}\mathcal{F}\varphi( \xi)\mu(d\xi),\quad\text{ for all }\quad\varphi\in S(\mathbb{R}^{d}). \tag{4}\]
Let the Hurst parameter \(H\) be fixed in \((1/2,1)\). On a complete probability space \((\Omega,\mathcal{F},\mathbb{P})\), we consider a zero-mean Gaussian field \(W^{H}=\left\{W_{t}^{H}(A):t\geq 0,A\in B_{b}(\mathbb{R}^{d})\right\}\), defined on the set of bounded Borel measurable functions \(B_{b}(\mathbb{R}^{d})\), with covariance
\[\begin{split}\mathbb{E}\left(W_{t}^{H}(A)W_{s}^{H}(B)\right)&=R_{H}(t,s)\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathds{1}_{A}(u)f(u-v)\mathds{1}_{B}(v)dudv\\ &:=\left\langle\mathds{1}_{[0,t]\times A},\mathds{1}_{[0,s]\times B}\right\rangle_{\mathcal{H}},\end{split} \tag{5}\]
where \(R_{H}\) is the covariance of the fBm
\[R_{H}(t,s)=\frac{1}{2}\left(t^{2H}+s^{2H}-|t-s|^{2H}\right),\quad\text{ for }\quad s,t\geq 0, \tag{6}\]
and \(\mathcal{H}\), the canonical Hilbert space associated with the Gaussian process \(W^{H}\), is defined as the closure of the linear span generated by the indicator functions \(\mathds{1}_{[0,t]\times A}\), \(t\in[0,T]\), \(A\in B_{b}(\mathbb{R}^{d})\), with respect to the inner product given by the right hand side of (5).
This can be extended to a Gaussian noise measure on \(B_{b}(\mathbb{R}^{+}\times\mathbb{R}^{d})\) by setting
\[W^{H}((s,t]\times A):=W_{t}^{H}(A)-W_{s}^{H}(A). \tag{7}\]
We suppose that the spatial covariance is given by a Riesz kernel \(f\) of order \(\alpha\) satisfying the following condition
**Assumption N2**.: _We consider \(f\) as following \(f_{a}(u):=\gamma_{a,d}\|u\|^{-d+\alpha}\), for \(-d<\alpha<d\) and \(\gamma_{a,d}=2^{d-\alpha}\pi^{d/2}\Gamma((d-\alpha)/2)/\Gamma(\alpha/2)\). In this case, \(\mu(d\xi)=\|\xi\|^{-\alpha}d\xi\)._
**Remark 2.1**.: _Under **N2** condition **N1** is satisfied for \(d-\alpha<2\ell\). The special case of white noise in space is identical to the particular case of condition **N2** with \(\alpha=0\), in which case \(\mu\) is the Lebesgue measure._
Fractional colored noise at observation point \(z_{l}\) is defined as \(\epsilon_{l}=\Delta W^{H}(z_{l})\), this represents the noise measured in the neighborhood
\[V(z_{l})=\left\{z\in\mathbb{R}^{+}\times\mathbb{R}^{d}\,:\,\|z-z_{l}\|=\max\{|t -t_{l}|,\|u-u_{l}\|\,\}\leq\delta_{n}\right\},\]
where \(\delta_{n}\) is such that the volume of \(V(z_{l})\) is \(1/n\); i.e.,
\[\lambda(V(z_{l}))=\int_{\mathbb{R}^{+}}\int_{\mathbb{R}^{d}}\mathds{1}_{[|t-t_{l}|\leq\delta_{n}]}\,\mathds{1}_{[\|u-u_{l}\|\leq\delta_{n}]}\,dtdu=2\delta_{n}(\delta_{n})^{d}\,\lambda(S^{d-1})=\frac{1}{n},\]
with \(\lambda(S^{d-1})\) the volume of a d-dimensional hypersphere of unit radius.
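In particular, solving this normalization for the half-width of the neighborhood gives

\[\delta_{n}=\left(2n\,\lambda(S^{d-1})\right)^{-\frac{1}{d+1}},\]

so that \(\delta_{n}\to 0\) at the rate \(n^{-1/(d+1)}\) as the number of observations grows.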
We can rewrite \(V(z_{l})=(t_{l}^{-},t_{l}^{+}]\times V(u_{l})\), where \(t_{l}^{\pm}=t_{l}\pm\delta_{n}\) and \(V(u_{l})=\{u\in\mathbb{R}^{d}\,:\,\|u-u_{l}\|\leq\delta_{n}\}\). Then,
\[\epsilon_{l}=\Delta W^{H}(z_{l}):=W^{H}(V(z_{l}))=W^{H}_{t_{l}^{+}}(V(u_{l}))-W^{H}_{t_{l}^{-}}(V(u_{l})). \tag{8}\]
**Remark 2.2**.: _In this paper we consider the discrete grid in \(\mathbb{R}^{d}\) of distance \(\delta_{n}\), i.e., each \(u_{l}\) in the grid has \(2^{d}\) neighbors that are at distance less than or equal to \(\delta_{n}\)._
Next, we show an important result related to the covariance of the noise
**Lemma 2.1**.: _The covariance function of fractional colored noise \(\epsilon=(\epsilon_{l})_{l=1:n}\) is given by_
\[\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}}) = \frac{1}{2}\left(\int_{t_{l}^{-}}^{t_{l}^{+}}\int_{t_{l^{\prime}}^{-}}^{t_{l^{\prime}}^{+}}2H(2H-1)|t-t^{\prime}|^{2H-2}dt^{\prime}dt\right)\] \[\times \left(\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathds{1}_{V(u_{l})}(u)\,\gamma_{a,d}\|u-v\|^{-d+\alpha}\,\mathds{1}_{V(u_{l^{\prime}})}(v)\,dudv\right) \tag{9}\]
_and the variance is \(\mathbb{E}(\epsilon_{l}^{2})=\sigma^{2}2^{2H}(\delta_{n})^{2H+d+a}\), with \(\sigma^{2}=Var\left(W^{H}\left(\mathrm{I}\!\!\!\!\!\mathrm{I}_{\{0\leq j\leq 1,\|u\|\leq 1\}}\right)\right)\)._
Proof.: The proof of Lemma 2.1 is left in Appendix, Section A.1.
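To make Lemma 2.1 concrete, the sketch below evaluates the covariance (9) numerically for two observation points: the temporal double integral is replaced by its closed form (the fBm increment covariance), and the spatial Riesz-kernel integral over the two balls is approximated by Monte Carlo. The Monte Carlo step, the SciPy Gamma function, and the example values are conveniences for illustration only; the sketch assumes \(H>1/2\), \(0<\alpha<d\) and two distinct sites.

```python
import numpy as np
from scipy.special import gamma as Gamma

def sample_ball(center, radius, m, rng):
    # m points drawn uniformly from the Euclidean ball B(center, radius) in R^d
    d = center.size
    x = rng.standard_normal((m, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    r = radius * rng.random(m) ** (1.0 / d)
    return center + x * r[:, None]

def noise_cov(t1, u1, t2, u2, delta, H, alpha, m=20000, rng=None):
    """Approximate E[eps_l eps_l'] of Lemma 2.1 for two space-time points."""
    rng = rng or np.random.default_rng(0)
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    d = u1.size
    dt = abs(t1 - t2)
    # closed form of (1/2) int int 2H(2H-1)|t-t'|^{2H-2} dt' dt over the two time intervals
    temporal = 0.5 * ((dt + 2 * delta) ** (2 * H)
                      + abs(dt - 2 * delta) ** (2 * H)
                      - 2 * dt ** (2 * H))
    # Monte Carlo estimate of the Riesz-kernel double integral over V(u_l) x V(u_l')
    gamma_ad = 2 ** (d - alpha) * np.pi ** (d / 2) * Gamma((d - alpha) / 2) / Gamma(alpha / 2)
    U, V = sample_ball(u1, delta, m, rng), sample_ball(u2, delta, m, rng)
    ball_vol = np.pi ** (d / 2) / Gamma(d / 2 + 1) * delta ** d
    spatial = gamma_ad * np.mean(np.linalg.norm(U - V, axis=1) ** (alpha - d)) * ball_vol ** 2
    return temporal * spatial

# toy usage with two distinct sites on the unit square
print(noise_cov(0.10, [0.05, 0.05], 0.30, [0.45, 0.25], delta=0.05, H=0.65, alpha=0.5))
```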
### Correlated covariates
We assume that the covariates \(X_{j}\), for \(j=1,\ldots,p\), of the regression model are centered and locally correlated. The covariance function is \(\chi(z,z^{\prime})=\left(\chi_{jk}(z,z^{\prime})\right)_{j,k=1:p}\), with
\[\chi_{jk}(z,z^{\prime})=\mathbb{E}\left(\,X_{j}(z)X_{k}(z^{\prime})\right). \tag{10}\]
**Assumption C1**.:
1. _The covariance function_ \(\chi\) _is positive definite._
2. \(\chi\) _is_ \(\alpha_{\chi}\)_-Hölder continuous; i.e. there exists_ \(C_{\chi}>0\) _such that_ \[|\chi(z,z^{\prime})-\chi(z_{l},z_{l^{\prime}})|\leq C_{\chi}(\|z-z_{l}\|+\|z^{\prime}-z_{l^{\prime}}\|)^{\alpha_{\chi}}.\]
We consider the covariance function \(\Gamma(z,z^{\prime})=\left(\Gamma_{jk}(z,z^{\prime})\right)_{j,k=1:n}\) defined by
\[\Gamma_{jk}(z,z^{\prime})=Cov\left(X_{j}(z)X_{k}(z),X_{j}(z^{\prime})X_{k}(z^ {\prime})\right). \tag{11}\]
We suppose that \(\Gamma\) satisfy the following conditions.
**Assumption C2**.:
1. _The covariance function_ \(\Gamma\) _is positive definite._
2. \(\Gamma\) _is_ \(\alpha_{\Gamma}\)_-Hölder continuous; i.e. there exists_ \(C_{\Gamma}>0\) _such that_ \[|\Gamma(z,z^{\prime})-\Gamma(z_{l},z_{l^{\prime}})|\leq C_{\Gamma}(\|z-z_{l}\|+\|z^{\prime}-z_{l^{\prime}}\|)^{\alpha_{\Gamma}}.\]
3. _Furthermore,_ \(\Gamma\) _is such that_ \[\left|\Gamma(z,z^{\prime})\right|\leq C_{k,d,\theta}\delta^{d+1+\theta},\] _for_ \(z,z^{\prime}\in\mathbb{R}^{d+1}\) _such that_ \(\|z-z^{\prime}\|>k\delta\)_, for_ \(\delta>0\) _and some_ \(k\in\mathbb{N}\)_,_ \(-(d+1)<\theta<d+1\)_, and_ \(C_{k,d,\theta}\geq 0\)_._
4. \(\Gamma\) _is such that_ \[|\Gamma(z,z)|\leq C_{k,d},\] _for_ \(z\in\mathbb{R}^{d+1}\) _and some_ \(k\in\mathbb{N}\)_._
**Remark 2.3**.: _Under assumption **C2** we consider two interaction types between the covariates \(X_{j}\):_
**- Weak interaction:** when the parameter \(\theta\leq 0\). For instance, the independent case is obtained for \(\theta=0\), and the \(k\)-dependent covariates case corresponds to \(C_{k,d,\theta}=0\).
**- Strong interaction:** when the parameter \(\theta>0\); then the spectral density of the covariance function \(\Gamma\) is singular at zero, so \(\Gamma\) has heavy tails. The fractional time dependence corresponds to \(\theta=2H-1\), which has long-range dependence when \(\theta>0\), i.e. if \(H>\frac{1}{2}\). The fractional-colored spatio-temporal dependence corresponds to \(\theta=2H-1+\alpha\).
### The weighted least square estimator
For a given data set, the local parameters of weighted regression model (3) are estimated using the weighted least square procedure. Let be \(\beta(z_{i})\) the vector of the local parameters for the space-time point \(z_{i}\),
\[\beta(z_{i})=\left(\beta_{0}(z_{i}),\,\beta_{1}(z_{i}),\,\ldots,\,\beta_{p}(z _{i})\right)^{T}. \tag{12}\]
Here, the superscript \(T\) represents the transpose of a vector or matrix.
The local parameters \(\beta(z_{i})\) at point \(z_{i}\) is estimated by
\[\hat{\beta}(z_{i})=[X^{T}\mathcal{W}(z_{i})X]^{-1}X^{T}\mathcal{W}(z_{i})Y, \tag{13}\]
where \(X\) is the \(n\times(p+1)\) matrix of input covariables, \(Y\) is the \(n\)-dimensional vector of output observed variable, and \(\mathcal{W}(z_{i})\) is an \(n\times n\) weighting matrix of the form
\[\mathcal{W}(z_{i})=diag(\mathcal{W}_{i1},\,\ldots,\,\mathcal{W}_{in}). \tag{14}\]
The weights \(\mathcal{W}_{ij}\), for \(j=1,\,\ldots,\,n\), are obtained through an adaptive kernel function \(K\) in terms of the proximity of each data point to the point \(z_{i}\); i.e.
\[\mathcal{W}_{il}=K_{h}\left(z_{l}-z_{i}\right), \tag{15}\]
with \(K_{h}(z)=K(z/h)\). Here, \(K\,:\,\mathbb{R}^{d+1}\to\mathbb{R}\) is positive and symmetric, such that \(\int_{\mathbb{R}^{d+1}}K(z)dz=1\), and \(h\) is a nonnegative parameter known as the bandwidth, which produces a decay of influence with distance. The observations \(z_{l}\) near \(z_{i}\) have the largest influence on the estimate of the local parameters at point \(z_{i}\).
We suppose that the kernel \(K\) satisfies the additional following conditions:
**Assumption K1**.:
* _The kernel_ \(K\) _is bounded, i.e._ \(\left\|K\right\|_{\infty}<\infty\)_._
* \(K\) _is_ \(\alpha_{K}\)_-Hölder continuous, i.e. there exists_ \(C_{K}>0\) _such that_ \[|K(z)-K(z^{\prime})|<C_{K}\|z-z^{\prime}\|^{\alpha_{K}}.\]
* \(\int_{\mathbb{R}^{d+1}}\max\left(\|z\|^{\alpha_{K}},\|z\|^{\alpha_{\chi}},\|z\|^{\alpha_{\Gamma}}\right)K(z)dz<\infty\)_._
* _If_ \(\|z\|\geq\delta_{n}\)_, then_ \[K(z)=f_{K}\left(\|z\|\right)=\mathcal{O}(n^{-\gamma}\,L(n)),\] _with_ \(L\) _a slowly varying function at infinity and_ \(\gamma>0\)_._
Under condition **K1**, the kernel \(K\) is such that \(\int_{\mathbb{R}^{d+1}}zK(z)dz=0\).
The most commonly used adaptive kernel is the Gaussian function \(K(z)=\frac{1}{\sqrt{2\pi}}e^{-(d^{s,t}(z))^{2}/2}\), where the space-time distance \(d^{s,t}\) is given as a function of the temporal distance \(d^{t}=|t|\) and the spatial distance \(d^{u}=\|u\|\); for instance, \((d^{s,t}(z))^{2}=\mu^{t}(d^{t})^{2}+\mu^{s}(d^{u})^{2}\), where \(\mu^{t}\) and \(\mu^{s}\) are temporal and spatial scale factors respectively.
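As an illustration, the sketch below builds the Gaussian space-time weights of (14)-(15) and evaluates the local estimator (13) at a single point \(z_{i}\); the scale factors \(\mu^{t}\), \(\mu^{s}\), the bandwidth value, and the use of a linear solve instead of an explicit inverse are implementation choices, not prescriptions from the text.

```python
import numpy as np

def gtwr_estimate(X, Y, t, U, z_i, h, mu_t=1.0, mu_s=1.0):
    """Local WLS estimate (13) at z_i = (t_i, u_i).

    X: (n, p+1) design matrix, Y: (n,) responses,
    t: (n,) observation times, U: (n, d) spatial locations."""
    t_i, u_i = z_i
    dist2 = mu_t * (t - t_i) ** 2 + mu_s * np.sum((U - np.asarray(u_i)) ** 2, axis=1)
    w = np.exp(-dist2 / (2.0 * h ** 2)) / np.sqrt(2.0 * np.pi)  # K_h(z_l - z_i)
    XtW = X.T * w            # X^T W(z_i) without forming the n x n diagonal matrix
    return np.linalg.solve(XtW @ X, XtW @ Y)
```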
## 3 Consistency
We study the consistency for the local weighted least square estimator \(\hat{\beta}(z_{i})\) obtained in (13) from (3). If we substitute \(Y=X\beta(z_{i})+\epsilon\) on (13) we obtained that:
\[\hat{\beta}(z_{i}) = (X^{T}\mathcal{W}(z_{i})X)^{-1}X^{T}\mathcal{W}(z_{i})(X\beta(z_{i })+\epsilon)\] \[= (X^{T}\mathcal{W}(z_{i})X)^{-1}(X^{T}\mathcal{W}(z_{i})X)\beta(z_ {i})+(X^{T}\mathcal{W}(z_{i})X)^{-1}X^{T}\mathcal{W}(z_{i})\epsilon\] \[= \beta(z_{i})+(X^{T}\mathcal{W}(z_{i})X)^{-1}X^{T}\mathcal{W}(z_{i })\epsilon.\]
Then,
\[\mathbb{E}\left(\hat{\beta}(z_{i})\right)=\beta(z_{i})+\mathbb{E}\left((X^{T}\mathcal{W}(z_{i})X)^{-1}X^{T}\mathcal{W}(z_{i})\mathbb{E}(\epsilon|X)\right)=\beta(z_{i}),\]
since from assumption **M1** we have \(\mathbb{E}(\epsilon|X)=\mathbb{E}(\epsilon)=0\). Thus, the estimator \(\hat{\beta}(z_{i})\) is unbiased, and the estimation error is written as:
\[\hat{\beta}(z_{i})-\beta(z_{i})=(X^{T}\mathcal{W}(z_{i})X)^{-1}(X^{T}\mathcal{ W}(z_{i})\epsilon). \tag{16}\]
**Remark 3.1**.: _We define the following notation_
1. \(f_{n,h}\approx\tilde{f}_{h}\)_, which is equivalent to_ \(\lim_{h\to 0}\lim_{n\to\infty}f_{n,h}=\lim_{h\to 0}\tilde{f}_{h}\)_, i.e. for_ \(n\) _large enough and_ \(h\) _small enough,_ \(f_{n,h}\) _is approximately equal to_ \(\tilde{f}_{h}\)_._
2. \(f_{n,h}\leq\tilde{f}_{h}\)_, which is equivalent to_ \(\lim_{h\to 0}\lim_{n\to\infty}f_{n,h}\leq\lim_{h\to 0}\tilde{f}_{h}\)_. Particularly, we write_ \(f_{n,h}\leq C\) _to state that_ \(C\) _is a bound for the sequence_ \(f_{n,h}\)_, for_ \(n\) _large enough and_ \(h\) _small enough._
3. \(f_{h}\approx\tilde{f}\)_, which is equivalent to_ \(\lim_{h\to 0}f_{h}=\tilde{f}\)_,_ \(f_{n}\approx\tilde{f}\) _when_ \(\lim_{n\to\infty}f_{n}=\tilde{f}\)_, and_ \(\tilde{f}_{h}\leq C\) _to state that_ \(C\) _is a bound for the sequence_ \(\tilde{f}_{h}\)_, for_ \(h\) _small enough._
_This notation will be used along our work._
In order to study the consistency of the estimator \(\hat{\beta}(z_{i})\) given by (13), we will prove that there exists an appropriated normalization sequence \((b_{n,h})_{n\geq 1,h>0}\) of positive constants with \(b_{n,h}\to\infty\) as \(n\to\infty\) and \(h\to 0\), and such that
1. \(b_{n,h}^{-1}(X^{T}\mathcal{W}(z_{i})X)\to\chi(z_{i},z_{i})=\mathbb{E}[X^{T} \mathcal{W}(z_{i})X]\), as \(n\to+\infty\) and \(h\to 0\).
2. \(b_{n,h}^{-1}(X^{T}\mathcal{W}(z_{i})\epsilon)\to 0\), \(n\to+\infty\) and \(h\to 0\).
To prove \(i)\) we need an auxiliary lemma related to the almost sure convergence of the term \((X^{T}\mathcal{W}(z_{i})X)\) in (16).
**Lemma 3.1**.: _Under assumptions **C1**-**C2** and **K1**, \(\theta>0\) and \(\gamma>\frac{\theta}{d+1}\), we have that_
\[\frac{1}{nh^{d+1}}(X^{T}\mathcal{W}(z_{i})X)\xrightarrow[n\to\infty]{a.s.}\chi (z_{i},z_{i})=\mathbb{E}[X^{T}\mathcal{W}(z_{i})X].\]
Proof.: The proof of Lemma 3.1 is left in Appendix, Section A.2.
We are ready to present our main result.
**Theorem 3.1**.: _Assume that the regression model (3) satisfies the hypotheses **M1**, **N1**, **N2**, **C1**-**C2** and **K1**. Then, the local weighted least squares estimator \(\hat{\beta}(z_{i})\) obtained in (13) is strongly consistent for \(2H+\alpha>1\), \(\theta>0\), and \(\frac{\theta}{d+1}<\gamma<1+\frac{\theta}{d+1}\), that is_
\[\hat{\beta}(z_{i})-\beta(z_{i})\xrightarrow[n\to\infty]{a.s.}0\]
_and, for \(2H+d+\alpha>0\) and \(d+1+\theta>0\) the convergence in probability is ensured._
Proof.: By Lemma 3.1, it remains to study the asymptotic behavior of \(\left(X^{T}\mathcal{W}(z_{i})\epsilon\right)\) as \(n\to\infty\). The \(j_{th}\) component of \(\left(X^{T}\mathcal{W}(z_{i})\epsilon\right)\) is
\[\left(\left(X^{T}\mathcal{W}(z_{i})\epsilon\right)\right)_{j}=\sum_{l=1}^{n}X_{ lj}\mathcal{W}_{il}\epsilon_{l}. \tag{17}\]
It is quite easy to see, from assumption **M1**, that \(\mathbb{E}\left[\left(\left(X^{T}\mathcal{W}(z_{i})\epsilon\right)\right)_{j} \right]=0\). Let us compute the variance of \(\left(\left(X^{T}\mathcal{W}(z_{i})\epsilon\right)\right)_{j}\),
\[\mathbb{E}\left(\left(\left(X^{T}\mathcal{W}(z_{i})\epsilon\right) \right)_{j}^{2}\right) = \sum_{l,l^{\prime}=1}^{n}\chi_{jj}(z_{l},z_{l^{\prime}})\mathcal{W }_{il}\mathcal{W}_{il^{\prime}}\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}}) \tag{18}\] \[= \sum_{l=1}^{n}\chi_{jj}(z_{l},z_{l})\mathcal{W}_{il}^{2}\mathbb{E} (\epsilon_{l}^{2})\] \[+ \sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l^{ \prime}})\mathcal{W}_{il}\mathcal{W}_{il^{\prime}}\mathbb{E}(\epsilon_{l} \epsilon_{l^{\prime}})\] \[+ \sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|>3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l^{ \prime}})\mathcal{W}_{il}\mathcal{W}_{il^{\prime}}\mathbb{E}(\epsilon_{l} \epsilon_{l^{\prime}})\] \[:= A_{j,n}^{(1)}(z_{i})+A_{j,n}^{(2)}(z_{i})+A_{j,n}^{(3)}(z_{i}),\]
where we split the sum into three terms associated with the distance between the observed points \(z_{l}\) and \(z_{l^{\prime}}\).
First, we study the term \(A_{j,n}^{(1)}(z_{l})\) in (18)
\[\begin{split}&\frac{1}{nh^{d+1}(\delta_{n})^{2H+d+\alpha}}A_{j,n}^{(1)}(z_{i})\\ &=\frac{1}{nh^{d+1}(\delta_{n})^{2H+d+\alpha}}\sum_{l=1}^{n}\chi_{jj}(z_{l},z_{l})\mathcal{W}_{il}^{2}\mathbb{E}(\epsilon_{l}^{2})\\ &=\frac{2^{2H}\sigma^{2}}{nh^{d+1}}\sum_{l=1}^{n}\chi_{jj}(z_{l},z_{l})K_{h}^{2}\left(z_{l}-z_{i}\right)\\ &=2^{2H}\sigma^{2}\int_{\mathbb{R}^{d+1}}\sum_{l=1}^{n}\chi_{jj}(z_{l},z_{l})\frac{1}{h^{d+1}}K_{h}^{2}\left(z_{l}-z_{i}\right)\mathds{1}_{V(z_{l})}(z)dz\\ &\leq 2^{2H}\sigma^{2}\int_{\mathbb{R}^{d+1}}\chi_{jj}(z,z)\frac{1}{h^{d+1}}K_{h}^{2}\left(z-z_{i}\right)dz\\ &=2^{2H}\sigma^{2}\int_{\mathbb{R}^{d+1}}\chi_{jj}(z_{i}+hz,z_{i}+hz)K^{2}\left(z\right)dz\\ &\approx\,C_{1}(H)\chi_{jj}(z_{i},z_{i})+\mathcal{O}\left(|h|^{\alpha_{\chi}}\right),\end{split} \tag{19}\]
where \(C_{1}(H)=2^{2H}\sigma^{2}\|K\|_{2}^{2}\). The last inequality comes from the regularity of \(\chi\) from Condition **C1** and notations defined in Remark 3.1.
Secondly, we consider the term \(A_{j,n}^{(2)}(z_{l})\) in (18), i.e. when \(0<\|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\)
\[\begin{split}&\frac{1}{nh^{d+1}(\delta_{n})^{2H+d+\alpha}}A_{j,n }^{(2)}(z_{i})\\ &=\frac{1}{nh^{d+1}(\delta_{n})^{2H+d+\alpha}}\sum_{ \begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l^{ \prime}})\mathcal{W}_{il}\mathcal{W}_{il^{\prime}}\mathbb{E}(\epsilon_{l} \epsilon_{l^{\prime}})\\ &=\frac{1}{nh^{d+1}(\delta_{n})^{2H+d+\alpha}}\sum_{ \begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l^{ \prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{\prime}}-z_{i})\mathbb{E}(\epsilon_{l} \epsilon_{l^{\prime}})\end{split} \tag{20}\]
We can bound the covariance term \(\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})\) when \(0<\|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\) by
\[\begin{split}\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})&=\frac{1}{2}\left(|t_{l}-t_{l^{\prime}}+2\delta_{n}|^{2H}+|t_{l}-t_{l^{\prime}}-2\delta_{n}|^{2H}-2|t_{l}-t_{l^{\prime}}|^{2H}\right)(\delta_{n})^{d+\alpha}\\ &\quad\times Cov\left(W^{H}\left(\mathds{1}_{\{\|(u-u_{l})/\delta_{n}\|\leq 1\}}\right),W^{H}\left(\mathds{1}_{\{\|(u-u_{l^{\prime}})/\delta_{n}\|\leq 1\}}\right)\right)\\ &\leq\left(2\delta_{n}\right)^{2H}(\delta_{n})^{d+\alpha}Var^{1/2}\left(W^{H}\left(\mathds{1}_{\{\|(u-u_{l})/\delta_{n}\|\leq 1\}}\right)\right)Var^{1/2}\left(W^{H}\left(\mathds{1}_{\{\|(u-u_{l^{\prime}})/\delta_{n}\|\leq 1\}}\right)\right)\\ &\leq 2^{2H}(\delta_{n})^{2H+d+\alpha}Var\left(W^{H}\left(\mathds{1}_{\{\|u\|\leq 1\}}\right)\right)\\ &=2^{2H}\sigma^{2}(\delta_{n})^{2H+d+\alpha}.\end{split} \tag{21}\]
Plugging inequality (21) into the equation (20) yields
\[\begin{split}&\frac{1}{nh^{d+1}(\delta_{n})^{2H+d+\alpha}}A_{j,n} ^{(2)}(z_{i})\\ &\leq\frac{2^{2H}\sigma^{2}}{nh^{d+1}}\sum_{\begin{subarray}{c}1 \leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l^{ \prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{\prime}}-z_{i})\\ &=\frac{2^{2H}\sigma^{2}}{nh^{d+1}}\Bigg{[}\sum_{\begin{subarray}{ c}1\leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l})K_{h}(z _{l}-z_{i})K_{h}(z_{l}-z_{i})\\ &+\sum_{\begin{subarray}{c}1\leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l})K_{h}(z _{l}-z_{i})\left(K_{h}(z_{l^{\prime}}-z_{i})-K_{h}(z_{l}-z_{i})\right)\\ &+\sum_{\begin{subarray}{c}1\leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\left(\chi_{jj}(z_{l},z_{l^{ \prime}})-\chi_{jj}(z_{l},z_{l})\right)K_{h}(z_{l}-z_{i})K_{h}(z_{l}-z_{i})\\ &+\sum_{\begin{subarray}{c}1\leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\left(\chi_{jj}(z_{l},z_{l^{ \prime}})-\chi_{jj}(z_{l},z_{l})\right)K_{h}(z_{l}-z_{i})\left(K_{h}(z_{l^{ \prime}}-z_{i})-K_{h}(z_{l}-z_{i})\right)\Bigg{]}\\ \end{split} \tag{22}\]
From assumptions **C1**, **K1** and (20),
\[\begin{split}&\frac{1}{nh^{d+1}(\delta_{n})^{2H+d+\alpha}}A_{j,n} ^{(2)}(z_{i})\\ &\leq\frac{2^{2H}\sigma^{2}}{nh^{d+1}}\Bigg{[}\sum_{\begin{subarray} {c}1\leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l})K_{h}^{2}(z _{l}-z_{i})\\ &+C_{K}\sum_{\begin{subarray}{c}1\leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\chi_{jj}(z_{l},z_{l})K_{h}(z _{l}-z_{i})\left\|z_{l}-z_{l^{\prime}}\right\|^{\alpha_{K}}\\ &+C_{\chi}\sum_{\begin{subarray}{c}1\leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\left\|z_{l}-z_{l^{\prime}} \right\|^{\alpha_{K}}K_{h}^{2}(z_{l}-z_{i})\\ &+C_{k}C_{\chi}\sum_{\begin{subarray}{c}1\leq|\mu|^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}\end{subarray}}\left\|z_{l}-z_{l^{\prime}} \right\|^{\alpha_{K}}K_{h}(z_{l}-z_{i})\left\|z_{l}-z_{l^{\prime}}\right\|^{ \alpha_{K}}\\ &=A_{j,n}^{(2,1)}+A_{j,n}^{(2,2)}+A_{j,n}^{(2,3)}+A_{j,n}^{(2,4)}. \end{split} \tag{23}\]
Note that
\[\frac{1}{n}\sum_{l^{\prime}=1}^{n}\mathds{1}_{[0<\|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{n}]}\quad\underset{n\to\infty}{\approx}\quad 3^{d+1}\int_{\mathbb{R}^{d+1}}\mathds{1}_{V(z_{l})}(z^{\prime})dz^{\prime}=3^{d+1}\lambda(V(z_{l}))=\frac{3^{d+1}}{n}, \tag{24}\]
then using (23) and (24) we have
\[\begin{split} A^{(2,1)}_{j,n}&=\frac{2^{2H}\sigma^ {2}}{nh^{d+1}}\sum_{l=1}^{n}\chi_{jj}(z_{l},z_{l})K^{2}_{h}(z_{l}-z_{i})\sum_{l ^{\prime}=1}^{n}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize\kern 3.8pt}_{[0<\|z_{l}-z_{l^{ \prime}}\|\leq 3\delta_{\alpha}]}\\ &\leq\frac{2^{2H}3^{d+1}\sigma^{2}}{h^{d+1}}\int_{\mathbb{R}^{d+ 1}}\chi_{jj}(z,z)K^{2}_{h}(z-z_{i})dz\\ &=2^{2H}3^{d+1}\sigma^{2}\int_{\mathbb{R}^{d+1}}\chi_{jj}(z_{i}+ hz,z_{i}+hz)K^{2}(z)dz\\ &\approx\leavevmode\nobreak\ 2^{2H}3^{d+1}\sigma^{2}\chi_{jj}(z_{i},z_{i})\| K\|_{2}^{2}+\mathcal{O}(|h|^{\alpha_{x}}).\end{split} \tag{25}\]
As before, the last inequality comes from the regularity of \(\chi\) from Condition **C1** and notations defined in Remark 3.1.
\[\begin{split}\frac{1}{(\delta_{n})^{\alpha_{k}}}A^{(2,2)}_{j,n}& =\frac{2^{2H}\sigma^{2}C_{K}}{nh^{d+1}(\delta_{n})^{\alpha_{k}}} \sum_{l=1}^{n}\chi_{jj}(z_{l},z_{l})K_{h}(z_{l}-z_{i})\sum_{l^{\prime}=1}^{n} \left\|z_{l}-z_{l^{\prime}}\right\|^{\alpha_{K}}\leavevmode\hbox{\small 1 \kern-3.8pt\normalsize\kern 3.8pt}_{[0<\|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{ \alpha}]}\\ &\leq\frac{2^{2H}\sigma^{2}C_{K}(3\delta_{n})^{\alpha_{k}}}{nh^{ d+1}(\delta_{n})^{\alpha_{k}}}\sum_{l=1}^{n}\chi_{jj}(z_{l},z_{l})K_{h}(z_{l}-z_{i}) \sum_{l^{\prime}=1}^{n}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize \kern 3.8pt}_{[0<\|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{\alpha}]}\\ &\leq\frac{2^{2H}3^{\alpha_{K}+d+1}\sigma^{2}C_{K}}{h^{d+1}}\int_ {\mathbb{R}^{d+1}}\chi_{jj}(z,z)K_{h}(z-z_{i})dz\\ &=2^{2H}3^{\alpha_{K}+d+1}\sigma^{2}C_{K}\int_{\mathbb{R}^{d+1}} \chi_{jj}(z_{i}+hz,z_{i}+hz)K(z)dz\\ &\approx 2^{2H}3^{\alpha_{K}+d+1}\sigma^{2}C_{K}\chi_{jj}(z_{i},z_{i})+ \mathcal{O}(|h|^{\alpha_{x}}).\end{split} \tag{26}\]
Again, the last inequality comes from the regularity of \(\chi\) from Condition **C1** and notations defined in Remark 3.1.
\[\begin{split}\frac{1}{(\delta_{n})^{\alpha_{x}}}A^{(2,3)}_{j,n}& =\frac{2^{2H}\sigma^{2}C_{\chi}}{nh^{d+1}(\delta_{n})^{\alpha_{x} }}\sum_{l=1}^{n}K^{2}_{h}(z_{l}-z_{i})\sum_{l^{\prime}=1}^{n}\left\|z_{l}-z_{l^ {\prime}}\right\|^{\alpha_{x}}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize \kern 3.8pt}_{[0<\|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{\alpha}]}\\ &\leq\frac{2^{2H}\sigma^{2}C_{\chi}(3\delta_{n})^{\alpha_{x}}}{nh^ {d+1}(\delta_{n})^{\alpha_{x}}}\sum_{l=1}^{n}K^{2}_{h}(z_{l}-z_{i})\sum_{l^{ \prime}=1}^{n}\leavevmode\hbox{\small 1\kern-3.8pt\normalsize\kern 3.8pt}_{[0<\|z_{l}-z_{l^{ \prime}}\|\leq 3\delta_{\alpha}]}\\ &\leq\frac{2^{2H}3^{\alpha_{x}+d+1}\sigma^{2}C_{\chi}}{h^{d+1}} \int_{\mathbb{R}^{d+1}}K^{2}_{h}(z_{l}-z_{i})dz\\ &=2^{2H}3^{\alpha_{x}+d+1}\sigma^{2}C_{\chi}\|K\|_{2}^{2}\end{split} \tag{27}\]
\[\begin{split}\frac{1}{(\delta_{n})^{\alpha_{x}+\alpha_{K}}}A^{(2, 4)}_{j,n}&=\frac{2^{2H}\sigma^{2}C_{K}C_{\chi}}{nh^{d+1}(\delta_{n} )^{\alpha_{x}+\alpha_{k}}}\sum_{l=1}^{n}K_{h}(z_{l}-z_{i})\sum_{l^{\prime}=1}^ {n}\left\|z_{l}-z_{l^{\prime}}\right\|^{\alpha_{x}+\alpha_{k}}\leavevmode \hbox{\small 1\kern-3.8pt\normalsize\kern 3.8pt}_{[0<\|z_{l}-z_{l^{\prime}}\|\leq 3 \delta_{\alpha}]}\\ &\leq\frac{2^{2H}\sigma^{2}C_{K}C_{\chi}(3\delta_{n})^{\alpha_{x}+ \alpha_{K}}}{nh^{d+1}(\delta_{n})^{\alpha_{x}+\alpha_{K}}}\sum_{l=1}^{n}K_{h}(z _{l}-z_{i})\sum_{l^{\prime}=1}^{n}\leavevmode\hbox{\small 1\kern-3.8pt \normalsize\kern 3.8pt}_{[0<\|z_{l}-z_{l^{\prime}}\|\leq 3\delta_{\alpha}]}\\ &\leq\frac{2^{2H}3^{\alpha_{x}+\alpha_{K}+d+1}\sigma^{2}C_{K}C_{ \chi}}{h^{d+1}}\int_{\mathbb{R}^{d+1}}K_{h}(z-z_{i})dz\\ &=2^{2H}3^{\alpha_{x}+\alpha_{K}+d+1}\sigma^{2}C_{K}C_{\chi}.\end{split} \tag{28}\]
Thus, from (25), (26), (27), and (28) we have
\[\begin{split}&\frac{1}{nh^{d+1}(\delta_{n})^{2H+d+a}}A^{(2)}_{J,n}( z_{i})\\ &\leq 2^{2H}3^{d+1}\sigma^{2}\chi_{jj}(z_{i},z_{i})\|K\|_{2}^{2}+ \mathcal{O}\left(|h|^{a_{Z}}\vee(\delta_{n})^{a_{K}}\vee(\delta_{n})^{a_{Z}} \vee(\delta_{n})^{a_{Z}+a_{K}}\right)\\ &=C_{2}(H)\chi_{jj}(z_{i},z_{i})+\mathcal{O}\left(|h|^{a_{Z}}\lor (\delta_{n})^{a_{K}}\vee(\delta_{n})^{a_{Z}}\right),\end{split} \tag{29}\]
where \(C_{2}(H)=2^{2H}3^{d+1}\sigma^{2}\|K\|_{2}^{2}\).
Finally, we consider the case \(\|z_{l}-z_{l^{\prime}}\|>3\delta_{n}\), and we split the term \(A^{(3)}_{j,n}(z_{i})\) into three terms:
\[\begin{split}&\frac{1}{n^{2}h^{2(d+1)}(\delta_{n})^{2H+d+a}}A^{(3 )}_{j,n}(z_{i})\\ &=\frac{1}{n^{2}h^{2(d+1)}(\delta_{n})^{2H+d+a}}\sum_{1\leq l \neq l^{\prime}\leq n}\chi_{jj}(z_{l},z_{l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}( z_{l^{\prime}}-z_{i})\\ &\times\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})\mathds{1}_{ \|[l_{1}-t_{l^{\prime}}]\leq 3\delta_{n},\|u_{l}-u_{l^{\prime}}\|>3\delta_{n} ]}\\ &+\frac{1}{n^{2}h^{2(d+1)}(\delta_{n})^{2H+d+a}}\sum_{1\leq l\neq l ^{\prime}\leq n}\chi_{jj}(z_{l},z_{l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{ \prime}}-z_{i})\\ &\times\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})\mathds{1}_{ \|[l_{1}-t_{l^{\prime}}]>3\delta_{n},\|u_{l}-u_{l^{\prime}}\|\leq 3\delta_{n} ]}\\ &+\frac{1}{n^{2}h^{2(d+1)}(\delta_{n})^{2H+d+a}}\sum_{1\leq l\neq l ^{\prime}\leq n}\chi_{jj}(z_{l},z_{l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{ \prime}}-z_{i})\\ &\times\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})\mathds{1}_{ \|[l_{1}-t_{l^{\prime}}]>3\delta_{n},\|u_{l}-u_{l^{\prime}}\|>3\delta_{n}]}\\ &=A^{(3,1)}_{j,n}+A^{(3,2)}_{j,n}+A^{(3,3)}_{j,n}.\end{split} \tag{30}\]
In the case \(|t_{l}-t_{l^{\prime}}|\leq 3\delta_{n}\) and \(\|u_{l}-u_{l^{\prime}}\|>3\delta_{n}\) we can bound the covariance \(\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})\) as follows
\[\begin{split}\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})& =\frac{1}{2}\left(|t_{l}-t_{l^{\prime}}+2\delta_{n}|^{2H}+|t_{l}-t _{l^{\prime}}-2\delta_{n}|^{2H}-2|t_{l}-t_{l^{\prime}}|^{2H}\right)\\ &\times\int_{V(u_{l})}\int_{V(u_{l^{\prime}})}\chi_{a,d}\|u-u^{ \prime}\|^{-d+a}dudu^{\prime}\\ &\leq\left(2\delta_{n}\right)^{2H}\chi_{a,d}(\delta_{n})^{-d+a} \lambda(V(u_{l}))\lambda(V(u_{l^{\prime}}))\\ &\leq 2^{2H}\chi_{a,d}\lambda^{2}(S^{d-1})\left(\delta_{n}\right)^{2H +d+a}.\end{split} \tag{31}\]
Then, from (30) and (31) we have
\[\begin{split} A^{(3,1)}_{j,n}&=\frac{1}{n^{2}h^{2(d+ 1)}(\delta_{n})^{2H+d+a}}\sum_{1\leq l\neq l^{\prime}\leq n}\chi_{jj}(z_{l},z_ {l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{\prime}}-z_{i})\\ &\times\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})\mathds{1}_{ \|[l_{1}-t_{l^{\prime}}]\leq 3\delta_{n},\|u_{l}-u_{l^{\prime}}\|>3\delta_{n}]}\\ &\leq\frac{2^{2H}\chi_{a,d}\lambda^{2}(S^{d-1})}{n^{2}h^{2(d+1)}} \sum_{1\leq l\neq l^{\prime}\leq n}\chi_{jj}(z_{l},z_{l^{\prime}})K_{h}(z_{l}-z _{i})K_{h}(z_{l^{\prime}}-z_{i})\\ &\leq\frac{2^{2H}\chi_{a,d}\lambda^{2}(S^{d-1})}{h^{2(d+1)}} \int_{\mathbb{R}^{d+1}}\int_{\mathbb{R}^{d+1}}\chi_{jj}(z,z^{\prime})K_{h}(z-z _{i})K_{h}(z^{\prime}-z_{i})d\,zdz^{\prime}\\ &=2^{2H}\chi_{a,d}\lambda^{2}(S^{d-1})\int_{\mathbb{R}^{d+1}} \int_{\mathbb{R}^{d+1}}\chi_{jj}(z_{i}+hz,z_{i}+hz^{\prime})K(z)K(z^{\prime}) dzdz^{\prime}\\ &\approx 2^{2H}\chi_{a,d}\lambda^{2}(S^{d-1})\chi_{jj}(z_{i},z_{i})+2^{2 H}\chi_{a,d}\lambda^{2}(S^{d-1})|h|^{a_{Z}}\\ &\times\int_{\mathbb{R}^{d+1}}\int_{\mathbb{R}^{d+1}}(\|\mathbf{z}\|+ \|\mathbf{z}^{\prime}\|)^{a_{Z}}K(z)K(z^{\prime})dzdz^{\prime}\\ &=2^{2H}\chi_{a,d}\lambda^{2}(S^{d-1})\chi_{jj}(z_{i},z_{i})+ \mathcal{O}(|h|^{a_{Z}}).\end{split} \tag{32}\]
Again, the last inequality comes from the regularity of \(\chi\) from Condition **C1** and notations defined in Remark 3.1. Now, we study the case \(|t_{l}-t_{l^{\prime}}|>3\delta_{n}\) and \(\|u_{l}-u_{l^{\prime}}\|\leq 3\delta_{n}\). From (38) and (40) we bound the covariance \(\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})\) as follows
\[\begin{split}\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})& =\frac{1}{2}\left(|t_{l}-t_{l^{\prime}}+2\delta_{n}|^{2H}+|t_{l}-t_{l^{ \prime}}-2\delta_{n}|^{2H}-2|t_{l}-t_{l^{\prime}}|^{2H}\right)(\delta_{n})^{d+ \alpha}\\ &\quad\times Cov\left(W^{H}\left(\leavevmode\hbox{\small 1\kern-3.8pt \normalsize 1}_{\{\|u-u_{l}/\delta_{n}\|\leq 1\}}\right),W^{H}\left(\leavevmode\hbox{ \small 1\kern-3.8pt\normalsize 1}_{\{\|u-u_{l}/\delta_{n}\|\leq 1\}}\right) \right)\\ &=\frac{1}{2}\left(\int_{t_{l}-\delta_{n}}^{t_{l}+\delta_{n}} \int_{t_{l^{\prime}}-\delta_{n}}^{t_{l^{\prime}}+\delta_{n}}2H(2H-1)|t-t^{ \prime}|^{2H-2}dt^{\prime}dt\right)(\delta_{n})^{d+\alpha}\\ &\quad\times Cov\left(W^{H}\left(\leavevmode\hbox{\small 1\kern-3.8pt \normalsize 1}_{\{\|u-u_{l}/\delta_{n}\|\leq 1\}}\right),W^{H}\left(\leavevmode\hbox{ \small 1\kern-3.8pt\normalsize 1}_{\{\|u-u_{l}/\delta_{n}\|\leq 1\}}\right) \right)\\ &\leq H(2H-1)(\delta_{n})^{2H-2}\left(\int_{t_{l}-\delta_{n}}^{t_{ l}+\delta_{n}}\int_{t_{l^{\prime}}-\delta_{n}}^{t_{l^{\prime}}+\delta_{n}}dt^{ \prime}dt\right)(\delta_{n})^{d+\alpha}\\ &\quad\times Var^{1/2}\left(W^{H}\left(\leavevmode\hbox{\small 1 \kern-3.8pt\normalsize 1}_{\{\|u-u_{l}/\delta_{n}\|\leq 1\}}\right)\right)Var^{1/2} \left(W^{H}\left(\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{\{\|u-u_{l}/\delta_{n}\| \leq 1\}}\right)\right)\\ &\leq 4H(2H-1)\sigma^{2}(\delta_{n})^{2H+d+\alpha}.\end{split} \tag{33}\]
Then, from (30) and (33) we have
\[\begin{split} A_{j,n}^{(3,2)}&=\frac{1}{n^{2}h^{2(d+ 1)}(\delta_{n})^{2H+d+\alpha}}\sum_{1\leq i\neq l^{\prime}\leq n}\chi_{j_{j}}(z_ {l},z_{l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{\prime}}-z_{i})\\ &\times\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})\mathbb{I}_{ \{|t_{l}-t_{l^{\prime}}|>3\delta_{n}|,|u_{l}-u_{l^{\prime}}|\leq 3\delta_{n}\}}\\ &\leq\frac{4H(2H-1)\sigma^{2}}{n^{2}h^{2(d+1)}}\sum_{1\leq j\neq l ^{\prime}\leq n}\chi_{j_{j}}(z_{l},z_{l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{ l^{\prime}}-z_{i})\\ &\leq\frac{2\sigma^{2}2H(2H-1)}{h^{2(d+1)}}\int_{\mathbb{R}^{d+1 }}\int_{\mathbb{R}^{d+1}}\chi_{j_{j}}(z,z^{\prime})K_{h}(z-z_{i})K_{h}(z^{ \prime}-z_{i})dzdz^{\prime}\\ &\approx 4H(2H-1)\sigma^{2}\chi_{j_{j}}(z_{i},z_{i})+\mathcal{O}(|h|^ {\alpha_{j}}).\end{split} \tag{34}\]
Again, the last inequality comes from the regularity of \(\chi\) from Condition **C1** and notations defined in Remark 3.1. For the case \(|t_{l}-t_{l^{\prime}}|>3\delta_{n}\) and \(\|u_{l}-u_{l^{\prime}}\|>3\delta_{n}\), we proceed analogously to the previous cases
\[\begin{split}\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})& =\frac{1}{2}\left(|t_{l}-t_{l^{\prime}}+2\delta_{n}|^{2H}+|t_{l}-t_{l^{ \prime}}-2\delta_{n}|^{2H}-2|t_{l}-t_{l^{\prime}}|^{2H}\right)\\ &\quad\times\left(\int_{V(u_{l})}\int_{V(u_{l})}\gamma_{a,d}\|u-u ^{\prime}\|^{-d+\alpha}dudu^{\prime}\right)\\ &=\frac{1}{2}\left(\int_{t_{l}-\delta_{n}}^{t_{l}+\delta_{n}} \int_{t_{l^{\prime}}-\delta_{n}}^{t_{l^{\prime}}+\delta_{n}}2H(2H-1)|t-t^{ \prime}|^{2H-2}dt^{\prime}dt\right)\\ &\quad\times\left(\int_{V(u_{l})}\int_{V(u_{l^{\prime}})}\gamma_{a, d}\|u-u^{\prime}\|^{-d+\alpha}dudu^{\prime}\right)\\ &=4H(2H-1)(\delta_{n})^{2H}\gamma_{a,d}\left(\delta_{n}\right)^{d+ \alpha}\lambda^{2}(S^{d-1})\\ &\leq 4H(2H-1)\gamma_{a,d}\lambda^{2}(S^{d-1})(\delta_{n})^{2H+d+ \alpha}.\end{split} \tag{35}\]
Thus, from (30) and (35)
\[\begin{split} A_{j,n}^{(3,3)}&=\frac{1}{n^{2}h^{2(d+1)} (\delta_{n})^{2H+d+\alpha}}\sum_{1\leq i\neq j\leq n}\chi_{jj}(z_{i},z_{i^{ \prime}})K_{h}(z_{i}-z_{i})K_{h}(z_{i^{\prime}}-z_{i})\\ &\times\mathbb{E}(\epsilon_{i}\epsilon_{i^{\prime}})\mathbb{I}_{ \{|l_{l}-l_{l^{\prime}}|>3\delta_{n}|u_{l^{\prime}}|>3\delta_{n}\}}\\ &\leq\frac{4H(2H-1)\gamma_{a,d}\lambda^{2}(S^{d-1})}{n^{2}h^{2(d +1)}}\sum_{1\leq i\neq l^{\prime}\leq n}\chi_{jj}(z_{i},z_{i^{\prime}})K_{h}(z_ {i}-z_{i})K_{h}(z_{i^{\prime}}-z_{i})\\ &\underset{h\to 0}{\leq}\frac{4H(2H-1)\gamma_{a,d}\lambda^{2}( S^{d-1})}{h^{2(d+1)}}\int_{\mathbb{R}^{d+1}}\int_{\mathbb{R}^{d+1}}\chi_{jj}(z,z ^{\prime})K_{h}(z-z_{i})K_{h}(z^{\prime}-z_{i})dzdz^{\prime}\\ &\underset{h\to 0}{\approx}4H(2H-1)\gamma_{a,d}\lambda^{2}( S^{d-1})\chi_{jj}(z_{i},z_{i})+\mathcal{O}(|h|^{\alpha_{\ell}}).\end{split} \tag{36}\]
Then, from (30), (32), (34) and (36) we have
\[\begin{split}&\frac{1}{n^{2}h^{2(d+1)}(\delta_{n})^{2H+d+\alpha}} A_{j,n}^{(3)}(z_{i})\\ &\underset{h\to\infty}{\leq}\\ &+\left(4H(2H-1)\gamma_{a,d}\lambda^{2}(S^{d-1})\right)\chi_{jj} (z_{i},z_{i})+\mathcal{O}\left(|h|^{\alpha_{\ell}}\right)\\ &\underset{h\to 0}{\approx}C_{3}(H)+\mathcal{O}\left(|h|^{ \alpha_{\ell}}\right),\end{split} \tag{37}\]
where \(C_{3}(H)=2^{2H}Y_{a,d}\lambda^{2}(S^{d-1})+4H(2H-1)\sigma^{2}+4H(2H-1)\gamma _{a,d}\lambda^{2}(S^{d-1})\). Substituting (19), (29) and (37) into the equation (18), and using that \(2\lambda(S^{d-1})(\delta_{n})^{d+1}=1/n\) we obtain
\[\begin{split}\frac{1}{n^{2}h^{2(d+1)}}\mathbb{E}\left(\left(X^{T }\mathcal{W}(z_{i})\epsilon\right)_{j}^{2}\right)&\leq\ \ \frac{\left(C_{1}(H)+C_{2}(H)\right)(\delta_{n})^{2H+d+\alpha}}{nh^{d+1}}+C_{3}( H)(\delta_{n})^{2H+d+\alpha}\\ &\approx\ \ \frac{C(H)}{n^{1+\nu^{\prime}}},\end{split}\]
where \(\nu^{\prime}=\frac{2H+\alpha-1}{d+1}>0\) if \(2H+\alpha-1>0\), and we should also consider \(\gamma<1+\frac{\theta}{d+1}\) to obtain \(nh^{d+1}\to\infty\). Thus, the convergence in \(L^{2}\), and therefore in probability, is ensured for \(2H+d+\alpha>0\). For \(2H+\alpha-1>0\), the \(L^{2}\) rate of \(\frac{1}{nh^{d+1}}\left(X^{T}\mathcal{W}(z_{i})\epsilon\right)_{j}\) is faster than \(1/n\); for instance, when \(H>\frac{1}{2}\) and \(\alpha\geq 0\). A direct application of the Borel-Cantelli lemma allows us to obtain
\[\frac{1}{nh^{d+1}}\left(X^{T}\mathcal{W}(z_{i})\epsilon\right)_{j}\xrightarrow{ a.s.}0.\]
By Slutsky's Theorem and Lemma 3.1, the convergence \(|\hat{\beta}(z_{i})-\beta(z_{i})|\to 0\) holds:
* In probability, for \(2H+d+\alpha>0\) and \(d+1+\theta>0\). In particular for \(H>0\), \(\alpha>-d\), and \(\theta>-(d+1)\).
* Almost surely, for \(2H+\alpha-1>0\), \(\theta>0\) and \(\frac{\theta}{d+1}<\gamma<1+\frac{\theta}{d+1}\). In particular, condition \(2H+\alpha-1>0\) holds for \(H>1/2\) and \(\alpha\geq 0\).
## 4 Simulation study
This section reviews the theoretical results presented in the previous sections; this part of the work was performed using the software R. To represent the spatial locations, we considered a grid defined on \([0,1]^{2}\subset\mathbb{R}^{2}\); as points we took the center of each pixel, i.e., the ordered pairs \((0.05,0.05),\ldots,(0.95,0.95)\). A graphical representation of the locations can be seen in the following figure.
We will start by representing the Colored noise in space and time, which represents the noise of our model. Then the four different models studied will be presented, along with the estimation of the response surface to analyze the residuals for the different models considered.
### Colored noise in space and time
For the noise, the following values of \(H\) were considered: \(H_{s}=0.40\) for space, and \(H_{t}=0.65\) and \(H_{t}=0.90\) for time. A representation for different time instants, \(t_{1}\), \(t_{50}\) and \(t_{100}\), is shown in the figure below.
It is possible to appreciate in Figure 2 that for values of \(H_{t}\) close to 1 (2d, 2e and 2f) a lower roughness is observed on the simulated surface, in contrast to the case when the value of \(H_{t}\) is close to 0.5 (2a, 2b and 2c). This is a common behavior of fractional Brownian motion.
### Model
We simulate four different versions of the model presented in (3), where we consider that the intercept \(\beta_{0}\) is zero, and a covariate represented by a Spatio-Temporal Auto Regressive Moving Average (STARMA) process sampled at the sites defined in Figure 1. To represent the covariates, we have decided to use STARMA models since they have attracted great interest due to their flexibility to represent the relationship between observation sites and their neighbors; some of the research areas where the relevance of these models can be appreciated are renewable energies [4, 25], environmental data [5], disease mapping [12], and regional studies [16], among others (for a detailed review of STARMA we recommend [15], and to simulate this process we recommend the R package STARMA [23]). It is important to note that in the model (3), the values of the coefficients \(\beta\) accompanying the covariate also depend on the location of the points.
Figure 1: Spatial location.
Figure 2: Colored noise, in space with \(H_{s}=0.40\) and time with \(H_{t}=0.65\) (a, b, c) and \(H_{t}=0.90\) (d, e, f) in three different instants of time.
As examples of different situations, we consider the values of \(\beta\) presented in the work of [24]. The models considered are the following:
\[\text{Model 1: }Y_{i} =\beta_{1}(z_{i})X(z_{i})+\epsilon_{i}^{H_{s}=0.40,H_{t}=0.65},\quad i =1,\ldots,n\] \[\text{Model 2: }Y_{i} =\beta_{1}(z_{i})X(z_{i})+\epsilon_{i}^{H_{s}=0.40,H_{t}=0.90}, \quad i=1,\ldots,n\] \[\text{Model 3: }Y_{i} =\beta_{2}(z_{i})X(z_{i})+\epsilon_{i}^{H_{s}=0.40,H_{t}=0.65}, \quad i=1,\ldots,n\] \[\text{Model 4: }Y_{i} =\beta_{2}(z_{i})X(z_{i})+\epsilon_{i}^{H_{s}=0.40,H_{t}=0.90}, \quad i=1,\ldots,n\]
where \(\beta_{1}=1+(4(x+y)/12)\) represents a plane with a slight inclination and \(\beta_{2}=1+(36-(6-(25x)/2)^{2})(36-(6-(25y)/2)^{2})/(324\cdot 8)\) a curved surface. \(X(z_{i})\) corresponds to a Spatio-Temporal Auto Regressive model of order \((1,1)\). \(\epsilon_{i}^{H_{s}=0.40,H_{t}=0.65}\) and \(\epsilon_{i}^{H_{s}=0.40,H_{t}=0.90}\) correspond to the two different noises considered. In Figures 3 and 4 we present three different time instants, \(t_{1}\), \(t_{50}\) and \(t_{100}\), for the four models considered.
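Continuing the sketch above, the coefficient surfaces \(\beta_{1}\) and \(\beta_{2}\) and the simulated responses can be assembled as follows. The AR(1) covariate is only a hypothetical stand-in for the STAR(1,1) process (which the authors simulate with the R package STARMA), and the white-noise `eps` placeholder keeps the block runnable on its own; in the paper it is the fractional colored noise.

```python
import numpy as np

# Pixel-centre grid on [0,1]^2 (Figure 1) and the two coefficient surfaces of [24].
xs = np.arange(0.05, 1.0, 0.1)
xx, yy = np.meshgrid(xs, xs)
sites = np.column_stack([xx.ravel(), yy.ravel()])   # 100 spatial sites
x, y = sites[:, 0], sites[:, 1]

beta1 = 1 + 4 * (x + y) / 12                        # tilted plane
beta2 = 1 + (36 - (6 - 25 * x / 2) ** 2) * (36 - (6 - 25 * y / 2) ** 2) / (324 * 8)  # curved surface

n_t = 100
rng = np.random.default_rng(1)

# Hypothetical stand-in for the STAR(1,1) covariate: a plain AR(1) in time with
# independent innovations per site (illustrative only).
X = np.zeros((n_t, len(sites)))
for t in range(1, n_t):
    X[t] = 0.5 * X[t - 1] + rng.normal(size=len(sites))

# White-noise placeholder for the fractional colored noise, so the block runs alone.
eps = 0.1 * rng.normal(size=(n_t, len(sites)))
Y1 = beta1 * X + eps                                # Models 1 and 2 (beta_1, two noises)
Y3 = beta2 * X + eps                                # Models 3 and 4 (beta_2, two noises)
```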
Figures 3a, 3b, 3c, 4a, 4b and 4c, present different scenarios where a bigger variability, in time, is considered, this is a consequence of \(H_{t}=0.65\). Meanwhile, figures 3d, 3e, 3f, 4d, 4e and 4f a decrease in variance is seen over time. On the other hand, regarding the spatial heterogeneity of the parameters, similar to the work of [8], in the models 3a, 3b, 3c, 3d, 3e, and 3f medium spatial heterogeneity is observed; in contrast to a high spatial heterogeneity for the models 4a, 4b and 4c, 4d, 4e, and 4f. These are the models that will be considered to estimate the parameter \(\beta_{1}(z_{i})\) of the model (3).
### Estimator performance
The estimation result, \(\hat{Y}_{i}=\hat{\beta}_{1}(z_{i})X(z_{i})\), in conjunction with the model defined by (3), is presented below for the four different models simulated at \(t_{1},t_{50}\) and \(t_{100}\).
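For reference, the local estimate \(\hat{\beta}(z_{i})=(X^{T}\mathcal{W}(z_{i})X)^{-1}X^{T}\mathcal{W}(z_{i})Y\) used above can be sketched as below; the Gaussian product kernel and all toy inputs are illustrative assumptions, and any kernel satisfying (K1) could be plugged in.

```python
import numpy as np

def gtwr_beta_hat(X, Y, Z, z0, h):
    """beta_hat(z0) = (X^T W(z0) X)^{-1} X^T W(z0) Y with W(z0) = diag(K_h(z_l - z0))."""
    w = np.exp(-0.5 * np.sum(((Z - z0) / h) ** 2, axis=1))   # illustrative Gaussian kernel weights
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ Y)

# Toy usage: n observations at space-time points Z = (x, y, t), one covariate,
# and Y generated from the beta_1 surface of the previous sketch.
rng = np.random.default_rng(2)
n = 1000
Z = rng.uniform(size=(n, 3))
beta_true = 1 + 4 * (Z[:, 0] + Z[:, 1]) / 12
Xc = rng.normal(size=(n, 1))
Y = beta_true * Xc[:, 0] + 0.05 * rng.normal(size=n)

print(gtwr_beta_hat(Xc, Y, Z, z0=np.array([0.5, 0.5, 0.5]), h=0.25))  # close to 1.33
```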
Figure 4: Model 3 and 4, in space with \(H_{s}=0.40\) and time with \(H_{t}=0.65\) (a, b, c) and \(H_{t}=0.90\) (d, e, f) in three different instants of time.
Figure 5: Model 1 and 2, in space with \(H_{s}=0.40\) and time with \(H_{t}=0.65\) (a, b, c) and \(H_{t}=0.90\) (d, e, f) in three different instants of time.
In the figures above, the points represent \(Y_{i}(z_{i})\), \(i=1,\ldots,100\), while the surfaces represent \(\hat{Y}_{i}(z_{i})\), \(i=1,\ldots,100\). It is important to note that the estimation is slightly different for each time instant; in our example we consider \(t_{1},\ldots,t_{100}\) (for more details see Remark 4.1). For all the models considered, it is possible to appreciate a close agreement between what was simulated and the estimation performed. To verify the performance of the proposed estimator, Table 1 presents indices that quantify the goodness of fit.
Considering that \(\beta\) is estimated for each time instant, we can see that the minima, the respective quartiles, and the maxima are very similar for model 1 and model 2, and likewise for model 3 and model 4. As for the adjusted \(R^{2}\), an index of the goodness of fit that indicates the amount of variability explained by the explanatory variable, it increases in models 2 and 4 with respect to models 1 and 3, respectively (see Table 1). These results are a consequence of the lower variability of the noise in models 2 and 4.
#### Quadratic Mean Error - QME
The QME curves were built by iterations in which the observations were accumulated according to time: in the first iteration the parameter was estimated with 100 observations (the first regular grid, or \(t_{1}\)); the second iteration considered 200 points (\(t_{1}\) and \(t_{2}\)), and so on until the 10000 observations were reached (100 observations for each of the 100 times considered). A schematic version of this computation is sketched below.
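Continuing from the model sketch above (which defines `sites`, `X`, `Y1`, `beta1` and `n_t`), the QME at each step is the mean squared difference, over the 100 sites, between the local estimates obtained with the data accumulated up to \(t_{T}\) and the true surface; the kernel, bandwidth and time scaling are illustrative assumptions.

```python
import numpy as np

def beta_hat(Xc, Y, Z, z0, h):
    # same local weighted least squares as in the previous sketch
    w = np.exp(-0.5 * np.sum(((Z - z0) / h) ** 2, axis=1))
    XtW = Xc.T * w
    return np.linalg.solve(XtW @ Xc, XtW @ Y)

qme = []
for T in range(1, n_t + 1):                          # accumulate t_1, ..., t_T
    Z = np.column_stack([np.tile(sites, (T, 1)),
                         np.repeat(np.arange(1, T + 1) / n_t, len(sites))])
    Xc = X[:T].reshape(-1, 1)
    Yc = Y1[:T].ravel()
    est = np.array([beta_hat(Xc, Yc, Z, np.r_[s, T / n_t], h=0.3)[0] for s in sites])
    qme.append(np.mean((est - beta1) ** 2))          # one QME value per accumulated time
```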
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{6}{|c|}{Model 1} \\ \hline Minimum & \(Q_{1}\) & Median & \(Q_{3}\) & Maximum & Adjusted \(R^{2}\) \\ \hline
1.0667 & 1.2388 & 1.3331 & 1.4284 & 1.5995 & 0.9419423 \\ \hline \multicolumn{6}{|c|}{Model 2} \\ \hline Minimum & \(Q_{1}\) & Median & \(Q_{3}\) & Maximum & Adjusted \(R^{2}\) \\ \hline
1.0683 & 1.2383 & 1.3330 & 1.4291 & 1.5986 & 0.9877432 \\ \hline \multicolumn{6}{|c|}{Model 3} \\ \hline Minimum & \(Q_{1}\) & Median & \(Q_{3}\) & Maximum & Adjusted \(R^{2}\) \\ \hline
1.0234 & 1.1055 & 1.1872 & 1.3032 & 1.4518 & 0.8695787 \\ \hline \multicolumn{6}{|c|}{Model 4} \\ \hline Minimum & \(Q_{1}\) & Median & \(Q_{3}\) & Maximum & Adjusted \(R^{2}\) \\ \hline
1.0244 & 1.1084 & 1.1882 & 1.3043 & 1.4534 & 0.9182242 \\ \hline \end{tabular}
\end{table}
Table 1: Minimum, first quartile, median, third quartile, maximum, and adjusted \(R^{2}\) of the estimates for each model
Figure 6: Model 3 and 4, in space with \(H_{s}=0.40\) and time with \(H_{t}=0.65\) (a, b, c) and \(H_{t}=0.90\) (d, e, f) in three different instants of time
The above graphs show the variation of the QME as a function of the number of observations accumulated over time. The first thing to note is that the range of the QME is quite small, indicating that, on average, the quadratic difference between the estimated parameter and the true parameter is very small. The second is that the behavior is quite similar for models 1, 2, and 3, where around \(t_{60}\) the QME starts to stabilize. In model 4, on the other hand, the QME values are larger around \(t_{5}\) and then decrease as the number of observations increases.
**Remark 4.1**.: _A GIF file of \(t_{1},t_{2},\ldots,t_{100}\) for each figure presented can be found at the following link: [https://github.com/TaniaAoaRojas/GTWR-Simulations](https://github.com/TaniaAoaRojas/GTWR-Simulations)_
## Appendix A Appendix
### Proof of Lemma 2.1
Proof.: From equations (5) and (8) we have that the covariance function of \(\epsilon=(\epsilon_{l})_{l=1:n}\) is
\[\begin{split}\mathbb{E}(\epsilon_{l}\epsilon_{l^{\prime}})&=\mathbb{E}\left(W^{H}(V(z_{l}))W^{H}(V(z_{l^{\prime}}))\right)\\ &=\mathbb{E}\left(\left(W^{H}_{t_{l}^{+}}(V(u_{l}))-W^{H}_{t_{l}^{-}}(V(u_{l}))\right)\left(W^{H}_{t_{l^{\prime}}^{+}}(V(u_{l^{\prime}}))-W^{H}_{t_{l^{\prime}}^{-}}(V(u_{l^{\prime}}))\right)\right)\\ &=\frac{1}{2}\left[\left|t_{l}-t_{l^{\prime}}+2\delta_{n}\right|^{2H}+\left|t_{l}-t_{l^{\prime}}-2\delta_{n}\right|^{2H}-2\left|t_{l}-t_{l^{\prime}}\right|^{2H}\right]\\ &\quad\times\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\openone_{V(u_{l})}(u)f(u-v)\openone_{V(u_{l^{\prime}})}(v)dudv\\ &=\frac{1}{2}\left(\int_{t_{l}^{-}}^{t_{l}^{+}}\int_{t_{l^{\prime}}^{-}}^{t_{l^{\prime}}^{+}}2H(2H-1)|t-t^{\prime}|^{2H-2}dt^{\prime}dt\right)\\ &\quad\times\left(\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\openone_{V(u_{l})}(u)\gamma_{\alpha,d}\left\|u-v\right\|^{-d+\alpha}\openone_{V(u_{l^{\prime}})}(v)dudv\right)\end{split} \tag{38}\]
Figure 7: Quadratic Mean Error for each model simulated.
From (4) we have that the spatial covariance function can be rewritten as
\[\begin{split}&\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}1\!\!\!1_{V(u_{i})}(u)f(u-v)1\!\!\!1_{V(u_{i^{\prime}})}(v)dudv\\ &=\int_{\mathbb{R}^{d}}F1\!\!\!1_{V(u_{i})}(\xi)\overline{F1\!\!\!1_{V(u_{i^{\prime}})}(\xi)}\mu(d\xi)\\ &=\int_{\mathbb{R}^{d}}\left(\int_{\left\|u-u_{i}\right\|\leq\delta_{n}}e^{-i\xi\cdot u}du\right)\left(\int_{\left\|v-u_{i^{\prime}}\right\|\leq\delta_{n}}e^{i\xi\cdot v}dv\right)\mu(d\xi)\\ &=\int_{\mathbb{R}^{d}}\left((\delta_{n})^{d}\int_{\left\|u\right\|\leq 1}e^{-i\xi\cdot(u_{i}+\delta_{n}u)}du\right)\left((\delta_{n})^{d}\int_{\left\|v\right\|\leq 1}e^{i\xi\cdot(u_{i^{\prime}}+\delta_{n}v)}dv\right)\mu(d\xi)\\ &=(\delta_{n})^{2d}\int_{\mathbb{R}^{d}}e^{-i\xi\cdot(u_{i}-u_{i^{\prime}})}\left(\int_{\left\|u\right\|\leq 1}e^{-i\delta_{n}\xi\cdot u}du\right)\left(\int_{\left\|v\right\|\leq 1}e^{i\delta_{n}\xi\cdot v}dv\right)\mu(d\xi)\\ &=(\delta_{n})^{2d}\int_{\mathbb{R}^{d}}e^{-i\xi\cdot(u_{i}-u_{i^{\prime}})/\delta_{n}}\left(\int_{\left\|u\right\|\leq 1}e^{-i\xi\cdot u}du\right)\left(\int_{\left\|v\right\|\leq 1}e^{i\xi\cdot v}dv\right)\left\|\frac{\xi}{\delta_{n}}\right\|^{-\alpha}(\delta_{n})^{-d}d\xi\\ &=(\delta_{n})^{d+\alpha}\int_{\mathbb{R}^{d}}e^{-i\xi\cdot(u_{i}-u_{i^{\prime}})/\delta_{n}}\left(\int_{\left\|u\right\|\leq 1}e^{-i\xi\cdot u}du\right)\left(\int_{\left\|v\right\|\leq 1}e^{i\xi\cdot v}dv\right)\left\|\xi\right\|^{-\alpha}d\xi\\ &=(\delta_{n})^{d+\alpha}\int_{\mathbb{R}^{d}}\left(\int_{\left\|u-u_{i}/\delta_{n}\right\|\leq 1}e^{-i\xi\cdot u}du\right)\left(\int_{\left\|v-u_{i^{\prime}}/\delta_{n}\right\|\leq 1}e^{i\xi\cdot v}dv\right)\mu(d\xi)\\ &=(\delta_{n})^{d+\alpha}Cov\left(W^{H}\left(1\!\!\!1_{\left\|u-u_{i}/\delta_{n}\right\|\leq 1}\right),W^{H}\left(1\!\!\!1_{\left\|u-u_{i^{\prime}}/\delta_{n}\right\|\leq 1}\right)\right)\end{split} \tag{40}\]
In particular, for \(l=l^{\prime}\) we obtain the spatial variance
\[\begin{split}&\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}1\!\! \!1_{V(u_{i})}(u)f(u-v)1\!\!\!1_{V(u_{i})}(v)dudv\\ &=(\delta_{n})^{d+\alpha}\int_{\mathbb{R}^{d}}\left(\int_{\left\| u\right\|\leq 1}e^{-i\xi u}du\right)\left(\int_{\left\|v\right\|\leq 1}e^{i\xi v }dv\right)\left\|\xi\right\|^{-\alpha}d\xi\\ &=(\delta_{n})^{d+\alpha}\int_{\mathbb{R}^{d}}\left(\int_{\left\| u\right\|\leq 1}e^{-i\xi u}du\right)\left(\int_{\left\|v\right\|\leq 1}e^{i\xi v }dv\right)\mu(d\xi)\\ &=(\delta_{n})^{d+\alpha}Var\left(W^{H}\left(1\!\!\!1_{\left\|u \right\|\leq 1}\right)\right)\\ &=\sigma^{2}(\delta_{n})^{d+\alpha},\end{split} \tag{41}\]
where \(\sigma^{2}=Var\left(W^{H}\left(1\!\!\!1_{\left\|u\right\|\leq 1}\right)\right)\). Thus, the variance of the fractional colored noise is
\[\mathbb{E}(\epsilon_{l}^{2})=\sigma^{2}2^{2H}(\delta_{n})^{2H+d+\alpha}. \tag{42}\]
### Proof of Lemma 3.1
Proof.: The \(jk_{th}\) component of the matrix \(X^{T}\mathcal{W}(z_{i})X\) is
\[\left(X^{T}\mathcal{W}(z_{i})X\right)_{jk}=\sum_{l=1}^{n}X_{lj}X_{lk}\mathcal{ W}_{il}. \tag{43}\]
We study the asymptotic expectation of (43), from assumption **C1** we obtain
\[\frac{1}{nh^{d+1}}\mathbb{E}\left(\left(X^{T}\mathcal{W}(z_{i})X \right)_{jk}\right)\] \[=\frac{1}{nh^{d+1}}\sum_{l=1}^{n}\ \chi_{jk}(z_{l},z_{l})K_{h}\left(z_{l}-z_{i}\right)\] \[=\frac{1}{h^{d+1}}\int_{\mathbb{R}^{d+1}}\sum_{l=1}^{n}\chi_{jk}(z_{l},z_{l})K_{h}\left(z_{l}-z_{i}\right)1\hskip-2.845276pt\mathrm{I}_{V(z_{l})}(z)dz\] \[\approx\ \frac{1}{h^{d+1}}\int_{\mathbb{R}^{d+1}}\chi_{jk}(z,z)K_{h}\left(z-z_{i}\right)dz\] \[=\int_{\mathbb{R}^{d+1}}\chi_{jk}(z_{i}+hz,z_{i}+hz)K\left(z\right)dz\] \[\approx\ \chi_{jk}(z_{i},z_{i})+\mathcal{O}\left(|h|^{\alpha_{\chi}}\right).\]
**Remark A.1**.: _Note that condition_ (**C2**) _implies that the covariance matrix \(\chi(z_{i},z_{i})=\left(\chi_{jk}(z_{i},z_{i})\right)_{j,k=1:n}\) is an invertible matrix._
Continuing, we calculate the variance of (43).
\[Var\left(\left(X^{T}\mathcal{W}(z_{i})X\right)_{jk}\right) = \sum_{l,l^{\prime}=1}^{n}\Gamma_{jk}(z_{l},z_{l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{\prime}}-z_{i}) \tag{44}\] \[= \sum_{l=1}^{n}\Gamma_{jk}(z_{l},z_{l})\mathcal{W}_{il}^{2}\] \[+ \sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|\leq k\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l^{\prime}})\mathcal{W}_{il}\mathcal{W}_{il^{\prime}}\] \[+ \sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{l}-z_{l^{\prime}}\|>k\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l^{\prime}})\mathcal{W}_{il}\mathcal{W}_{il^{\prime}}\] \[:= D^{(1)}_{jk,n}(z_{i})+D^{(2)}_{jk,n}(z_{i})+D^{(3)}_{jk,n}(z_{i}),\]
First, we study the term \(D^{(1)}_{jk,n}(z_{i})\) in (44). Let us consider the case \(0<\|z_{l}-z_{i}\|<\delta_{n}\)
\[\frac{1}{n^{2}h^{2(d+1)}}D^{(1,1)}_{jk,n}(z_{i})=\frac{1}{n^{2}h^{2(d+1)}}\sum_{\begin{subarray}{c}1\leq l\leq n\\ \|z_{l}-z_{i}\|<\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l})K_{h}^{2}\left(z_{l}-z_{i}\right),\]
Using **C2** (iv) and Remark 2.2, we can obtain
\[\frac{1}{n^{2}h^{2(d+1)}}D^{(1,1)}_{jk,n}(z_{i}) \leq 2^{(d+1)}\frac{C_{k,d}}{n^{2}h^{2(d+1)}}\int_{\mathbb{R}^{d+1}}K _{h}^{2}\left(z-z_{i}\right)dz \tag{45}\] \[= 2^{(d+1)}\frac{C_{k,d}}{n^{2}h^{(d+1)}}\|K\|_{2}^{2}\]
Now, we consider the case \(\|z_{l}-z_{i}\|\geq\delta_{n}\). Using Assumption **K1** (iv), we obtain
\[\frac{1}{n^{2}h^{2(d+1)}}D_{jk,n}^{(1,2)}(z_{i}) =\frac{1}{n^{2}h^{2(d+1)}}\sum_{\begin{subarray}{c}1\leq l\leq n\\ \|z_{l}-z_{i}\|\geq\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l})K_{h}^{2}\left(z_{l}-z_{i}\right)\] \[=\frac{1}{n^{2}h^{2(d+1)}}\sum_{\begin{subarray}{c}1\leq l\leq n\\ \|z_{l}-z_{i}\|\geq\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l})f_{K}\left(\|z_{l}-z_{i}\|\right)K_{h}\left(z_{l}-z_{i}\right)\] \[\leq\frac{1}{n^{2}h^{2(d+1)}}\frac{L(n)}{n^{\gamma}}\sum_{\begin{subarray}{c}1\leq l\leq n\\ \|z_{l}-z_{i}\|\geq\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l})K_{h}\left(z_{l}-z_{i}\right)\] \[\leq\frac{L(n)}{n^{1+\gamma}}\frac{1}{h^{2(d+1)}}\int_{\mathbb{R}^{d+1}}\sum_{l=1}^{n}\left|\Gamma_{jk}(z_{l},z_{l})\right|K_{h}\left(z_{l}-z_{i}\right)\,\openone_{V(z_{l})}(z)dz\] \[\approx\frac{L(n)}{n^{1+\gamma}}\frac{1}{h^{2(d+1)}}\int_{\mathbb{R}^{d+1}}\left|\Gamma_{jk}(z,z)\right|K_{h}\left(z-z_{i}\right)dz\] \[\leq\frac{L(n)}{n^{1+\gamma}}\frac{C_{k,d}}{h^{(d+1)}}\int_{\mathbb{R}^{d+1}}K\left(z\right)dz=\frac{L(n)}{n^{1+\gamma}}\frac{C_{k,d}}{h^{(d+1)}}, \tag{46}\]
where in the last inequality we use **C2** (iv). Consequently, by (45) and (46), we can get
\[\frac{1}{n^{2}h^{2(d+1)}}D_{jk,n}^{(1)}(z_{i})\leq C\frac{L(n)}{n^{1+\gamma}}\frac{1}{h^{(d+1)}}=\mathcal{O}\left(\frac{L(n)}{n^{1+\gamma}}\frac{1}{h^{(d+1)}}\right) \tag{47}\]
Secondly, we consider the term \(D_{jk,n}^{(2)}(z_{i})\) in (44), i.e. when \(0<\|z_{l}-z_{l^{\prime}}\|\leq k\delta_{n}\)
\[\frac{1}{nh^{(d+1)}}D_{jk,n}^{(2)}(z_{i})\] \[=\frac{1}{nh^{(d+1)}}\sum_{\begin{subarray}{c}1\leq l\neq l^{ \prime}\leq n\\ \|z_{i}-z_{l^{\prime}}\|\leq k\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{ l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{\prime}}-z_{i})\] \[=\frac{1}{nh^{(d+1)}}\left[\sum_{\begin{subarray}{c}1\leq l\neq l ^{\prime}\leq n\\ \|z_{i}-z_{l^{\prime}}\|\leq k\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{ l})K_{h}(z_{l}-z_{i})K_{h}(z_{l}-z_{i})\right.\] \[+\left.\sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{i}-z_{l^{\prime}}\|\leq k\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{ l})K_{h}(z_{l}-z_{i})\left(K_{h}(z_{l^{\prime}}-z_{i})-K_{h}(z_{l}-z_{l})\right)\right. \tag{48}\] \[+\left.\sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{i}-z_{l^{\prime}}\|\leq k\delta_{n}\end{subarray}}\left(\Gamma_{jk}(z_{l}, z_{l^{\prime}})-\Gamma_{jk}(z_{l},z_{l})\right)K_{h}(z_{l}-z_{i})K_{h}(z_{l}-z_{i})\right.\] \[+\left.\sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{i}-z_{l^{\prime}}\|\leq k\delta_{n}\end{subarray}}\left(\Gamma_{jk}(z_{l}, z_{l^{\prime}})-\Gamma_{jk}(z_{l},z_{l})\right)K_{h}(z_{l}-z_{i})\left(K_{h}(z_{l^{ \prime}}-z_{i})-K_{h}(z_{l}-z_{i})\right)\right]\]
From regularity condition (**C2**) and (**K1**)
\[\begin{split}&\frac{1}{nh^{(d+1)}}D^{(2)}_{jk,n}(z_{l})\\ &\leq\frac{1}{nh^{(d+1)}}\left[\sum_{\begin{subarray}{c}1\leq j\neq l ^{\prime}\leq n\\ \|z_{j}-z_{j^{\prime}}\|\leq k\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l} )K^{2}_{h}(z_{l}-z_{i})\right.\\ &+C_{K}\sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n\\ \|z_{j}-z_{j^{\prime}}\|\leq k\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{ l})K_{h}(z_{l}-z_{i})\left\|z_{l}-z_{i^{\prime}}\right\|^{\alpha_{K}}\\ &+C_{\Gamma}\sum_{\begin{subarray}{c}1\leq l\neq l^{\prime}\leq n \\ \|z_{l}-z_{i^{\prime}}\|\leq k\delta_{n}\end{subarray}}\left\|z_{l}-z_{i^{ \prime}}\right\|^{\alpha_{l}}K^{2}_{h}(z_{l}-z_{i}) \tag{49}\]
\[\begin{split}&+C_{k}C_{\Gamma}\sum_{\begin{subarray}{c}1\leq j \neq l^{\prime}\leq n\\ \|z_{l}-z_{i^{\prime}}\|\leq k\delta_{n}\end{subarray}}\left\|z_{l}-z_{i^{ \prime}}\right\|^{\alpha_{l}}K_{h}(z_{l}-z_{i})\left\|z_{l}-z_{i^{\prime}} \right\|^{\alpha_{K}}\\ &=D^{(2,1)}_{jk,n}+D^{(2,2)}_{jk,n}+D^{(2,3)}_{jk,n}+D^{(2,4)}_{jk, n}.\end{split}\]
Note that, similarly to (24) we have \(\frac{1}{n}\sum_{i^{\prime}=1}^{n}\leavevmode\hbox{\small 1\kern-3.8pt \normalsize 1}_{[0<\|z_{l}-z_{i^{\prime}}\|\leq k\delta_{n}]}\underset{n\to \infty}{\approx}\frac{k^{d+1}}{n}\). Therefore, by (49), we can get
\[\begin{split}& D^{(2,1)}_{jk,n}=\frac{1}{nh^{(d+1)}}\sum_{l=1}^{n} \Gamma_{jk}(z_{l},z_{l})K^{2}_{h}(z_{l}-z_{i})\sum_{l^{\prime}=1}^{n} \leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{[0<\|z_{l}-z_{i^{ \prime}}\|\leq k\delta_{n}]}\\ &\approx\frac{k^{d+1}}{nh^{(d+1)}}\sum_{l=1}^{n}\Gamma_{jk}(z_{l},z_{l})K^{2}_{h}(z_{l}-z_{i})\end{split} \tag{50}\]
We split the sum into the two cases \(0<\|z_{l}-z_{i}\|<\delta_{n}\) and \(\|z_{l}-z_{i}\|\geq\delta_{n}\); then the same arguments as for the term \(D^{(1)}_{jk,n}\) allow us to obtain
\[\begin{split}\frac{1}{nh^{(d+1)}}D^{(2,1)}_{jk,n}& \leq\frac{k^{d+1}}{n^{2}h^{2(d+1)}}\sum_{l=1}^{n}\Gamma_{jk}(z_{l },z_{l})K^{2}_{h}(z_{l}-z_{i})\\ &\leq\frac{k^{d+1}}{n^{2}h^{2(d+1)}}\sum_{\begin{subarray}{c}1 \leq l\leq n\\ \|z_{l}-z_{i}\|<\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l})K^{2}_{h} \left(z_{l}-z_{i}\right)\\ &+\frac{k^{d+1}}{n^{2}h^{2(d+1)}}\frac{L(n)}{n^{\prime}}\sum_{ \begin{subarray}{c}1\leq l\leq n\\ \|z_{l}-z_{i}\|\geq\delta_{n}\end{subarray}}\Gamma_{jk}(z_{l},z_{l})K_{h} \left(z_{l}-z_{i}\right)\\ &\leq 2^{(d+1)}\frac{C_{k,d}}{n^{2}h^{2(d+1)}}\int_{\mathbb{R}^{d+1} }K^{2}_{h}\left(z-z_{i}\right)dz\\ &+\frac{L(n)}{n^{1+\gamma}}\frac{k^{d+1}}{h^{2(d+1)}}\int_{\mathbb{ R}^{d+1}}\sum_{l=1}^{n}\left|\Gamma_{jk}(z_{l},z_{l})\right|K_{h}\left(z_{l}-z_{i} \right)\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}_{V(z_{l})}(z)dz\\ &\leq\leavevmode\nobreak\ 2^{(d+1)}\frac{C_{k,d}}{n^{2}h^{(d+1)}}\|K \|^{2}_{2}+C_{k,d}\frac{L(n)}{n^{1+\gamma}}\frac{k^{d+1}}{h^{2(d+1)}}\int_{ \mathbb{R}^{d+1}}K_{h}\left(z-z_{i}\right)dz\\ &\leq C\frac{L(n)}{n^{1+\gamma}}\frac{k^{d+1}}{h^{(d+1)}}\end{split}\]
Continuing, for \(D^{(2,2)}_{jk,n}\) we proceed similarly to the previous terms:
\[\begin{split}\frac{1}{(\delta_{n})^{\alpha_{\Gamma}}}D^{(2,2)}_{jk,n}& =\frac{C_{\Gamma}}{nh^{d+1}(\delta_{n})^{\alpha_{\Gamma}}}\sum_{l=1}^{n}\Gamma _{jk}(z_{l},z_{l})K_{h}(z_{l}-z_{i})\sum_{l^{\prime}=1}^{n}\left\|z_{l}-z_{l^{ \prime}}\right\|^{\alpha_{\Gamma}}\,\text{\small$\mathbf{I}$}_{\{0<\|z_{l}- z_{l^{\prime}}\|\leq k\delta_{n}\}}\\ &\leq\frac{C_{K}(k\delta_{n})^{\alpha_{k}}}{nh^{d+1}(\delta_{n}) ^{\alpha_{k}}}\sum_{l=1}^{n}\Gamma_{jk}(z_{l},z_{l})K_{h}(z_{l}-z_{i})\sum_{l ^{\prime}=1}^{n}\text{\small$\mathbf{I}$}_{\{0<\|z_{l}-z_{l^{\prime}}\|\leq k \delta_{n}\}}\\ &\leq\frac{k^{\alpha_{\Gamma}+d+1}C_{K}}{h^{d+1}}\int_{\mathbb{ R}^{d+1}}\Gamma_{jk}(z,z)K_{h}(z-z_{i})dz\\ &=k^{\alpha_{\Gamma}+d+1}C_{K}\int_{\mathbb{R}^{d+1}}\Gamma_{jk} (z_{l}+hz,z_{i}+hz)K(z)dz\\ &\approx k^{\alpha_{\Gamma}+d+1}C_{\Gamma}\text{\small$\mathbf{ I}$}_{jk}(z_{l},z_{i})+\mathcal{O}(|h|^{\alpha_{\Gamma}}).\end{split} \tag{51}\]
\[\begin{split}\frac{1}{(\delta_{n})^{\alpha_{\Gamma}}}D^{(2,3)}_{jk,n }&=\frac{C_{\Gamma}}{nh^{d+1}(\delta_{n})^{\alpha_{\Gamma}}}\sum_ {l=1}^{n}K_{h}^{2}(z_{l}-z_{i})\sum_{l^{\prime}=1}^{n}\left\|z_{l}-z_{l^{ \prime}}\right\|^{\alpha_{\Gamma}}\,\text{\small$\mathbf{I}$}_{\{0<\|z_{l}- z_{l^{\prime}}\|\leq k\delta_{n}\}}\\ &\leq\frac{C_{\Gamma}(k\delta_{n})^{\alpha_{\Gamma}}}{nh^{d+1}( \delta_{n})^{\alpha_{\Gamma}}}\sum_{l=1}^{n}K_{h}^{2}(z_{l}-z_{i})\sum_{l^{ \prime}=1}^{n}\text{\small$\mathbf{I}$}_{\{0<\|z_{l}-z_{l^{\prime}}\|\leq k \delta_{n}\}}\\ &\approx\frac{k^{\alpha_{\Gamma}+d+1}C_{\Gamma}}{h^{d+1}}\int_{ \mathbb{R}^{d+1}}K_{h}^{2}(z-z_{i})dz\\ &=k^{\alpha_{\Gamma}+d+1}C_{\Gamma}\text{\small$\mathbf{I}$}K \text{\small$\mathbf{I}$}\big{\|}_{\infty}^{2}.\end{split} \tag{52}\]
\[\begin{split}\frac{1}{(\delta_{n})^{\alpha_{\Gamma}+\alpha_{k}}}D^{ (2,4)}_{jk,n}&=\frac{C_{K}C_{\Gamma}}{nh^{d+1}(\delta_{n})^{\alpha _{\Gamma}+\alpha_{K}}}\sum_{l=1}^{n}K_{h}(z_{l}-z_{i})\sum_{l^{\prime}=1}^{n} \left\|z_{l}-z_{l^{\prime}}\right\|^{\alpha_{\Gamma}+\alpha_{K}}\,\text{ \small$\mathbf{I}$}_{\{0<\|z_{l}-z_{l^{\prime}}\|\leq k\delta_{n}\}}\\ &\leq\frac{C_{K}C_{\Gamma}(k\delta_{n})^{\alpha_{\Gamma}+\alpha_{ K}}}{nh^{d+1}(\delta_{n})^{\alpha_{\Gamma}+\alpha_{K}}}\sum_{l=1}^{n}K_{h}(z_{l}-z_{i}) \sum_{l^{\prime}=1}^{n}\text{\small$\mathbf{I}$}_{\{0<\|z_{l}-z_{l^{\prime}}\| \leq k\delta_{n}\}}\\ &\approx\frac{k^{\alpha_{\Gamma}+\alpha_{K}+d+1}C_{K}C_{\Gamma}}{ h^{d+1}}\int_{\mathbb{R}^{d+1}}K_{h}(z-z_{i})dz\\ &=k^{\alpha_{\Gamma}+\alpha_{K}+d+1}C_{K}C_{\Gamma}\text{\small$ \mathbf{I}$}K\text{\small$\mathbf{I}$}\big{\|}_{\infty}.\end{split} \tag{53}\]
Thus, from (50), (51), (52), and (53) we have
\[\begin{split}&\frac{1}{n^{2}h^{2(d+1)}}D^{(2)}_{jk,n}(z_{i})\\ &\approx C\frac{L(n)}{n^{1+\gamma}}\frac{k^{d+1}}{h^{(d+1)}}+ \mathcal{O}\left(|h|^{\alpha_{\Gamma}}\vee(\delta_{n})^{\alpha_{K}}\vee( \delta_{n})^{\alpha_{\Gamma}}\right).\end{split} \tag{54}\]
Finally, we consider the term \(D^{(3)}_{jk,n}(z_{i})\); using Assumption **(C2)** we obtain
\[\begin{split}&\frac{1}{n^{2}h^{2(d+1)}}D^{(3)}_{jk,n}\\ &=\frac{1}{n^{2}h^{2(d+1)}}\sum_{1\leq l\neq l^{\prime}\leq n}\Gamma_{jk}(z_{l},z_{l^{\prime}})K_{h}(z_{l}-z_{i})K_{h}(z_{l^{\prime}}-z_{i})\mathbf{1}_{\{\|z_{l}-z_{l^{\prime}}\|>k\delta_{n}\}}\\ &\leq\frac{C_{k,d,\theta}(\delta_{n})^{d+1+\theta}}{n^{2}h^{2(d+1)}}\sum_{1\leq l\neq l^{\prime}\leq n}K_{h}(z_{l}-z_{i})K_{h}(z_{l^{\prime}}-z_{i})\\ &\approx\frac{C_{k,d,\theta}(\delta_{n})^{d+1+\theta}}{h^{2(d+1)}}\int_{\mathbb{R}^{d+1}}\int_{\mathbb{R}^{d+1}}K_{h}(z-z_{i})K_{h}(z^{\prime}-z_{i})dzdz^{\prime}\\ &=C_{k,d,\theta}(\delta_{n})^{d+1+\theta}\end{split} \tag{55}\]
Substituting (47), (54) and (55) into Equation (44), and using that \(2\lambda(S^{d-1})(\delta_{n})^{d+1}=1/n\), we obtain
\[\frac{1}{n^{2}h^{2(d+1)}}Var\left(\left(X^{T}\mathcal{W}(z_{i})X \right)_{jk}\right) \leq C\frac{\left(1+k^{(d+1)}\right)L(n)}{n^{1+\gamma}h^{(d+1)}}+C _{k,d,\theta}(\delta_{n})^{d+1+\theta}\] \[\approx\frac{C^{\prime}}{n^{1+\nu}},\]
where \(\gamma>\nu=\frac{\theta}{d+1}\), and \(h\) is such that \(L(n)n^{-1-\gamma}h^{-(d+1)}=n^{-1-\nu}\).
If \(\theta>0\), then \(\nu>0\) and the \(L^{2}\) rate of \(\frac{1}{nh^{d+1}}\left(X^{T}\mathcal{W}(z_{i})X\right)_{jk}\) is faster than \(1/n\); therefore the Borel-Cantelli lemma allows us to obtain
\[\frac{1}{nh^{d+1}}\left(X^{T}\mathcal{W}(z_{i})X\right)_{jk}\xrightarrow[n \to\infty]{a.s.}\chi_{j,k}(z_{i},z_{i}).\]
Note that for \(\theta>-d-1\) we have \(1+\nu>0\), so the \(L^{2}\) convergence, and therefore the convergence in probability, still holds when \(\theta\leq 0\).
**Remark A.2**.: _Let us note that the equality \(L(n)n^{-1-\gamma}h^{-(d+1)}=n^{-1-\nu}\) imposes a condition on the speed at which \(h^{d+1}\) decreases to zero. In fact, we need \(h^{d+1}=\frac{L(n)}{n^{\gamma-\theta/(d+1)}}\) with \(\gamma>\theta/(d+1)\)._
## Acknowledgments
Hector Araya was partially supported by FONDECYT project 11230051. Lisandro Fermin was partially supported by MathAmSud Tomcat 22-math-10. Tania Roa was partially supported by FONDECYT Postdoc project 3220043. Soledad Torres was partially supported by Basal Project FB210005 and FONDECYT project 1221373. Lisandro Fermin and Soledad Torres were partially supported by FONDECYT project 1230807. Hector Araya, Tania Roa and Soledad Torres were partially supported by the ECOS210037 (C21E07) and MathAmSud AMSUD210023 projects.
## References
* [1]Andrienko, G., Andrienko, N., Demsar, U., Dransch, D., Dykes, J., Fabrikant, S. I. and Tominski, C. (2010). Space, time and visual analytics. _International journal of geographical information science._**24**, 1577-1600.
* [2]Brunsdon, C., Corcoran, J. and Higgs, G. (2007). Visualising space and time in crime patterns: A comparison of methods. _Computers, Environment and Urban Systems._ 31(1), 52-75.
* [3]Chen, J., Shaw, S. L., Yu, H., Lu, F., Chai, Y. and Jia, Q. (2011). Exploratory data analysis of activity diary data: a space-time GIS approach. _Journal of Transport Geography._ 19(3), 394-404.
* [4]Dambreville, R., Blanc, P., Chanussot, J., and Boldo, D. (2014). Very short term forecasting of the global horizontal irradiance using a spatio-temporal autoregressive model. _Renewable Energy_, 72, 291-300.
* [5]De Luna, X., and Genton, M. G. (2005). Predictive spatio-temporal models for spatially sparse environmental data. _Statistica Sinica_, 547-568.
* [6]Demsar, U. and Virrantaus, K. (2010). Space-time density of trajectories: exploring spatio-temporal patterns in movement data. _International Journal of Geographical Information Science._ 24(10), 1527-1542.
* [7]Fotheringham, A. S., Brunsdon, C. and Charlton, M. (2003). _Geographically weighted regression: the analysis of spatially varying relationships._ John Wiley & Sons.
* [8]Fotheringham, A. S., Yang, W., and Kang, W. (2017). Multiscale geographically weighted regression (MGWR). _Annals of the American Association of Geographers_, 107(6), 1247-1265.
* [9]Fotheringham, A. S., Crespo, R. and Yao, J. (2015). Geographical and temporal weighted regression (GTWR). _Geographical Analysis._ 47(4), 431-452.
* [10]Kwan, M. P. (2000). Gender differences in space-time constraints. _Area._ 32(2), 145-156.
* [11]Mandelbrot, B. B. and Van Ness, J. W. (1968). Fractional Brownian motions, fractional noises and applications. _SIAM review._ 10(4), 422-437.
* [12]Martinez-Beneito, M. A., Lopez-Quilez, A., and Botella-Rocamora, P. (2008). An autoregressive approach to spatio-temporal disease mapping. _Statistics in medicine_, 27(15), 2874-2889.
* [13]Nakaya, T. and Yano, K. (2010). Visualising crime clusters in a space-time cube: An exploratory data-analysis approach using space-time kernel density estimation and scan statistics. _Transactions in GIS._ 14(3), 223-239.
* [14]Peng, Y., Li, W., Luo, X. and Li, H. (2019). A geographically and temporally weighted regression model for spatial downscaling of MODIS land surface temperatures over urban heterogeneous regions. _IEEE transactions on geoscience and remote sensing._ 57(7), 5012-5027.
* [15]Pfeifer, P. E. and Deutsch, S. J. (1980). A three-stage iterative procedure for space-time modeling. _Technometrics._ 22(1), 35-47.
* [16]Ramajo, J., Marquez, M. A., and Hewings, G. J. (2017). Spatiotemporal analysis of regional systems: A multiregional spatial vector autoregressive model for Spain. _International Regional Science Review_, 40(1), 75-96.
* [17]Rey, S. J. and Janikas, M. V. (2009). STARS: Space-time analysis of regional systems. _Handbook of applied spatial analysis: Software tools, methods and applications (pp. 91-112)_. Berlin, Heidelberg: Springer Berlin Heidelberg.
* [18]Sholihin, M., Soleh, A. M. and Djuraidah, A. (2017). Geographically and temporally weighted regression (GTWR) for modeling economic growth using R. _Repositories-Dept. of Statistics, IPB University._ 800-805.
* [19]Takahashi, K., Kulldorff, M., Tango, T. and Yih, K. (2008). A flexibly shaped space-time scan statistic for disease outbreak detection and monitoring. _International Journal of Health Geographics._ 7, 1-14.
* [20]Torres, S., Tudor, C. A. and Viens, F. (2014). Quadratic variations for the fractional-colored stochastic heat equation. _Electronic Journal Probability._ 19(76), 1-51.
* [21]Tudor, C. A. (2013). _Analysis of variations for self-similar processes. A stochastic calculus approach._ Springer, Cham.
* [22]Tudor, C. A. (2022). _Stochastic Partial Differential Equations with Additive Gaussian Noise. Analysis and inference._ World Scientific.
* [23]Tunay, K. B. (2010). Space-time autoregressive moving average (STARMA) models and estimation process. _Journal of Financial Researches and Studies._ 1(2), 47-66.
* [24]Que, X., Ma, X., Ma, C. and Chen, Q. (2020). A spatiotemporal weighted regression model (STWR v1.0) for analyzing local nonstationarity in space and time. _Geoscientific Model Development._ 13(12), 6149-6164.
* [25]Zou, J., Zhu, J., Xie, P., Xuan, P., and Lai, X. (2018). A STARMA model for wind power space-time series. _In 2018 IEEE Power & Energy Society General Meeting (PESGM) (pp. 1-5). IEEE._
#### Declaration of generative AI and AI-assisted technologies in the writing process
During the preparation of this work the authors used Google Translate and DeepL in order to check grammar. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication. |
2301.13437 | KiDS-1000: cross-correlation with Planck cosmic microwave background
lensing and intrinsic alignment removal with self-calibration | Galaxy shear - cosmic microwave background (CMB) lensing convergence
cross-correlations contain additional information on cosmology to
auto-correlations. While being immune to certain systematic effects, they are
affected by the galaxy intrinsic alignments (IA). This may be responsible for
the reported low lensing amplitude of the galaxy shear $\times$ CMB convergence
cross-correlations, compared to the standard Planck $\Lambda$CDM (cosmological
constant and cold dark matter) cosmology prediction. In this work, we
investigate how IA affects the Kilo-Degree Survey (KiDS) galaxy lensing shear -
Planck CMB lensing convergence cross-correlation and compare it to previous
treatments with or without IA taken into consideration. More specifically, we
compare marginalization over IA parameters and the IA self-calibration (SC)
method (with additional observables defined only from the source galaxies) and
prove that SC can efficiently break the degeneracy between the CMB lensing
amplitude $A_{\rm lens}$ and the IA amplitude $A_{\rm IA}$. We further
investigate how different systematics affect the resulting $A_{\rm IA}$ and
$A_{\rm lens}$, and validate our results with the MICE2 simulation. We find
that by including the SC method to constrain IA, the information loss due to
the degeneracy between CMB lensing and IA is strongly reduced. The best-fit
values are $A_{\rm lens}=0.84^{+0.22}_{-0.22}$ and $A_{\rm
IA}=0.60^{+1.03}_{-1.03}$, while different angular scale cuts can affect
$A_{\rm lens}$ by $\sim10\%$. We show that appropriate treatment of the boost
factor, cosmic magnification, and photometric redshift modeling is important
for obtaining the correct IA and cosmological results. | Ji Yao, Huanyuan Shan, Pengjie Zhang, Xiangkun Liu, Catherine Heymans, Benjamin Joachimi, Marika Asgari, Maciej Bilicki, Hendrik Hildebrandt, Konrad Kuijken, Tilman Tröster, Jan Luca van den Busch, Angus Wright, Ziang Yan | 2023-01-31T06:24:49Z | http://arxiv.org/abs/2301.13437v2 | KiDS-1000: cross-correlation with Planck cosmic microwave background lensing and intrinsic alignment removal with self-calibration
###### Abstract
Context:Galaxy shear - cosmic microwave background (CMB) lensing convergence cross-correlations contain additional information on cosmology to auto-correlations. While being immune to certain systematic effects, they are affected by the galaxy intrinsic alignments (IA). This may be responsible for the reported low lensing amplitude of the galaxy shear \(\times\) CMB convergence cross-correlations, compared to the standard Planck \(\Lambda\)CDM (cosmological constant and cold dark matter) cosmology prediction.
Aims:In this work, we investigate how IA affects the Kilo-Degree Survey (KiDS) galaxy lensing shear - Planck CMB lensing convergence cross-correlation and compare it to previous treatments with or without IA taken into consideration.
Methods:More specifically, we compare marginalization over IA parameters and the IA self-calibration (SC) method (with additional observables defined only from the source galaxies) and prove that SC can efficiently break the degeneracy between the CMB lensing amplitude \(A_{\rm lens}\) and the IA amplitude \(A_{\rm IA}\). We further investigate how different systematics affect the resulting \(A_{\rm IA}\) and \(A_{\rm lens}\), and validate our results with the MICE2 simulation.
Results:We find that by including the SC method to constrain IA, the information loss due to the degeneracy between CMB lensing and IA is strongly reduced. The best-fit values are \(A_{\rm lens}=0.84^{+0.22}_{-0.22}\) and \(A_{\rm IA}=0.60^{+1.03}_{-1.03}\), while different angular scale cuts can affect \(A_{\rm lens}\) by \(\sim 10\%\). We show that appropriate treatment of the boost factor, cosmic magnification, and photometric redshift modeling is important for obtaining the correct IA and cosmological results.
Conclusions:
## 1 Introduction
Weak lensing due to the distortion of light by gravity is a powerful probe of the underlying matter distribution and the encoded secrets of cosmological physics such as dark matter, dark energy, and the nature of gravity (Refregier, 2003; Mandelbaum, 2018). The auto-correlation statistics have been widely used in the analysis, both for galaxy lensing shear, e.g. "cosmic shear" (Hildebrandt et al., 2017; Hamana et al., 2020; Hikage et al., 2019; Asgari et al., 2021; Secco et al., 2022; Amon et al., 2022), and CMB lensing convergence (Planck Collaboration et al., 2020; Omori et al., 2017). Furthermore, cross-correlations between galaxy shear and CMB lensing have been measured extensively (Hand et al., 2015; Chisari et al., 2015; Liu and Hill, 2015; Kirk et al., 2016; Harnois-Deraps et al., 2016; Singh et al., 2017; Harnois-Deraps et al., 2017; Omori et al., 2019; Namikawa et al., 2019; Marques et al., 2020; Robertson et al., 2021). Cross-correlation statistics contain highly complementary information to auto-correlations, both for cosmology and the cross-check of systematics. They partly reveal the hidden redshift information in CMB lensing and are more sensitive to structure growth at redshifts between the epochs probed by galaxy shear and CMB lensing. Cross-correlations are also immune to additive errors in shear measurement and provide an external diagnosis of multiplicative errors (Schaan et al., 2017).
Most existing cross-correlation measurements have found a lower CMB lensing amplitude than the prediction of their as
sumed \(\Lambda\)CDM cosmology (Hand et al., 2015; Liu and Hill, 2015; Kirk et al., 2016; Harnois-Deraps et al., 2016, 2017; Singh et al., 2017a; Marques et al., 2020; Robertson et al., 2021). The ratio, which is normally referred as the CMB lensing amplitude, \(A_{\rm lens}\sim 0.5\)-0.9, although the deviation from unity is only within 1-2\(\sigma\). The low lensing amplitude is consistent across many combinations of data sets and analysis methods, suggesting the existence of a common systematic errors or a deviation from the best-fit _Planck_ cosmology. This might be related to the tension between galaxy lensing surveys and _Planck_ CMB observation (Lin and Ishak, 2017; Chang et al., 2019; Heymans et al., 2021), and the _Planck_ internal inconsistencies (Planck Collaboration et al., 2020a,b). In this paper we focus on the galaxy intrinsic alignment (IA), which can mimic weak lensing signals (Croft and Metzler, 2000; Catelan et al., 2001; Crittenden et al., 2001; Lee and Pen, 2001; Jing, 2002; Hirata and Seljak, 2004; Heymans et al., 2004; Bridle and King, 2007; Okumura et al., 2009; Joachimi et al., 2013; Kiessling et al., 2015; Blazek et al., 2015; Rong et al., 2015; Krause et al., 2016; Blazek et al., 2019; Troxel et al., 2018; Chisari et al., 2017; Xia et al., 2017; Samuroff et al., 2019; Yao et al., 2020a; Samuroff et al., 2021; Yao et al., 2020b). Here the CMB lensing convergence is expected to be anti-correlated with the intrinsic ellipticities of the foreground galaxy field, resulting in a dilution of the overall cross-correlation signal (Troxel and Ishak, 2014; Chisari et al., 2015; Kirk et al., 2015; Omori et al., 2019; Robertson et al., 2021). Taking IA into account can alleviate the tension in \(A_{\rm lens}\), at the expense of a significant loss of lensing constraining power, because of the degeneracy between the lensing amplitude \(A_{\rm lens}\) and the IA amplitude \(A_{\rm IA}\). Therefore, a common compromise is to fix both the IA model and its amplitude \(A_{\rm IA}\)(Kirk et al., 2016; Harnois-Deraps et al., 2017; Omori et al., 2019) or assume a strong prior (Robertson et al., 2021).
Since IA is already a major limiting factor in the current cross-correlation analysis, its mitigation will be essential for upcoming measurements with significantly smaller statistical errors. We utilize the IA self-calibration (SC) method (Zhang, 2010a,b; Troxel and Ishak, 2012a,b; Yao et al., 2017, 2019), which is a galaxy-galaxy lensing method but with a different weighting scheme, to mitigate the IA problem in the shear-convergence cross-correlation. It is based on the fact that the IA-galaxy correlation is insensitive to the redshift order, while it matters for lensing-galaxy correlation whether the lens is in front of the source or not. Therefore, we can isolate IA by comparing extra observables, i.e., the galaxy shear \(\times\) number density cross-correlation with a different weighting of the redshift pairs. This measurement of IA is independent of a physical model of the IA and requires no data external to the shear data. SC was first applied to KiDS450/KV450 (Yao et al., 2020a; Pedersen et al., 2020) and DECaLS DR3 (Yao et al., 2020b) and has enabled significant IA detections. The detected IA signal can then be applied to remove IA in the lensing shear auto-correlation and shear-convergence cross-correlation. The IA information is obtained from a shear \(\times\) number density cross-correlation within the same photometric redshift (photo-z) bin, more importantly, with different weighting schemes on the photo-z ordering, which is usually not used for cosmological parameter constraints. We find that this removal of IA losses almost no cosmological information.
In previous work Yao et al. (2020b), we have demonstrated the importance and methodology of including certain types of systematics in the SC lensing-IA separation method, namely galaxy bias, the covariance between the separated lensing signal and IA signal, the IA signal drop \(Q^{\rm lb}\) due to the photo-z selection, and the scale dependency of the signal drops \(Q^{\rm lbg}\) and \(Q^{\rm lbg}\). In this work, we further investigate other sources of systematics, including the boost factor (Mandelbaum et al., 2005), photo-z modeling bias (Yao et al., 2020a), and cosmic magnification (Bartelmann, 1995; Bartelmann and Schneider, 2001; Yang et al., 2017; Liu et al., 2021). Interestingly, as the survey goes to higher redshift, the contamination to the SC method from magnification will quickly increase to a non-negligible level. The cosmic magnification will change the observed galaxy number density due to the lensing-magnified flux and lensing-enlarged area, therefore biasing our SC analysis. We investigate the proper treatments for the above systematics together with the cosmological study.
This paper is organized as follows. In Sect. 2 we review the physics of galaxy shear \(\times\) CMB convergence and how our SC method works to subtract the IA information. In Sect. 3 we introduce the KiDS-1000 and _Planck_ data used in this work, and the MICE2 simulation (van den Busch et al., 2020; Fosalba et al., 2015) we use to validate how the SC method is affected by different systematics. We show the measurements of the observables in Sect. 4. The results and summary are shown in Sect. 5 and 6.
## 2 Methods
We apply our self-calibration method to separate the intrinsic alignment and the lensing signals and show how the intrinsic alignment will bias the galaxy shear-CMB convergence correlation. In this section, we review the theory of lensing cross-correlation and the self-calibration method, with a modification to account for the contamination from cosmic magnification.
### Galaxy shear \(\times\) CMB convergence
The gravitational field can distort the shape of the background source galaxy image and introduce an extra shape that is tangentially aligned to the lens. This gravitational shear \(\gamma^{\rm G}\) of the source galaxy contains integral information of the foreground overdensity along the line of sight (Bartelmann and Schneider, 2001). Similarly, the photons from the CMB are deflected, and the lensing convergence \(\kappa\) can be reconstructed from the CMB temperature and polarization observations (Planck Collaboration et al., 2020c). By correlating these two quantities \(\left<\gamma^{\rm G}\kappa\right>\), we probe the clustering of the underlying matter field \(\left<\delta\delta\right>\). In harmonic space while assuming flat space (Omori et al., 2019; Marques et al., 2020), we have:
\[C^{\kappa_{\rm gal}\kappa_{\rm CMB}}(\ell)=\int_{0}^{\chi_{\rm CMB}}\frac{q^{\rm gal}(\chi)q^{\rm CMB}(\chi)}{\chi^{2}}P_{\delta}\bigg{(}k=\frac{\ell+1/2}{\chi},z\bigg{)}d\chi. \tag{1}\]
Eq. (1) is the galaxy-lensing CMB-lensing cross angular power spectrum, which probes the matter power spectrum \(P_{\delta}(k,z)\), as well as the background geometry \(\chi(z)\) if precision allows. Here \(z\) is the redshift, \(\chi\) is the comoving distance, \(k\) is the wavenumber, \(\ell\) is the angular mode, \(q^{\rm gal}(\chi)\) and \(q^{\rm CMB}(\chi)\) are the lensing efficiency functions for galaxy-lensing and CMB-lensing, with the analytical forms:
\[q^{\rm gal}(\chi_{\rm I})=\frac{3}{2}\Omega_{\rm m}\frac{H_{0}^{2}}{c^{2}}(1+z_ {\rm l})\int_{\chi_{\rm I}}^{\infty}n(\chi_{\rm s})\frac{(\chi_{\rm s}-\chi_{ \rm I})\chi_{\rm I}}{\chi_{\rm s}}d\chi_{\rm s}, \tag{2}\]
\[q^{\rm CMB}(\chi_{\rm I})=\frac{3}{2}\Omega_{\rm m}\frac{H_{0}^{2}}{c^{2}}(1+z_ {\rm l})\frac{(\chi_{\rm s}-\chi_{\rm I})\chi_{\rm I}}{\chi_{\rm s}}, \tag{3}\]
where \(\chi_{\rm s}\) and \(\chi_{\rm I}\) are the comoving distances to the source and the lens, and the \(\chi_{\rm s}\) in Eq. (3) takes the CMB as the source of light (\(z\sim 1100\)). We note that the spatial curvature \(\Omega_{k}=0\) is assumed
so that the comoving angular diameter distances in Eqs. (2) and (3) are replaced with the comoving radial distances. Here \(n(\chi)\) gives the source galaxy distribution as a function of comoving distance, and it is connected with the galaxy redshift distribution via \(n(\chi)=n(z)dz/d\chi\). In this work, we only use one redshift bin due to the limit of the total S/N on the CMB lensing signal, while a tomographic example can be found in Harnois-Deraps et al. (2017). In the future with higher S/N, for example, for CMB-S4 \(\times\) LSST, tomography can be used to subtract more cosmological information.
The shear-convergence cross-correlation function measured in real space is given by the Hankel transformation:
\[w^{{\rm G}\kappa_{\rm CMB}}(\theta)=\frac{1}{2\pi}\int_{0}^{\infty}d\ell\,\ell\,C^{\kappa_{\rm gal}\kappa_{\rm CMB}}(\ell)J_{2}(\ell\theta), \tag{4}\]
where \(J_{2}(x)\) is the Bessel function of the first kind and order 2. The "G" represents the gravitational lensing shear \(\gamma^{\rm G}\), to be separated from the intrinsic alignment \(\gamma^{\rm I}\) in the following section.
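As an illustration of Eqs. (1)-(4), the sketch below evaluates the cross-spectrum and its Hankel transform; it assumes the pyccl v2-style API (the paper's predictions are computed with ccl and camb), an assumed Planck-like flat \(\Lambda\)CDM cosmology, and a toy \(n(z)\) standing in for the SOM-calibrated redshift distribution. None of the numbers are those of the actual analysis.

```python
import numpy as np
import pyccl as ccl
from scipy.special import jv

# Assumed Planck-like flat LCDM parameters (illustrative only).
cosmo = ccl.Cosmology(Omega_c=0.26, Omega_b=0.05, h=0.674, sigma8=0.81, n_s=0.965)

# Toy source redshift distribution, a stand-in for the combined bin 3+4+5 n(z).
z = np.linspace(0.01, 2.0, 200)
nz = z ** 2 * np.exp(-(z / 0.5) ** 1.5)

source = ccl.WeakLensingTracer(cosmo, dndz=(z, nz))    # lensing kernel q^gal, Eq. (2)
cmb = ccl.CMBLensingTracer(cosmo, z_source=1100.0)     # CMB lensing kernel q^CMB, Eq. (3)

ell = np.arange(2, 3000)
cl = ccl.angular_cl(cosmo, source, cmb, ell)           # C^{kappa_gal kappa_CMB}(ell), Eq. (1)

# Eq. (4): Hankel transform with J_2, by direct quadrature.
theta_arcmin = np.logspace(0.0, 2.0, 20)
theta_rad = np.deg2rad(theta_arcmin / 60.0)
w_gk = np.array([np.trapz(ell * cl * jv(2, ell * t), ell) for t in theta_rad]) / (2.0 * np.pi)
```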
Also, for the current low S/N reasons, we choose not to investigate full cosmological constraints in this work. Instead, we perform a matched-filter fitting with a lensing amplitude \(A_{\rm lens}\) such that \(\hat{w}^{{\rm G}\kappa_{\rm CMB}}=A_{\rm lens}w^{{\rm G}\kappa_{\rm CMB}}\), where \(\hat{w}^{{\rm G}\kappa_{\rm CMB}}\) is the measured correlation function and \(w^{{\rm G}\kappa_{\rm CMB}}\) is the theoretical model.
### Intrinsic alignment of galaxies
The observed galaxy shear estimator contains three components: gravitational shear, an intrinsic alignment term, and random noise, namely, \(\hat{\gamma}=\gamma^{\rm G}+\gamma^{\rm I}+\gamma^{\rm N}\). Both the gravitational shear and the IA term are related to the underlying matter overdensity \(\delta\) and are associated with the large-scale structure. This means that when we cross-correlate the galaxy shape and the CMB convergence, there will be contributions from both lensing and IA:
\[\left\langle\hat{\gamma}\kappa\right\rangle=\left\langle\gamma^{\rm G}\kappa \right\rangle+\left\langle\gamma^{\rm I}\kappa\right\rangle. \tag{5}\]
Therefore the IA part of the correlation will contaminate the measurement and lead to a bias in the lensing amplitude \(A_{\rm lens}\) or the cosmological parameters when assuming \(\left\langle\hat{\gamma}\kappa\right\rangle=\left\langle\gamma^{\rm G}\kappa\right\rangle\).
The IA-convergence correlation function is linked to the IA-convergence power spectrum
\[C^{{\rm I}\kappa_{\rm CMB}}(\ell)=\int_{0}^{\chi_{\rm CMB}}\frac{n(\chi)q^{\rm CMB}(\chi)}{\chi^{2}}P_{\delta,\gamma^{\rm I}}\left(k=\frac{\ell+1/2}{\chi},z\right)d\chi. \tag{6}\]
Here \(P_{\delta,\gamma^{\rm I}}\) is the 3D matter-IA power spectrum. The conventional method is to assume an IA model with some nuisance parameters, which will enter the fitting process. The most widely used IA model is the non-linear linear tidal alignment model (Catelan et al. 2001; Hirata & Seljak 2004; Bridle & King 2007), expressed as:
\[P_{\delta,\gamma^{\rm I}}=-A_{\rm IA}(L,z)\frac{C_{1}\rho_{\rm m,0}}{D(z)}P_{ \delta}(k;\chi), \tag{7}\]
which is proportional to the non-linear matter power spectrum \(P_{\delta}\), suggesting that the IA is caused by the gravitational tidal field. \(A_{\rm IA}\) is the IA amplitude, which can be redshift(\(z\))- and luminosity(\(L\))- dependent (Joachimi et al. 2011). Its redshift evolution has been measured recently in simulations (Chisari et al. 2016; Samuroff et al. 2021), with hints in observations at low significance (Johnston et al. 2019; Yao et al. 2020; Secco et al. 2022; Tonegawa & Okumura 2022). The other related quantities are: the mean matter density of the universe at \(z=0\), expressed as \(\rho_{\rm m,0}=\rho_{\rm crit}\Omega_{\rm m,0}\); the empirical amplitude \(C_{1}=5\times 10^{-14}\,h^{-2}M_{\rm sun}^{-1}{\rm Mpc}^{3}\) taken from Brown et al. (2002); and the normalized linear growth factor \(D(z)\). We note that the IA model in Eq. (7) can be replaced by more complicated models as in Krause et al. (2016); Blazek et al. (2015, 2019); Fortuna et al. (2021) for different samples (Yao et al. 2020b; Samuroff et al. 2021; Zjupa et al. 2020). The self-calibration method can introduce new observables to constrain IA with additional constraining power, and in the future, when the signal-to-noise ratio (S/N) allows, it can be extended to constrain more complicated IA models.
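The tidal-alignment rescaling of Eq. (7) amounts to a simple multiplicative prefactor on \(P_{\delta}\); a minimal sketch (again with the assumed pyccl-style calls, an assumed cosmology, and an arbitrary \(A_{\rm IA}=1\)) is:

```python
import numpy as np
import pyccl as ccl

cosmo = ccl.Cosmology(Omega_c=0.26, Omega_b=0.05, h=0.674, sigma8=0.81, n_s=0.965)
Omega_m = 0.26 + 0.05                      # Omega_c + Omega_b of the assumed cosmology

def p_delta_gammaI(k, z, A_IA=1.0):
    """Eq. (7): P_{delta,gamma^I}(k, z) = -A_IA * C1 * rho_m0 / D(z) * P_delta(k, z)."""
    a = 1.0 / (1.0 + z)
    C1_rho_crit = 5e-14 * 2.775e11         # C1 [h^-2 Msun^-1 Mpc^3] x rho_crit [h^2 Msun Mpc^-3]
    D = ccl.growth_factor(cosmo, a)        # normalized linear growth factor
    Pk = ccl.nonlin_matter_power(cosmo, k, a)
    return -A_IA * C1_rho_crit * Omega_m / D * Pk

print(p_delta_gammaI(k=np.array([0.1, 1.0]), z=0.8))
```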
### Self-calibration of intrinsic alignment
The IA self-calibration (SC) method (Zhang 2010b; Yao et al. 2017, 2019, 2020a,b) uses the same galaxy sample as both the source and the lens, which is different from most galaxy-galaxy lensing studies. It introduces two observables: the shape-galaxy correlation in the same redshift bin, \(w^{\gamma{\rm g}}\), and a similar correlation \(w^{\gamma{\rm g}}|_{\rm S}\) using only the pairs in which the photo-z of the source galaxy is lower than the photo-z of the lens galaxy, namely
\[z_{\gamma}^{\rm P}<z_{\rm g}^{\rm P} \tag{8}\]
(this will be denoted as "the SC selection").
In this work, we extend our methodology to include the impact from cosmic magnification (Bartelmann 1995; Bartelmann & Schneider 2001; Yang et al. 2017; Liu et al. 2021). Because of the existence of magnification, the intrinsic galaxy number density field \(\delta_{g}\) is affected by the foreground lensing convergence \(\kappa^{\rm gal}\), leading to a lensed galaxy overdensity
\[\delta_{\rm g}^{\rm L}=\delta_{\rm g}+g_{\rm mag}\kappa^{\rm gal}, \tag{9}\]
where the prefactor writes \(g_{\rm mag}=2(\alpha-1)\) for a complete and flux-limited sample. It accounts for the increase in galaxy number density due to lensing-magnified flux (\(\alpha=-d\ln N/d\ln F\), where \(N(F)\) denotes the galaxy number \(N\) that is brighter than the flux limit \(F\)) and the decrease of galaxy number density due to the lensing-area-enlargement (-2 in \(g_{\rm mag}\)). The observed shape-galaxy correlation is given by
\[\left\langle\hat{\gamma}\delta_{\rm g}^{\rm L}\right\rangle=\left\langle(\gamma^{\rm G}+\gamma^{\rm I})(\delta_{\rm g}+g_{\rm mag}\kappa^{\rm gal})\right\rangle. \tag{10}\]
The two SC observables can be written as:
\[w_{ii}^{\gamma{\rm g}}(\theta)=w_{ii}^{\rm Gg}(\theta)+w_{ii}^{\rm Ig}(\theta)+g_{\rm mag}\left[w_{ii}^{{\rm G}\kappa_{\rm gal}}(\theta)+w_{ii}^{{\rm I}\kappa_{\rm gal}}(\theta)\right], \tag{11}\] \[w_{ii}^{\gamma{\rm g}}|_{\rm S}(\theta)=Q_{i}^{\rm Gg}(\theta)w_{ii}^{\rm Gg}(\theta)+Q_{i}^{\rm Ig}(\theta)w_{ii}^{\rm Ig}(\theta)+g_{\rm mag}\left[w_{ii}^{{\rm G}\kappa_{\rm gal}}|_{\rm S}(\theta)+w_{ii}^{{\rm I}\kappa_{\rm gal}}|_{\rm S}(\theta)\right], \tag{12}\] where the lensing-drop and the IA-drop are defined as the ratios \[Q_{i}^{\rm Gg}(\theta)\equiv\frac{w_{ii}^{\rm Gg}|_{\rm S}(\theta)}{w_{ii}^{\rm Gg}(\theta)}, \tag{13}\] \[Q_{i}^{\rm Ig}(\theta)\equiv\frac{w_{ii}^{\rm Ig}|_{\rm S}(\theta)}{w_{ii}^{\rm Ig}(\theta)}. \tag{14}\]
The SC photo-z selection \(z_{\gamma}^{\rm P}<z_{\rm g}^{\rm p}\) largely reduces the lensing signal, leading to \(Q^{\rm Gg}\ll 1\). The IA signal does not rely on the ordering along the line-of-sight, with \(Q^{\rm Ig}\sim 1\). The lensing-drop \(Q^{\rm Gg}\) and the IA-drop \(Q^{\rm Ig}\) are dependent on the photo-z quality, as described in Zhang (2010b); Yao et al. (2017, 2020a,b). If the photo-z quality is perfect, the SC selection will result in no lensing signal, so that \(Q^{\rm Gg}\) approaches 0. For incorrect photo-zs, the SC selection fails and \(Q^{\rm Gg}\) is \(\sim 1\). Given a photo-z distribution \(n^{\rm P}(z^{\rm p})\) and the true-z distribution \(n(z)\), the lensing-drop \(Q^{\rm Gg}\) and IA-drop \(Q^{\rm Ig}\) can be theoretically derived, following Yao et al. (2020a,b), with more technical details in Appendix A. We also present a toy model to visualize how the SC selection works in Fig. 1.
We quantitatively test the terms in Eq. (11), and they generally follow \(|w^{{\rm I}\kappa_{\rm gal}}|<|w^{{\rm G}\kappa_{\rm gal}}|\ll|w^{\rm Ig}|<|w^{\rm Gg}|\) for \(z<0.9\) data, therefore in previous analyses (Zhang 2010b; Yao et al. 2020a,b) the magnification terms were neglected. For the \(z\sim 1\) galaxies, however, the magnification term \(w^{{\rm G}\kappa_{\rm gal}}\) quickly approaches \(w^{\rm Ig}\) and becomes a non-negligible source of contamination to the SC method. In Fig. 2 we show a theoretical comparison of the angular power spectra. We can write the SC selection for the magnification term as \(w^{{\rm G}\kappa_{\rm gal}}|_{\rm S}=Q^{{\rm G}\kappa_{\rm gal}}w^{{\rm G}\kappa_{\rm gal}}\). The drops of these signals, \(Q^{{\rm G}\kappa_{\rm gal}}\sim Q^{{\rm I}\kappa_{\rm gal}}\sim 1\), given that these are not z-pair-dependent correlations; therefore the magnification signal \(w^{{\rm G}\kappa_{\rm gal}}\) will contaminate the IA signal \(w^{\rm Ig}\) due to their similar behavior, leaving the lensing signal \(w^{\rm Gg}\) unaffected. We note the \(w^{{\rm I}\kappa_{\rm gal}}\) term is negligible in this work.
After measuring the galaxy-galaxy lensing observables \(\{w^{\gamma{\rm g}},\ w^{\gamma{\rm g}}|_{\rm S}\}\) and the drops of the signals \(\{Q^{\rm Gg},\ Q^{\rm Ig}\}\) (see Eq. (13), (14) and Appendix A for more details), the corresponding lensing-galaxy correlation \(w^{\rm Gg}\), IA-galaxy correlation \(w^{\rm Ig}\) and shear-magnification correlation \(w^{{\rm G}\kappa_{\rm gal}}\) can be linearly obtained:
\[w^{\rm Gg}_{ii}(\theta)=\frac{Q^{\rm Ig}_{i}(\theta)w^{\gamma{\rm g}}_{ii}(\theta)-w^{\gamma{\rm g}}_{ii}|_{\rm S}(\theta)}{Q^{\rm Ig}_{i}(\theta)-Q^{\rm Gg}_{i}(\theta)}, \tag{15}\] \[w^{\rm Ig}_{ii}(\theta)+w^{{\rm G}\kappa_{\rm gal}}_{ii}(\theta)=\frac{w^{\gamma{\rm g}}_{ii}|_{\rm S}(\theta)-Q^{\rm Gg}_{i}(\theta)w^{\gamma{\rm g}}_{ii}(\theta)}{Q^{\rm Ig}_{i}(\theta)-Q^{\rm Gg}_{i}(\theta)}. \tag{16}\]
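A minimal numerical illustration of this per-\(\theta\)-bin linear inversion, with made-up numbers purely to show the algebra of Eqs. (15) and (16):

```python
import numpy as np

def sc_separation(w_gg, w_gg_S, Q_G, Q_I):
    """Invert Eqs. (15)-(16): recover w^{Gg} and w^{Ig} + g_mag * w^{G kappa_gal}."""
    w_G = (Q_I * w_gg - w_gg_S) / (Q_I - Q_G)            # Eq. (15)
    w_I_plus_mag = (w_gg_S - Q_G * w_gg) / (Q_I - Q_G)   # Eq. (16)
    return w_G, w_I_plus_mag

# toy values for a single theta bin (illustrative only)
print(sc_separation(w_gg=1.0e-3, w_gg_S=2.0e-4, Q_G=0.2, Q_I=0.95))
```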
In previous work, the IA information was directly extracted from \(w^{\rm Ig}\). However, as shown in Fig. 2 and Eq. (16), for KiDS the subtracted signal suffers from the contamination by the magnification term \(w^{{\rm G}\kappa_{\rm gal}}\). Constraining the measurements \(\{w^{\rm Gg},\ w^{\rm Ig}+w^{{\rm G}\kappa_{\rm gal}},\ w^{\gamma\kappa_{\rm CMB}}\}\) together, including their covariance, will lead to robust constraints on both the lensing amplitude and the nuisance parameters. At the current stage, where the S/N of the measurements is not very high, we choose to ignore possible scale-dependent features of the effective galaxy bias \(b_{g,{\rm eff}}\) and the IA amplitude \(A_{\rm IA}\), and assume they are linear and deterministic. The parameters \(\{A_{\rm lens},\,A_{\rm IA},\,b_{\rm g,eff},\,g_{\rm mag}\}\) are connected to the observables as follows:
\[w^{\rm Gg}(\theta)=b_{g,{\rm eff}}\,w^{\rm Gm}_{\rm theory}(\theta), \tag{17}\] \[w^{\rm Ig}(\theta)+w^{{\rm G}\kappa_{\rm gal}}(\theta)=b_{g,{\rm eff}}A_{\rm IA}\,w^{\rm Im}_{\rm theory}(\theta)+g_{\rm mag}\,w^{{\rm G}\kappa_{\rm gal}}_{\rm theory}(\theta), \tag{18}\] \[w^{\gamma\kappa_{\rm CMB}}(\theta)=A_{\rm lens}\,w^{{\rm G}\kappa_{\rm CMB}}_{\rm theory}(\theta)+A_{\rm IA}\,w^{{\rm I}\kappa_{\rm CMB}}_{\rm theory}(\theta), \tag{19}\]
where "m" stands for matter, which is the case if one sets the effective galaxy bias \(b_{g,{\rm eff}}=1\). We separate the CMB convergence and the galaxy convergence (due to magnification) with \(\kappa^{\rm CMB}\)
Figure 1: A toy model to illustrate the different redshift dependences of the lensing signal and the IA signal, and why the SC selection Eq. (8) works. We place many lens galaxies at photo-z \(z_{\rm g}^{\rm p}=0.5\) (the grey dotted line), while allowing the photo-z of the source galaxies \(z_{\gamma}^{\rm p}\) to change (x-axis) to evaluate the corresponding lensing correlation function \(w^{\rm Gg}\) or IA correlation function \(w^{\rm Ig}\) at different angular separations \(\theta\). The true-z has a Gaussian scatter of 0.04 (this number is chosen for exhibition, so that the lensing/IA signals have comparable maximum/minimum values) around the photo-z, for both source galaxies and lens galaxies. As the gravitational lensing shear is an optical shape, which requires \(z_{\rm g}<z_{\gamma}\), it has a non-symmetric power around \(z_{\rm g}^{\rm p}\), as the positive solid curves show. This also demonstrates \(Q^{\rm Gg}\ll 1\) according to Eq. (13). As the IA shape is a dynamical shape, it does not have requirements on the relative redshifts, leading to a symmetric power around \(z_{\rm g}^{\rm p}\), as the negative dashed curves show. This also demonstrates \(Q^{\rm Ig}\sim 1\) according to Eq. (14). These relations hold for signals at different angular separations (different colors). Different IA models (which could deviate from Eq. (7) with \(A_{\rm IA}=1\) assumed here) would only change the relative amplitudes of the negative signals at different scales, but not the redshift dependency around \(z_{\rm g}^{\rm p}\). We note that at such a redshift range, the magnification signal is much smaller than the IA signal.
Figure 2: A theoretical comparison between the galaxy-shear \(C^{\rm Gg}(\ell)\), galaxy-IA \(C^{\rm Ig}(\ell)\) and shear-magnification \(g_{\rm mag}C^{{\rm G}\kappa_{\rm gal}}(\ell)\) angular power spectra, with the best-fit of our baseline analysis and the redshift distribution \(n(z)\) from the KiDS-1000 \(0.5<z_{\rm B}<1.2\) shear catalog. The dashed lines represent negative signals. This figure demonstrates that the magnification contamination is important in the self-calibration method for the high-\(z\) KiDS source sample.
and \(\kappa^{\rm gal}\). On the LHS of Eqs. (17), (18) and (19) are the measurements, while on the RHS the correlations \(w(\theta)\) are the theoretical predictions assuming the _Planck_ cosmology (Planck Collaboration et al. 2020), see Table 1. We note that the \(Q\) values used to obtain the LHS are also cosmology dependent; however, the sensitivity is weak, as the cosmological part is mostly canceled when taking the ratios in Eqs. (13) and (14). We tested that if the fiducial cosmology is changed to any of the KiDS-1000 cosmologies in Table 1, the \(Q\)s change by only \(\sim 1\%\), similar to Yao et al. (2020), and the resulting changes to the fitting parameters \(\{A_{\rm IA},\,b_{\rm g,eff},\,g_{\rm mag},\,A_{\rm lens}\}\) are negligible. However, considering the RHS, those four fitting parameters are sensitive to the fiducial cosmology used to produce the \(w_{\rm theory}\) values when magnification exists, which differs from the previous analysis (Yao et al. 2020). The theoretical predictions \(w_{\rm theory}\) are calculated with ccl (Chisari et al. 2019) and camb (Lewis et al. 2000). The effective galaxy bias \(b_{\rm g,eff}\) in this work is used to distinguish it from the true galaxy bias of this sample; as we will discuss later, it can absorb several sources of systematics.
Footnote 1: Core Cosmology Library, [https://github.com/LSSTDESC/CCL](https://github.com/LSSTDESC/CCL)
Footnote 2: Code for Anisotropies in the Microwave Background, [https://camb.info/](https://camb.info/)
The theoretical prediction of \(w_{\rm theory}^{\rm Gm}(\theta)\) is given in Eq. (4), and \(w_{\rm theory}^{\rm Im}(\theta)\) is obtained similarly with the Hankel transform from its power spectrum as in Eq. (6). The \(w_{\rm theory}^{\rm Gm}\), \(w_{\rm theory}^{\rm Im}\) and \(w_{\rm theory}^{{\rm G}\kappa^{\rm gal}}\) terms are the Hankel transforms of the following angular power spectra:
\[C^{\rm Gm}(\ell) =\int_{z_{\rm min}}^{z_{\rm max}}\frac{q^{\rm gal}(\chi)n(\chi)}{\chi^{2}}P_{\delta}\left(k=\frac{\ell+1/2}{\chi},z\right)d\chi, \tag{20}\] \[C^{\rm Im}(\ell) =\int_{z_{\rm min}}^{z_{\rm max}}\frac{n(\chi)n(\chi)}{\chi^{2}}P_{\delta,\gamma^{\rm I}}\left(k=\frac{\ell+1/2}{\chi},z\right)d\chi, \tag{21}\] \[C^{{\rm G}\kappa^{\rm gal}}(\ell) =\int_{z_{\rm min}}^{z_{\rm max}}\frac{q^{\rm gal}(\chi)q^{\rm gal}(\chi)}{\chi^{2}}P_{\delta}\left(k=\frac{\ell+1/2}{\chi},z\right)d\chi. \tag{22}\]
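For readers who want to reproduce these projections, the sketch below is a minimal numpy/scipy implementation of the Limber integrals in Eqs. (20)-(22). It is not the CCL/CAMB pipeline used for the actual analysis; the matter power spectrum `P_delta(k, z)`, the IA cross-spectrum `P_delta_I(k, z)` and the distance/redshift grids are placeholders to be supplied by the user.

```python
import numpy as np
from scipy.integrate import simpson

def lensing_kernel(chi, z, n_chi, Omega_m=0.3, h=0.7, c=299792.458):
    """Galaxy lensing efficiency q^gal(chi) on the grid chi [Mpc],
    with n_chi the (normalized) source distribution per unit comoving distance."""
    prefac = 1.5 * Omega_m * (100.0 * h / c) ** 2   # 3/2 Omega_m H0^2 / c^2
    q = np.zeros_like(chi)
    for i in range(len(chi)):
        behind = chi > chi[i]
        if behind.sum() > 1:
            q[i] = simpson(n_chi[behind] * (chi[behind] - chi[i]) / chi[behind],
                           x=chi[behind])
    return prefac * chi * (1.0 + z) * q             # (1+z) = 1/a

def limber_cl(ell, K1, K2, chi, z, P_of_k_z):
    """C(ell) = int dchi K1(chi) K2(chi) / chi^2 * P(k=(ell+1/2)/chi, z)."""
    k = (ell + 0.5) / chi
    return simpson(K1 * K2 / chi ** 2 * P_of_k_z(k, z), x=chi)

# Eq. (20): limber_cl(ell, q_gal, n_chi, chi, z, P_delta)
# Eq. (21): limber_cl(ell, n_chi, n_chi, chi, z, P_delta_I)
# Eq. (22): limber_cl(ell, q_gal, q_gal, chi, z, P_delta)
```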
As discussed in previous work (Yao et al. 2020), by including the effective galaxy bias \(b_{\rm geff}\), we can obtain an unbiased estimation of \(A_{\rm IA}\). This information will be propagated into Eq. (19) to break the degeneracy between \(A_{\rm IA}\) and \(A_{\rm lens}\). In this work, we further extend the fitting to include the impact from magnification with the nuisance parameter \(g_{\rm mag}\). We will show later that an unbiased CMB lensing amplitude \(A_{\rm lens}\) can be obtained from the simultaneous fitting of Eqs. (17), (18) and (19).
## 3 Data
In this section, we introduce the data we use for the \(\langle\gamma\kappa^{\rm CMB}\rangle\) cross-correlation study. Additionally, we use mock KiDS data, based on the MICE2 simulation (see van den Busch et al. (2020) for details), to quantify the potential bias in the SC method due to magnification, photo-z modeling, and the boost factor.
### KiDS-1000 shear catalog
We use the fourth data release of the Kilo-Degree Survey that covers \(1006\,\rm deg^{2}\), known as KiDS-1000 (Kuijken et al. 2019). It has images from four optical bands \(ugri\) and five near-infrared bands \(ZYJHK_{s}\). The observed galaxies can reach a primary \(r\)-band median limiting \(5\sigma\) point-source magnitude of \(\sim 25\). The shear catalog (Giblin et al. 2021) contains \(\sim 21\) M galaxies and is divided into five tomographic bins in the range \(0.1<z_{B}<1.2\) based on the BPZ (Benítez 2000) photo-z \(z_{B}\). The ellipticity dispersion \(\sigma_{\epsilon}\) is \(\sim 0.27\) per component, and the shear multiplicative bias is generally consistent with 0.
The KiDS data are processed by THELI (Erben et al. 2013) and Astro-WISE (Begeman et al. 2013; de Jong et al. 2015). Shears are measured using _lens_fit (Miller et al. 2013), and photometric redshifts are obtained from PSF-matched photometry and calibrated using external overlapping spectroscopic surveys (Hildebrandt et al. 2021).
The application of SC requires not only an accurate redshift distribution \(n(z)\), but also relatively accurate photo-z for each galaxy, serving for the SC selection (Eq. 8). We discussed in previous work (Yao et al. 2020) that the quality of photo-z is very important for the lensing-IA separation. Therefore in this work, we choose to combine the three high-z bins, namely bin \(3+4+5\) in KiDS-1000 data, as a large bin so that the photo-z error for an individual galaxy is relatively small compared to the total bin width. The photo-z and the SOM-calibrated redshift distributions are shown in Fig. 3. We choose to use the high-z bins because the CMB lensing efficiency Eq. (3) peaks at \(z\sim 1\) to 2 (see lower panel of Fig. 3), while the S/N for the cross-correlation is very low for the two low-z bins of KiDS-1000.
To account for the selection functions due to the shape of the footprint (Mandelbaum et al. 2006) of the overlapping region and the varying galaxy number density due to observing conditions (Johnston et al. 2021; Rezaie et al. 2020), we divide the region into 200 sub-regions with a resolution of HEALPix \(N_{\rm side}=512\) (\(\sim 50\) arcmin\({}^{2}\) per pixel), and generate random points with 20 times the number of galaxies of the KiDS-1000 shear catalog in each sub-region. The pixels within the same sub-region are assigned the same galaxy numbers. This random catalog is used for the SC-related galaxy-galaxy lensing calculation, while its potential defects will not propagate into the cross-correlations.
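A simple way to build such a random catalogue is to oversample points uniformly on the sphere and keep only those falling inside the footprint pixels; the sketch below illustrates this idea (the footprint pixel list is a placeholder, and the per-sub-region density weighting described above is omitted).

```python
import numpy as np
import healpy as hp

def randoms_in_footprint(footprint_pixels, n_random, nside=512, seed=0, batch=10**6):
    """Uniform random (ra, dec) in degrees, restricted to the given HEALPix
    pixels (RING ordering)."""
    rng = np.random.default_rng(seed)
    good = np.zeros(hp.nside2npix(nside), dtype=bool)
    good[footprint_pixels] = True
    ra_list, dec_list, kept = [], [], 0
    while kept < n_random:
        ra = rng.uniform(0.0, 360.0, batch)
        dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, batch)))  # uniform on the sphere
        inside = good[hp.ang2pix(nside, ra, dec, lonlat=True)]
        ra_list.append(ra[inside]); dec_list.append(dec[inside])
        kept += inside.sum()
    return np.concatenate(ra_list)[:n_random], np.concatenate(dec_list)[:n_random]
```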
### Planck legacy lensing map
We use the CMB lensing map \(\kappa(\mathbf{\theta})\) from the _Planck_ data release (Planck Collaboration et al. 2020). The CMB lensing map is reconstructed with the minimum-variance quadratic estimator combining the temperature map and the polarization map, after foreground removal with the SMICA method (Planck Collaboration et al. 2020). It covers \(f_{\rm sky}=0.671\) of the whole sky with a maximum multipole \(\ell=4096\).
In this work we combine the footprint from the _Planck_ lensing map and the mask of the KiDS-1000 shear catalog, leading to an overlapped region of \(\sim 829\,\rm deg^{2}\). We include the _Planck_ Wiener filter (Planck Collaboration et al. 2020)
\[\hat{\kappa}_{\ell m}^{\rm WF}=\frac{C_{\ell}^{\phi,\rm fid}}{C_{\ell}^{\phi,\rm fid}+N_{\ell}^{\phi}}\hat{\kappa}_{\ell m}^{\rm MV} \tag{23}\]
\begin{table}
\begin{tabular}{c c c c c c} \hline Survey & \(h_{0}\) & \(\Omega_{b}h^{2}\) & \(\Omega_{c}h^{2}\) & \(n_{s}\) & \(\sigma_{8}\) \\ \hline _Planck_ & 0.673 & 0.022 & 0.120 & 0.966 & 0.812 \\ \hline KiDS \(\xi_{\pm}\) & 0.711 & 0.023 & 0.088 & 0.928 & 0.895 \\ \hline KiDS \(C(\ell)\) & 0.704 & 0.022 & 0.132 & 0.999 & 0.723 \\ \hline KiDS COSEBI & 0.727 & 0.023 & 0.105 & 0.949 & 0.772 \\ \hline \end{tabular}
\end{table}
Table 1: The \(\Lambda\)CDM cosmological parameters adopted in this work, corresponding to the best-fit cosmology from Planck Collaboration et al. (2020), and the KiDS-1000 multivariate maximum posterior (MAP) results from the two-point correlation functions \(\xi_{\pm}\), the band powers \(C(\ell)\), and the COSEBIs (Complete Orthogonal Sets of E/B-Integrals) as in Asgari et al. (2021).
to strengthen the CMB lensing signal at large scales, which will also lead to a suppression of the power spectrum at small scales, where the noise dominates (Dong et al. 2021). The Wiener filter is used both in the CMB lensing \(\kappa\) map and in the theoretical predictions of Eq. (1) to prevent potential bias. After the application of the Wiener filter, we use healpy\({}^{3}\) (Gorski et al. 2005; Zonca et al. 2019) to convert the \(\kappa_{\ell m}\) to the desired \(\kappa\)-map, and rotate from the galactic coordinates of _Planck_ to the J2000 coordinates of KiDS with Astropy (Astropy Collaboration et al. 2013). The two-point correlation functions are calculated with TreeCorr\({}^{4}\) (Jarvis et al. 2004).
Footnote 3: [https://github.com/healpy/healpy](https://github.com/healpy/healpy)
Footnote 4: [https://github.com/rmjarvis/TreeCorr](https://github.com/rmjarvis/TreeCorr)
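The harmonic-space filtering and map-making steps described above can be reproduced with a few healpy calls, as in the sketch below; the input \(a_{\ell m}\), fiducial spectrum and reconstruction noise are placeholders for the released _Planck_ products, and the galactic-to-J2000 rotation is only indicated in a comment.

```python
import numpy as np
import healpy as hp

def wiener_filtered_kappa_map(kappa_alm, cl_fid, nl, nside=2048):
    """Apply the Wiener filter of Eq. (23) in harmonic space and return a kappa map."""
    wf = np.zeros_like(cl_fid)
    ok = (cl_fid + nl) > 0
    wf[ok] = cl_fid[ok] / (cl_fid[ok] + nl[ok])
    alm_wf = hp.almxfl(kappa_alm, wf)     # multiply a_lm by an ell-dependent filter
    return hp.alm2map(alm_wf, nside)

# The rotation from Planck's galactic frame to KiDS' equatorial (J2000) frame can
# be done, e.g., with hp.Rotator(coord=['G', 'C']) applied to the filtered map/alm.
```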
### MICE2 mock catalog
Additionally, we use the MICE2 simulation gold samples (van den Busch et al. 2020; Fosalba et al. 2015), which highly mimic the KiDS-1000 shear catalog galaxies, to validate our SC method, concerning cosmic magnification and photo-z PDF model bias. MICE2 uses a simulation box width of \(3.1~{}h^{-1}\)Gpc, particle mass resolution of \(2.9\times 10^{10}~{}h^{-1}M_{\odot}\), and a total particle number of \(\sim 6.9\times 10^{10}\). The fiducial cosmology is flat \(\Lambda\)CDM with \(\Omega_{\rm m}=0.25\), \(\sigma_{8}=0.8\), \(\Omega_{\rm b}=0.044\), \(\Omega_{\Lambda}=0.75\) and \(h=0.7\). The halos are identified with Friends-of-Friends as in Crocce et al. (2015). The galaxies are populated within the halos with a mixture of halo abundance matching (HAM) and halo occupation distribution (HOD) up to \(z\sim 1.4\)(Carretero et al. 2015).
We note that in the MICE2 simulation that we use for the KiDS samples, intrinsic alignment is not yet included in the galaxy shapes (while an IA-included version can be found in Hoffmann et al. (2022), but for DES). Therefore, we aim to recover \(A_{\rm IA}=0\) to validate the SC method, while considering systematics from cosmic magnification and photo-z model bias, in addition to what has been addressed in Yao et al. (2020b). We use the galaxy positions (ra, dec), the two noiseless shear components (\(\gamma_{1}\), \(\gamma_{2}\)), and the BPZ-measured photo-z \(z_{B}\) to calculate the SC correlations as in Eqs. (11) and (12). We test the signal drops \(Q\)s of Eqs. (13) and (14) with our photo-z PDF model and with the true-z from the simulation (van den Busch et al. 2020). We compare the results using the MICE2 gold samples with magnification (Eq. 9) and without magnification. For the MICE2 galaxies with magnification, we tested how it biases the IA measurement, and proved that when the magnification effect is also included in the model, IA can be measured in an unbiased way. The validations will be shown later in our results, with some details in Appendix A.
## 4 Measurements
We show the estimation of the signal-drops for lensing and IA due to the SC selection (as in Eqs. 13 and 14), i.e. the lensing-drop \(Q^{\rm Gg}\) and the IA-drop \(Q^{\rm Ig}\), in Fig. 4. They are responsible for the lensing-IA separation later in Fig. 6, following Eqs. (15) and (16). We follow the processes in Yao et al. (2020a,b) and adopt a bi-Gaussian photo-z probability distribution function (PDF) model with a secondary peak representing the photo-z outlier problem. We require the PDF model to have the same mean-z as in Fig. 3, while most closely describing the projection from \(n^{\rm P}(z^{\rm P})\) to \(n(z)\). We will also show for the first time how the assumed photo-z PDF model can affect the results in the next section, with more details shown in Appendix A.
We calculate the SC correlation function estimator,
\[w^{\gamma\rm g}(\theta)=B(\theta)\frac{\sum_{\rm ED}w_{j}\gamma_{j}^{+}}{(1+\bar{m})\sum_{\rm ED}w_{j}}-\frac{\sum_{\rm ER}w_{j}\gamma_{j}^{+}}{(1+\bar{m})\sum_{\rm ER}w_{j}}\, \tag{24}\]
to obtain the measurements of \(w^{\gamma\rm g}\) and \(w^{\gamma\rm g}|_{\rm S}\) from the tangential shear of each galaxy \(\gamma_{j}^{+}\). Here we sum over the ellipticity-density pairs (\(\sum_{\rm ED}\)) and the ellipticity-random pairs (\(\sum_{\rm ER}\)) in an annulus centered on \(\theta\), where the shear weight \(w_{j}\) of the \(j\)-th galaxy and the average multiplicative bias \(\bar{m}\) are accounted for. The estimator is binned in angular \(\theta\) space, with 9 logarithmic bins from 0.5
Figure 4: The lensing-drop \(Q^{\rm Gg}\) and the IA-drop \(Q^{\rm Ig}\) as functions of \(\ell\) and \(\theta\) obtained by applying the SC selection Eq. (8); see Eqs. (13) and (14). These values are adopted to obtain the separation of \(w^{\rm Gg}\) and \(w^{\rm Ig}+g_{\rm mag}w^{{\rm G}\kappa^{\rm gal}}\), following Eqs. (15) and (16). The left panel shows the calculation from power spectra and the right panel from correlation functions. The right panel is used to transfer \(\{w^{\gamma\rm g},\ w^{\gamma\rm g}|_{\rm S}\}\) to \(\{w^{\rm Gg},\ w^{\rm Ig}\}\) later in Fig. 6.
Figure 3: The photo-z distribution and the SOM-reconstructed redshift distribution of the combined galaxy sample in this work. The corresponding galaxy lensing efficiency Eq. (2) and its comparison with CMB lensing efficiency Eq. (3) are shown in the lower panel.
to 300 arcmin. We use the average multiplicative bias \(\bar{m}\) obtained by averaging over the three z-bins, weighted by the effective galaxy number density. This gives \(\bar{m}=-0.0036\).
We account for the impact of the boost factor (Mandelbaum et al. 2005; Singh et al. 2017b; Joachimi et al. 2021), which is \(B\) in Eq. (24). It is defined as
\[B(\theta)=\frac{\sum_{\rm ED}w_{j}}{\sum_{\rm RD}w_{j}}, \tag{25}\]
which is used to quantify the small-scale bias due to the clustering of lens galaxies and source galaxies (Bernardeau 1998; Hamana et al. 2002; Yu et al. 2015). We show the measurements of the boost factor for \(w^{\gamma\rm g}\) and \(w^{\gamma\rm g}|_{\rm S}\) as in Eqs. (11) and (12) in Fig. 5. The fact that the boost factors for \(w^{\gamma\rm g}\) and \(w^{\gamma\rm g}|_{\rm S}\) are identical suggests this bias can be absorbed by the effective galaxy bias \(b_{\rm geff}\) parameter if magnification is absent (\(g_{\rm mag}=0\)), leading to unbiased \(A_{\rm IA}\) and \(A_{\rm lens}\). The impact from the boost factor can potentially break the linear galaxy bias assumption, but later in Fig. 6 we show the linear assumption is adequate. The impact of the boost factor and magnification existing together will be shown later.
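A minimal TreeCorr sketch of the estimator in Eqs. (24)-(25) is given below. The catalogue arrays and \(\bar{m}\) are placeholders, and the boost factor and multiplicative-bias correction are applied by hand rather than through TreeCorr's built-in compensation; rescaling the boost by the catalogue sizes is an assumption of this sketch (implicit in Eq. 25 once the random catalogue is oversampled).

```python
import numpy as np
import treecorr

def measure_w_gamma_g(ra_l, dec_l, ra_r, dec_r, ra_s, dec_s, g1, g2, w_s, m_bar):
    """Sketch of Eqs. (24)-(25): boost-corrected tangential shear around lenses
    minus the signal around randoms, with the multiplicative-bias correction."""
    lens = treecorr.Catalog(ra=ra_l, dec=dec_l, ra_units='deg', dec_units='deg')
    rand = treecorr.Catalog(ra=ra_r, dec=dec_r, ra_units='deg', dec_units='deg')
    srcs = treecorr.Catalog(ra=ra_s, dec=dec_s, g1=g1, g2=g2, w=w_s,
                            ra_units='deg', dec_units='deg')
    cfg = dict(min_sep=0.5, max_sep=300.0, nbins=9, sep_units='arcmin')
    ng = treecorr.NGCorrelation(**cfg)   # lens-source (ED) pairs
    rg = treecorr.NGCorrelation(**cfg)   # random-source (RD) pairs
    ng.process(lens, srcs)
    rg.process(rand, srcs)
    # Boost factor B(theta) = sum_ED w / sum_RD w, rescaled by the catalogue sizes
    # so that B -> 1 on large scales (the randoms are oversampled by ~20x).
    boost = (ng.weight / rg.weight) * (rand.sumw / lens.sumw)
    w_gamma_g = (boost * ng.xi - rg.xi) / (1.0 + m_bar)
    return ng.meanr, w_gamma_g, boost
```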
In Fig. 6 we show the SC measurements. In the left panel, the measured shape-galaxy correlations \(w^{\gamma\rm g}\) are shown in blue: (1) the boost-factor-ignored case (\(B=1\)) is shown as blue crosses, while (2) the boost-factor-corrected case is shown as blue triangles. With the SC selection Eq. (8), requiring \(z_{\rm g}^{\rm P}<z_{\gamma}^{\rm P}\) for each galaxy pair, the lensing component will drop to \(Q^{\rm Gg}\sim 0.3\) and the IA component will drop to \(Q^{\rm Ig}\sim 0.85\) (for more details on \(Q^{\rm Gg}\) and \(Q^{\rm Ig}\), see Fig. 4 and Appendix A). Therefore, the selected correlations \(w^{\gamma\rm g}|_{\rm S}\) will drop to the orange down-triangles. Similarly, the boost-factor-ignored case is shown as crosses.
The separated lensing-galaxy signal \(w^{\rm Gg}\) and IA-galaxy signal \(w^{\rm Ig}\) (which is contaminated by the magnification-shear signal \(g_{\rm mag}w^{{\rm G}\kappa^{\rm gal}}\)) are shown in the right panel of Fig. 6. The blue and orange curves are the theoretical predictions with the best-fit \(\{A_{\rm IA}\), \(b_{\rm geff}\), \(g_{\rm mag}\}\). For the fitting, we cut off the shaded regions at both large scales and small scales. The small-scale cut at \(\theta=1\) arcmin is based on the linear galaxy bias assumption, as including the \(\theta<1\) arcmin data will make the fitting significantly worse (increasing the fitting \(\chi^{2}\) from 7.5 to 50, with the degrees of freedom changed from 8 to 10). We note this scale cut could include the impacts from the 3D non-linear galaxy bias (Fong and Han 2021) and other small-scale effects such as massive neutrinos or baryon feedback in the matter power spectrum (Hildebrandt et al. 2017; Asgari et al. 2021). We emphasize that these systematics will be absorbed by the effective galaxy bias parameter \(b_{\rm geff}\) -- without breaking the scale-independent bias assumption -- so that the IA amplitude will not be affected. As discussed previously in Yao et al. (2020a,b), the SC method requires significant separation between \(w^{\gamma\rm g}\) and \(w^{\gamma\rm g}|_{\rm S}\) to accurately get \(w^{\rm Gg}\) and \(w^{\rm Ig}\). Therefore, we introduce a large-scale cut at \(\theta=20\) arcmin due to insufficient separation, as seen in the left panel of Fig. 6.
Similarly, we measure the \(\langle\gamma\kappa\rangle\) correlation with the estimator
\[w^{\gamma\kappa}(\theta)=\frac{\sum_{ij}w_{j}\gamma_{j}^{+}\kappa_{i}}{(1+\bar{m})\sum_{ij}w_{j}}, \tag{26}\]
where \(\kappa_{i}\) is the CMB lensing convergence in the \(i\)-th pixel of the pixelized map, taking the pixel center for its (ra, dec) coordinates, with \(N_{\rm side}=2048\) in healpy. The measured \(w^{\gamma\kappa}\) are shown in Fig. 7. The tangential shear is shown as blue dots. We also show the measurements with randomly shuffled galaxy positions and shears as red crosses, as a null test. We test the 45 deg rotated cross shear for both of the above cases and they are consistent with zero. The theoretical prediction with the best-fit \(A_{\rm lens}\) and \(A_{\rm IA}\) is shown as the green curve. If one assumes there is no IA in the measurements and uses \(A_{\rm IA}=0\), the theoretical values for the pure lensing signal are shown in orange.
Note in Fig. 7, because we use the Wiener-filtered \(\kappa\) map from _Planck_, both the \(w^{\gamma\kappa}\) measurements and the theoretical predictions are suppressed at small scales. The Wiener filter can significantly reduce the impact of the noise of the _Planck_ lensing map and improve the S/N of the measurements.
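The cross-correlation of Eq. (26) between the filtered \(\kappa\) map and the shear catalogue can be measured, for instance, with TreeCorr's scalar-shear correlation as sketched below; the input arrays are placeholders and masked pixels are assumed to be flagged with `hp.UNSEEN`.

```python
import numpy as np
import healpy as hp
import treecorr

def measure_w_gamma_kappa(kappa_map, ra_s, dec_s, g1, g2, w_s, m_bar, nside=2048):
    """Sketch of Eq. (26): correlation of the CMB convergence pixels with the
    tangential shear of the source galaxies."""
    ipix = np.arange(hp.nside2npix(nside))
    ra_pix, dec_pix = hp.pix2ang(nside, ipix, lonlat=True)
    seen = kappa_map != hp.UNSEEN                      # keep unmasked pixels only
    kcat = treecorr.Catalog(ra=ra_pix[seen], dec=dec_pix[seen], k=kappa_map[seen],
                            ra_units='deg', dec_units='deg')
    scat = treecorr.Catalog(ra=ra_s, dec=dec_s, g1=g1, g2=g2, w=w_s,
                            ra_units='deg', dec_units='deg')
    kg = treecorr.KGCorrelation(min_sep=0.5, max_sep=300.0, nbins=9,
                                sep_units='arcmin')
    kg.process(kcat, scat)
    return kg.meanr, kg.xi / (1.0 + m_bar)             # multiplicative-bias correction
```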
Together with the measurements in Figs. 6 and 7, we obtain the observables of this work, which are the LHS terms of Eqs. (17), (18) and (19). We use Jackknife resampling to obtain the covariance: 200 Jackknife regions are used, which is much larger than the length of the data vector (12), based on the analyses of Mandelbaum et al. (2006); Hartlap et al. (2007). The Jackknife regions are separated using the K-means algorithm kmeans_radec\({}^{5}\). The normalized covariance matrix is shown in Fig. 8. We find strong anti-correlation between \(w^{\rm Gg}\) and \(w^{\rm Ig}\) as expected (Yao et al. 2020b). Note that in Fig. 8, \(w^{\rm Ig}\) means the separated signal on the RHS of Eq. (16), including both the IA part and the contamination from magnification. There is no significant correlation between \(w^{\gamma\kappa}\) and the other two observables. This covariance will be used in the Monte Carlo Markov Chain (MCMC) to find the best-fit parameters \(\{A_{\rm IA}\), \(b_{\rm geff}\), \(g_{\rm mag}\), \(A_{\rm lens}\}\), while all the other cosmological parameters are fixed to _Planck_ as in Table 1.
Footnote 5: [https://github.com/esheldon/kmeansradec](https://github.com/esheldon/kmeansradec)
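For reference, the jackknife covariance can be assembled from the delete-one-region data vectors as in the generic sketch below, with the Hartlap correction applied when inverting; this is a schematic implementation, not the exact code used here.

```python
import numpy as np

def jackknife_covariance(data_vectors):
    """data_vectors: shape (N_jk, N_data), one row per delete-one-region measurement."""
    d = np.asarray(data_vectors)
    n_jk = d.shape[0]
    diff = d - d.mean(axis=0)
    return (n_jk - 1.0) / n_jk * diff.T @ diff

def hartlap_precision(cov, n_jk):
    """Debiased inverse covariance (Hartlap et al. 2007)."""
    n_data = cov.shape[0]
    return (n_jk - n_data - 2.0) / (n_jk - 1.0) * np.linalg.inv(cov)
```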
## 5 Results
### Validation with MICE2
In this subsection, we apply the IA self-calibration to the MICE2 mock catalog to test the impact of the systematics and validate the recovery of the IA signal. The processes of the mock data are identical to the descriptions in Sec. 4, but only focusing on the self-calibration part. The measurements are similar to Fig. 6 so
Figure 5: The boost factors for \(w^{\rm 8L}\) and \(w^{\rm 8L}\)[\(\rm{s}\) are shown in blue and orange, respectively. The overlapping lines suggest the two signals are affected by the boost factor in almost the same way. We show the boost factor is significant at small scales for the SC observables.
we choose to skip them. We perform the MCMC calculation using emcee (Foreman-Mackey et al. 2013). We consider flat priors in \(-5<A_{\rm IA}<5\), \(0<b_{\rm g,eff}<2\) and \(-3<g_{\rm mag}<3\).
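A minimal emcee sketch of such a fit is shown below, with the flat priors quoted above and a Gaussian likelihood built from the jackknife covariance; `model_vector` is a placeholder standing in for the theory predictions of Eqs. (17) and (18).

```python
import numpy as np
import emcee

def run_sc_fit(data, inv_cov, model_vector, nwalkers=32, nsteps=5000, seed=0):
    """Fit theta = (A_IA, b_g_eff, g_mag) with flat priors and a Gaussian likelihood.
    model_vector(theta) must return the prediction matching `data` (placeholder)."""
    bounds = [(-5.0, 5.0), (0.0, 2.0), (-3.0, 3.0)]    # A_IA, b_g,eff, g_mag

    def log_prob(theta):
        for val, (lo, hi) in zip(theta, bounds):
            if not (lo < val < hi):
                return -np.inf                         # flat prior
        resid = data - model_vector(theta)
        return -0.5 * resid @ inv_cov @ resid          # Gaussian likelihood

    rng = np.random.default_rng(seed)
    p0 = np.array([[rng.uniform(lo, hi) for lo, hi in bounds]
                   for _ in range(nwalkers)])
    sampler = emcee.EnsembleSampler(nwalkers, len(bounds), log_prob)
    sampler.run_mcmc(p0, nsteps, progress=False)
    return sampler.get_chain(discard=nsteps // 2, flat=True)
```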
#### 5.1.1 Impact from magnification
We show how the magnification signal affects the original SC method (Zhang 2010; Yao et al. 2020a,b) and the correction introduced in this work, focusing on the \(g_{\rm mag}-A_{\rm IA}\) space.
In Fig. 9, we show that if magnification is not included in the modeling, \(g_{\rm mag}\) is not constrained. The existing magnification signal will be treated as the IA signal, leading to a non-vanishing \(A_{\rm IA}\sim 0.3\), which significantly deviates from the MICE2 input \(A_{\rm IA}=0\). When the magnification model is included in the analysis, \(A_{\rm IA}\) is then consistent with 0. This demonstrates the importance of including the magnification model in the SC analysis with high-z data. The results are also summarized later in the comparisons in Fig. 11 for MICE2, and in Fig. 14 for KiDS data.
We note that in the green case of Fig. 9, which considers both IA and magnification, \(g_{\rm mag}\) and \(A_{\rm IA}\) are strongly degenerate. Therefore the constraining power on \(A_{\rm IA}\) suffers a significant loss compared with the blue case, which ignores magnification. This degeneracy can be broken in the future with higher S/N in the observables, because the shapes of \(w^{\rm Ig}\) and \(w^{{\rm G}\kappa^{\rm gal}}\) are different at small scales for the correlation functions as in Fig. 6, and at large scales for the power spectra as in Fig. 2. The IA-model-dependency will be discussed later with other results. Based on the above analysis, we conclude it is important to include magnification modeling for SC when using high-z data.
#### 5.1.2 Impact from modeling \(p(z|z^{\rm P})\)
Since the SC selection Eq. (8) plays an important role in the lensing-IA separation process, it is crucial to understand how the following aspects affect SC: (1) the quality of the photo-z \(z^{\rm P}\), (2) the true redshift distribution \(n(z)\), and (3) the link between them, \(p(z|z^{\rm P})\). The quality of the photo-z and the reconstruction of \(n(z)\) have been studied thoroughly for KiDS data (Kuijken et al. 2019; van den Busch et al. 2022; Hildebrandt et al. 2021; van den Busch et al. 2020); we therefore trust these results and leave alternative studies for SC to future work. The uncalibrated PDF that projects \(z^{\rm P}\to z\), on the other hand, has some known problems, for example when the Probability Integral Transform (PIT) is applied (Newman & Gruen 2022; Hasan et al. 2022).
In this work, we use a bi-Gaussian PDF model to project the photo-z distribution \(n^{\rm P}(z^{\rm P})\) to the SOM redshift distribution \(n(z)\), as previously shown in Fig. 3. This modeling ignores the potential differences between galaxies in the same \(z\)-bin (Peng et al. 2022; Xu et al. 2023). However, it is an alternative approach that treats the PDF problem at the single-galaxy level, and this analytical approach is also much faster to calculate than using different PDFs for different galaxies.
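As an illustration of this model, a bi-Gaussian \(p(z|z^{\rm P})\) can be written as a main peak around \(z^{\rm P}\) plus an outlier peak, and the implied \(n(z)\) follows by convolving it with the photo-z distribution \(n^{\rm P}(z^{\rm P})\); the parameter values in the sketch below are illustrative placeholders, not the fitted KiDS values.

```python
import numpy as np

def p_z_given_zp(z, zp, sigma_main=0.04, f_out=0.1, dz_out=0.5, sigma_out=0.1):
    """Bi-Gaussian photo-z PDF: a main peak around zp plus an outlier peak.
    All parameter values here are illustrative placeholders."""
    def gauss(x, mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (np.sqrt(2.0 * np.pi) * sig)
    main = gauss(z, zp, sigma_main * (1.0 + zp))
    outlier = gauss(z, zp + dz_out, sigma_out * (1.0 + zp))
    return (1.0 - f_out) * main + f_out * outlier

def n_z_from_photoz(z, zp_grid, n_zp, **pdf_kwargs):
    """Project n^P(z^P) onto n(z): n(z) = int n^P(z^P) p(z | z^P) dz^P."""
    pz = np.array([p_z_given_zp(z, zp, **pdf_kwargs) for zp in zp_grid])
    nz = np.trapz(n_zp[:, None] * pz, zp_grid, axis=0)
    return nz / np.trapz(nz, z)
```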
We use Fig. 10 to demonstrate how large this photo-z PDF modeling bias is under different approaches. We use the MICE2 simulation with the galaxy number density affected by magnification. When the SC calculation uses the true-z to calculate the signal drops \(Q^{\rm Gg}\) and \(Q^{\rm Ig}\), and the magnification model is also considered, we find the resulting \(A_{\rm IA}\) is consistent with 0, which is the MICE2 input. The scatter on \(A_{\rm IA}\) is \(\sim\) 0.1, thanks to the noiseless shapes in MICE2. If the \(Q\)s are calculated with the assumed photo-z PDF model, without including the magnification model, then \(A_{\rm IA}\) will be biased towards the negative direction. We proved with our fiducial analysis that, even if there exists a bias in \(Q^{\rm Gg}\) due to the assumed photo-z model, as long as the magnification model is used, this bias will be absorbed by the \(g_{\rm mag}\) parameter, so that the IA amplitude \(A_{\rm IA}\) is unbiased (consistent with 0 in the MICE2 case). The results are also shown later in the comparisons in Fig. 11 for MICE2, and in Fig. 14 for KiDS data.
We note that the bias due to photo-z modeling is not an essential problem for SC. In the future, if the photo-z outlier problem (or the redshift-color degeneracy problem) can be understood better, then a more reliable photo-z model can be used for our SC study. Alternatively, if the photo-z algorithms can give unbiased PDFs for each galaxy, this problem can also be directly solved.
Figure 10: The impact from photo-z PDF model bias. The blue case uses the photo-z from the BPZ algorithm and the true-z for each galaxy to calculate Eq. 17 and the resulting \(Q^{\rm Gg}\) and \(Q^{\rm Ig}\), which are the “sim” cases in Fig. 11. This \(A_{\rm IA}\) is consistent with 0, which is the MICE2 input. The green case uses the bi-Gaussian photo-z model for the calculation, corresponding to the “model” cases in Fig. 11, while ignoring the magnification contribution. This leads to an unconstrained \(g_{\rm mag}\) and a biased \(A_{\rm IA}\). The red case, which also uses the photo-z model but includes the magnification model, gives an \(A_{\rm IA}\) still consistent with 0, with the bias from the photo-z model error absorbed by \(g_{\rm mag}\).
Figure 9: The impact of the magnification signal on the IA measurement in MICE2. The green and blue contours are with and without magnification models, respectively. If the magnification model is used in the fitting, as in green, the IA amplitude \(A_{\rm IA}\) is consistent with 0, which is the MICE2 input.
### Inference on real data
With the above demonstration that our treatments for magnification and photo-z PDF are appropriate, and the resulting bias in \(A_{\rm IA}\) is very small (\(\Delta A_{\rm IA}<0.1\) and \(<1\sigma\) as shown in Fig. 11), we move on to apply SC to KiDS data and its cross-correlation with Planck lensing. We show the analysis of the following three situations:
(1) The case "ignore IA". We only use the observed \(w^{\gamma\kappa}\), while only including \(A_{\rm lens}\) in the fit and ignoring the contamination by IA (by setting \(A_{\rm IA}=0\)).
(2) The case "IA w/o SC". We only use the observed \(w^{\gamma\kappa}\), but consider both \(A_{\rm lens}\) and \(A_{\rm IA}\) following Eq. (19).
(3) The case "with SC". We use both \(w^{\gamma\kappa}\) in Fig. 7 and the SC correlations in Fig. 6. Both the CMB lensing amplitude \(A_{\rm lens}\) and the nuisance parameters \(\{A_{\rm IA}\), \(b_{\rm geff}\), \(g_{\rm mag}\}\) will be used in the analysis, following Eqs. (17), (18) and (19).
The results are shown in Fig. 12. We use flat priors in \(0<A_{\rm lens}<2\), \(-5<A_{\rm IA}<5\), and for the IA self-calibration nuisance parameters we use \(0<b_{\rm geff}<4\), \(-5<g_{\rm mag}<5\).
For case (1) "ignore IA", shown in blue, \(A_{\rm IA}\) is unconstrained in the fitting, giving the best-fit \(A_{\rm lens}=0.74^{+0.18}_{-0.17}\).
For case (2) "IA w/o SC", when we consider the existence of IA and apply the IA model as in Eq. (7), but do not use the measurements from SC (Fig. 6 and Eq. 17, 18), there will be a strong degeneracy between \(A_{\rm lens}\) and \(A_{\rm IA}\), as shown in orange. There is a significant loss of constraining power in the lensing amplitude, with the best-fit \(A_{\rm lens}=0.79^{+0.43}_{-0.46}\) and \(A_{\rm IA}=0.47^{+3.17}_{-3.47}\).
For case (3) "with SC", the introduced measurements of \(w^{\rm Gg}\) and \(w^{\rm Ig}\) can not only break the degeneracy between \(A_{\rm lens}\) and \(A_{\rm IA}\) (see Eqs. 17, 18 and 19), but also bring more constraining power to \(A_{\rm IA}\), so that the best-fit of \(A_{\rm lens}\) will not only be unbiased (according to the validation using simulation) but will also have significantly improved constraining power. The best-fit values are \(A_{\rm lens}=0.84^{+0.22}_{-0.22}\), \(A_{\rm IA}=0.60^{+1.08}_{-1.03}\), \(b_{\rm geff}=0.88^{+0.06}_{-0.06}\), and \(g_{\rm mag}=-0.30^{+1.60}_{-1.62}\). In Fig. 12 we only show \(A_{\rm IA}\) and \(A_{\rm lens}\), which are the focus of this work, while \(b_{\rm geff}\) and \(g_{\rm mag}\) are only related to the SC observables but not CMB lensing. Also, as discussed in Yao et al. (2020), the existence of the effective galaxy bias \(b_{\rm geff}\) can also absorb some systematics (so it could be a biased bias), leaving the constraint on \(A_{\rm IA}\) unbiased (as shown in Fig. 11). For example, we tested that if magnification is absent, the effect of the boost factor is purely absorbed by \(b_{\rm geff}\), giving unbiased \(A_{\rm IA}\) and \(A_{\rm lens}\). The effective galaxy bias could also absorb the differences in the assumed fiducial cosmology, with \(b_{\rm geff}\sim 1.24\) for the KiDS COSEBI cosmology, for example. The redshift distribution \(n(z)\) can differ slightly with/without accounting for the lensing weight (considering the lensing/clustering part in the galaxy-shape correlation), with a \(\sim 0.024\) difference in the mean-z, which can lead to a \(\sim 8\%\) difference in the theoretical lensing signal and a \(\sim 2\%\) difference in the theoretical IA signal. Other unaddressed sources of systematics such as baryonic feedback and massive neutrinos could have similar effects. We can also see from the validation using MICE data that although the resulting \(b_{\rm geff}\) is lower than the expectation, the \(A_{\rm IA}\) result is unbiased. The \(g_{\rm mag}\) result also resides in a reasonable range, considering the KiDS i-band magnitude (Kuijken et al. 2019) and comparing it with Duncan et al. (2014). The above three cases of IA treatments are also summarized later in Figs. 13 and 14 together with more tests and other works.
The corresponding best-fit curves are shown in Figs. 2 and 6 with \(A_{\rm IA}=0.60^{+1.03}_{-1.03}\), \(b_{\rm geff}=0.88^{+0.06}_{-0.06}\), and \(g_{\rm mag}=-0.30^{+1.60}_{-1.62}\). Even though the impact of magnification is comparable to the IA signal, we can see in both the angular power spectrum and the correlation function that the shapes of IA and magnification are different. For example, as shown in Fig. 6, the tidal alignment model \(w^{\rm Ig}\) and the magnification term \(g_{\rm mag}w^{{\rm G}\kappa^{\rm gal}}\) are similar at large
Figure 11: We validate our SC method with the MICE2 simulation, which does not have IA implemented; therefore, \(A_{\rm IA}=0\) is expected. The results are shown in green, where “MICE(mag)” means magnification is included in the MICE simulation, while “MICE(nomag)” means magnification is not included; “Q(sim)” and “Q(model)” indicate whether the signal-drop \(Q\) values are calculated from the true-z of the simulation or from the photo-z PDF model; and “w/o mag” and “w/ mag” indicate whether the case includes the magnification model in the fitting process. The upper two data points are the results from Fig. 9, showing the impact of modeling the magnification. The 2nd to the 4th data points are the results from Fig. 10, showing the impact of the Q calculation using different PDFs. The 4th data point corresponds to our fiducial analysis for the KiDS data later, with potential bias \(\Delta A_{\rm IA}<0.1\). The bottom data point is a reference case assuming no magnification effects in the data, corresponding to our previous work Yao et al. (2020a,b).
Figure 12: The constraints on lensing amplitude \(A_{\rm lens}\) and the IA amplitude \(A_{\rm IA}\), with three different methods: assume there is no IA in the measured \(w^{\rm yx}\) (blue), consider the impact of IA with conventional IA model but do not use SC (orange), use SC to subtract IA information and constrain together with the CMB lensing cross-correlation (green). When IA is ignored, \(A_{\rm IA}\) is unconstrained. The similar height and width of \(A_{\rm lens}\) PDFs between blue and green prove that by including SC, the \(A_{\rm IA}-A_{\rm lens}\) degeneracy can be efficiently broken so that the constraining power loss in \(A_{\rm lens}\) is very small.
scale, while different at small scale. Therefore, in principle, the degeneracy between IA and magnification can be broken for future data with higher S/N, so that the shape/slope information of the observables can be used. The current degeneracy is due to the low S/N, so that the amplitudes of \(A_{\rm IA}\) and \(g_{\rm mag}\) degenerate. Furthermore, if a more complicated IA model is used, for example, as in Blazek et al. (2019); Abbott et al. (2022), the small-scale IA will be different. Based on the study of Shi et al. (2021), for a wide range of stellar mass, the small-scale IA should have a higher amplitude (either a direct rise in the amplitude or a "drop-rise" pattern as we go to smaller scales) than the current model, so that the IA-magnification degeneracy can be broken further. An appropriate IA model will require studies of many aspects, together with higher S/N in the measurements; thus we leave this topic for future work.
We investigate how different choices can change our results. We first compare different scale cuts for \(w^{\gamma\kappa}\). Besides the baseline analysis of \(A_{\rm lens}=0.84^{+0.22}_{-0.22}\) with \(\theta>20\) arcmin, two more tests are made with a larger scale cut of \(\theta>40\) arcmin and a smaller scale cut of \(\theta>2\) arcmin, as shown in Fig. 7, which give us \(A_{\rm lens}=0.97^{+0.25}_{-0.25}\) and \(A_{\rm lens}=0.77^{+0.21}_{-0.22}\), respectively. The comparisons are shown in Fig. 13. The large-scale lensing amplitude is higher than the small-scale one, which agrees with the findings in Planck Collaboration et al. (2020c) and other cross-correlation work (Sun et al. 2022). In this work, we only report this large-scale v.s. small-scale difference. However, the current S/N of the CMB convergence - galaxy shear correlation and the model assumptions do not allow us to investigate this topic further.
We then compare the different choices in the SC method. We find that if the magnification model is ignored in the analysis, the existing magnification signal in the data will be treated as an IA signal, leading to an over-estimated \(A_{\rm IA}=0.81^{+0.36}_{-0.41}\) and an over-estimated \(A_{\rm lens}=0.87^{+0.18}_{-0.18}\). On the other hand, we previously argued that, when magnification is absent, the impact from the boost factor will be purely absorbed by the effective galaxy bias \(b_{\rm g,eff}\), leaving \(A_{\rm IA}\) and \(A_{\rm lens}\) unbiased. Unfortunately, this no longer holds when magnification is present: if the boost factor is not corrected, all the parameters will be biased as follows: \(A_{\rm IA}=1.86^{+1.01}_{-0.15}\), \(b_{\rm g,eff}=0.67^{+0.06}_{-0.06}\), \(A_{\rm lens}=1.00^{+0.23}_{-0.23}\) and \(g_{\rm mag}=1.55^{+1.28}_{-1.31}\). We include the comparisons of \(A_{\rm lens}\) and \(A_{\rm IA}\) for the above-described cases in Figs. 13 and 14 and emphasize the importance of taking magnification and the boost factor into consideration. We also show the impact of the assumed fiducial cosmology: if the fiducial cosmology is switched from _Planck_ to KiDS-1000 COSEBI as in Table 1, both \(A_{\rm lens}\) and \(A_{\rm IA}\) will change, as shown in Figs. 13 (bottom-red) and 14 (bottom-blue).
With the above results in simulation and data, summarized in Fig. 11, 13 and 14, we show that our measurements on \(A_{\rm IA}\) and \(A_{\rm lens}\) are unbiased from magnification, boost factor, and the assumed photo-z PDF model. These are the new developments considering the existence of magnification at high redshift \(z\sim 1\), beyond the study of Yao et al. (2020b).
Additionally, we compare our analysis with previous works. The comparisons of \(A_{\rm lens}\) are shown in Fig. 13. We find that most of the previous works ignored the IA contamination (Hand et al., 2015; Liu and Hill, 2015; Kirk et al., 2016; Harnois-Deraps et al., 2016; Singh et al., 2017; Harnois-Deraps et al., 2017; Namikawa et al., 2019; Marques et al., 2020). For the ones that considered IA, they either fixed the IA amplitude (Kirk et al., 2016; Omori et al., 2019) or used a strong prior (Robertson et al., 2021) to break the degeneracy between \(A_{\rm lens}\) and \(A_{\rm IA}\), which will otherwise cause a strong loss in constraining power as we show in Fig. 12. We are the first to directly achieve the IA amplitude measurement within the same data and break the lensing-IA degeneracy. Our
Figure 14: The comparisons of the constraints on \(A_{\rm IA}\). We show the results of this work in blue, which contains our fiducial analysis with SC applied, and the comparisons of (1) without SC, (2) with SC but ignoring magnification, (3) with SC but ignoring boost factor, and (4) switching to KiDS fiducial cosmology. We show comparisons with other works using KiDS-1000 data in orange, and some works using DES or HSC data in green.
Figure 13: The comparisons of the constraints on \(A_{\rm lens}\) with previous measurements. Our baseline analysis “with SC” is consistent with 1. We also show some cases where IA is ignored in the analysis, and where IA is considered but the \(A_{\rm IA}-A_{\rm lens}\) degeneracy is not broken with SC. These main results in blue are similar to Fig. 12. We show tests with different scale cuts, different treatments of magnification and the boost factor, and a different (KiDS) fiducial cosmology in red. We compare with other works, separated into those ignoring IA (orange) and those assuming a strong prior on IA (green). We note that for different works, the different fiducial cosmologies (the “Planck”, “WMAP”, “KiDS” labels on the y-axis) can lead to a \(\sim 10\%\) difference in \(A_{\rm lens}\).
baseline analysis is consistent with most of the previous results, showing that the contamination from IA is not significant, mainly because the total S/N of the CMB lensing - galaxy shear cross-correlation is only at the \(3\sim 5\,\sigma\) level at the current stage. However, the correct treatment of IA will become more and more important in the future with stage IV cosmic shear surveys and CMB observations.
The comparisons of the \(A_{\rm IA}\) constraint with other results using KiDS-1000 data are shown in Fig. 14, including the prior assumed in Robertson et al. (2021) and the cosmic shear tomography constraint in Asgari et al. (2021). Although the redshift range is slightly different, the above works have consistent results on \(A_{\rm IA}\). These comparisons will become more interesting for the next-stage observations.
As an extended study, we investigate how the choice of fiducial cosmology affects the SC results, namely \(A_{\rm IA}\). In Fig. 14 we show the results with the fiducial _Planck_ cosmology and the KiDS-1000 two-point correlation function \(\xi_{\pm}\) best-fit cosmology. We further compare the results with the KiDS-1000 band power \(C(\ell)\) cosmology and the COSEBIs cosmology in Fig. 15. The results from Asgari et al. (2021) (shown in orange) are arranged in increasing order from bottom to top. We find that when assuming the same cosmology, the SC results (shown in blue) also follow the same (weak) trend; meanwhile, they agree very well with the cosmic shear results. We note that the SC results will provide extra information for constraining IA in cosmic shear in the future.
## 6 Summary
In this work, we achieved the first cosmological application of the self-calibration (SC) method for the intrinsic alignment (IA) of galaxies. We proved that with SC, the lensing-IA degeneracy can be efficiently broken; in this CMB lensing \(\times\) galaxy shear cross-correlation work, this means breaking the degeneracy between the lensing amplitude \(A_{\rm lens}\) and the IA amplitude \(A_{\rm IA}\). We showed that in previous treatments, IA is either ignored or considered with a strong assumed prior on \(A_{\rm IA}\). We demonstrated in Figs. 12, 13 and 14 that with SC to break the degeneracy, the constraining power in both \(A_{\rm lens}\) and \(A_{\rm IA}\) is preserved.
We demonstrated that proper angular scale cuts on \(w^{\gamma\kappa}\) are important. Our baseline analysis using information from \(\theta>20\) arcmin gives \(A_{\rm lens}=0.84^{+0.22}_{-0.22}\). If we use information only at larger scales with \(\theta>40\) arcmin, the constraint is \(A_{\rm lens}=0.97^{+0.25}_{-0.25}\). If we include information at much smaller scales with \(\theta>2\) arcmin, the constraint is \(A_{\rm lens}=0.77^{+0.21}_{-0.24}\). At the current stage, they do not differ significantly from each other (even considering that they are strongly correlated), as shown in Fig. 13. However, we note that these differences at different scales also exist in other works (Planck Collaboration et al. 2020c; Sun et al. 2022). We therefore emphasize the importance of understanding the possible systematics at different scales for future studies with higher S/N.
Comparing our CMB lensing amplitude \(A_{\rm lens}\) with other works in Fig. 13, we found consistent results with different treatments of IA throughout almost all the works. We conclude that IA is not a significant source of systematics for the current stage. However, it will soon become more important with the stage IV observations. Nevertheless, we emphasize that the correct treatment to break the lensing-IA degeneracy is very important to maintain the cosmological constraining power. Our constraint on the IA amplitude \(A_{\rm IA}\) in Fig. 14 is also consistent with the existing analysis on IA with KiDS-1000 data. We note that the SC-subtracted IA information can be used as extra constraining power for any of these analyses.
On the technical side, we further developed the SC method considering more sources of systematics beyond Yao et al. (2020b). We showed that at \(z\sim 1\), the galaxy shear \(\times\) cosmic magnification component \(g_{\rm mag}w^{{\rm G}\kappa^{\rm gal}}\) contaminates the separated IA \(\times\) galaxy number density signal \(w^{\rm Ig}\), and is non-negligible, as shown in Figs. 2 and 6. We use Eqs. (16) and (18) to show how the magnification term enters our observable and how we include it in the theory as a correction. We show in Figs. 13 and 14 that the correction for magnification is important when applying SC to higher-redshift data, in order to get the correct constraint on IA. We also discussed that, with the contamination from magnification, the boost factor can no longer be absorbed by the effective galaxy bias \(b_{\rm g,eff}\), and needs to be accounted for correctly, as shown in Eqs. (24), (25) and Figs. 6, 13, 14.
We also validated our analysis with the MICE2 simulation, focusing on two aspects: (1) how well the magnification model can mitigate the contamination from the magnification-shear signal; and (2) whether the assumed photo-z PDF model (which is used to calculate the signal drops \(Q^{\rm Gg}\) and \(Q^{\rm Ig}\)) biases the IA measurement. With the strong constraining power from MICE2 with no shape noise, we show in Fig. 11 that, when the magnification model is included in the analysis, the IA amplitude can be obtained correctly (consistent within the \(1\sigma\) range of 0, which is the input of MICE2). Additionally, the bias from the assumed photo-z model is negligible when the magnification model is used, as the effective magnification prefactor \(g_{\rm mag}\) will absorb the introduced error. We therefore emphasize the importance of including the magnification model in the SC analysis, especially for future high-z surveys like LSST, Euclid, WFIRST, and CSST. We further note that the contamination from magnification makes SC no longer an IA-model-independent method; therefore, SC is more suitable for low-z data when considering alternative IA models.
Compared with our first measurements with KV-450 data (Yao et al. 2020a), many improvements have been added to the SC method, including:
(1) the covariance, the galaxy bias, the scale-dependency of the lensing-drop \(Q^{\rm Gg}\) and the IA-drop \(Q^{\rm Ig}\), and appropriate scale-cuts,
Figure 15: The comparisons of \(A_{\rm IA}\) between SC-subtracted results (blue) and cosmic shear tomography subtracted results (orange) with cosmologies from different 2-point statistics. The cosmologies are shown in Table 1.
which have been introduced in Yao et al. (2020b);
(2) the boost factor, the cosmic magnification, and the photo-z PDF modeling, which are introduced in this work;
(3) its first validation using simulation, and its first application to cosmology in order to break the lensing-IA degeneracy, introduced in this work.
With these improvements, we manage to achieve consistent IA results between SC and cosmic shear, as shown in Fig. 15, while previously we got \(A_{\rm IA}=2.31^{+0.42}_{-0.42}\) with the old version of SC (Yao et al., 2020a) and \(A_{\rm IA}=0.981^{+0.694}_{-0.678}\) for cosmic shear (Hildebrandt et al., 2020) with KV-450 data.
Although the SC-obtained \(A_{\rm IA}\) is consistent with the MICE input IA, and when applied to data it is consistent with the KiDS cosmic shear results (Asgari et al. 2021) and the other CMB lensing work (Robertson et al. 2021), and \(g_{\rm mag}\) is in reasonable agreement with Duncan et al. (2014), our results still suffer from an unrealistically low effective galaxy bias \(b_{\rm g,eff}=0.88\), which is different from our previous work (Yao et al. 2020b). We discussed that this value may absorb contributions from (1) the fiducial cosmology, (2) the lensing weight in \(n(z)\), (3) insufficient modeling of the non-linear galaxy bias, baryonic effects, and massive neutrinos, (4) an incorrect photo-z v.s. true-z connection as discussed in Appendix A, and (5) possibly other sources of systematics. We emphasize the complication and leave this point for future studies.
We note that there could still exist systematics other than the galaxy bias, such as the beyond-Limber approximation (Fang et al. 2020), non-flat \(\Lambda\)CDM (Yu et al. 2021), and selection bias in the shear measurement (Li et al. 2021). However, they either have much smaller impacts compared with IA or are strongly reduced due to our scale cuts. Therefore, they are beyond the scope of this paper.
###### Acknowledgements.
The authors thank Yu Yu, Hai Yu, Jiaxin Wang for useful discussions. This work is supported by National Key R&D Program of China No. 2022YFF0503043. JY acknowledges the support of the National Science Foundation of China (12203084), the China Postdoctoral Science Foundation (2021T140451), and the Shanghai Post-doctoral Excellence Program (2021419). HYS acknowledges the support from CMS-CSST-2021-001 and CMS-CSST-2021-001, NSFC of China under grant 11973070. The Shanghai Committee of Science and Technology grant No.19ZR1466600 and Key Research Program of Frontier Sciences, CAS, Grant No. ZBDS-L17-013. ZF acknowledges the support of the National Science Foundation of China (116213213, 11433001). XL acknowledges the support of NSFC of China under Grant No. 11803028, YNU Grant No. C176220100008, and a grant from the CAS Interdisciplinary Innovation Team. BJ acknowledges support by STC Consolidated Grant ST/V000780/1. MB is supported by the Polish National Science Center through grants no. 2020/38/EP3709305, 2018/03/EP3709388 and 2002/39/B/ST/93/03494, and the Polish Ministry of Science and Higher Education through grant DIR/UK/2018/12. HH is supported by a Heisenberg grant of the Deutsche Forschungsgemeinschaft (Hi 1495/5-1) as well as an ERC Consolidator Grant (No.770935). TT acknowledges support from the Leverhulme Trust. AW is supported by an European Research Council Consolidator Grant (No. 770935). ZY acknowledges support from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award on the Federal Ministry of Education and Research (Germany). The computations in this paper were run on the \(\pi\,2.0\) cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University. The codes LY produced for this paper were written in Python. JY thanks all its developers and especially the people behind the following packages: SCIPY (Jones et al., 2001-), NUMPY (van der Walt et al., 2011), ASTROPY (Astropy Collaboration et al., 2013) and MATPLOTIB (Hunter, 2007), TreeCorr (Jarus et al., 2004), CCL (Chisari et al., 2019), CAMB (Lewis et al., 2000), Helap (Gorski et al., 2005; Zonca et al., 2019), emcee (Foreman-Mackey et al., 2013), rifiso6, kmeans_radec7, corner (Foreman-Mackey, 2016), ChainConsumer7. The KiDS-1000 results in this paper are based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 177.A-3016, 17.A-3017 and 17.A-3018, and on data products produced by Target/OmegaGEN, INAF-OACN, INAF-OAPD, and the KiDS production team, on behalf of the KiDS consortium. Author contributions: All authors contributed to the development and writing of this paper. The authorship list is given in three groups: the lead authors (Y, HS, ZXL) followed by two alphabetical groups. The first alphabetical group includes those who are key contributors to both the scientific analysis and the data products. The second group covers those who have either made a significant contribution to the data products, or to the scientific analysis.
Footnote 6: [https://github.com/esheldon/fitsio](https://github.com/esheldon/fitsio)
|
2309.07260 | Self-duality properties and localization centers of the electronic wave
functions at high magic angles in twisted bilayer graphene | Twisted bilayer graphene (TBG) is known for exhibiting highly correlated
phases at magic angles due to the emergence of flat bands that enhance
electron-electron interactions. In the TBG chiral model, electronic wave
function properties depend on a single parameter ($\alpha$), inversely
proportional to the relative twist angle between the two graphene layers. In
previous studies, as the twist angles approached small values, strong
confinement, and convergence to coherent Landau states were observed. This work
explores flat-band electronic modes, revealing that flat band states exhibit
self-duality; they are coherent Landau states in reciprocal space and exhibit
minimal dispersion, with standard deviation $\sigma_k=\sqrt{3\alpha/2\pi}$ as
$\alpha$ approaches infinity. Subsequently, by symmetrizing the wave functions
and considering the squared TBG Hamiltonian, the strong confinement observed in
the $\alpha\rightarrow\infty$ limit is explained. This confinement arises from
the combination of the symmetrized squared norm of the moir\'e potential and
the quantized orbital motion of electrons, effectively creating a quantum well.
The ground state of this well, located at defined spots, corresponds to Landau
levels with energy determined by the magic angle. Furthermore, we demonstrate
that the problem is physically analogous to an electron attached to a
non-Abelian $SU(2)$ gauge field with an underlying $C_3$ symmetry. In regions
of strong confinement, the system can be considered as Abelian. This allows to
define a magnetic energy in which the important role of the wave function
parity and gap closing at non-magic angles is revealed. Finally, we investigate
the transition from the original non-Abelian nature to an Abelian state by
artificially changing the pseudo-magnetic vector components from an $SU(2)$ to
a $U(1)$ field, which alters the sequence of magic angles. | Leonardo A. Navarro-Labastida, Gerardo G. Naumis | 2023-09-13T18:52:49Z | http://arxiv.org/abs/2309.07260v1 | Self-duality properties and localization centers of the electronic wave functions at high magic angles in twisted bilayer graphene
###### Abstract
Twisted bilayer graphene (TBG) is known for exhibiting highly correlated phases at magic angles due to the emergence of flat bands that enhance electron-electron interactions. The connection between magic angles and the Quantum Hall effect remains a topic of ongoing research. In the TBG chiral model, electronic wave function properties depend on a single parameter (\(\alpha\)), inversely proportional to the relative twist angle between the two graphene layers and which includes the interlayer interaction strength. In previous studies, as the twist angles approached small values, strong confinement and a convergence to coherent Landau states were observed. However, the origin of these phenomena remained elusive. This work explores flat-band electronic modes, revealing that flat band states exhibit self-duality; they are coherent Landau states in reciprocal space and exhibit minimal dispersion, with standard deviation \(\sigma_{k}=\sqrt{3\alpha/2\pi}\) as \(\alpha\) approaches infinity. Subsequently, by symmetrizing the wave functions and considering the squared TBG Hamiltonian, the strong confinement observed in the \(\alpha\rightarrow\infty\) limit is explained. This confinement arises from the combination of the symmetrized squared norm of the moire potential and the quantized orbital motion of electrons, effectively creating a quantum well. The ground state of this well, located at defined spots, corresponds to Landau levels with energy determined by the magic angle. Furthermore, we demonstrate that the problem is physically analogous to an electron attached to a non-Abelian \(SU(2)\) gauge field with an underlying \(C_{3}\) symmetry. In regions of strong confinement, the system can be considered as Abelian, aligning with the picture of a simple harmonic oscillator. This allows to define a magnetic energy in which the important role of the wave function parity and gap closing at non-magic angles is revealed. Finally, we investigate the transition from the original non-Abelian nature to an Abelian state by artificially changing the pseudo-magnetic vector components from an \(SU(2)\) to a \(U(1)\) field, which alters the sequence of magic angles.
## I Introduction
Superconductivity in twisted bilayer graphene (TBG) is known to occur when the rotation angle between layers is able to produce a flat band in which electrons have zero group velocity [1]. Such angles are known as "magic angles." This important discovery has unveiled the significance of two-dimensional (2D) materials in understanding unconventional superconductivity in cuprates and heavy fermion systems, as they share similar quantum phase diagrams and present a new paradigm in moire materials [1; 2; 3]. After the discovery of superconductivity in TBG [1], other works reinforced the observation that flat bands are quite important to the existence of unconventional superconductivity and strongly correlated phases in twisted multilayer graphene systems [2; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. TBG flat bands, also known as zero mode states, share a lot of mathematical similarities to the ground state of the quantum Hall effect (QHE) [26; 27; 28]. It was also known that magic angles exhibit a remarkable 3/2 sequence or quantization rule, characterized by the vanishing of the Fermi velocity and the appearance of flat bands [26; 27; 28; 29; 30].
G. Tarnopolsky et al. [26] found the simplest model for magic angles in TBG by turning off one of the hoppings between layers. This model was crucial for understanding the underlying symmetries, such as intralayer inversion symmetry and the parity of magic angles. It also allowed for a deeper analysis of the structure of the zero-mode wave function [27].
Zero energy modes at magic angles have been investigated in many recent works [31; 32; 33; 34; 35; 36; 37; 38; 15; 38]. There were mathematical hints for a possible connection with the QHE and the lowest Landau level [26; 28; 32; 39; 40]. Other works, revealed interesting connections with FQHE, topological matter, Weyl semimetals, Floquet systems, and anomalous edge states [5; 7; 38; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51].
Working in magic-angle TBG it was indeed proved that the squared Hamiltonian of this system is closely related to the quantum harmonic oscillator and QHE [28]. The ground state is a flat band in which the wave function converges into coherent Landau-level states of the QHE. Another important result was the explanation of the mystery of the "3/2 magic angle recurrence rule" by using scaling arguments [28]. This rule is intimately related to the quantization of angular momentum. Consequently, for each magic angle, there exists a well-defined attached angular quantum number, which can be interpreted as interlayer currents [28]. This explanation of the basic principles underlying the magic-angle phenomenon provides valuable insights into addressing new fundamental
questions at the intersection of the fractional quantum Hall effect (FQHE) and unconventional superconductivity. These questions are the subject of intense study in strongly correlated systems [52].
However, despite our previous works [28; 30; 53], several questions remain unanswered. One of these questions pertains to the mechanism behind the strong localization of wavefunctions in magic-angle invariant spots once the lattice is properly scaled by the parameter \(\alpha\), which encapsulates the energetic interaction coupling between layers and the angle. Additionally, we have yet to explore the consequences of nearly coherent Landau states. Here we show that zero modes behave as minimal dispersion packets as expected. We also explain how the wavefunction confinement arises around certain localization centers due to an effective potential produced by the moire potential and the orbital motion of the electron. Moreover, we show that the magic angle order parity is a crucial property associated with flat bands in twisted bilayer graphene. We also establish some connections between the angular momentum and non-Abelian pseudo-magnetic fields.
The present work is organized as follows. Section II introduces the Hamiltonian for the chiral twisted bilayer graphene model and the pseudo-magnetic field that emerges due to the effect of the parameter \(\alpha\). Section III finds self-duality localization properties between reciprocal and real space and demonstrates that zero-mode states are coherent Landau states. Section IV analyzes confinement conditions for the electronic wavefunction in the asymptotic limit \(\alpha\rightarrow\infty\) and the symmetries of the zero-energy wavefunction. Section V explores the non-Abelian nature of TBG and its connection with the magnetic QHE. Section VI analyzes the non-Abelian nature of the pseudo-magnetic field by artificially changing its structure to make it more Abelian, and how the scaling and recurrence are modified. Finally, section VII gives some conclusions and further research directions.
## II Chiral squared TBG Hamiltonian
The BM (Bistritzer-MacDonald) Hamiltonian was the first model to capture the nature of magic angle recurrence in TBG [29]. Interestingly, when the \(AA\) tunneling between layers is set to zero, the spectrum of TBG has an extra chiral symmetry; this reduced model is called the cTBG or TKV (Tarnopolsky-Kruchkov-Vishwanath) model. In the chiral basis, the bi-spinor is \(\Phi(\mathbf{r})=\left(\psi_{1}(\mathbf{r}),\psi_{2}(\mathbf{r}),\chi_{1}(\mathbf{r}),\chi_{2}(\mathbf{r})\right)^{T}\), where the indices \(1,2\) denote each graphene layer and \(\psi_{j}(\mathbf{r})\) and \(\chi_{j}(\mathbf{r})\) are the Wannier orbitals on each sub-lattice of the graphene unit cell.
The chiral Hamiltonian is given by [26; 31; 54],
\[\mathcal{H}=\begin{pmatrix}0&D^{*}(-\mathbf{r})\\ D(\mathbf{r})&0\end{pmatrix} \tag{1}\]
where the zero-mode operator is defined as,
\[D(\mathbf{r})=\begin{pmatrix}-i\bar{\partial}&\alpha U(\mathbf{r})\\ \alpha U(-\mathbf{r})&-i\bar{\partial}\end{pmatrix} \tag{2}\]
with \(\bar{\partial}=\partial_{x}+i\partial_{y}\). The coupling potential between layers is,
\[U(\mathbf{r})=\sum_{\nu=1}^{3}e^{i\phi(\nu-1)}e^{-i\mathbf{q}_{\nu}\cdot\mathbf{r}} \tag{3}\]
where the phase factor is \(\phi=2\pi/3\) and the vectors are given by,
\[\mathbf{q}_{1} =k_{\theta}(0,-1) \tag{4}\] \[\mathbf{q}_{2} =k_{\theta}(\frac{\sqrt{3}}{2},\frac{1}{2})\] \[\mathbf{q}_{3} =k_{\theta}(-\frac{\sqrt{3}}{2},\frac{1}{2})\]
The moire modulation vector is \(k_{\theta}=2k_{D}\sin\frac{\theta}{2}\), where \(k_{D}=\frac{4\pi}{3a_{0}}\) is the magnitude of the Dirac wave vector and \(a_{0}\) is the lattice constant of monolayer graphene. The cTBG model has only \(\alpha\) as a parameter, defined as \(\alpha=\frac{w_{1}}{v_{0}k_{\theta}}\), where \(w_{1}=110\) meV is the interlayer coupling of the AB/BA stacking and \(v_{0}=\frac{19.81\,\mathrm{eV}}{2k_{D}}\) is the Fermi velocity. The diagonal operators \(\partial\) and \(\bar{\partial}\) are dimensionless, as eq. (1) is written using units where \(v_{0}=1\), \(k_{\theta}=1\). The twist angle only enters through the dimensionless parameter \(\alpha\) and the energy scaling \(\epsilon/\alpha\).
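As a quick numerical illustration of how the twist angle enters only through \(\alpha\), the sketch below (a minimal Python snippet using the values \(w_{1}=110\) meV and \(v_{0}k_{D}=19.81/2\) eV quoted above; the function names are ours) converts between \(\theta\) and \(\alpha\).

```python
import numpy as np

# alpha = w1 / (v0 * k_theta), with v0 * k_D = (19.81 / 2) eV and
# k_theta = 2 * k_D * sin(theta / 2), so alpha = w1 / (19.81 eV * sin(theta / 2)).
W1 = 0.110            # interlayer AB/BA coupling w1, in eV (value from the text)
V0_KD = 19.81 / 2.0   # v0 * k_D, in eV (value from the text)

def alpha_from_theta(theta_deg):
    """Dimensionless coupling alpha for a twist angle theta given in degrees."""
    theta = np.radians(theta_deg)
    return W1 / (2.0 * V0_KD * np.sin(theta / 2.0))

def theta_from_alpha(alpha):
    """Twist angle (in degrees) corresponding to a given alpha."""
    return np.degrees(2.0 * np.arcsin(W1 / (2.0 * V0_KD * alpha)))

# Example: the ninth magic angle alpha_9 = 12.855 quoted in Fig. 4
print(theta_from_alpha(12.855))   # ~0.05 degrees
```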
In \(k\)-space, the moire Brillouin zone (mBZ) has
\[\mathbf{b}_{1,2}=\mathbf{q}_{2,3}-\mathbf{q}_{1} \tag{5}\] \[\mathbf{b}_{3}=\mathbf{q}_{3}-\mathbf{q}_{2}\]
as the moire reciprocal vectors. Some important high-symmetry points of the mBZ are \(\mathbf{K}=(0,0)\), \(\mathbf{K^{\prime}}=-\mathbf{q}_{1}\), and \(\mathbf{\Gamma}=\mathbf{q}_{1}\) [30]. It is also convenient to define a set of unit vectors \(\mathbf{q}_{\nu}^{\perp}\), perpendicular to the set \(\mathbf{q}_{\nu}\), as
\[\mathbf{q}_{1}^{\perp} =(1,0) \tag{6}\] \[\mathbf{q}_{2}^{\perp} =\big{(}-\frac{1}{2},\frac{\sqrt{3}}{2}\big{)}\] \[\mathbf{q}_{3}^{\perp} =\big{(}-\frac{1}{2},-\frac{\sqrt{3}}{2}\big{)}\]
The moire unit cell vectors are given by \(\mathbf{a}_{1,2}=(4\pi/3k_{\theta})(\sqrt{3}/2,1/2)\). Note that \(\mathbf{q}_{\nu}\cdot\mathbf{a}_{1,2}=-\phi\) for
\(\nu=1,2,3\). In our previous works [28; 30; 53], we demonstrated that squaring the Hamiltonian \(\mathcal{H}\) allows us to simplify it into a \(2\times 2\) matrix that we call the squared Hamiltonian \(H^{2}\). In this work, we introduce notation changes in the definitions used inside \(H^{2}\). The reasons will become evident later on. \(H^{2}\) is given by,
\[H^{2}= \tag{7}\] \[\begin{pmatrix}-\nabla^{2}+\alpha^{2}(\mathbf{A}^{2}+i[A_{x},A_{y}])& \alpha(-2i\mathbf{A}_{-}\cdot\nabla+\nabla\times\mathbf{A}_{-})\\ \alpha(-2i\mathbf{A}_{+}\cdot\nabla+\nabla\times\mathbf{A}_{+})&-\nabla^{2}+\alpha^{2} (\mathbf{A}^{2}-i[A_{x},A_{y}])\end{pmatrix}\]
where we defined,
\[\mathbf{A}_{\pm}\equiv\mathbf{A}(\pm\mathbf{r})=\sum_{\nu=1}^{3}e^{\pm i\mathbf{q}_{\nu}\cdot \mathbf{r}}\mathbf{q}_{\nu}^{\perp} \tag{8}\]
here \(\mathbf{A}_{\pm}\) is a pseudo-magnetic vector potential with \(C_{3}\) symmetry and \(\mathbf{A}^{2}=|\mathbf{A}_{\pm}|^{2}\). The squared norm of the coupling potential is an effective intralayer confinement potential,
\[|U(\pm\mathbf{r})|^{2}=\mathbf{A}^{2}\mp i[A_{x},A_{y}] \tag{9}\]
where the confinement potential \(|U(\pm\mathbf{r})|^{2}\) is separated into its purely symmetric \(\mathbf{A}^{2}(\mathbf{r})\) and anti-symmetric \(i[A_{x},A_{y}]\) parts defined as,
\[\mathbf{A}^{2}(\mathbf{r})=3-\sum_{\nu}\cos{(\mathbf{b}_{\nu}\cdot\mathbf{r})} \tag{10}\] \[\Delta(\mathbf{r})=\sqrt{3}\sum_{\nu}(-1)^{\nu}\sin{(\mathbf{b}_{\nu} \cdot\mathbf{r})}\]
here \(\Delta(\mathbf{r})=i[A_{x},A_{y}]\), where \(A_{x}\) and \(A_{y}\) are the non-Abelian components of the \(SU(2)\) pseudo-magnetic vector potential (see Appendix A). It is important to remark that the pseudo-magnetic vector potential satisfies the relation \(\mathbf{\nabla}\cdot\mathbf{A}_{\pm}=0\), so it is in the Coulomb gauge, with \(\mathbf{\nabla}\times\mathbf{A}_{+}=\mathbf{B}_{+}\) (layer 1) and \(\mathbf{\nabla}\times\mathbf{A}_{-}=\mathbf{B}_{-}\) (layer 2). The magnetic field is thus given by,
\[\mathbf{B}(\pm\mathbf{r})=\pm i\sum_{\nu}e^{\pm i\mathbf{q}_{\nu}\cdot\mathbf{r}}\mathbf{e}_{z} \tag{11}\]
where we have used the identity \(\mathbf{e}_{z}=\mathbf{q}_{\nu}\times\mathbf{q}_{\nu}^{\perp}\) and \(\mathbf{e}_{z}\) is a unit vector in the direction perpendicular to the graphene plane.
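The algebraic relations above are easy to verify numerically. The following sketch is a direct implementation of eqs. (3)-(10) with \(k_{\theta}=1\) (variable names are ours); it checks that \(|U(\pm\mathbf{r})|^{2}=\mathbf{A}^{2}\mp\Delta\) and that \(\mathbf{\nabla}\cdot\mathbf{A}=0\) at an arbitrary point.

```python
import numpy as np

phi = 2 * np.pi / 3
q = np.array([[0.0, -1.0],
              [np.sqrt(3) / 2, 0.5],
              [-np.sqrt(3) / 2, 0.5]])            # q_1, q_2, q_3, eq. (4)
qperp = np.array([[1.0, 0.0],
                  [-0.5, np.sqrt(3) / 2],
                  [-0.5, -np.sqrt(3) / 2]])       # q_nu^perp, eq. (6)
b = np.array([q[1] - q[0], q[2] - q[0], q[2] - q[1]])  # b_1, b_2, b_3, eq. (5)

def U(r):
    """Interlayer coupling potential U(r), eq. (3)."""
    return sum(np.exp(1j * phi * nu) * np.exp(-1j * q[nu] @ r) for nu in range(3))

def A(r):
    """Pseudo-magnetic vector potential A(+r), eq. (8)."""
    return sum(np.exp(1j * q[nu] @ r) * qperp[nu] for nu in range(3))

def A2(r):
    """Symmetric confinement potential A^2(r), eq. (10)."""
    return 3.0 - sum(np.cos(b[nu] @ r) for nu in range(3))

def Delta(r):
    """Anti-symmetric potential Delta(r), eq. (10); nu runs over 1, 2, 3."""
    return np.sqrt(3) * sum((-1) ** (nu + 1) * np.sin(b[nu] @ r) for nu in range(3))

r = np.array([0.37, 1.21])                        # arbitrary test point
print(np.isclose(abs(U(r)) ** 2, A2(r) - Delta(r)))    # |U(+r)|^2 = A^2 - Delta, eq. (9)
print(np.isclose(abs(U(-r)) ** 2, A2(r) + Delta(r)))   # |U(-r)|^2 = A^2 + Delta, eq. (9)

# Finite-difference check of the Coulomb-gauge condition div A = 0
h = 1e-6
div_A = ((A(r + np.array([h, 0]))[0] - A(r - np.array([h, 0]))[0])
         + (A(r + np.array([0, h]))[1] - A(r - np.array([0, h]))[1])) / (2 * h)
print(abs(div_A) < 1e-6)
```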
Notice that squaring the chiral TBG model is akin to a supersymmetric transformation [55; 56; 57; 58; 59], which seems to play a role in the proposed equivalence between the squared TBG electron Hamiltonian and an electron coupling to a \(SU(2)\) non-Abelian pseudo-magnetic field [53].
## III Self-duality properties and convergence into coherent Landau states
It has been demonstrated that twisted bilayer graphene has Landau levels [15; 25; 37; 60]. They play a crucial role in its remarkable properties, such as superconductivity and fractional Chern insulator phases [16; 18; 38; 47; 49; 52; 61; 62]. However, there are still gaps in the understanding of electronic localization in TBG from a single-particle perspective. For example, why, for \(\alpha\rightarrow\infty\), does the wavefunction localize at specific regions in real space and \(k\)-space, and how are both spaces related?
In a recent paper we demonstrated that the wave function in TBG behaves as an almost coherent Landau state with a dispersion \(\sigma=1/\sqrt{3\alpha}\), which is only reached in the asymptotic limit [28]. This asymptotic limit squeezes the bands and makes these coherent states difficult to measure, but that is not our concern at this point; we are more interested in drawing analogies and connections with Landau levels. Below we discuss some properties of the wave functions and their relationship with coherent states.
Coherent states are self-dual in the sense that their Fourier transforms in reciprocal space look similar to their real-space form but with inverted parameters. As a consequence, they satisfy the minimal uncertainty relation between real and momentum space. Let us now explore whether this property holds for TBG zero modes.
As seen in Fig. 1, the electronic probability density in real space for the ninth magic angle \(\alpha_{9}\), plotted in coordinates normalized as \(\frac{y-\mathbf{R}}{\sqrt{\alpha}}\), where \(\mathbf{R}\approx 1.047\) is the position of one of the numerically found maxima (this value suggests that \(\mathbf{R}\approx\pi/3\), but we do not have a proof of this conjecture), is almost a Gaussian. For comparison, in Fig. 1 we plot a Gaussian with the same dispersion. Fig. 1 reveals that the electronic distribution has a power-law fat-tail decay. Interestingly, this makes the electronic density somewhat similar to the velocity-distribution fluctuations in turbulence [63].
However, \(\alpha\) squeezes these fat tails as this scaling parameter increases. This is shown in Fig. 2 where we plot the electronic probability in real space from the second to the ninth magic angles written in normalized coordinates, i.e., with zero mean and standard deviation one. Clearly, as the system goes to higher magic angles the fat tail diminishes and asymptotically converges to an invariant Gaussian distribution.
As the positions of maximal electronic probability density near the origin are located at \(\mathbf{R}\approx\pm 1.047\mathbf{q}_{\nu}\), the density can be approximated by a Gaussian distribution near \(\mathbf{R}\) as,
\[|\psi(\mathbf{r})|^{2}\approx\frac{3A_{M}}{2\pi\sigma}e^{-\frac{1}{2\sigma^{2}}| \mathbf{r}\pm\mathbf{R}|^{2}} \tag{12}\]
where \(A_{M}=8\pi^{2}/(3\sqrt{3})\) is the normalized moire unit cell area and \(\sigma=1/\sqrt{3\alpha}\) is the standard deviation. Note that
eq. (12) is independent of \(\alpha\). To include the fat tails, we can use another function \(W_{\alpha}(\mathbf{r})\) which is \(\alpha\) dependent such that,
\[|\psi(\mathbf{r})|^{2}\approx\frac{A_{M}}{2\pi\sigma}e^{-\frac{1}{2\sigma^{2}}|\mathbf{r }+\mathbf{R}|^{2}}|W_{\alpha}(\mathbf{r})|^{2} \tag{13}\]
in agreement with other works [60; 64]. These fat tails are interesting, as they allow for wave-function overlaps even though the states are, at the same time, strongly localized in certain regions.
Coherent states have the property of being minimal dispersion wave packets. We explore this property for TBG by looking at the reciprocal space. As the wave functions follow Bloch's theorem, they can be written as [26],
\[\Psi_{\mathbf{k}}(\mathbf{r})=\begin{pmatrix}\psi_{\mathbf{k},1}(\mathbf{r})\\ \psi_{\mathbf{k},2}(\mathbf{r})\end{pmatrix}=\sum_{l,n}\begin{pmatrix}a_{ln}\\ b_{ln}e^{i\mathbf{q}_{1}\cdot\mathbf{r}}\end{pmatrix}e^{i(\mathbf{K}_{ln}+\mathbf{k})\cdot\bm {r}} \tag{14}\]
where \(a_{ln}\) and \(b_{ln}\) are Fourier coefficients for layer 1 and layer 2 respectively. \(\mathbf{k}\) is a generic reciprocal wave vector and \(\mathbf{K}_{ln}=l\mathbf{b}_{1}+n\mathbf{b}_{2}\). The vectors \(\mathbf{b}_{1}=(\frac{\sqrt{3}}{2},\frac{3}{2})\) and \(\mathbf{b}_{2}=(-\frac{\sqrt{3}}{2},\frac{3}{2})\) are the two Moire Brillouin zone vectors defined in section II.
In Fig. 3 panel (a) we present the squared norm of the Fourier coefficients for the zero-mode wave function at the \(\Gamma\) point for \(\mathbf{K}_{x}=n(\mathbf{b}_{2}-\mathbf{b}_{1})\), given by \(|a_{-n,n}|^{2}\), for magic angles from \(\alpha_{2}\) to \(\alpha_{9}\). We can clearly see the Gaussian shape of the peaks, which turns out to be similar to the wave function in real space seen in Fig. 2 of our previous work [28]. This is in agreement with the idea of states converging into coherent states. As we can see, the coefficients \(|a_{-n,n}|^{2}\) for \(\alpha_{2}\) are strongly localized, while for the higher magic angle \(\alpha_{9}\) the two mirror-symmetric Gaussians are quite separated and the dispersion increases. For the real-space case, the situation is reversed, because the Gaussians are more localized and their dispersion is reduced for higher magic angles (see Ref. [28]). In Fig. 3 panel (b), we show the peak position of the Gaussian in \(k\)-space (\(|\mathbf{K}_{-\tilde{n},\tilde{n}}|\)), where \((-\tilde{n},\tilde{n})\) corresponds to the reciprocal point with the maximal-norm Fourier coefficient, i.e., the positions of the maxima in reciprocal space along one direction. This is compared with the inverse of the difference between the wave-function peak positions in real space (\(\tilde{\mathbf{r}}\)) and the limiting localization center for \(\alpha\to\infty\), i.e., we plot \(1/|\tilde{\mathbf{r}}-\mathbf{R}|\).
On the other hand, panel (c) presents the dispersion in \(k\)-space, denoted by \(\sigma_{k}\), as a function of \(\alpha\), showing that the dispersion increases with \(\alpha\). This is easy to explain. Considering that \(\psi(\mathbf{r})\) are almost coherent states, in a previous work [28] we showed that the dispersion in real space converges to \(\sigma=1/\sqrt{3\alpha}\). Therefore, using that the Fourier transform of a Gaussian is another Gaussian with inverse standard deviation, we obtain that the dispersion in reciprocal space goes as,
\[\sigma_{k}=\sqrt{\frac{3\alpha}{2\pi}} \tag{15}\]
in agreement with Fig. 3 panel (c). Both in Fig. 3 panels (b) and (c), the vertical lines indicate magic angles. The solid lines are the theoretical results and the markers are the numerical results. We use the log-log scale for visual convenience. From these results, we can conclude that indeed our states converge into coherent states because they satisfy Heisenberg's uncertainty relation with minimal dispersion, i.e.,
Figure 1: Electronic density, in log scale, for the highest magic angle considered, \(\alpha_{9}\), at the \(\mathbf{\Gamma}\)-point, as a function of the position along the \(y\)-axis. Black points are the numerical data obtained from the Hamiltonian. A normalized \(y^{\prime}\) variable was used such that \(y^{\prime}=(y-1.047)/\sqrt{\alpha}\). The red curve is a Gaussian fit for \(\psi_{1}(\mathbf{r})\). Notice the fat tails of the electronic density when compared with a Gaussian.
Figure 2: Electronic density, in log scale, from the second to ninth magic angles for the \(\mathbf{\Gamma}\)-point and as a function of the position along the \(y\)-axis. For simplicity, a normalized \(y^{\prime}\) variable was used such that \(y^{\prime}=(y-1.047)/\sqrt{\alpha}\). Notice the convergence into a Gaussian.
\[\sigma_{r}\sigma_{k}\approx\sqrt{\frac{1}{3\alpha}}\sqrt{\frac{3\alpha}{2\pi}}= \sqrt{\frac{1}{2\pi}} \tag{16}\]
or, using natural units \(h=1\) (Planck's constant), we end up with,
\[\Delta_{r}\Delta_{k}\approx\hbar \tag{17}\]
where \(\Delta_{r}=\sigma_{r}^{2}\) and \(\Delta_{k}=\sigma_{k}^{2}\). The result \(\hbar\) is a consequence of the model: we are dealing with a \(2D\) model and each degree of freedom contributes \(\hbar/2\) to the dispersion, in analogy with a \(2D\) quantum harmonic oscillator.
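As a small worked check of eqs. (15)-(17), the sketch below evaluates \(\sigma_{r}\), \(\sigma_{k}\) and their product for the magic angles \(\alpha_{6}\)-\(\alpha_{9}\) quoted in Fig. 4 (the formulas are those stated above; no fitting is involved).

```python
import numpy as np

def sigma_r(alpha):
    """Real-space dispersion of the zero mode, sigma_r = 1/sqrt(3 alpha) (Ref. [28])."""
    return 1.0 / np.sqrt(3.0 * alpha)

def sigma_k(alpha):
    """Reciprocal-space dispersion, eq. (15)."""
    return np.sqrt(3.0 * alpha / (2.0 * np.pi))

for alpha in (8.313, 9.829, 11.345, 12.855):      # alpha_6 ... alpha_9 from Fig. 4
    product = sigma_r(alpha) * sigma_k(alpha)
    print(alpha, product, np.isclose(product, 1.0 / np.sqrt(2.0 * np.pi)))  # eq. (16)
```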
To give more insight into the localization centers in reciprocal space, Fig. 4 presents a color map for the Fourier coefficients \(|a_{mn}|^{2}\) (layer 1) for the \(\Gamma\)-point wave function. From panel (a) to panel (d) the magic angle order increases and the maxima of the Fourier coefficients depart radially from the center. Pink arrows indicate where the sixth localization center lies.
According to these numerical results, the maxima of the electronic probability in \(k\)-space are near,
\[\tilde{n}\mathbf{b}_{\nu}\pm 1.047\mathbf{q}_{\nu} \tag{18}\]
and their corresponding rotated versions by \(2\pi/3\). In real space, the maxima are at,
\[\mathbf{R}\approx\frac{1}{\tilde{n}}\hat{\mathbf{R}}_{-\phi}(\mathbf{b}_{\nu})+1.047\hat{ \mathbf{R}}_{-\phi}(\mathbf{q}_{\nu}) \tag{19}\]
Here \(\hat{\mathbf{R}}_{-\phi}\) represents a rotation by an angle \(\phi=\frac{2\pi}{3}\) and \(\tilde{n}\approx\sqrt{3}\alpha_{m}/2\). For the other layer, the same behavior occurs with the Fourier coefficients (\(|b_{mn}|^{2}\)). Therefore, we can summarize this behavior as follows. As \(\alpha\to\infty\), the wave functions become strongly confined in certain spots. In reciprocal space, the confinement is also present but decreases with growing \(\alpha\) while, at the same time, the location of the maxima goes to infinity. To delve deeper into these properties, in the following section we discuss how and why confinement at certain locations arises.
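A minimal sketch of eqs. (18) and (19) is given below; it only evaluates the predicted positions of the probability maxima in reciprocal and real space for a given magic angle. The offset \(1.047\) and \(\tilde{n}\approx\sqrt{3}\alpha_{m}/2\) are taken from the text, and we read \(\hat{\mathbf{R}}_{-\phi}\) as a rotation by \(-\phi\) (either sign of the rotation maps the set \(\{\mathbf{q}_{\nu}\}\) onto itself).

```python
import numpy as np

phi = 2 * np.pi / 3
q = np.array([[0.0, -1.0],
              [np.sqrt(3) / 2, 0.5],
              [-np.sqrt(3) / 2, 0.5]])
b = np.array([q[1] - q[0], q[2] - q[0], q[2] - q[1]])
R0 = 1.047                                   # radial offset found numerically (text)

def rotate(v, angle):
    """Counter-clockwise rotation of a 2D vector by `angle`."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

def k_space_maxima(alpha):
    """Predicted reciprocal-space maxima, eq. (18), with n_tilde ~ sqrt(3) alpha / 2."""
    n_tilde = round(np.sqrt(3) * alpha / 2)  # nearest reciprocal-lattice index
    return [n_tilde * b[nu] + sign * R0 * q[nu]
            for nu in range(3) for sign in (+1, -1)]

def real_space_maxima(alpha):
    """Predicted real-space maxima near the origin, eq. (19)."""
    n_tilde = np.sqrt(3) * alpha / 2
    return [rotate(b[nu], -phi) / n_tilde + R0 * rotate(q[nu], -phi) for nu in range(3)]

print(k_space_maxima(12.855)[:2])    # alpha_9: two of the six predicted k-space peaks
print(real_space_maxima(12.855)[0])  # one of the predicted real-space maxima
```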
Figure 3: Fourier coefficients in reciprocal space in the direction \(\mathbf{K}_{-\tilde{n},\tilde{n}}=\tilde{n}\mathbf{b}_{3}\). Panel (a) shows the squared norm of the Fourier coefficients \(|a_{-n,n}|^{2}\) from the second to the ninth magic angles along the direction \(\mathbf{K}_{x}=n(\mathbf{b}_{2}-\mathbf{b}_{1})\). Panel (b) presents the convergence, in log-log scale, for the values \(|K_{-\tilde{n},\tilde{n}}|\) (purple dots) and \(1/|\tilde{\mathbf{r}}-\mathbf{R}|\) (black squares) with \(\mathbf{R}\approx 1.047\mathbf{q}_{1}\). The associated lines for each marker are the linear fits \(|K_{-n,n}|\approx 1.34\alpha\) (orange dashed) and \(1/|\tilde{\mathbf{r}}-\mathbf{R}|\approx 2.12271+0.626839\alpha\) (brown solid). Panel (c) shows the standard deviation, in log-log scale, for the Gaussian distribution at the maximum point \(\mathbf{K}_{-\tilde{n},\tilde{n}}\). Here it is numerically verified that \(\sigma_{k}=\sqrt{\frac{3\alpha}{2\pi}}\) in \(k\)-space, with the relation \(\sigma_{k}=1/(\sqrt{2\pi}\sigma_{r})\), where the indices \(k\) and \(r\) represent \(k\)-space and real space, respectively. This result shows that the solutions are coherent states because they minimize the dispersion, \(\sigma_{r}\sigma_{k}=1/\sqrt{2\pi}\), thus satisfying the minimal uncertainty relation \(\sigma_{r}^{2}\sigma_{k}^{2}=\hbar\), where \(\hbar=h/2\pi\) using natural units \(h=1\) for Planck's constant.
Figure 4: Color map of the squared norm of the Fourier coefficients of the zero-mode wavefunction for high magic angles. Panel (a), \(\alpha_{6}=8.313\), (b) \(\alpha_{7}=9.829\), (c) \(\alpha_{8}=11.345\) and (d) \(\alpha_{9}=12.855\). All correspond to the \(\Gamma\)-point coefficients. The arrows indicate the positions of the maximal-norm Fourier coefficients, which are the centers of the coherent Landau states in reciprocal space. The centers are located at \(\tilde{n}\mathbf{b}_{\nu}\pm 1.047\mathbf{q}_{\nu}\), where \(\tilde{n}\approx\sqrt{3}\alpha_{m}/2\) for higher magic angles (\(m\to\infty\)), and \(C_{3}\) rotations produce the extra points seen in the figure. Observe how, as the magic angle order grows, the maxima are pushed away from the center.
## IV Confinement and wave function symmetries
As was discussed in the previous section and in previous works [28; 30], the wave functions in real space converge into very sharp Gaussian packets which are located at the invariant points \(\mathbf{R}\). In this section, we discuss the origin of this effect as well as some symmetry properties of the wave function required to understand how the confinement arises. Let us show first how at higher magic angles the wave function in real space can be decoupled into symmetric and anti-symmetric parts. These are spatially located at different regions and depend on the magic angle order parity. To clarify these points, it is convenient to write the zero-mode equation of the squared Hamiltonian,
\[\begin{split}(-\nabla^{2}+\alpha^{2}(&\mathbf{A}^{2}+i [A_{x},A_{y}]))\psi_{1}(\mathbf{r})\\ &+\alpha(-2i\mathbf{A}_{-}\cdot\nabla+\nabla\times\mathbf{A}_{-})\psi_{2 }(\mathbf{r})=0\end{split} \tag{20}\]
At this point we remark that the eigenfunctions of \(\mathcal{H}\) are simultaneously eigenfunctions of \(H^{2}\); however, the converse is not true. Here we will work with \(H^{2}\) because it is more physically relevant for the present discussion, although the numerical calculations of the wave function presented in what follows are performed in the \(4\times 4\) chiral basis of \(\mathcal{H}\). As explained elsewhere [30], any linear combination of degenerate eigenfunctions of \(\mathcal{H}\) is a solution of \(H^{2}\), so there is a phase involved. In spite of this, the electronic density and the energy contributions are not affected by whether they are calculated with \(H^{2}\) or \(\mathcal{H}\), as the phase factor is eliminated.
For simplicity, in this analysis we will first consider the \(\Gamma\)-point. In this case the symmetry allows one to write \(\psi_{2}(\mathbf{r})=i\mu_{\alpha}\psi_{1}(-\mathbf{r})\), with \(\mu_{\alpha}=\pm 1\) the magic angle order parity [26]. For odd magic angle order, i.e., for \(\alpha_{2m+1}\), we have \(\mu_{\alpha}=+1\), while for even order (\(\alpha_{2m}\)) \(\mu_{\alpha}=-1\).
We now define symmetric and anti-symmetric functions as \(\psi_{\pm}(\mathbf{r})=\psi_{1}(\mathbf{r})\pm\psi_{1}(-\mathbf{r})\). Therefore, the pair of zero-mode equations (20) can be rewritten as,
\[\begin{split}(-\nabla^{2}&+\alpha^{2}\mathbf{A}^{2}-i \mu_{\alpha}\alpha(-2i\mathcal{A}_{\mp}\cdot\nabla+\nabla\times\mathcal{A}_{ \mp}))\psi_{\pm}\\ &+(\alpha^{2}\Delta-i\mu_{\alpha}\alpha(-2i\mathcal{A}_{\pm} \cdot\nabla+\nabla\times\mathcal{A}_{\pm}))\psi_{\mp}=0\end{split} \tag{21}\]
where we also defined the symmetry/anti-symmetry non-Abelian pseudo-magnetic field as,
\[\mathcal{A}_{\pm}=(\mathbf{A}_{+}\pm\mathbf{A}_{-})/2 \tag{22}\]
Our numerical results in Fig. 5 and Fig. 6 highlight that indeed the solutions are decoupled spatially in this symmetric or anti-symmetric basis. For example, in Fig. 5 the magic angle (\(\alpha_{8}=11.345\)) has even order parity (\(m=8\)) with \(\mu_{\alpha}=-1\). In panels (a)-(b) we present the real and imaginary parts respectively of the symmetric solution \(\psi_{+}\). The blue dots indicate the corresponding maxima. In panels (c)-(d) we present a similar plot for \(\psi_{-}\). The maxima of \(\psi_{-}\) are in different locations than those in \(\psi_{+}\). Moreover, for even parity, the anti-symmetric solution doubles the number of maxima when compared with the symmetric solution. Quite remarkably, if we continue with the next
Figure 6: Symmetric and anti-symmetric wave functions in a \(3\times 3\) unit cell for \(\alpha_{9}=12.855\). The blue circles indicate where the electronic wave function is localized and the dashed lines show unit cells defined by the vectors \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\). Symmetric/anti-symmetric wave functions are defined as \(\psi_{\pm}=\psi_{1}(\mathbf{r})\mp i\mu_{\alpha}\psi_{2}(\mathbf{r})\). At the \(\Gamma\)-point, where \(\psi_{2}(\mathbf{r})=i\mu_{\alpha}\psi_{1}(-\mathbf{r})\), the symmetric/anti-symmetric solutions become \(\psi_{\pm}=\psi_{1}(\mathbf{r})\pm\psi_{1}(-\mathbf{r})\). (a-b) Real and imaginary parts of the symmetric wave function \(\psi_{+}\). (c-d) Real and imaginary parts of the anti-symmetric wave function \(\psi_{-}\). Note that symmetric and anti-symmetric solutions are almost decoupled.
Figure 5: Symmetric (\(\psi_{+}(\mathbf{r})\)) and anti-symmetric (\(\psi_{-}(\mathbf{r})\)) wave functions in a \(3\times 3\) unit cell for \(\alpha_{8}=11.345\). The blue circles indicate where the electronic wave function is localized and the dashed lines show unit cells defined by the vectors \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\). Symmetric/anti-symmetric wave functions are defined as \(\psi_{\pm}=\psi_{1}(\mathbf{r})\mp i\mu_{\alpha}\psi_{2}(\mathbf{r})\). At the \(\Gamma\)-point, where \(\psi_{2}(\mathbf{r})=i\mu_{\alpha}\psi_{1}(-\mathbf{r})\), the symmetric/anti-symmetric solutions become \(\psi_{\pm}=\psi_{1}(\mathbf{r})\pm\psi_{1}(-\mathbf{r})\). (a-b) Real and imaginary parts of the symmetric wave function \(\psi_{+}\). (c-d) Real and imaginary parts of the anti-symmetric wave function \(\psi_{-}\). Note that symmetric and anti-symmetric solutions are almost spatially decoupled.
magic angle, the parity changes to an odd magic angle (\(\alpha_{9}=12.855\)) with \(\mu_{\alpha}=+1\). Note that in Fig. 6 the situation is reversed: now \(\psi_{+}\) has double the number of peaks when compared with \(\psi_{-}\). The localization centers of \(\psi_{+}\) and \(\psi_{-}\) are interchanged when compared with \(\alpha_{8}\).
Observe how, in both Fig. 5 and Fig. 6, magenta dashed lines indicate moire unit cells, while the supercell here is \(3\times 3\) bigger, as the pseudo-magnetic potentials define a bigger magnetic unit cell [28]. This bigger period is seen in the coupling potential as \(U(\mathbf{r}+\mathbf{a}_{1,2})=e^{-i\phi}U(\mathbf{r})\); thus a translation of \(3\mathbf{a}_{1,2}\) is required to recover the crystal periodicity, with a phase factor \(e^{3i\phi}=1\). In such a bigger unit cell the potential is periodic and, in fact, this leads to the quantization rule for the magic angles [28]. The \(3\times 3\) unit cells are essential to clearly understand the inversion symmetries of the wave functions: if only one moire unit cell, defined by \(\mathbf{a}_{1,2}\), is used, the extra phases make the interpretation very difficult.
Our numerical results indicate distinct localization regions for \(\psi_{+}\) and \(\psi_{-}\), suggesting that in equation (21), each term can be separately set to zero to satisfy the equation, owing to the strong confinement. Thus, as a solution, we propose that eq. (21) can be decoupled into,
\[(-\nabla^{2}+\alpha^{2}\mathbf{A}^{2}-i\mu_{\alpha}\alpha(-2i\mathcal{A}_{\mp} \cdot\nabla+\nabla\times\mathcal{A}_{\mp}))\psi_{\pm}\approx 0 \tag{23}\]
\[(\alpha^{2}\Delta-i\mu_{\alpha}\alpha(-2i\mathcal{A}_{\pm}\cdot\nabla+\nabla \times\mathcal{A}_{\pm}))\psi_{\mp}\approx 0 \tag{24}\]
As explained in Appendix B, by using eqns. (23) and (24) it can be proved that the following equation holds,
\[(-\nabla^{2}+\alpha^{2}\mathbf{A}^{2}(\mathbf{r})-\alpha^{2}\Delta(\mathbf{r}))\psi_{\pm}\approx 0 \tag{25}\]
where in eq. (25) the limit \(\alpha\rightarrow\infty\) is assumed, so that the term \(\nabla\times\mathcal{A}_{\pm}(\mathbf{r})\) is negligible, since it scales only as \(\alpha\) while the potential terms scale as \(\alpha^{2}\). This indeed supports the use of well-defined-parity wave functions, as was done in a previous work [28].
As is seen in eq. (25), the potential \(\mathbf{A}^{2}(\mathbf{r})-\Delta(\mathbf{r})\) governs the electronic localization behavior in the asymptotic limit \(\alpha\rightarrow\infty\). However, note that taking \(\mathbf{r}\rightarrow-\mathbf{r}\) in eq. (25) changes the sign of \(\Delta(-\mathbf{r})=-\Delta(\mathbf{r})\) while keeping invariant the other terms. This property allows for the decoupling of the symmetric and anti-symmetric potentials as,
\[(-\nabla^{2}+\alpha^{2}\mathbf{A}^{2}(\mathbf{r}))\psi_{\pm} \approx 0 \tag{26}\] \[\Delta(\mathbf{r})\psi_{\pm} \approx 0\]
To satisfy the second of the previous equations, we must have \(\Delta(\mathbf{r})\approx 0\) in regions where \(\psi_{\pm}\neq 0\). Fig. 7(a) confirms numerically that this condition is correct, i.e., the wave functions are localized on the lines for which \(\Delta(\mathbf{r})=0\). Moreover, this implies that localization occurs whenever \([A_{x},A_{y}]=0\); therefore, locally the system is Abelian. As shown in Appendix A, the points where \(\Delta(\mathbf{r})=0\) occur along high-symmetry directions, so the localization centers, for the vertex at the origin, have numerically found positions near,
\[\mathbf{R}\approx\pm R\mathbf{q}_{\nu} \tag{27}\]
where \(R=1.047...\) is the magnitude of \(\mathbf{R}\). It gives the radial distance of the maximum from the vertex of the cell. Its value is determined from the condition \((-\nabla^{2}+\alpha^{2}\mathbf{A}^{2}(\mathbf{r}))\psi_{\pm}\approx 0\). Also, the angular part of the wavefunction behaves approximately as \(\cos{(3m\theta)}\), in agreement with the results obtained in a previous work, where we showed that the angular momentum becomes quantized by \(3m\), as also suggested by figures 5 and 6. In Fig. 7 (b) we present \(\mathbf{A}^{2}(\mathbf{r})\). We observe that there are no relevant features that give any indication of a possible confinement. However, such confinement arises when we consider the angular momentum. This is best seen by working near the origin and using polar coordinates.
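The angular condition can be checked directly: the sketch below verifies that \(\Delta(\mathbf{r})\), as defined in eq. (10), vanishes identically along the lines \(\mathbf{r}=t\,\mathbf{q}_{\nu}\), i.e., that the commutator \([A_{x},A_{y}]\) is locally zero on the confinement directions of Fig. 7(a).

```python
import numpy as np

q = np.array([[0.0, -1.0],
              [np.sqrt(3) / 2, 0.5],
              [-np.sqrt(3) / 2, 0.5]])
b = np.array([q[1] - q[0], q[2] - q[0], q[2] - q[1]])

def Delta(r):
    """Anti-symmetric potential Delta(r) = i[A_x, A_y], eq. (10)."""
    return np.sqrt(3) * sum((-1) ** (nu + 1) * np.sin(b[nu] @ r) for nu in range(3))

# Delta vanishes identically along r = t q_nu, so [A_x, A_y] = 0 on these lines
# (the magenta confinement directions of Fig. 7(a)).
t_values = np.linspace(-2.0, 2.0, 401)
print(all(np.allclose([Delta(t * q[nu]) for t in t_values], 0.0) for nu in range(3)))
```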
Figure 7: Confinement spots and potentials in the unit cell defined using the vectors \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\). (a) Anti-symmetric potential \(\Delta(\mathbf{r})\) and (b) symmetric potential \(\mathbf{A}^{2}(\mathbf{r})\). The black points are the localization centers of the electronic zero-mode wave function. In the anti-symmetric potential \(\Delta(\mathbf{r})\), magenta lines indicate the angular confinement directions where the non-Abelian commutator is locally zero, \(\Delta(\pm 1.047\mathbf{q}_{\nu})=i[A_{x},A_{y}]=0\); the directions are defined by the vectors \(\pm 1.047\mathbf{q}_{\nu}\). The symmetric potential \(\mathbf{A}^{2}(\mathbf{r})\) is also important because it provides information related to the radial confinement. In (b), cyan circles have a radius \(1.047\), and black points lie around these circles. More importantly, \(\mathbf{R}\approx\pm 1.047\mathbf{q}_{\nu}\) corresponds to special points restricted by the angular confinement directions of \(\Delta(\mathbf{r})\). These special points are also related to tunneling paths (magenta lines) that are energetically favorable and connect electronic density centers by a saddle point.
The first equation in (26) now looks as,
\[-\left(\frac{\partial^{2}\psi_{\pm}}{\partial r^{2}}+\frac{1}{r}\frac{\partial \psi_{\pm}}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}\psi_{\pm}}{\partial \theta^{2}}\right)+\alpha^{2}\mathbf{A}^{2}(\mathbf{r})\psi_{\pm}=0 \tag{28}\]
As the third term in the Laplacian is the angular momentum, we see that an effective potential appears which contains the moire symmetric potential part plus the centrifugal barrier, which is a result of the orbital motion of the electron. Elsewhere it was shown [28] that the magic angle is given by \(\alpha_{m}\approx 3m/2\) and asymptotically, \(L_{z}\psi_{\pm}\approx m\psi_{\pm}\). Also, we can discard the second term of the Laplacian, as derivatives scale with \(\alpha\) inside the boundary layer of the equation [28]. We obtain that,
\[-\frac{\partial^{2}\psi_{\pm}}{\partial r^{2}}+\frac{9}{4}m^{2}\left(\frac{1} {r^{2}}+\mathbf{A}^{2}(\mathbf{r})\right)\psi_{\pm}\approx 0 \tag{29}\]
A bound state will appear if the effective potential has a minimum. As we also have the condition on the angular part that confines electrons in certain directions, here we will discuss the minimum that results in the \(y\) direction. This is seen in Fig. 8 where we plot the potentials \(\mathbf{A}^{2}(0,y),1/y^{2}\) and the effective one \(V_{eff}=1/y^{2}+\mathbf{A}^{2}(0,y)\). As seen in the plot, the minima are close to the numerically found limiting confinement centers for the wave functions, indicated in Fig. 8 by vertical lines. The minimum can be found from,
\[\left(\frac{dV_{eff}}{dy}\right)_{y=R}=-\frac{2}{R^{3}}+3\sin\left(3R/2\right)=0 \tag{30}\]
We found numerically that the minimum is approximately \(R\approx 0.88\). Notice that the obtained minimum is shifted with respect to the numerically obtained value, i.e., the error is \(\Delta R\approx 1.047-0.88\approx 0.16\), which is around \(15\%\). The reason is that we made several strong approximations, such as neglecting the overlaps between localization centers and the exact shape of the angular part, which introduces a factor in the angular momentum. Around the localization center, the effective potential can be approximated by a parabola. Therefore, we obtain an effective harmonic oscillator equation,
\[-\frac{\partial^{2}\psi_{\pm}}{\partial y^{2}}+\left(\frac{3m}{2}\right)^{2} \left(V_{eff}(R)+\frac{\omega^{2}(R)}{2}(y-R)^{2}\right)\psi_{\pm}\approx 0 \tag{31}\]
where the frequency is,
\[\omega^{2}(R)=\left(\frac{d^{2}V_{eff}(y)}{dy^{2}}\right)_{y=R}=\frac{6}{R^{4 }}+\frac{9}{2}\cos\left(3R/2\right) \tag{32}\]
On the other hand, the result from the scaling argument \(\sigma\) has an associated frequency \(\omega=3\alpha\) (See Ref. [28]), as the energy re-scales as \(1/\alpha^{2}\). Thus, the scaled frequency is \(\omega^{\prime}=\frac{\omega}{\alpha}=3\) and so \(\omega^{2}=9\) where primes are omitted. Therefore, comparing \(\omega^{2}=9\) with \(\omega^{2}(R)\) at \(R=1.047\) we found that \(\omega^{2}(R)\approx 9.489\), hence, the error is \(\Delta\omega=\omega^{2}-\omega^{2}(R)\approx 0.489\) which is around \(5\%\). For \(R\approx 0.88\), the frequency is \(\omega^{2}(R)\approx 11.121\). The error is \(\Delta\omega=\omega^{2}-\omega^{2}(R)\approx 2.121\) which is around \(19\%\).
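The numbers quoted above follow from a simple root finding; a sketch that solves eq. (30) and evaluates eq. (32) is given below (we use scipy's brentq with a bracketing interval chosen by us around the expected minimum).

```python
import numpy as np
from scipy.optimize import brentq

def dV_eff(y):
    """Derivative of V_eff = 1/y^2 + A^2(0, y) along the y-axis, eq. (30)."""
    return -2.0 / y**3 + 3.0 * np.sin(1.5 * y)

def omega2(y):
    """Curvature of the effective potential, eq. (32)."""
    return 6.0 / y**4 + 4.5 * np.cos(1.5 * y)

R_min = brentq(dV_eff, 0.5, 1.5)   # bracketing interval around the expected minimum
print(R_min)                       # ~0.88
print(omega2(R_min))               # ~11.1
```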
The zero mode can thus be interpreted as the ground state of this effective harmonic oscillator, with an energy shift determined by \(m^{2}V_{eff}(R)\) and guiding center \(R\). Thus, this explains the Gaussian shapes of the electronic density discussed in the previous section. Finally, it is important to remark that our analysis was made for the \(\Gamma\) point. The reason is that this mode sits at the top of the band and thus signals the magic angles whenever its corresponding energy goes to zero [30]. At other \(\mathbf{k}\) points, numerical calculations indicate that the wavefunctions also converge towards the same localization center [30]. This can be easily explained by examining equation (14). In the limit \(\alpha\rightarrow\infty\), the peaks in reciprocal space satisfy \(|\mathbf{K}_{l,n}|=|l\mathbf{b}_{1}+n\mathbf{b}_{2}|\gg|\mathbf{k}|\) when \(l\) and \(n\) are much bigger than \(1\). Consequently, \(\mathbf{k}\) can be safely neglected in all expressions, leading to the collapse of all \(\mathbf{k}\) values into the same equation.
## V Relationship with the non-Abelian magnetic quantum Hall effect
In this section, we will explore some interesting connections with non-Abelian magnetic fields. We now write the squared Hamiltonian,
\[\begin{split} H^{2}&=(-\nabla^{2}+\mathbf{A}^{2})\tau_{ 0}+i\alpha^{2}[A_{x},A_{y}]\tau_{z}-2i\alpha\hat{\mathbf{A}}\cdot\mathbf{\nabla}\\ &+\alpha(\partial_{x}\hat{A}_{y}-\partial_{y}\hat{A}_{x})\end{split} \tag{33}\]
where \(\hat{\tau}_{j}\) (with \(j=1,2,3\)) is the set of Pauli matrices in the pseudo-spin layer degree of freedom, and \(\hat{\tau}_{0}\) is the \(2\times 2\) identity matrix. Moreover, \(A_{x}\) and \(A_{y}\) and their SU(2) matrix versions
Figure 8: Effective potential \(V_{eff}(r)\) along the axis \(\mathbf{r}=(0,y)\). The blue curve is the function \(1/y^{2}\) while the green curve is \(A^{2}(0,y)\). Electrons are confined in the well around the local minima of the effective potential at \(R\approx 0.88\). In this plot, we include two dashed vertical lines that indicate the position where the numerically found electronic wave function has its localization center (\(R\approx 1.047\)) for the limit \(\alpha\rightarrow\infty\).
\(\hat{A}_{x}\) and \(\hat{A}_{y}\) are defined in Appendix A. Written in such a way, we can identify the Zeeman coupling energy as,
\[\begin{split}\hat{F}_{xy}&=\partial_{x}\hat{A}_{y}- \partial_{y}\hat{A}_{x}+i\alpha[\hat{A}_{x},\hat{A}_{y}]\\ &=-\hat{\mathbf{B}}\cdot\hat{\mathbf{\tau}}+i\alpha[\hat{A}_{x},\hat{A}_ {y}]\end{split} \tag{34}\]
where upper hats represent matrices. For convenience, we re-scale the spatial coordinates as \(\mathbf{r}^{\prime}=\mathbf{r}/\alpha\) from where \(\mathbf{\nabla}^{\prime}=(\alpha\mathbf{\nabla})\) and \((\mathbf{\nabla}^{\prime})^{2}=(\alpha\mathbf{\nabla})^{2}\). The re-scaled position Hamiltonian is,
\[\begin{split}(H/\alpha)^{2}&=(-\nabla^{2}+\mathbf{A}^{ 2}(\mathbf{r}/\alpha))\tau_{0}+i[A_{x}(\mathbf{r}/\alpha),A_{y}(\mathbf{r}/\alpha)]\tau_{z} \\ &-2i\hat{\mathbf{A}}(\mathbf{r}/\alpha)\cdot\mathbf{\nabla}-\frac{1}{\alpha} \hat{\mathbf{B}}(\mathbf{r}/\alpha)\cdot\hat{\mathbf{\tau}}\end{split} \tag{35}\]
where now the primes are dropped. As explained in Appendix A, the strong confinement of electrons allows one to assume an almost uniform magnetic field. This is also seen in the effective eq. (29). Therefore, we can write \(\mathbf{A}\cdot\hat{\mathbf{p}}\approx-\mathbf{B}\cdot\hat{\mathbf{L}}\), where \(\hat{\mathbf{L}}\) is the total angular momentum. Under this simplification, the re-scaled Hamiltonian is,
\[\begin{split}\hat{H}^{2}&=\overbrace{(-\nabla^{2}+ \mathbf{A}^{2}(\mathbf{r}/\alpha))\tau_{0}}^{\text{diagonal energy}}+\underbrace{i[A_{x}(\mathbf{r}/\alpha),A_{y}(\mathbf{r}/\alpha)]\tau_{z}} _{\text{non-Abelian energy}}\\ &\underbrace{-\hat{\mathbf{B}}(\mathbf{r}/\alpha)\cdot(2\hat{\mathbf{L}}+ \frac{\mathbf{e}_{z}}{\alpha})}_{\text{off-diagonal energy}}\end{split} \tag{36}\]
Note that only the last term depends on \(\alpha\), and taking the asymptotic limit \(\alpha\to\infty\) we have that the Zeeman energy \(-\frac{1}{\alpha}\mathbf{B}(\mathbf{r}/\alpha)\cdot\mathbf{\tau}\to 0\). This fact is corroborated in Fig. 9, where it can be observed that for the first magic angle the expected value of the Zeeman energy scaled by \(\alpha\) is significant. However, for the third magic angle it is very small, around \(0.1\) on the logarithmic scale. Therefore, it is expected to be similarly small for higher magic angles, and neglecting it should not significantly impact the results. Thus, in the asymptotic limit \(\alpha\to\infty\), \(2\hat{\mathbf{B}}(\mathbf{r}/\alpha)\cdot\hat{\mathbf{L}}\gg\hat{\mathbf{B}}(\mathbf{r}/\alpha)\cdot\mathbf{e}_{z}/\alpha\), i.e., \(E_{Magnetic}\gg E_{Zeeman}\). Hence, the Hamiltonian in this limit can be simplified into,
\[\hat{H}^{2}=\overbrace{(\mathbf{p}+\hat{\mathbf{A}}(\mathbf{r}/\alpha))^{2}}^{C_{3}\text{ magnetic field}}+\underbrace{i[A_{x}(\mathbf{r}/\alpha),A_{y}(\mathbf{r}/\alpha)]\tau_{z}}_{\text{non-Abelian operator}} \tag{37}\]
where \(\hat{H}^{2}=(H/\alpha)^{2}\) and \(\mathbf{p}=-i\mathbf{\nabla}\) is the canonical momentum operator. Accordingly, \(\hat{H}^{2}\) is expected to exhibit a non-Abelian QHE.
Let us now discuss how the magic angle order parity enters the orbital magnetic energy related to the angular momentum chirality. To understand this, we start by writing the zero-mode equation \(H^{2}\psi(\mathbf{r})=0\) together with eq. (36) at the \(\Gamma\)-point, where \(\psi_{2}(\mathbf{r})=i\mu_{\alpha}\psi_{1}(-\mathbf{r})\). Using the results of Appendix A in the limit \(\alpha\to\infty\), such that the wave function at the \(\Gamma\)-point is strongly confined, we obtain,
\[\begin{split}(-\nabla^{2}&+\mathbf{A}^{2}(\mathbf{r}/\alpha )+\Delta(\mathbf{r}/\alpha))\psi_{1}(\mathbf{r})\\ &-2i\mu_{\alpha}\mathbf{B}(\mathbf{r}/\alpha)\cdot\hat{\mathbf{L}}\psi_{1}(- \mathbf{r})=0\end{split} \tag{38}\]
The corresponding expected values over the zero mode wavefunction at the \(\Gamma\)-point are,
\[\langle\Gamma|T(\mathbf{r}/\alpha)|\Gamma\rangle+\langle\Gamma|\mathbf{A}^{2}(\mathbf{r}/ \alpha)|\Gamma\rangle-2i\mu_{\alpha}\langle\Gamma|\mathbf{B}(\mathbf{r}/\alpha)\cdot \hat{\mathbf{L}}|\Gamma\rangle=0 \tag{39}\]
where \(T(\mathbf{r}/\alpha)\) is the kinetic energy, i.e., minus the Laplacian, and we have used that the anti-symmetric potential is canceled inside the unit cell \(\langle\Gamma|\Delta(\mathbf{r}/\alpha)|\Gamma\rangle=0\) (see Fig. 7(a)). At magic angles we can use the energy equipartition found in a previous work [30], from where \(\langle\Gamma|T(\mathbf{r}/\alpha)|\Gamma\rangle=\langle\Gamma|\mathbf{A}^{2}(\mathbf{r}/ \alpha)|\Gamma\rangle\). Thus,
\[\langle\Gamma|\mathbf{A}^{2}(\mathbf{r}/\alpha)|\Gamma\rangle-i\mu_{\alpha}\langle \Gamma|\mathbf{B}(\mathbf{r}/\alpha)\cdot\hat{\mathbf{L}}|\Gamma\rangle=0 \tag{40}\]
where it is important to note that,
\[\begin{split}-i\mu_{\alpha}\mathbf{B}(\mathbf{r}/\alpha)\cdot\hat{\mathbf{L} }&=-i\sum_{\nu}(-i)e^{-i\mathbf{q}_{\nu}\cdot\mathbf{r}/\alpha}\mathbf{e}_{z} \cdot(\mu_{\alpha}\mathbf{q}_{\nu}\times\hat{\mathbf{p}})\\ &=-\sum_{\nu}e^{-i\mathbf{q}_{\nu}\cdot\mathbf{r}/\alpha}\mathbf{e}_{z}\cdot( \mu_{\alpha}\hat{\mathbf{L}}_{\nu})\\ &=-\sum_{\nu}\mathbf{B}_{\nu}(\mathbf{r}/\alpha)\cdot(\mu_{\alpha}\hat{\bm {L}}_{\nu})\end{split} \tag{41}\]
where \(\mathbf{B}_{\nu}(\pm\mathbf{r}/\alpha)=\pm ie^{\pm i\mathbf{q}_{\nu}\cdot\mathbf{r}/\alpha}\) and we defined,
\[\hat{\mathbf{M}}_{\nu}=\mu_{\alpha}\hat{\mathbf{L}}_{\nu} \tag{42}\]
Figure 9: Zeeman energy \(\log|\langle\Gamma|\mathbf{B}\cdot\hat{\mathbf{\tau}}|\Gamma\rangle/\alpha|\) as function of \(\alpha\) for the zero mode wavefunction at the \(\Gamma\)-point. As \(\alpha\) increases, the Zeeman energy is quite small, and for higher magic angles \(\alpha_{8}\) or \(\alpha_{9}\) can be negligible. Dashed vertical lines indicate the first three magic angles.
as the pseudo-magnetic orbital momentum along the direction \(\nu\), with \(\hat{\mathbf{L}}_{\nu}=\mathbf{q}_{\nu}\times\hat{\mathbf{p}}\) a kind of angular momentum operator. We can understand its origin as a consequence of the strong confinement: in the angular momentum \(\hat{\mathbf{L}}_{z}=\mathbf{r}\times\mathbf{p}\), \(\mathbf{r}\) takes values different from zero only at \(\mathbf{r}\approx\mathbf{q}_{\nu}\). Therefore, we can interpret \(\hat{\mathbf{L}}_{\nu}\) as the contribution to the angular momentum of each confinement center, since these centers are not at the origin of coordinates. This observation was made empirically by analyzing the numerical data in a previous paper [28]. In the asymptotic limit \(\alpha\rightarrow\infty\) we have that [30] \(\langle\Gamma|\mathbf{A}^{2}(\mathbf{r}/\alpha)|\Gamma\rangle\to 1\), from which,
\[1-\int d^{2}\mathbf{r}\psi_{1}^{\dagger}(\mathbf{r})\sum_{\nu}e^{-i\mathbf{q }_{\nu}\cdot\mathbf{r}/\alpha}\mathbf{e}_{z}\cdot(\mu_{\alpha}\hat{\mathbf{L}}_{\nu})\psi_ {1}(-\mathbf{r})=0 \tag{43}\]
therefore,
\[1- \mu_{\alpha}\mathbf{e}_{z}\cdot\sum_{\nu}\int d^{2}\mathbf{r}\psi_{1}^{ \dagger}(\mathbf{r})e^{-i\mathbf{q}_{\nu}\cdot\mathbf{r}/\alpha}\hat{\mathbf{L}}_{\nu}\psi_{1 }(-\mathbf{r}) \tag{44}\] \[=1-\mu_{\alpha}|\mathbf{e}_{z}|^{2}\sum_{\nu}(\frac{\mu_{\alpha}}{3})\] \[=1-\mu_{\alpha}^{2}=0\]
where natural units \(e=\hbar=1\) are used, energies are rescaled by \(1/\alpha^{2}\), and the result is normalized over the moire unit cell area. Each plane wave in the sum contributes \(1/3\) to the integral, i.e.,
\[\frac{1}{\alpha A_{M}}\langle\psi_{1}(\mathbf{r})|\mathbf{B}_{\nu}(\mathbf{r}/\alpha)\cdot \hat{\mathbf{L}}_{\nu}|\psi_{1}(-\mathbf{r})\rangle=\frac{\mu_{\alpha}}{3} \tag{45}\]
where \(A_{M}=8\pi^{2}/(3\sqrt{3})\) is the normalized moire unit cell area. This proves that the parity and the three directional components of the angular momentum are essential to satisfy the magic angle condition. Moreover, eq. (42) indicates that the parity is related to the chirality of the magnetic energy.
To corroborate the chirality of the magnetic energy, in Fig. 10, we plot \(\langle\Gamma|\mathbf{B}(\mathbf{r}/\alpha)\cdot\hat{\mathbf{L}}|\Gamma\rangle/\alpha\) versus \(\alpha\) at the \(\Gamma\)-point as obtained from the numerical data of the wave function, by using techniques described in previous works [28; 30]. In the \(y\)-axis, this magnetic energy jumps from \(\mu_{\alpha}=+1\rightarrow-1\) or vice-versa. Because we rescaled the coordinates, the energy is also rescaled as \(E^{\prime 2}=(E/\alpha)^{2}\), and thus the result does not depend on \(\alpha\).
Fig. 10 also shows the relation between \(\mu_{\alpha}=+1\) counter-clockwise rotation (red arrows) and \(\mu_{\alpha}=-1\) clockwise rotation (blue arrows) as the \(z\)-component rotation of the magnetic angular momentum. The values \(\alpha_{m}^{*}\) indicate the intermediate values between magic angles \(\alpha_{m}\) and \(\alpha_{m+1}\). At these special values, the gap closes and the zero mode hybridizes with its neighbor upper band changing the chirality of the angular momentum.
Thus, an important characteristic of TBG is the gap closing in between magic angles due to the hybridization of the lowest band with its neighboring upper band. This is a crucial condition because it is a transition that changes the chirality of the angular momentum and the magic angle order parity \(\mu_{\alpha}=\pm 1\). At the same time, at each gap closing a new quantum of angular momentum appears and, consequently, the magnetic angular momentum increases as \(\alpha\rightarrow\infty\).
So far, this analysis makes clear that the parity of the wavefunction and the sign \(\mu_{\alpha}\) play a crucial role in the energetic balance for magic-angle flat bands. Nevertheless, only at higher magic angles does the wave function reach a purely symmetric or anti-symmetric solution; in this way, the angular momentum quantum number and the magic angle order parity govern the physics behind flat bands.
## VI Competition of non-Abelian and Abelian fields
The chiral TBG model is quite interesting and exhibits remarkable properties due to its non-Abelian nature introduced by the coupling potential \(U(\mathbf{r})\) between layers [65; 66]. In fact, flat bands and superconductivity in TBG are consequences of the underlying pseudo-magnetic fields generated by the twist angle. However, what if we could tune non-Abelian fields to become Abelian using an artificial parameter? How would this modification affect the periodicity and quantization of magic angles? To explore this effect, we can define a new
Figure 10: Orbital magnetic energy \(-\langle\Gamma|\mathbf{B}\cdot\hat{\mathbf{L}}|\Gamma\rangle/\alpha\) as function of \(\alpha\) in the limit \(\alpha\rightarrow\infty\) for the zero mode wavefunction at \(\Gamma\)-point, obtained from the numerical data of the wave function as in previous works [28; 30]. Vertical dashed lines (black) indicate magic angles. The red and blue arrows indicate the magnetic orbital rotation, \(\mu_{\alpha}=+1\) is counter-clockwise and \(\mu_{\alpha}=-1\) is clockwise rotation. Here are considered scaled coordinates \(\mathbf{r}^{\prime}=\mathbf{r}/\alpha\), when \(\alpha\rightarrow\infty\) approximately \(\alpha\approx 3m\) where \(m>>1\) is the order of the magic angle and \(-\langle\Gamma|\mathbf{B}(\mathbf{r}/\alpha)\cdot\hat{\mathbf{L}}|\Gamma\rangle/\alpha \approx\mu_{\alpha}\). The transition points \(\alpha_{m}^{*}\), in between magic angles \(\alpha_{m}\) and \(\alpha_{m+1}\), occurs when the flat band touches the upper band generating a transition and consequently changes the magnetic orbital orientation. These touching points relate to the magic angle recurrence. Similarly, in the other layer \(\mathbf{B}(\mathbf{r}/\alpha)\rightarrow\mathbf{B}(-\mathbf{r}/\alpha)\).
coupling potential as follows,
\[U_{\beta}(\mathbf{r})=U(\mathbf{r})+\beta U(-\mathbf{r}) \tag{46}\]
where \(\beta\) is the artificial parameter that controls the non-Abelian nature of TBG. Suppose that \(\beta\in[0,1]\): with \(\beta=0\) we recover the cTBG case, while \(\beta=1\) is presumably an Abelian case. Using this new potential we can write a new Hamiltonian as,
\[\mathcal{H}_{\beta}=\begin{pmatrix}0&D_{\beta}^{*}(-\mathbf{r})\\ D_{\beta}(\mathbf{r})&0\end{pmatrix} \tag{47}\]
where the zero mode operator is,
\[D_{\beta}(\mathbf{r})=\begin{pmatrix}-i\bar{\partial}&\alpha U_{\beta}(\mathbf{r})\\ \alpha U_{\beta}(-\mathbf{r})&-i\bar{\partial}\end{pmatrix} \tag{48}\]
The Abelian case \(\beta=1\) gives,
\[D_{1}(\mathbf{r})=\begin{pmatrix}-i\bar{\partial}&0\\ 0&-i\bar{\partial}\end{pmatrix}+\begin{pmatrix}0&\alpha U_{1}(\mathbf{r})\\ \alpha U_{1}(-\mathbf{r})&0\end{pmatrix} \tag{49}\]
however, \(U_{1}(-\mathbf{r})=U_{1}(\mathbf{r})\) so,
\[D_{1}(\mathbf{r})=-i\bar{\partial}\tilde{\tau}_{0}+\alpha U_{1}(\mathbf{r})\hat{\tau }_{x} \tag{50}\]
where \(U_{1}(\mathbf{r})=2\sum_{\nu}e^{i(\nu-1)\phi}\cos{(\mathbf{q}_{\nu}\cdot\mathbf{r})}\) is the symmetric coupling potential. It is now clear from these expressions that the vector potential components commute and that the initial \(SU(2)\) gauge field changes into a \(U(1)\) field.
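A small sketch of the modified coupling, eq. (46), is shown below; it checks numerically that at \(\beta=1\) the potential reduces to the explicitly symmetric form \(U_{1}(\mathbf{r})=2\sum_{\nu}e^{i(\nu-1)\phi}\cos(\mathbf{q}_{\nu}\cdot\mathbf{r})\) quoted above (function names are ours).

```python
import numpy as np

phi = 2 * np.pi / 3
q = np.array([[0.0, -1.0],
              [np.sqrt(3) / 2, 0.5],
              [-np.sqrt(3) / 2, 0.5]])

def U(r):
    """Original interlayer coupling U(r), eq. (3)."""
    return sum(np.exp(1j * phi * nu) * np.exp(-1j * q[nu] @ r) for nu in range(3))

def U_beta(r, beta):
    """Modified coupling, eq. (46): beta = 0 is chiral TBG, beta = 1 the Abelian limit."""
    return U(r) + beta * U(-r)

def U_symmetric(r):
    """Explicitly symmetric form of U_1(r) quoted after eq. (50)."""
    return 2.0 * sum(np.exp(1j * phi * nu) * np.cos(q[nu] @ r) for nu in range(3))

rng = np.random.default_rng(0)
points = rng.uniform(-3.0, 3.0, size=(100, 2))
print(all(np.isclose(U_beta(r, 1.0), U_symmetric(r)) for r in points))  # True
```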
Fig. 11 shows the zero-energy mode, in log scale, as a function of \(\alpha\) for different values of \(\beta\). The non-Abelian structure of cTBG clearly plays a vital role in the magic angle recurrence. Interestingly, even at \(\beta=1\) the energy exhibits a decaying behavior; however, it does not have a well-defined \(3/2\) magic angle recurrence rule. Furthermore, as \(\beta\) goes from \(0\) to \(1\) the band gap is further squeezed, as \(\Delta\sim\Delta_{\alpha}e^{-C\beta}\), where \(C\) is a scaling constant and \(\Delta_{\alpha}\) is the original band gap of cTBG, independent of the parameter \(\beta\).
## VII Conclusion
In this work, we studied twisted bilayer graphene (TBG) at small magic angles to understand the properties of the electron wave functions. We corroborated that zero mode states converge into coherent Landau states with minimal dispersion. In reciprocal space, they have the same shape (almost Gaussian) as in real space but with inverted parameters. These coherent states exhibit minimal dispersion with a standard deviation in reciprocal space of \(\sigma_{k}=\sqrt{3\alpha/2\pi}\) as \(\alpha\) approaches infinity.
Importantly, as \(\alpha\) approaches infinity, the zero mode equation decouples into its symmetric and antisymmetric components. Exploiting this property and the squared Hamiltonian, we have elucidated the reason for the confinement of the electronic wavefunction as \(\alpha\) tends to infinity. Specifically, this confinement arises from the interplay between the squared norm of the moire potential and the quantized orbital motion of electrons, resulting in the formation of a quantum well. Inside this well, an effective harmonic oscillator is identified, giving rise to Landau levels.
As the squared Hamiltonian gives rise to an effective quantum oscillator, we also showed how to relate it to the non-Abelian quantum Hall effect. We then defined a magnetic energy and a Zeeman energy. The Zeeman energy is negligible for high-order magic angles, while the magnetic term can be interpreted as an orbital magnetic energy with a well-defined chirality. This highlights the important role of the \(\Gamma\)-point wave function parity, as it changes at each gap closing. Finally, we also altered the intrinsic non-Abelian behavior of TBG to see how the \(3/2\) quantization rule of flat bands is destroyed by this artificial modification.
Therefore, we conclude that the relationship between TBG physics and the QHE is not coincidental. Our recent analytical work on flat bands in graphene without twists has also confirmed this conclusion in a very clear and concise way [67].
This work was supported by (L.A.N.L. and G.G.N.) and CONAHCyT project 1564464. Leonardo Navarro is supported by a CONAHCyT PhD scholarship. We thank Eslam Khalaf at Harvard University (now at Texas University) for valuable comments on the section concerning the artificial potential.
Figure 11: Energy \(E\), in log scale, as a function of \(\alpha\) at the \(\mathbf{\Gamma}\)-point. The \(\beta\) parameter transforms the original chiral model with a non-Abelian nature to an Abelian system. In the curve \(\beta=1\), the off-diagonal term is proportional to \(\hat{\tau}_{x}\) and there is no well-defined \(3/2\) magic angle recurrence as for the cTBG (\(\beta=0\)). Vertical lines indicate magic angles.
## VIII Appendix A: Non-Abelian pseudo-magnetic field and angular momentum
As explained before, electrons in TBG behave as if coupled to an \(SU(2)\) non-Abelian pseudo-magnetic vector potential. In matrix notation, it follows that,
\[\hat{\mathbf{A}}=(\hat{A}_{x},\hat{A}_{y}) \tag{51}\]
with \(\hat{A}_{x}=A_{1,x}\hat{\tau}_{1}+A_{2,x}\hat{\tau}_{2}\) and \(\hat{A}_{y}=A_{1,y}\hat{\tau}_{1}+A_{2,y}\hat{\tau}_{2}\), where we used the set of Pauli matrices \(\hat{\tau}_{j}\) (with \(j=1,2,3\)) in the pseudo-spin layer degree of freedom, and the identity matrix \(\hat{\tau}_{0}\). Explicitly, the components of \(\hat{\mathbf{A}}\) are,
\[\begin{split} A_{1,x}&=\sum_{\nu}\cos{(\mathbf{q}_{ \nu}\cdot\mathbf{r})}\mathbf{q}_{\nu}^{\perp,x},\\ A_{2,x}&=\sum_{\nu}\cos{(\mathbf{q}_{\nu}\cdot\mathbf{r})} \mathbf{q}_{\nu}^{\perp,y},\\ A_{1,y}&=\sum_{\nu}\sin{(\mathbf{q}_{\nu}\cdot\mathbf{r})} \mathbf{q}_{\nu}^{\perp,x},\\ A_{2,y}&=\sum_{\nu}\sin{(\mathbf{q}_{\nu}\cdot\mathbf{r})} \mathbf{q}_{\nu}^{\perp,y}.\end{split} \tag{52}\]
Note that \(\hat{\mathbf{A}}\) is non-Abelian as follows from the fact that \([\hat{\mathbf{A}}_{\nu},\hat{\mathbf{A}}_{\eta}]\neq 0\) for \(\nu\neq\eta\). On the other hand, the off-diagonal terms of \(H^{2}\) related to the angular momentum and interlayer currents [30] have two contributions,
\[\nabla\times\mathbf{A}_{\pm}=\mathbf{B}_{\pm} \tag{53}\]
where \(\mathbf{B}_{\pm}\) represents a pseudo-magnetic field while the other term is,
\[-2i\mathbf{A}_{\pm}\cdot\nabla=-2\mathbf{B}_{\pm}\cdot\hat{\mathbf{L}} \tag{54}\]
Explicitly, we have that,
\[\mathbf{A}(\pm\mathbf{r})\cdot\hat{\mathbf{p}}=-\sum_{\nu}B_{\nu}(\pm\mathbf{r})\mathbf{e}_{z} \cdot(\mathbf{q}_{\nu}\times\hat{\mathbf{p}}) \tag{55}\]
where it is convenient to define \(\mathbf{q}_{\nu}\times\hat{\mathbf{p}}=\hat{\mathbf{L}}_{\nu}\) as an operator similar to the angular momentum along the direction \(\nu\), defined by the reciprocal vectors \(\mathbf{q}_{\nu}\). We can interpret \(\hat{\mathbf{L}}_{\nu}\) as the contribution to the angular momentum of each confinement center, since there \(\mathbf{r}\approx\mathbf{q}_{\nu}\). Accordingly, we can re-express the last relation in compact form as,
\[\mathbf{A}(\pm\mathbf{r})\cdot\hat{\mathbf{p}}=-\sum_{\nu}\mathbf{B}_{\nu}(\pm\mathbf{r})\cdot \hat{\mathbf{L}}_{\nu} \tag{56}\]
where \(\mathbf{A}(\pm\mathbf{r})=\sum_{\nu}e^{\pm i\mathbf{q}_{\nu}\cdot\mathbf{r}}\mathbf{q}_{\nu}^{\perp}\) with \(\mathbf{q}_{\nu}^{\perp}=\mathbf{q}_{\nu}\times\mathbf{e}_{z}\). The well-known relation \(\mathbf{A}\cdot\hat{\mathbf{p}}=-\mathbf{B}\cdot\hat{\mathbf{L}}\) is used here; it comes from a uniform, symmetric-gauge magnetic vector potential, which can be expressed as \(\mathbf{A}=-\frac{1}{2}\mathbf{r}\times\mathbf{B}\), where \(\mathbf{r}\) is the position vector and \(\mathbf{B}\) is the magnetic field. It can be used here because the confined nature of the wave function allows one to assume a locally uniform magnetic field, in the spirit of eq. (31).
Clearly, we need to recognize the differences between cTBG and the conventional QHE in a radially symmetric potential: cTBG has \(C_{3}\) symmetry and the periodicity of the superlattice. Moreover, its pseudo-magnetic fields are position-dependent and therefore spatially inhomogeneous. Surprisingly, despite these differences, cTBG satisfies this magnetic property due to the locally Abelian features induced by confinement.
Hence, Eq. (56) is analogous to the relation \(\mathbf{A}\cdot\hat{\mathbf{p}}=-\mathbf{B}\cdot\hat{\mathbf{L}}\) used for symmetric-gauge magnetic fields. Note in eq. (56) that the dot product between the pseudo-magnetic field and the angular momentum is a superposition of three plane waves. This off-diagonal operator is quite important for engineering flat bands at magic angles; moreover, it introduces the magic angle order parity into the energy equipartition balance for flat bands.
On the other hand, the squared TBG system is a \(2\times 2\) matrix operator in which the layer degree of freedom introduces the \(SU(2)\) Pauli matrices \(\mathbf{\tau}\). It is therefore convenient to re-express the off-diagonal operator in matrix form, so as to consider the effect of both layers, from which it follows that,
\[-2i\hat{\mathbf{A}}\cdot\mathbf{\nabla}=\begin{pmatrix}0&2\sum_{\nu}e^{-i\mathbf{q}_{\nu}\cdot\mathbf{r}}\mathbf{q}_{\nu}^{\perp}\cdot\hat{\mathbf{p}}\\ 2\sum_{\nu}e^{i\mathbf{q}_{\nu}\cdot\mathbf{r}}\mathbf{q}_{\nu}^{\perp}\cdot\hat{\mathbf{p}}&0\end{pmatrix} \tag{57}\]
since \(\hat{\mathbf{A}}\cdot\hat{\mathbf{p}}\approx-\hat{\mathbf{B}}\cdot\hat{\mathbf{L}}\), it follows that,

\[\begin{split}-2i\hat{\mathbf{A}}\cdot\mathbf{\nabla}&=\begin{pmatrix}0&2A(\mathbf{r})\cdot\hat{\mathbf{p}}\\ 2A(-\mathbf{r})\cdot\hat{\mathbf{p}}&0\end{pmatrix}\\ &=2\begin{pmatrix}0&-B(\mathbf{r})\cdot\hat{\mathbf{L}}\\ -B(-\mathbf{r})\cdot\hat{\mathbf{L}}&0\end{pmatrix}\end{split} \tag{58}\]
This operator is responsible for coupling the layers with pseudo-magnetic potentials \(B(\mathbf{r})\) (layer 1) and \(B(-\mathbf{r})\) (layer 2). This matrix form gives us more insight into the non-Abelian nature of the pseudo-magnetic potentials related to the \(SU(2)\) layer degree of freedom.
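As a consistency check of this matrix notation, the sketch below builds \(\hat{A}_{x}\) and \(\hat{A}_{y}\) from eqs. (51)-(52) and verifies numerically that \(i[\hat{A}_{x},\hat{A}_{y}]=\Delta(\mathbf{r})\,\hat{\tau}_{3}\), in line with the scalar relation \(\Delta=i[A_{x},A_{y}]\) used in the main text (the test point is arbitrary).

```python
import numpy as np

tau = [np.array([[0, 1], [1, 0]], dtype=complex),      # tau_1
       np.array([[0, -1j], [1j, 0]], dtype=complex),   # tau_2
       np.array([[1, 0], [0, -1]], dtype=complex)]     # tau_3

q = np.array([[0.0, -1.0],
              [np.sqrt(3) / 2, 0.5],
              [-np.sqrt(3) / 2, 0.5]])
qperp = np.array([[1.0, 0.0],
                  [-0.5, np.sqrt(3) / 2],
                  [-0.5, -np.sqrt(3) / 2]])
b = np.array([q[1] - q[0], q[2] - q[0], q[2] - q[1]])

def A_hat(r):
    """SU(2) components (A_hat_x, A_hat_y) built from eqs. (51)-(52)."""
    cos_, sin_ = np.cos(q @ r), np.sin(q @ r)
    Ax = (cos_ @ qperp[:, 0]) * tau[0] + (cos_ @ qperp[:, 1]) * tau[1]
    Ay = (sin_ @ qperp[:, 0]) * tau[0] + (sin_ @ qperp[:, 1]) * tau[1]
    return Ax, Ay

def Delta(r):
    """Anti-symmetric potential Delta(r), eq. (10)."""
    return np.sqrt(3) * sum((-1) ** (nu + 1) * np.sin(b[nu] @ r) for nu in range(3))

r = np.array([0.6, -1.3])                    # arbitrary test point
Ax, Ay = A_hat(r)
commutator = 1j * (Ax @ Ay - Ay @ Ax)
print(np.allclose(commutator, Delta(r) * tau[2]))   # i[A_x, A_y] = Delta(r) tau_3
```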
## IX Appendix B: symmetrized zero mode equation at the asymptotic limit \(\alpha\to\infty\)
As mentioned in Sec. IV, in the asymptotic limit the zero-mode equation is decoupled into two separate
equations as follows,
\[(-\nabla^{2}+\alpha^{2}\mathbf{A}^{2}-i\mu_{\alpha}\alpha(-2i\mathcal{A}_{\mp}\cdot \nabla+\nabla\times\mathcal{A}_{\mp}))\psi_{\pm}\approx 0 \tag{59}\]
\[(\alpha^{2}\Delta-i\mu_{\alpha}\alpha(-2i\mathcal{A}_{\pm}\cdot\nabla+\nabla \times\mathcal{A}_{\pm}))\psi_{\mp}\approx 0 \tag{60}\]
If we now rescale the spatial coordinates as \(\mathbf{r}^{\prime}=\mathbf{r}/\alpha\), and therefore \(\mathbf{\nabla}^{\prime}=(\alpha\mathbf{\nabla})\) and \((\mathbf{\nabla}^{\prime})^{2}=(\alpha\mathbf{\nabla})^{2}\), it follows that the energies scale proportionally to \(\alpha^{2}\); thus eq. (59) and eq. (60) become,
\[(-\nabla^{2}+\mathbf{A}^{2}(\mathbf{r}/\alpha)-2\mu_{\alpha}\mathcal{A}_{\mp}(\mathbf{r}/ \alpha)\cdot\nabla)\psi_{\pm}\approx 0 \tag{61}\]
\[(\Delta(\mathbf{r}/\alpha)-2\mu_{\alpha}\mathcal{A}_{\pm}(\mathbf{r}/\alpha)\cdot \nabla)\psi_{\mp}\approx 0 \tag{62}\]
where the term \(\nabla\times\mathcal{A}_{\pm}(\mathbf{r}/\alpha)=\frac{1}{\alpha}\mathcal{B}_{\pm}\to 0\) as \(\alpha\to\infty\). From eq. (62) it follows that,
\[\Delta(\mathbf{r}/\alpha)\psi_{\mp}=2\mu_{\alpha}\mathcal{A}_{\pm}(\mathbf{r}/\alpha) \cdot\nabla\psi_{\mp} \tag{63}\]
thus, substituting eq. (63) into eq. (61), it is easy to show that,
\[(-\nabla^{2}+\mathbf{A}^{2}(\mathbf{r}/\alpha)-\Delta(\mathbf{r}/\alpha))\psi_{\pm}\approx 0 \tag{64}\]
From this last expression it is clear that we can decouple it into two separate equations,
\[(-\nabla^{2}+\mathbf{A}^{2}(\mathbf{r}/\alpha))\psi_{\pm}\approx 0 \tag{65}\]
and
\[\Delta(\mathbf{r}/\alpha)\psi_{\pm}\approx 0 \tag{66}\]
These equations give the localization behavior in the asymptotic limit \(\alpha\to\infty\). Eqns. (65) and (66) give information on the radial and angular confinement, respectively. In particular, the angular directions are defined by \(\Delta(\mathbf{r})=0\), giving confinement paths along the unit vectors \(\pm\mathbf{q}_{\nu}\); this is analogous to saying that \([A_{x},A_{y}]=0\), and therefore the electronic wave function is locally Abelian. In this manner, in the asymptotic limit \(\alpha\to\infty\) cTBG can be interpreted as an effective quasi-1D system along these preferential directions.
|
2309.17073 | Potential biases and prospects for the Hubble constant estimation via
electromagnetic and gravitational-wave joint analyses | GW170817 is a binary neutron star merger that exhibited a gravitational wave
(GW) and a gamma-ray burst, followed by an afterglow. In this work, we estimate
the Hubble constant ($H_0$) using broad-band afterglow emission and
relativistic jet motion from the Very Long Baseline Interferometry and Hubble
Space Telescope images of GW170817. Compared to previous attempts, we combine
these messengers with GW in a simultaneous Bayesian fit. We probe the $H_0$
measurement robustness depending on the data set used, the assumed jet model,
the possible presence of a late time flux excess. Using the sole GW leads to a
$20\%$ error ($77^{+21}_{-10}$ km/s/Mpc, medians, 16th-84th percentiles),
because of the degeneracy between viewing angle ($\theta_v$) and luminosity
distance ($d_L$). The latter is reduced by the inclusion in the fit of the
afterglow light curve, leading to $H_0=96^{+13}_{-10}$ km/s/Mpc, a large value,
caused by the fit preference for high viewing angles due to the possible
presence of a late-time excess in the afterglow flux. Accounting for the latter
by including a constant flux component at late times brings
$H_0=78.5^{+7.9}_{-6.4}$ km/s/Mpc. Adding the centroid motion in the analysis
efficiently breaks the $d_L-\theta_v$ degeneracy and overcome the late-time
deviations, giving $H_0 = 69.0^{+4.4}_{-4.3}$ km/s/Mpc (in agreement with
Planck and SH0ES measurements) and $\theta_v = 18.2^{+1.2}_{-1.5}$ deg. This is
valid regardless of the jet structure assumption. Our simulations show that for
next GW runs radio observations are expected to provide at most few other
similar events. | Giulia Gianfagna, Luigi Piro, Francesco Pannarale, Hendrik Van Eerten, Fulvio Ricci, Geoffrey Ryan | 2023-09-29T09:09:10Z | http://arxiv.org/abs/2309.17073v2 | Potential biases and prospects for the Hubble constant estimation via electromagnetic and gravitational-wave joint analyses
###### Abstract
GW170817 is a binary neutron star merger that exhibited a gravitational wave (GW) and a gamma-ray burst, followed by an afterglow. In this work, we estimate the Hubble constant (\(H_{0}\)) using the broad-band afterglow emission and the relativistic jet motion from the Very Long Baseline Interferometry and Hubble Space Telescope images of GW170817. Compared to previous attempts, we combine these messengers with GW in a simultaneous Bayesian fit. We probe the \(H_{0}\) measurement robustness depending on the data set used, the assumed jet model, and the possible presence of a late-time flux excess. Using GW data alone leads to a \(\sim 20\%\) error (\(77^{+21}_{-10}\) km s\({}^{-1}\)Mpc\({}^{-1}\), medians, 16th-84th percentiles), because of the degeneracy between viewing angle (\(\theta_{v}\)) and luminosity distance (\(d_{L}\)). The latter is reduced by including the afterglow light curve in the fit, leading to \(H_{0}=96^{+13}_{-10}\) km s\({}^{-1}\)Mpc\({}^{-1}\), a large value, caused by the fit preference for high viewing angles due to the possible presence of a late-time excess in the afterglow flux. Accounting for the latter by including a constant flux component at late times brings \(H_{0}=78.5^{+7.9}_{-6.4}\) km s\({}^{-1}\)Mpc\({}^{-1}\). Adding the centroid motion to the analysis efficiently breaks the \(d_{L}-\theta_{v}\) degeneracy and overcomes the late-time deviations, giving \(H_{0}=68.9^{+4.4}_{-4.3}\) km s\({}^{-1}\)Mpc\({}^{-1}\) (in agreement with _Planck_ and SH0ES measurements) and \(\theta_{v}=17.8^{+1.3}_{-1.5}\) deg. This is valid regardless of the jet structure assumption. Our simulations show that for the next GW runs radio observations are expected to provide at most a few other similar events.
keywords: neutron star mergers - gamma-ray bursts - gravitational waves - cosmology: cosmological parameters
## 1 Introduction
The \(\Lambda\)CDM model is the currently adopted standard model of cosmology. Great effort has been put into the estimation of one of its key parameters, the Hubble constant \(H_{0}\), the current expansion rate of the Universe. The \(\Lambda\)CDM model calibrated with data from the _Planck_ mission, that is, from early-Universe physics, predicts the Hubble constant to 1% precision: \(67.4\pm 0.5\) km s\({}^{-1}\)Mpc\({}^{-1}\)(Planck Collaboration et al., 2016) (we quote medians and 68% credible intervals). However, \(H_{0}\) can also be empirically measured locally (\(z<1\)), in the late-time Universe. The latter kind of measurements, such as from SH0ES (Supernovae \(H_{0}\) for the Equation of State, Riess et al., 2019) and H0LiCOW (\(H_{0}\) lenses in COSMOGRAIL's Wellspring, Wong et al., 2019), favour larger values of \(H_{0}\): \(74.0\pm 1.4\) km s\({}^{-1}\)Mpc\({}^{-1}\) and \(73.3\pm 1.8\) km s\({}^{-1}\)Mpc\({}^{-1}\), respectively. Thus, the early-Universe data seem to be consistently predicting a low value of \(H_{0}\), while the late-time Universe data predict a higher one, leading to a more than \(3\sigma\) discrepancy (see Verde et al., 2019, for an extensive discussion).
A way to solve this discrepancy is to measure the Hubble constant through an independent method, using, for example, gravitational waves (GWs), where the distance is directly estimated from fitting the waveform, relying only on the general theory of relativity. This estimation does not depend on cosmic distance ladders. GWs can determine the Hubble constant if the redshift is provided by an electromagnetic (EM) counterpart, for example by a kilonova (Taylor et al., 2012; Feeney et al., 2021). This is the so-called standard sirens method (Nissanke et al., 2010). Even when a unique counterpart cannot be identified, the redshifts of all the potential host candidates can be incorporated in the analysis, when the localization volume is sufficiently small. This is not as constraining as the first scenario, but it is still informative, once many detections are available. In this case, more than 50 binary neutron stars are needed to reach a 6% \(H_{0}\) measurement (Chen et al., 2018). The same holds also for GWs emitted by binary black holes, even if the localization volumes are usually much larger than for binary neutron stars. In this case \(\sim 500\) events are needed to reach a precision \(<7\%\) on \(H_{0}\)(Chen et al., 2022; Bom and Palmese, 2023).
The main problem of the standard sirens method is the degeneracy between the luminosity distance and inclination (the angle between the total angular momentum and the line of sight) estimated from GWs. They are measured from the amplitude of the two GW polarizations. At small inclinations, the cross and plus polarizations
have nearly the same amplitude, but the larger the inclination, the more they decrease and start to differ (Usman et al., 2019). This means that the GW signal is strongest at small inclinations (face-on or face-off), but, in these cases, we cannot measure distance and inclination separately. Therefore, associated EM observations can lead to a tighter measurement of \(H_{0}\) by providing additional constraints on the inclination.
The first measurement of \(H_{0}\) using GWs was obtained with the first binary neutron star merger observation GW170817, by combining the distance from the GW signal and the recession velocity of the host galaxy, resulting in \(H_{0}\) of \(74^{+16}_{-8}\) km s\({}^{-1}\)Mpc\({}^{-1}\)(Abbott et al., 2017). GW170817 was detected by the two Advanced LIGO detectors (Aasi et al., 2015) and Advanced Virgo (Acernese et al., 2015) on August 17, 2017 (Abbott et al., 2017). It was identified as the collision of two neutron stars, which is theoretically expected to be followed by a highly relativistic jet, from which a gamma-ray burst (GRB) of short duration (\(\lesssim 2\) s) is produced (Blinnikov et al., 1984; Paczynski, 1986; Eichler et al., 1989; Paczynski, 1991; Narayan et al., 1992; Rhoads, 1997; Piran, 2005; Nakar, 2007; Berger, 2014; Nakar, 2020; Salafia & Ghirlanda, 2022). This was proven by the joint detection of the GW event and of the short, hard burst GRB 170817A (Goldstein et al., 2017; Savchenko et al., 2017; Abbott et al., 2017); then observations in the X-ray (Troja et al., 2017) and, later, radio frequencies (Hallinan et al., 2017) showed the afterglow emission. These observations are consistent with a short GRB viewed off-axis (e.g., Troja et al., 2017; Margutti et al., 2017; Haggard et al., 2017; Finstad et al., 2018; Alexander et al., 2018; Gill & Granot, 2018; Dobie et al., 2018; Granot et al., 2018; D'Avanzo, P. et al., 2018; Lazzati et al., 2018; Lyman et al., 2018; Margutti et al., 2018; Mooley et al., 2018; Troja et al., 2018; Fong et al., 2019; Hajele et al., 2019; Lamb et al., 2019; Wu & MacFadyen, 2019; Piro et al., 2019; Ryan et al., 2020; Troja et al., 2020, 2021; Takahashi & Ioka, 2021; Hajela et al., 2022; Gianfagna et al., 2023; McDowell & MacFadyen, 2023; Hayes et al., 2023). Moreover, radio observations that measure the superluminal motion of the jet centroid in radio and optical images were performed (Mooley et al., 2018; Ghirlanda et al., 2019; Mooley et al., 2022).
The EM information on the inclination derived from the afterglow and the relativistic jet motion of GW170817 allow us to improve the Hubble constant measurement for the reason stated above (see also Bulla et al., 2022, for a review). The common practice is to use the GW analysis results (posterior) for inclination and luminosity distance, and apply these as _a priori_ information on the inclination obtained by fitting the EM data sets (or the other way around). This can be done using Bayesian analysis. The results retrieved in this way run from low values such as \(H_{0}=66.2^{+4.4}_{-4.2}\) km s\({}^{-1}\)Mpc\({}^{-1}\), from Dietrich et al. (2020), who fit the kilonova emission and the jet centroid motion, and \(H_{0}=69.5\pm 4\) km s\({}^{-1}\)Mpc\({}^{-1}\), from Wang & Giannios (2021), who use the afterglow emission, to high values such as \(H_{0}=75.5^{+11.6}_{-9.6}\) km s\({}^{-1}\)Mpc\({}^{-1}\), by Guidorzi et al. (2017), who fit the afterglow up to 40 days from the merger. Wang et al. (2023) estimate \(H_{0}=71.80^{+4.15}_{-4.07}\) km s\({}^{-1}\)Mpc\({}^{-1}\), modelling the jet with hydrodynamic simulations, including also a sub-relativistic kilonova outflow. Palmese et al. (2023) use the same model as Wang et al. (2023), fitting the afterglow, but including _a priori_ information on the Lorentz factor from the jet centroid motion. They find \(H_{0}=75.46^{+5.34}_{-5.39}\) km s\({}^{-1}\)Mpc\({}^{-1}\). Hotokezaka et al. (2018) fit the afterglow and the jet centroid motion, finding \(H_{0}=68.9^{+4.7}_{-4.6}\) km s\({}^{-1}\)Mpc\({}^{-1}\). In general, the smaller is the viewing angle, the higher is the luminosity distance (because of their degeneracy), the lower is \(H_{0}\).
At present, these EM-informed measurements are a factor of 2 more precise than the first standard-siren measurement for GW170817 that fitted GW data only (Abbott et al., 2017). For this reason, this method is very compelling. However, there are potential systematics that should be addressed. An open issue, as outlined in Nakar & Piran (2021), is the sensitivity of EM-derived parameters, such as the inclination, to the assumed jet structure. A related problem is the presence of deviations from the assumed model due to a possible flux excess at late times. In this work, we assess these issues with a comprehensive approach. We estimate the Hubble constant exploiting the GW, the broad-band afterglow and the centroid motion of the relativistic jet of the GW170817 event. We test the sensitivity of the results to the jet structure, and check for potential biases, both due to the jet model assumption and to the possible presence of an excess at late times in the afterglow. We fit the GW, the afterglow and the centroid motion data sets simultaneously using a Bayesian approach (Gianfagna et al., 2023) and compare the results obtained fitting only afterglow and GW data, and then including also the centroid motion of the relativistic jet. We focus on the degeneracy between the viewing angle and the luminosity distance. As already stated above, because of this degeneracy, the standard sirens method at present cannot give a Hubble constant estimation at the _Planck_ or SH0ES level of precision. Here we study how we can break this degeneracy by including different types of EM messengers. We also test the robustness of the derived \(H_{0}\) by implementing the different jet models on real data and simulations.
In Section 2 we present the data sets used in this work, analysed following the method presented in Section 3. In Section 4 we show the results that we obtained, both for the energetics, microphysics and geometry of the event, and for the Hubble constant. Finally in Section 5 we summarize our conclusions.
## 2 Data
This work uses three data sets pertaining to the GW170817 event and analyzes them simultaneously. These are the broad-band afterglow emission, the centroid position of the jet as a function of time, and the GW strain timeseries.
Regarding the afterglow emission, we include in the analysis data in the X-ray (_Chandra_ and _XMM_), radio (frequencies from 0.7 to 15 GHz for VLA, ATCA, uGMRT, e-MERLIN, MeerKAT) and optical (Hubble Space Telescope, HST) bands published in Troja et al. (2017); Fong et al. (2019); Makhathini et al. (2021); Troja et al. (2021); O'Connor & Troja (2022), see also Fig. 1.
We also include the centroid motion of the relativistic jet, visible in optical and radio images (Mooley et al., 2018; Ghirlanda et al., 2019; Mooley et al., 2022). For this analysis, we use the positions and uncertainties of the data points from VLBI (Very Long Baseline Interferometry) at 75, 206, 230 days reported in Mooley et al. (2018); Ghirlanda et al. (2019), and from HST at 8 days (Mooley et al., 2022). For the latter we use the positions (RA, Dec) and their statistical uncertainties, to which we add in quadrature the two systematic uncertainty contributions to take into account the different reference frame of the optical and radio images (as in Mooley et al., 2022).
The GW data of GW170817 are publicly available at the GW Open Science Center1(Abbott et al., 2021). We use the cleaned version of the strain data, where the glitch discussed in Abbott et al. (2017) has been removed.
## 3 Joint analysis of electromagnetic and gravitational-wave data
We use Bayesian inference to process the GW and EM data. The two domains can be joined in one analysis, as the models describing the two emissions (EM and GW) have parameters in common, namely the viewing angle and the luminosity distance. We perform three fits: one including only GWs, one folding GWs and the afterglow emission, and one including also the jet centroid motion data set. In this Section we describe the GW and the EM models, along with the joint fit method (see also Gianfagna et al., 2023).
### Electromagnetic and gravitational-wave models
#### 3.1.1 Afterglow light curve and centroid motion
We model the broad-band afterglow light curve using the python package afterglowpy(Ryan et al., 2020). The observer frame flux of synchrotron radiation is estimated for various jet geometries. In this work we use a Gaussian structured jet model, where the energy drops according to \(E(\theta)=E_{0}\exp(-\theta^{2}/2\theta_{c}^{2})\), up to a truncating angle \(\theta_{w}\). \(E_{0}\), \(\theta_{c}\) and \(\theta_{w}\) are free parameters in the fit, representing the on-axis isotropic equivalent kinetic energy of the blast wave, the jet opening angle and the jet total angular width. We also use a power law jet model, where the energy is given by \(E(\theta)=E_{0}\left(1+\theta^{2}/(b\,\theta_{c}^{2})\right)^{-b/2}\), where \(b\) is the power law index. The electrons are shock-accelerated and emit synchrotron radiation, with an energy distribution given by a power law with slope \(-p\); the fraction of post-shock internal energy in the electrons is \(\epsilon_{e}\), while the fraction of post-shock internal energy in the magnetic field is denoted by \(\epsilon_{B}\). Furthermore, the circumburst medium number density \(n_{0}\), the viewing angle \(\theta_{v}\), between the jet axis and the line of sight, and the luminosity distance \(d_{L}\) are also free parameters. The participation fraction \(\xi_{N}\) is fixed to 1.0.
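As an illustration of how such a light curve can be generated, the following minimal sketch calls the public afterglowpy interface with a Gaussian structured jet; the parameter values are only indicative (roughly the GW+AG+C medians of Table 2, with the unit conventions of the afterglowpy documentation), not the exact settings of our pipeline.

```python
import numpy as np
import afterglowpy as grb

# Gaussian structured jet; values are illustrative, close to the GW+AG+C medians.
Z = {
    'jetType':   grb.jet.Gaussian,
    'specType':  0,
    'thetaObs':  np.deg2rad(17.8),   # viewing angle theta_v [rad]
    'E0':        10**53.7,           # on-axis isotropic-equivalent energy [erg]
    'thetaCore': np.deg2rad(2.8),    # theta_c [rad]
    'thetaWing': np.deg2rad(47.0),   # theta_w [rad]
    'n0':        10**-2.9,           # circumburst density [cm^-3]
    'p':         2.11,               # electron energy slope
    'epsilon_e': 10**-2.7,
    'epsilon_B': 10**-3.0,
    'xi_N':      1.0,                # participation fraction, fixed
    'd_L':       43.8 * 3.086e24,    # luminosity distance [cm]
    'z':         0.0098,             # redshift of the host
}

t = np.geomspace(1.0, 1000.0, 200) * 86400.0   # 1 to 1000 days, in seconds
nu = np.full_like(t, 3.0e9)                    # 3 GHz radio light curve
Fnu = grb.fluxDensity(t, nu, **Z)              # flux density in mJy
```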
In order to model also the jet centroid motion, we use an extended version of afterglowpy(Ryan et al, in prep), where the afterglow image centroid position and sizes can be estimated. The imaging plane is perpendicular to the line of sight of the observer, and the centroid position and sizes are computed as an intensity-weighted quantity. The outputs of the model that we use in this work are the centroid position in the sky (right ascension, RA, and declination, Dec) and the flux expected at each particular time. At 8 days, in the optical, only the kilonova emission is visible, and not the afterglow. For this reason, we place a \(5\sigma\) upper limit for the optical flux of \(4\times 10^{-5}\) mJy. The parameters are the same as above, with an extra three parameters: \(\mathrm{RA_{0},Dec_{0}}\), which represent the jet origin in the sky image, and the position angle PA, which is the orientation of the jet direction in the image.
The prior probability distributions are reported in Table 1. The prior for the viewing angle \(\theta_{v}\) is isotropic, meaning a sinusoidal distribution from \(0^{\circ}\) to \(90^{\circ}\) (uniform in cosine). For the luminosity distance, we use a uniform-in-volume prior (\(\propto d_{L}^{2}\)) from 1 to 75 Mpc, which distributes mergers uniformly throughout a Euclidean universe.
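For reference, a minimal sketch of how the shared-parameter priors of Table 1 can be written with bilby (parameter names are illustrative; the full prior dictionary also includes the remaining jet, centroid and GW parameters, and angles are handled in radians):

```python
import numpy as np
import bilby

priors = bilby.core.prior.PriorDict()
# Uniform-in-volume luminosity distance prior, p(d_L) proportional to d_L^2.
priors['d_L'] = bilby.core.prior.PowerLaw(alpha=2, minimum=1.0, maximum=75.0,
                                          name='d_L', unit='Mpc')
# Isotropic viewing angle, p(theta_v) proportional to sin(theta_v).
priors['theta_v'] = bilby.core.prior.Sine(minimum=0.0, maximum=np.pi / 2.0,
                                          name='theta_v')
# Examples of the uniform jet priors of Table 1.
priors['log10_E0'] = bilby.core.prior.Uniform(49, 56, name='log10_E0')
priors['theta_c'] = bilby.core.prior.Uniform(0.0, np.pi / 2.0, name='theta_c')
```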
#### 3.1.2 Gravitational waves
We use the IMRPhenomPv2_NRTidal waveform approximant (Hannam et al., 2014; Dietrich et al., 2017, 2019, 2019) to model GWs from binary neutron star mergers. The intrinsic parameters (the source physical parameters that shape the emitted signal) used by this model refer to the masses, the spins and the tidal deformabilities of the two neutron stars. The two component masses \(m_{1}\) and \(m_{2}\), for which we follow the common convention \(m_{1}\geq m_{2}\), will be quoted as the chirp mass (Finn and Chernoff, 1993; Blanchet et al., 1995; Cutler and Flanagan, 1994; Poisson and Will, 1995),
\[\mathcal{M}=\frac{(m_{1}m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}}\,, \tag{1}\]
and the mass ratio \(q=m_{2}/m_{1}\leq 1\). The components of the dimensionless spin angular momenta of each neutron star, \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\), constitute six additional parameters, which are: \(a_{1}\) and \(a_{2}\), the dimensionless spin magnitudes; \(\theta_{1}\) and \(\theta_{2}\), the tilt angles between the spins and the orbital angular momentum; \(\phi_{1,2}\), the azimuthal angle separating the spin vectors; and \(\phi_{JL}\), the opening angle of the cone of
precession of the orbital angular momentum about the system's total angular momentum. The tidal deformability of each star is described by the dimensionless parameters \(\Lambda_{1}\) and \(\Lambda_{2}\).
Figure 1: Top panel. Broad-band afterglow of GW170817: data and fits. From bottom to top, red points refer to the X-ray observations by _Chandra_ and _XMM_ at 5 keV, orange ones to observations by _HST_, F606W filter, in the optical band, and blue tones to observations in the radio band from VLA (Very Large Array) at 3 GHz. The continuous and dotted lines represent the fit of the GW, broad-band afterglow, and centroid motion (GW+AG+C) data and of the GW and afterglow (GW+AG) data, respectively. For the sake of simplicity, the fit for the radio band is plotted only for the observations at 3 GHz, but it is not limited to this single frequency. Bottom panel. Centroid motion of the relativistic jet from HST and VLBI images at 8 (negative RAs), 75, 206 and 230 days (Mooley et al., 2018; Ghirlanda et al., 2019). The blue dots represent the positions predicted by the model, the blue contours represent the 68% probability region.
The extrinsic parameters (that further shape the observed GW signal) in this model are the RA and DEC of the source (i.e., its sky position), the luminosity distance \(d_{L}\) of the source, the inclination angle \(\theta_{\rm{IN}}\) between the total angular momentum of the binary and the line of sight from the source to the observer, the polarization angle \(\psi\), and the phase and time of coalescence. In this work, we fix the sky-position (RA and Dec) of the source to the one of AT 2017gfo (Abbott et al., 2017c). The GW RA and Dec correspond to the RA\({}_{0}\), Dec\({}_{0}\) parameters of the centroid motion model (Section 3.1.1); however, the precision on RA and Dec from the GW data does not reach the mas level, as instead do RA\({}_{0}\) and Dec\({}_{0}\), so the analysis does not benefit from promoting the RA and Dec of the GW and the EM models to common, free parameters. Moreover, we do not report the time of coalescence in the results, as this is of little interest in the context of our study, and we marginalize over the phase of coalescence. The latter marginalization is justified by the small spin magnitudes (see Table 1), and hence the negligible precession effects Romero-Shaw et al. (2020).
Given that the GRB jet develops around the total angular momentum, the inclination angle \(\theta_{\rm{IN}}\) and the viewing angle \(\theta_{\nu}\) introduced in Sec. 3.1.1 are essentially the same quantity, and thus a common parameter of the GW and EM domains. More precisely, in the case of GW170817, the two angles are supplementary (see Eq.(1) in Gianfagna et al., 2023). The other parameter shared by the GW and EM domains is the luminosity distance. This implies that there are 23 parameters when addressing the GW and EM domains in our approach.
The priors for the intrinsic and extrinsic GW parameters are set as in Romero-Shaw et al. (2020), in the case of the "Low Spin" analysis, see Table 1.
### Joint fit
We use Bayesian inference to analyse jointly the data from the GW and EM domains, which we denote as \(d_{GW}\) and \(d_{EM}\), respectively (the same methodology is presented in Gianfagna et al., 2023). The three main components are: a _prior distribution_, which models the available knowledge about a given parameter before data collection in a statistical distribution; the _likelihood function_, which encloses the information about the parameter from observed data; the _posterior distribution_, which combines the prior distribution and the likelihood function using the Bayes theorem. Thus, the multi-dimensional posterior probability distribution for our set of parameters \(\vec{\theta}\) is:
\[p(\vec{\vartheta}|d_{\rm EM},d_{\rm GW})\equiv\frac{\mathcal{L}_{\rm EM+GW}(d_{\rm EM},d_{\rm GW}|\vec{\vartheta})\,\pi(\vec{\vartheta})}{\mathcal{Z}_{\vec{\vartheta}}} \tag{2}\]
where \(\mathcal{L}_{\rm EM+GW}(d_{\rm EM},d_{\rm GW}|\vec{\vartheta})\) is the likelihood function that folds the EM and GW domains, \(\pi(\vec{\vartheta})\) is the multi-dimensional prior probability distribution for our parameters, and \(\mathcal{Z}_{\vec{\vartheta}}\) is the Bayesian evidence. This is obtained by marginalizing the joint likelihood over the GRB and GW parameters:
\[\mathcal{Z}_{\vec{\vartheta}}=\int\mathcal{L}_{\rm{EM+GW}}(d_{\rm{EM}},d_{ \rm{GW}}|\vec{\vartheta})\pi(\vec{\vartheta})d\vec{\vartheta}\,. \tag{3}\]
When the two data sets are independent, as is the case here, the likelihood \(\mathcal{L}_{\rm{EM+GW}}\)(see also Fan et al., 2014; Biscoveanu et al., 2020) is simply given by the product of the EM and GW likelihoods
\[\mathcal{L}_{\rm{EM+GW}}(d_{\rm{EM}},d_{\rm{GW}}|\vec{\vartheta})=\mathcal{L} _{\rm{EM}}(d_{\rm{EM}}|\vec{\vartheta})\times\mathcal{L}_{\rm{GW}}(d_{\rm{GW}} |\vec{\vartheta})\,. \tag{4}\]
The EM and GW likelihoods are both Normal distributions. The GW likelihood function is defined in, e.g., Finn (1992); Romano and Cornish (2017); Romero-Shaw et al. (2020); in this likelihood, both the data and the model are expressed in the frequency domain.
In the EM case, when only the afterglow is folded with the GW data (GW+AG), the likelihood function is proportional to \(\exp(-\chi^{2}/2)\), where \(\chi^{2}\) is given by the comparison between the expected flux and the entire broadband set of afterglow data.
In the case of the fit of the afterglow, centroid motion and GW strain (GW+AG+C), we assume the afterglow and the centroid motion data sets to be independent. The centroid data set includes the positions (RA and Dec) at each time and their respective fluxes. We assume the likelihood function to be a multivariate Normal distribution, where the expected centroid positions and fluxes from the model are compared with the three offset positions (RA and Dec) and the corresponding flux measurements. Moreover, we assume the covariance matrix to be diagonal (see Ryan et al, in prep, for more details). We place the centre of the centroid motion reference system at the positions corresponding to the observations at 75 days, as in Mooley et al. (2018); Ghirlanda et al. (2019); Mooley et al. (2022).
We use the Bayesian inference library bilby (Ashton et al., 2019; Smith et al., 2020) and the dynamic nested sampling package dynesty (Speagle, 2020) to simultaneously fit the EM and GW data sets. We use 2000 live points and multiple bounding ellipsoids as the bounding strategy. The corner plots are created with the corner package (Foreman-Mackey, 2016).
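Schematically, the joint fit can be assembled as below. This is a simplified sketch rather than the actual pipeline: the names `afterglow_model`, `t_obs`, `flux_obs`, `flux_err`, `ifos` and `waveform_generator` are placeholders assumed to be built beforehand, the real EM likelihood also folds in the centroid positions and fluxes, and the full set of priors covers all 23 parameters.

```python
import bilby

# EM likelihood: Gaussian comparison of observed and predicted fluxes (Sec. 3.2).
em_likelihood = bilby.core.likelihood.GaussianLikelihood(
    x=t_obs, y=flux_obs, func=afterglow_model, sigma=flux_err)

# GW likelihood on the strain data, with the phase marginalized analytically.
gw_likelihood = bilby.gw.likelihood.GravitationalWaveTransient(
    interferometers=ifos, waveform_generator=waveform_generator,
    phase_marginalization=True)

# Eq. (4): the joint likelihood is the product of the two (sum of log-likelihoods);
# the shared parameters (distance, inclination) are tied through the prior dict.
joint_likelihood = bilby.core.likelihood.JointLikelihood(em_likelihood, gw_likelihood)

result = bilby.run_sampler(
    likelihood=joint_likelihood, priors=priors,
    sampler='dynesty', nlive=2000, bound='multi',   # settings quoted in the text
    label='GW_AG_joint')
```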
### Hubble constant estimation
At small redshifts, as in the GW170817 case, the luminosity distance does not depend on the cosmological model, so the Hubble constant
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameter & Prior functional form & Bounds \\ \hline \(d_{L}\) [Mpc] & \(\propto d_{L}^{2}\) & [1, 75] \\ \(\theta_{\nu}\) [deg] & \(\sin(\theta_{\nu})\) & [0, 90] \\ \hline \(\log_{10}E_{0}/\rm{erg}\) & Uniform & [49, 56] \\ \(\theta_{\rm{C}}\) [deg] & Uniform & [0, 90] \\ \(\theta_{\rm{W}}\) [deg] & Uniform & [0, 90] \\ \(\log_{10}n_{0}\) & Uniform & [-7, 2] \\ \(p\) & Uniform & [2, 3] \\ \(\log_{10}\epsilon_{\rm{E}}\) & Uniform & [-5, 0] \\ \(\log_{10}\epsilon_{\rm{B}}\) & Uniform & [-5, 0] \\ RA\({}_{0}\) [mas] & Uniform & [-10, 10] \\ Dec\({}_{0}\) [mas] & Uniform & [-10, 10] \\ PA [deg] & Uniform & [0, 360] \\ \hline \(\mathcal{M}\) [\(M_{\odot}\)] & Uniform & [1.18, 1.21] \\ \(q\) & Uniform & [0.125, 1] \\ \(a_{1}\) & Uniform & [0, 0.05] \\ \(a_{2}\) & Uniform & [0, 0.05] \\ \(\theta_{1}\) [deg] & \(\sin(\theta_{1})\) & [0, 180] \\ \(\theta_{2}\) [deg] & \(\sin(\theta_{2})\) & [0, 180] \\ \(\phi_{1,2}\) [deg] & Uniform & [0, 360] \\ \(\phi_{\rm{L}}\) [deg] & Uniform & [0, 360] \\ \(\psi\) [deg] & Uniform & [0, 180] \\ \(\Lambda_{1}\) & Uniform & [0, 5000] \\ \(\Lambda_{2}\) & Uniform & [0, 5000] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Prior probability distributions for the shared, the EM and GW fitted parameters.
can be estimated from
\[v_{H}=H_{0}\cdot d_{L}\,, \tag{5}\]
where \(v_{H}\) is the local "Hubble flow" velocity, in this case at the position of GW170817, and \(d_{L}\) is the luminosity distance to the source. We follow the same procedure as Abbott et al. (2017), assuming a Normal distribution for \(v_{H}=3017\pm 166\) km s\({}^{-1}\).
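In practice, this amounts to dividing Hubble-flow velocities drawn from that Normal distribution by the posterior samples of the luminosity distance. A minimal sketch follows; here the \(d_{L}\) posterior is approximated by a Gaussian with the GW+AG+C median and width of Table 2, purely for illustration, whereas in the analysis the actual posterior samples from the joint fit are used.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the d_L posterior samples from the joint fit (illustrative Gaussian).
d_L_samples = rng.normal(43.8, 1.5, size=100_000)          # Mpc
# Hubble-flow velocity, v_H = 3017 +/- 166 km/s, one draw per posterior sample.
v_H = rng.normal(3017.0, 166.0, size=d_L_samples.size)     # km/s
H0_samples = v_H / d_L_samples                              # Eq. (5), km/s/Mpc

lo, med, hi = np.percentile(H0_samples, [16, 50, 84])
print(f"H0 = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f}) km/s/Mpc")
```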
## 4 Results and Discussion
We assume a Gaussian jet profile throughout the work, with the exception of Sec. 4.2, where we assume a power law profile in order to test the sensitivity of the results to the jet model.
The parameter medians and 16th-84th percentiles are collected in Table 2. The second column reports the results of the GW-only fit, while the third and fourth column refer to the fit including the broadband afterglow and GW (GW+AG), and to the complete fit that also includes the centroid (GW+AG+C), respectively.
The results from the GW fit are in agreement with previous works (Abbott et al., 2017, 2019; Romero-Shaw et al., 2020); the \(H_{0}\) value that we retrieve from the GW-only fit is \(H_{0}=77^{+21}_{-10}\) km s\({}^{-1}\)Mpc\({}^{-1}\) (median, 16th-84th percentiles), see Fig. 2, bottom panel, and Fig. 3. As we already pointed out above, one of the main sources of uncertainty in the GW measurement of the inclination and of the distance (and \(H_{0}\)) is their degeneracy, see the light blue contours in Fig. 2, top panel. This means that it is hard to distinguish whether a source is further away with the binary orbit facing Earth (face-on or face-off), or closer but highly inclined (edge-on, Usman et al., 2019). If we restrict to inclinations from 0 to 90 deg (as in our case), \(d_{L}\) is a decreasing function of the inclination (viewing angle, \(\theta_{v}\)). Another independent messenger is needed to break this degeneracy, which, in this case, comes from the afterglow.
The afterglow light curve alone, however, is not enough to efficiently break this degeneracy. Including it in the fit, only helps in shrinking the degeneracy region, see green filled contours in Fig. 2, top panel, the uncertainty on the viewing angle is reduced by a factor \(\sim 3\) (from \(\theta_{BN}=146^{+16}_{-18}\) deg to \(130^{+5}_{-5}\) deg), the one on the distance by a factor \(\sim 2\) (from \(d_{L}=39.2^{+5.4}_{-8.6}\) Mpc to \(31.3^{+3.0}_{-3.6}\) Mpc). However, this is not an accurate measurement, in fact the medians are on the high-\(\theta_{v}\)-low-\(d_{L}\) end of the GW 1\(\sigma\) region, leading to quite a low distance (and large viewing angle, \(\theta_{v}=50^{+5}_{-5}\) deg), which is however within 3\(\sigma\) from the generally accepted value of \(\sim 40\) Mpc. Our \(H_{0}\) value from the GW+AG fit is quite high: we retrieve \(H_{0}=96^{+13}_{-10}\) km s\({}^{-1}\)Mpc\({}^{-1}\) (median, 16th-84th percentiles), see the green filled contours in Fig. 2, bottom panel, and Fig. 3, top panel.
As explained in more detail in Sections 4.1 and 4.3, this result is mostly driven by the possible presence of a late-time additional component, which can be seen in the top panel of Fig. 1. The GW+AG model (dotted line) fits the light curve very well, especially the data points at late times. The latter force the model to prefer a high \(\theta_{v}\), with respect to the fit including also the jet centroid motion, GW+AG+C, represented with a solid line. Indeed, Wang and Giannios (2021), using the same messengers but limiting the light curve data up to \(\sim 300\) days (when no flux deviation is present yet), retrieve \(H_{0}=69.5\pm 4\) km s\({}^{-1}\)Mpc\({}^{-1}\), with a \(d_{L}=43.4\pm 1\) Mpc and \(\theta_{v}=22\pm 1\) deg. The jet structure model they use is from 3-dimensional general-relativistic magnetohydrodynamical simulations. Our results are in agreement with them if we account for the possible late time excess with a constant flux component, see Section 4.3. Also Wang et al. (2023) fit the afterglow light curve and include an additional component at late times, in particular a sub-relativistic kilonova outflow. They estimate \(H_{0}=71.80^{+4.15}_{-4.07}\) km s\({}^{-1}\)Mpc\({}^{-1}\). The kilonova component helps in the fit of the light curve, keeping the viewing angle around 30 deg. Also in this case, they model the jet using hydrodynamic simulations. Guidorzi et al. (2017) get \(H_{0}=75.5^{+11.6}_{-9.6}\) km s\({}^{-1}\)Mpc\({}^{-1}\), assuming a Top Hat jet and fitting the afterglow data up to 40 days from the merger. The latter is the reason why their \(H_{0}\) uncertainties are larger with respect to more recent works; their \(\theta_{v}\) posterior distribution peaks at \(\sim\)30 deg. However, the Top Hat jet is not the best choice for the GW170817 light curve, as it cannot reproduce the slope before the peak.
It is also interesting to note that similar results are obtained using jet structures different from the one adopted here. We will come back to this point and explore how \(H_{0}\) changes depending on the jet structure in Section 4.2.
From these results, we find that limiting the analysis to the GW+AG domains could be subject to possible systematics in the \(H_{0}\) determination due to the detections at late times in the afterglow light curve. This adds to the degeneracy between \(\theta_{v}\) and \(\theta_{c}\), characteristic of a Gaussian modelling of the jet, which is evident in the left panel of Fig. 4. Here we show the marginalised, 2D posterior probability distributions for the jet opening angle \(\theta_{c}\) and the viewing angle \(\theta_{v}\) in the case of the joint GW+AG fit (in red contours).
For the reasons stated above, in order to break both the \(d_{L}-\theta_{v}\) GW degeneracy and the \(\theta_{v}-\theta_{c}\) EM one, we have to include not only the afterglow light curve, but also the centroid motion in the analysis. We note that the sole centroid motion is not enough to break the \(d_{L}-\theta_{v}\) degeneracy, being itself subjected to some level of degeneracy between these two parameters, see Appendix A for a more detailed discussion.
The results for the GW+AG+C fit, using both the afterglow and the centroid, are reported in Table 2, fourth column. This fit not only shifts the viewing angle to lower values, but also further shrinks the degeneracy between \(\theta_{v}\) and \(\theta_{c}\), see left panel of Fig. 4, blue contours. This happens because the relativistic jet motion strongly constrains the viewing angle. From Fig. 1, we see that the GW+AG+C model does not fit the late-time light curve well, especially in the X-rays and in the radio bands, unlike the GW+AG fit, recognizing it as a possible excess not due to the afterglow emission. We try to account for this by adding a constant flux component at late times. The latter helps in fitting that part of the light curve, but results in very similar posteriors to the GW+AG+C fit without it (see the full results in Section 4.3). This shows that adding the afterglow centroid motion to the analysis provides robustness to the fit.
The \(\theta_{v}\) and \(d_{L}\) posteriors of the GW+AG+C fit are in the low-\(\theta_{v}\)-high-\(d_{L}\) GW 1\(\sigma\) region of the degeneracy, predicting a distance of \(43.8^{+1.5}_{-1.4}\) Mpc and a viewing angle of \(17.8^{+1.3}_{-1.5}\) deg. The centroid addition in the fit helps in shrinking the uncertainties in these parameters, which are smaller by a factor of 4-5 for the distance and 8-9 for \(\theta_{v}\) with respect to the GW analysis, breaking their degeneracy (see the purple and yellow filled contours in the top panel of Fig. 2). From the GW+AG+C fit we obtain \(H_{0}=68.9^{+4.4}_{-4.3}\) km s\({}^{-1}\)Mpc\({}^{-1}\) (median, 16th-84th percentiles). It is to be noted that adding the centroid to the analysis leads to an \(H_{0}\) measurement about three times more precise than the GW-only standard-siren measurement. The _Planck_ estimate of \(67.74\pm 0.46\) km s\({}^{-1}\)Mpc\({}^{-1}\) and the SH0ES value of \(74.0\pm 1.4\) km s\({}^{-1}\)Mpc\({}^{-1}\) (Riess et al., 2019) are both within 1\(\sigma\), see Fig. 2, bottom panel, and Fig. 3, top panel. This result is in agreement also with other works, like Hotokezaka et al. (2018), who use the posteriors from GW and fit the afterglow flux and the centroid motion, finding \(H_{0}=68.9^{+4.7}_{-4.6}\) km s\({}^{-1}\)Mpc\({}^{-1}\). Palmese et al. (2023) use the same model as Wang et al. (2023) (hydrodynamic simulations), and use a prior on the jet break Lorentz factor from the
centroid measurements, which acts also on the jet opening angle and on the viewing angle. They find \(H_{0}=75.46^{+5.34}_{-5.39}\) km s\({}^{-1}\)Mpc\({}^{-1}\). Leaving the Lorentz factor free leads to an opening angle of around 7 deg, which is instead consistent with our GW+AG results for \(\theta_{c}\).
Our values of \(\theta_{v}\) and \(\theta_{c}\) from the GW+AG+C fit are in agreement with other works that included the centroid motion in their analysis. Ghirlanda et al. (2019) predict \(\theta_{c}=3.1\pm 1\) deg, with a viewing angle of about 15 deg, while Mooley et al. (2018, 2022) find an opening angle of \(<5\) deg and a viewing angle \(<24\) deg.
### About the difference between GW+AG and GW+AG+C fits
The GW+AG and GW+AG+C produce quite different results, not only regarding the Hubble constant, the luminosity distance and the viewing angle, but also the energetics and microphysics of the jet. This, as we stated above, is due to the light curve data points at late times, which are well captured by the GW+AG fit, but not by the GW+AG+C fit, see Fig. 1. The viewing angle values are about \(5\sigma\) away, which is quite singular, considering that the event is the same. In the GW+AG+C the centroid motion is able to constrain very well \(\theta_{v}\) to \(17.8^{+1.3}_{-1.5}\) deg, which then translates into a constraint
\begin{table}
\begin{tabular}{l c c c c c c} \hline Parameter & GW-only & GW+AG & GW+AG+C & GW+AG & GW+AG+C & GW+AG & GW+AG+C \\ & & GJ & GJ & PLJ & PLJ & GJ + Constant & GJ + Constant \\ \hline \(\log_{10}E_{0}\) & \(52.3^{+0.8}_{-0.8}\) & \(53.7^{+1.3}_{-1.2}\) & \(52.1^{+0.8}_{-0.9}\) & \(53.9^{+1.1}_{-1.2}\) & \(52.8^{+0.90}_{-0.86}\) & \(53.9^{+1.2}_{-1.2}\) \\ \(\theta_{c}\) [deg] & \(7.73^{+0.86}_{-0.80}\) & \(2.80^{+0.25}_{-0.21}\) & \(5.57^{+0.62}_{-0.62}\) & \(2.16^{+0.20}_{-0.16}\) & \(5.37^{+0.97}_{-0.87}\) & \(2.59^{+0.20}_{-0.18}\) \\ \(\theta_{W}\) [deg] & \(57^{+19}_{-19}\) & \(47^{+26}_{-25}\) & \(58^{+18}_{-18}\) & \(49^{+24}_{-25}\) & \(52^{+22}_{-21}\) & \(48^{+25}_{-26}\) \\ \(\log_{10}n_{0}\) & \(-0.7^{+0.8}_{-0.8}\) & \(-2.9^{+1.3}_{-1.1}\) & \(-0.4^{+0.8}_{-0.8}\) & \(-2.4^{+1.1}_{-1.2}\) & \(-1.4^{+0.9}_{-0.9}\) & \(-2.7^{+1.2}_{-1.2}\) \\ \(p\) & \(2.11^{+0.01}_{-0.01}\) & \(2.11^{+0.01}_{-0.01}\) & \(2.12^{+0.01}_{-0.01}\) & \(2.12^{+0.01}_{-0.01}\) & \(2.12^{+0.01}_{-0.01}\) & \(2.12^{+0.01}_{-0.01}\) & \(2.12^{+0.01}_{-0.01}\) \\ \(\log_{10}\epsilon_{\rm e}\) & \(-1.7^{+0.7}_{-0.7}\) & \(-2.7^{+1.0}_{-1.2}\) & \(-1.3^{+0.7}_{-0.7}\) & \(-2.6^{+1.0}_{-1.0}\) & \(-1.9^{+0.8}_{-0.8}\) & \(-2.9^{+1.1}_{-1.1}\) \\ \(\log_{10}\epsilon_{\rm B}\) & \(-3.8^{+0.8}_{-0.8}\) & \(-3.0^{+1.1}_{-1.3}\) & \(-3.8^{+0.8}_{-0.8}\) & \(-3.4^{+1.1}_{-1.2}\) & \(-3.6^{+0.8}_{-0.9}\) & \(-3.2^{+1.2}_{-1.2}\) \\ \(b\) & & & & \(7.5^{+1.6}_{-1.1}\) & \(10.9^{+0.7}_{-1.0}\) & & \\ \(c_{\rm radio}\) & & & & & \(-2.99^{+0.23}_{-0.20}\) & \(-2.89^{+0.24}_{-0.25}\) \\ \(c_{\rm optical}\) & & & & & \(-5.25^{+0.23}_{-0.22}\) & \(-5.24^{+0.24}_{-0.23}\) \\ \(c_{\rm X-rays}\) & & & & & & \(-7.48^{+0.06}_{-0.03}\) & \(-7.48^{+0.09}_{-0.10}\) \\ RA\({}_{0}\) [mas] & & & \(-2.2^{+0.2}_{-0.2}\) & & \(-1.9^{+0.2}_{-0.2}\) & & \(-2.3^{+0.2}_{-0.2}\) \\ Dec\({}_{0}\) [mas] & & & \(-0.2^{+0.3}_{-0.3}\) & & \(-0.2^{+0.3}_{-0.3}\) & & \(-0.2^{+0.3}_{-0.4}\) \\ PA [deg] & & & \(85^{+5}_{-3}\) & & \(85^{+4}_{-3}\) & & \(85^{+5}_{-3}\) \\ \hline \(d_{\rm L}\) [Mpc] & \(39.2^{+5.4}_{-8.6}\) & \(31.3^{+3.0}_{-3.6}\) & \(43.8^{+1.5}_{-1.4}\) & \(23.7^{+3.8}_{-3.4}\) & \(43.0^{+1.4}_{-1.4}\) & \(38.6^{+2.5}_{-3.0}\) & \(44.7^{+1.4}_{-1.4}\) \\ \(\theta_{v}\) [deg] & & \(50^{+5}_{-5}\) & \(17.8^{+1.3}_{-1.5}\) & \(63^{+5}_{-4}\) & \(19.7^{+1.3}_{-1.3}\) & \(35.2^{+5.7}_{-6.2}\) & \(16.8^{+1.1}_{-1.2}\) \\ \(\theta_{N}\) [deg] & \(146^{+16}_{-18}\) & \(130^{+5}_{-4}\) & \(162^{+13}_{-1.5}\) & \(117^{+5}_{-4}\) & \(160.3^{+1.3}_{-1.8}\) & \(144.8^{+5.7}_{-6.2}\) & \(163.2^{+1.1}_{-1.2}\) \\ \hline \(\mathcal{M}[M_{\odot}]\) & \(1.1975^{+0.0001}_{-0.0001}\) & \(1.1975^{+0.0001}_{-0.0001}\) & \(1.1975^{+0.0001}_{-0.0001}\) & \(1.1975^{+0.0001}_{-0.0001}\) & \(1.1975^{+0.0001}_{-0.0001}\) \\ \(q\) & \(0.88^{+0.08}_{-0.10}\) & \(0.87^{+0.09}_{-0.09}\) & \(0.87^{+0.09}_{-0.09}\) & \(0.88^{+0.8}_{-0.9}\) & \(0.87^{+0.08}_{-0.09}\) & \(0.87^{+0.08}_{-0.09}\) \\ \(a_{1}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) \\ \(a_{2}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.02^{+0.01}_{-0.01}\) & \(0.02^{+0.01}_{-0.02}\) & \(0.
also on \(\theta_{c}=2.80^{+0.25}_{-0.21}\) deg. This happens because of the degeneracy between the two angles, proper of the Gaussian jet light curve (see Fig. 4). Its rising slope depends on their ratio, which, in this fit, is about 6.4. In the GW+AG, instead, there are no constraints on \(\theta_{v}\) or \(\theta_{c}\) individually, but just on their ratio, from the rising slope of the light curve (see also Ryan et al., 2020; Nakar and Piran, 2021). This still leads to the same ratio of about 6.5, but \(\theta_{v}=50\pm 5\) deg and \(\theta_{c}=7.73^{+0.86}_{-0.80}\) deg, about 5 sigma away from the GW+AG+C case.
The GW+AG fit, not being constrained by the centroid data set, is free to account for the mild decay of the light curve at late times by anticipating the non-relativistic phase. In particular, we estimate the non-relativistic time (Ryan et al., 2020) to be \(t_{NR}=880^{+290}_{-210}\) days (GW+AG), compared to \(t_{NR}=13600^{+3000}_{-2600}\) days (GW+AG+C). Therefore, at late times, according to the parameters of the GW+AG fit, the jet is non-relativistic. The anticipation of the non-relativistic phase is obtained mainly by acting on the \(E_{0}\), \(n_{0}\) parameters. However, the one order of magnitude lower energy and two orders of magnitude higher circumburst density would shift the flux to lower values and the break to earlier times, since \(t_{b}\propto(E_{0}/n_{0})^{1/3}(\theta_{v}+1.24\theta_{c})^{8/3}\) (Ryan et al., 2020). This is balanced by the fit with higher values of \(\theta_{c}\) and \(\theta_{v}\), in order to bring back the jet break (the peak at about 130 days) and to adjust the rising slope of the light curve.
Figure 2: Top panel. Contour plot of the viewing angle and luminosity distance for the GW, GW+AG and GW+AG+C fits. The contours represent the 68%, 95%, 99.7% probability regions. Bottom panel. The same contour plot as above, but showing \(H_{0}\) instead of \(d_{L}\). The magenta and yellow regions represent the 1\(\sigma\) intervals of the _Planck_ and SH0ES measurements, respectively. The Gaussian jet results are represented with filled contours, while the power law jet with empty contours and dashed lines.
This also influences the early decreasing slope (before the non-relativistic phase), as a higher \(\theta_{c}\) (wider jet) provides a larger surface area, so the jet is brighter and the flux is higher. The parameters \(d_{L}\), \(\epsilon_{e}\), \(\epsilon_{B}\) mainly have the role of shifting the flux. The \(p\) parameter stays the same in the two fits, as it is constrained by the spectrum. \(\theta_{w}\) is essentially unconstrained in both fits; however, it is better constrained in the GW+AG fit, mainly because \(\theta_{c}\) is larger and, for a Gaussian jet, \(\theta_{w}\) has to be larger than \(\theta_{c}\).
In other words, the good fit in the GW+AG case is provided by a combined effect of the high \(\theta_{c}\) (in the decreasing slope right after the peak) and the anticipation of the non-relativistic phase (in the slope at late times). Therefore, in the GW+AG fit a large \(\theta_{v}\), with a broader profile and less energy on the jet axis, is preferred. In the GW+AG+C fit the centroid motion strongly constrains \(\theta_{v}\) to smaller values, so a highly collimated jet with a large energy on the jet axis and a less dense circumburst medium are preferred instead.
### Changing the structure of the jet
In the case of a power law jet, the degeneracy between \(\theta_{v}\) and \(\theta_{c}\) is not as strong as for the Gaussian geometry, the rising phase slope is a function of \(b\), \(\theta_{v}\) and \(\theta_{c}\) (see Eq.(33) of Ryan et al., 2020). This can be seen in the right panel of Fig. 4, where the GW+AG fit is represented in red contours. The GW+AG and GW+AG+C are written in the fifth and sixth column of Table 2, while the fits of the afterglow light curve and centroid motion are in Fig. 5. Also for this jet structure, the GW+AG and GW+AG+C produce quite different results, and the reasoning in Section 4.1 is still valid. The majority of the parameters from the GW+AG+C and GW+AG fits, assuming a power law model, are in agreement within \(1\sigma\) with the Gaussian jet model, this is probably due to the fact that, at early times, the afterglow light curve rises, so \(b\) has to be large. At the same time, the larger is \(b\), the more the Gaussian and power law structures are similar. For example, in the case of the GW+AG+C fit, for \(b=10.9\), the \(E(\theta)\) of a Gaussian and power law structures are very similar within \(\sim 3\theta_{c}\), after which the decay is shallower for the power law structure.
The GW+AG fit produces a larger \(\theta_{v}=63^{+5}_{-4}\) deg, a smaller \(\theta_{c}=5.57^{+0.69}_{-0.62}\) deg and a smaller \(d_{L}=23.7^{+3.8}_{-3.4}\) Mpc than the Gaussian jet; these parameters are, however, in agreement within \(2\sigma\) with the latter. The 2D posteriors for \(\theta_{v}\) and \(d_{L}\) are represented in Fig. 2, top panel, in green dashed contours. The microphysics and the energetics are in agreement within \(1\sigma\) with the Gaussian jet results.
In the GW+AG+C fit, the parameters are in agreement within \(1\sigma\) with the Gaussian jet model, except for \(\theta_{c}=2.16^{+0.20}_{-0.16}\) deg, which is within \(2\sigma\). The \(\theta_{v}\) and \(\theta_{c}\) 2D posteriors for the power law jet are represented in the right panel of Fig. 4; we can see that at \(3\sigma\) there are samples with large viewing angles, which are usually preferred by the GW+AG fits instead. The results are \(\theta_{v}=19.7^{+1.3}_{-1.3}\) deg and \(d_{L}=43.0^{+1.4}_{-1.4}\) Mpc. The 2D posterior distributions of \(\theta_{v}\) and \(d_{L}\) are in Fig. 2, top panel, in black dashed contours, almost superimposed on the Gaussian jet results (purple and yellow coloured contours). Also in this case there is a small region of the parameter space at \(3\sigma\), at large \(\theta_{v}\) and small \(d_{L}\), which gets cancelled when estimating \(H_{0}\) (bottom panel).
The \(H_{0}\) values retrieved in these fits are \(70.2^{+4.6}_{-4.4}\) km s\({}^{-1}\)Mpc\({}^{-1}\) for the GW+AG+C fit, and \(127^{+22}_{-19}\) km s\({}^{-1}\)Mpc\({}^{-1}\) (medians, 16th-84th percentiles) for the GW+AG fit; the \(H_{0}\) posteriors are represented in the central panel of Fig. 3, in blue and red respectively. For this event, if we use the complete data set (the most robust fit), the change in jet model does not significantly influence \(H_{0}\): the power law jet predicts an \(H_{0}\) larger by 1.3 km s\({}^{-1}\)Mpc\({}^{-1}\), which is a 2% difference with respect to a Gaussian jet, but still in agreement within the uncertainties. In the GW+AG case there is a 30% difference, but the two \(H_{0}\)s are compatible within \(1\sigma\).
Figure 3: Histograms of the Hubble constant \(H_{0}\) posterior from the GW-only fit, in black, the GW+AG in red and the GW+AG+C in blue. The vertical dashed lines represent the 16th and 84th percentiles of each distribution. The magenta and yellow shaded regions represent the \(1\sigma\) interval of the _Planck_ and SH0ES measurements respectively. Top panel: Gaussian jet. Central panel: power law jet. Bottom panel: Gaussian jet with the addition of a constant component at late times.
In order to assess whether the unknown jet structure leads to systematics in the estimation of \(H_{0}\), we simulate an afterglow light curve and centroid motion using a Gaussian jet, and then we fit them twice, assuming a Gaussian and a power law structure. To keep this simulation as similar to GW170817 as possible, we keep the GW170817 detection times, errors and frequencies for the afterglow light curve and centroid motion, but we adopt fluxes and positions predicted by the model, with a Gaussian variation. We simulate the EM data sets assuming a Gaussian jet and the parameters in Table 2, medians in the fourth column. In this way, we do not include the excess in the flux at late times, which we are not interested in, as we are focusing on the influence of the jet structure. These EM data sets are then fitted together with GW twice, once assuming a Gaussian jet and once assuming a power law jet.
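A minimal sketch of this simulation step is given below; the names `model_flux`, `t_obs`, `nu_obs` and `flux_err` are placeholders for the Gaussian-jet prediction and for the GW170817 observing epochs, frequencies and uncertainties, assumed to be defined beforehand.

```python
import numpy as np

rng = np.random.default_rng(1)
# Keep the real epochs, frequencies and uncertainties; replace each measured flux by
# the Gaussian-jet model prediction perturbed with Gaussian noise of the same size.
flux_model = model_flux(t_obs, nu_obs)                    # model prediction, mJy
flux_sim = rng.normal(loc=flux_model, scale=flux_err)     # simulated data points
```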
For both jet structures, we retrieve the parameters of the GW, the energetics and microphysics in agreement within 1\(\sigma\) with the median values in Table 2, fourth column. Focusing on the distance and the geometry of the system, assuming a Gaussian jet, we retrieve \(\theta_{v}=19.3^{+1.5}_{-1.7}\) deg, \(\theta_{c}=3.01^{+0.28}_{-0.25}\) deg and \(d_{L}=43.8^{+1.5}_{-1.7}\) Mpc, while for a power law jet \(\theta_{v}=20.2^{+1.6}_{-1.8}\) deg, \(\theta_{c}=2.40^{+0.24}_{-0.21}\) deg and \(d_{L}=43.6^{+1.5}_{-1.5}\) Mpc. As for the case of GW170817, the power law jet tends to give a slightly higher (lower) viewing angle (jet opening angle), which is in agreement within 2\(\sigma\) with the simulated values of 17.8 deg (2.8 deg). This, however, does not influence the luminosity distance much. The \(H_{0}\) posteriors that we retrieve from these fits are represented in Fig. 6, with purple (Gaussian jet fit) and green (power law jet fit) colors. It seems that the Hubble constant, like \(d_{L}\), is not influenced by the different structure, resulting in \(H_{0}=68.8^{+4.5}_{-4.4}\) km s\({}^{-1}\)Mpc\({}^{-1}\) for a Gaussian jet and \(H_{0}=69.4^{+4.5}_{-4.4}\) km s\({}^{-1}\)Mpc\({}^{-1}\) for a power law jet (medians, 16th-84th percentiles). This is a less than 1% difference, which is well inside the 1\(\sigma\) range, but is nonetheless at the same level as the _Planck_ uncertainty on \(H_{0}\). For this reason, in the future, with a larger number of events, it could be important to assess whether this difference, negligible at the moment, is just a statistical fluctuation or a real effect due to the changing jet structure.
Figure 4: Contour plots of the viewing angle and jet opening angle. The contours represent the 68%, 95%, 99.7% probabilities. The blue contour lines represent the result from the joint GW+AG+C fit, while the red ones represent the result from the GW+AG fit. Left panel: Gaussian jet. Right panel: power law jet.
Figure 5: Same as Fig. 1, but assuming a power law structure for the jet.
### Adding a constant component in the flux at late times
In the case of GW170817, the preference for a high viewing angle mainly arises from the late times, where there is a flux excess. This is either due to some emission missing at late times in the jet model itself, or due to a new component becoming visible, such as a kilonova afterglow or the emission from a long-lived pulsar (Troja et al., 2020; Hajela et al., 2019; Piro et al., 2019). If an additive flux component is included in the fit, the jet viewing angle indeed slightly decreases (as does the jet opening angle), see for example Troja et al. (2021); Balasubramanian et al. (2021); Hajela et al. (2022); Wang et al. (2023), Ryan et al (in prep).
We fit the same data set in Fig. 1, adding a constant flux component of the type \(F_{\nu}=F_{\nu,\rm{aglow}}+10^{c}\), where \(F_{\nu,\rm{aglow}}\) is the flux predicted by afterglowpy and \(c\) is a parameter in the fit. This is done only at late times and at all frequencies. The \(c\) parameter has three possible values, depending on the frequency: \(c_{\rm{radio}}\), with a uniform prior in [-3.5,-2], \(c_{\rm{optical}}\) with a uniform prior in [-5.5, -4.5] and \(c_{\rm{X-rays}}\), with uniform prior in [-8, -7].
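As a sketch of this parametrization (the switch-on time of the constant component and the helper `afterglow_flux` are illustrative assumptions, not the exact implementation):

```python
import numpy as np

def flux_with_constant(t, nu, pars, band, t_late=200 * 86400.0):
    """Afterglow flux plus a band-dependent constant, F_nu = F_aglow + 10**c,
    added only after an assumed late-time epoch t_late (in seconds)."""
    F = afterglow_flux(t, nu, pars)                  # afterglowpy-like prediction, mJy
    c = {'radio': pars['c_radio'],
         'optical': pars['c_optical'],
         'xray': pars['c_xray']}[band]
    return np.where(t > t_late, F + 10.0**c, F)
```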
The results of the GW+AG+C and GW+AG fit are written in Table 2, last two columns, while the fit of the broad-band afterglow and centroid motion are in Fig. 7.
This model can fit the afterglow light curve and the centroid motion well in both cases. The parameter values from the GW+AG+C fit are in agreement within \(1\sigma\) with the ones from the simple Gaussian jet model (Table 2, fourth column). In the GW+AG+C fit the viewing angle \(\theta_{v}=16.8^{+1.1}_{-1.2}\) deg is lower with respect to the simple Gaussian jet model, so the distance \(d_{L}=44.7^{+1.4}_{-1.4}\) Mpc is slightly larger (see, for example, Fig. 8, top panel). The viewing angle and the jet opening angle are better constrained, but the error on the distance is unvaried with respect to the previous analysis. This fit leads to an \(H_{0}\) of \(67.8^{+4.3}_{-4.2}\) km s\({}^{-1}\)Mpc\({}^{-1}\) (see bottom panel of Fig. 3 and bottom panel of Fig. 8), which is in agreement with the value from the GW+AG+C fit with a simple Gaussian jet.
The inclusion of a constant component that accounts for the late-time behaviour does not significantly influence the parameter posteriors with respect to the model without it, so we can say that the GW+AG+C fit and model are robust.
In the case of the GW+AG fit, the addition of the constant component brings some improvements to the results. The jet parameters are compatible within \(1\sigma\) with the GW+AG+C (with constant) fit, except for \(\theta_{c}=5.37^{+0.97}_{-0.87}\) deg and \(\theta_{v}=35.2^{+5.7}_{-6.2}\) deg, which are within \(3\sigma\), and \(d_{L}=38.6^{+2.5}_{-3.0}\) Mpc, which is within \(2\sigma\). Thanks to the inclusion of the constant component at late times, the viewing angle decreases with respect to the GW+AG fit with the simple Gaussian jet model, and the error on the distance is also about 2 times smaller than in the GW fit, cutting part of the tails of the \(\theta_{v}-d_{L}\) degeneracy (see Fig. 8, top panel). Indeed, the Hubble constant value that we retrieve is \(78.5^{+7.9}_{-6.4}\) km s\({}^{-1}\)Mpc\({}^{-1}\) (see the bottom panels of Fig. 3 and Fig. 8), which is compatible within \(1\sigma\) with the GW+AG+C fit including a constant component. Moreover, this result is compatible within \(1\sigma\) with Guidorzi et al. (2017); Wang and Giannios (2021); Wang et al. (2023).
### Prospects for jet centroid observations
Using the GW, afterglow light curve and centroid motion leads to \(H_{0}=68.9^{+4.4}_{-4.3}\) km s\({}^{-1}\)Mpc\({}^{-1}\). However, the precision of this measurement is not at the level of SH0ES or _Planck_; in order to reach these, we would need at least \(\sim 10\) events and \(\sim 60\) events, respectively. In this Section, we estimate the likelihood that a new GW event, followed by the detection of the afterglow light curve and the measurement of the centroid motion, is seen in the next GW Observing runs O4 and O5.
Figure 6: Histogram of the \(H_{0}\) posterior distribution for a simulated event. The GW-only fit is represented in black (same distribution as Fig. 3), the GW+AG+C assuming a Gaussian jet in the fit is represented in violet, while the GW+AG+C assuming a power law jet is represented in green. The vertical dashed lines represent the 16th and 84th percentiles of each distribution. The magenta and yellow shaded regions represent the \(1\sigma\) interval of the _Planck_ and SH0ES measurements respectively.
Figure 7: Same as Fig. 1, but including an additional constant flux component in the model at late times.
From the GW simulations of Petrov et al. (2022), we generate the EM counterparts of more than a thousand binary neutron star events detectable in O4 (Singer, 2021) and in O5 (Singer, 2021), assuming a Gaussian jet. Each GW event is characterized by a viewing angle and a luminosity distance, which we use to generate its afterglow light curve and centroid motion. We assume all the other parameters to be the same as GW170817 (see Table 2, fourth column). Moreover, we assume that all the events are well localized and easy to follow up with radio telescopes. This will lead to very optimistic rates. We adopt VLBI as the reference radio facility, both for O4 and O5, so we assume a sensitivity in the radio band of 24\(\mu\)Jy (the observations of the GW170817 afterglow centroid motion reached an RMS of about 8\(\mu\)Jy), and a resolution of 1.5 mas (Ghirlanda et al., 2019). These performances can be achieved also, for example, with the European VLBI Network (EVN)2.
Footnote 2: EVN website
The centroid data set is composed of the same detection times of GW170817, but we adopt fluxes and positions predicted by the model. We assume that the centroid motion is visible if the offset between two data points is above the assumed resolution. Regarding the afterglow light curve, we define an event as detectable if its afterglow peak is above the sensitivity.
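The two selection cuts can be summarized in a short sketch of the detection logic (threshold values as assumed above; the event-level inputs are placeholders for the simulated light curves and centroid tracks):

```python
import numpy as np

SENSITIVITY_MJY = 24e-3     # assumed VLBI-like radio sensitivity, ~24 microJy in mJy
RESOLUTION_MAS = 1.5        # assumed angular resolution in mas

def classify_event(peak_flux_mJy, centroid_offsets_mas):
    """Return (afterglow_detected, centroid_detected) for one simulated event."""
    afterglow_ok = peak_flux_mJy > SENSITIVITY_MJY
    centroid_ok = afterglow_ok and np.max(centroid_offsets_mas) > RESOLUTION_MAS
    return afterglow_ok, centroid_ok

# Example: a GW170817-like event with a ~0.1 mJy peak and a ~2.7 mas total offset.
print(classify_event(0.1, np.array([0.0, 1.2, 2.3, 2.7])))
```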
In the case of O4 (operating from 2023 to 2025), the GW rate of events is \(34^{+78}_{-25}\)yr\({}^{-1}\)(Petrov et al., 2022). We find that 7% of the total have a detectable flux, resulting in a rate of \(2.4^{+5.5}_{-1.8}\)yr\({}^{-1}\) (for the whole sky). Regarding jet centroid observations, we find that only 0.13% of events have a detectable afterglow flux and centroid (see red dots in Fig. 9), in agreement with Mastrogiovanni et al. (2021). This translates into a rate of \(0.05^{+0.11}_{-0.03}\)yr\({}^{-1}\); therefore it is very unlikely that the jet centroid will be measured again during O4.
In the case of the O5 run, which is due after 2027, the predicted GW rate is \(190^{+410}_{-130}\)yr\({}^{-1}\). We find that 6% have a peak flux above the sensitivity, resulting in a rate of \(11^{+25}_{-0.7}\)yr\({}^{-1}\). The jet centroid motion is visible in 0.09% of the cases, leading to a rate of \(0.17^{+0.36}_{-0.12}\)yr\({}^{-1}\). The latter is still a very low rate, with a slightly lower event fraction than in O4, due to the fact that O5 will probe larger distances, at which a detectable centroid motion is very unlikely. The event rate for the joint GW, afterglow light curve and centroid motion detection is slightly larger than in O4, despite the same number of events at distances lower than 100 Mpc (\(\sim 1\)yr\({}^{-1}\) both for O4 and O5). For this reason, we can say that the rate fluctuation is just due to the small number of events.
As is shown in Fig. 9, at large distances we mainly see on-axis or almost on-axis events (with a small \(\theta_{\nu}\)); these events will not have a visible jet motion, as the observer is within (or just outside) the jet's opening angle. This results in a small or null offset, which can hardly be detected with an angular resolution of the order of the \(mas\). However, if the jet has a large viewing angle, the peak of the afterglow will be at low fluxes, not reaching the VLBI sensitivity. Indeed, for the O4 run, the events that have a coincident detectable GW, afterglow light curve and centroid motion are very similar to GW170817 (at small distances and with \(\theta_{\nu}\sim 20\) deg, red dots in Fig. 9).
Figure 8: Same as Fig. 2, but using a Gaussian jet with the addition of a constant flux component at late times.
Figure 9: The dots represent GW events simulated by (Petrov et al., 2022), in the case of the O4 run (Singer, 2021). Depending on their \(\theta_{\nu}\) and \(d_{L}\), we highlight in blue the ones that have a detectable afterglow counterpart in the radio band and in red the ones that have also a detectable centroid motion.
## 5 Conclusions
The estimation of the Hubble constant \(H_{0}\) exploiting GW, also known as the standard sirens method, is a very powerful tool to try to solve the Hubble tension. However, its main issue is the degeneracy between the viewing angle and the luminosity distance of the event, which precludes reaching the level of precision of _Planck_ and SH0ES. In this work, we use this method to estimate \(H_{0}\), with additional constraints that help in breaking this degeneracy. Using Bayesian analysis, we fit simultaneously the EM and GW domains for the event GW170817. The electromagnetic data set includes the broadband afterglow and the centroid motion of the relativistic jet from HST and VLBI observations. From here, we estimate the Hubble constant and we test its robustness depending on the data set used, on the assumed structure of the jet and on the presence of a possible late time flux excess in the afterglow light curve.
A GW-only fit leads to a Hubble constant value of \(H_{0}=77^{+21}_{-10}\,\mathrm{km}\,\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}\) (median, 16th-84th percentiles). The almost 20% error is due to the degeneracy stated above. The latter can be broken exploiting independent EM messengers, like the afterglow light curve and centroid motion, at least in the case of GW170817.
In the GW+AG analysis, we jointly fit the GW data and the afterglow light curve. This fit reduces the \(\theta_{v}-d_{L}\) degeneracy, but gives \(H_{0}=96^{+13}_{-10}\,\mathrm{km}\,\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}\). This high value follows from the low value of the distance (\(d_{L}=31.3^{+3.0}_{-3.6}\) Mpc) and the high value of the viewing angle (\(\theta_{v}=50^{+5}_{-5}\) deg). This behaviour is caused by a possible late time excess in the afterglow flux: these data points are well modelled and are driving the result of the fit. Therefore, for the specific case of GW170817, using only the afterglow as EM counterpart is not enough to get a reliable measurement of \(H_{0}\).
The GW+AG+C fit, instead, joining the GW, the afterglow light curve and the centroid motion, breaks the \(\theta_{v}-d_{L}\) degeneracy and results in \(H_{0}=68.9^{+4.4}_{-4.3}\,\mathrm{km}\,\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}\), which is in agreement with other estimations of this parameter using GW170817 and is about 3 times more precise than the GW-only \(H_{0}\) measurement. This is because of the very strong constraint on the viewing angle given by the afterglow centroid data set. As a consequence, the latter model does not fit the late time flux data points well (even if the residuals are \(\leq 3.5\sigma\)). The viewing angle is \(\theta_{v}=17.8^{+1.3}_{-1.5}\) deg and the distance is \(d_{L}=43.8^{+1.5}_{-1.4}\) Mpc. Thus, in the GW+AG+C fit a small value of \(\theta_{v}\), and consequently a highly collimated jet and a large energy on the jet axis, is preferred. In the GW+AG fit a large \(\theta_{v}\), with a broader profile and less energy on the jet axis, is preferred instead.
The possible excess in the afterglow light curve at late times can be explained as either something missing in the jet model, or as a new emission becoming visible (Troja et al., 2020; Hajela et al., 2019; Piro et al., 2019). In either case, adding a constant flux component to the GW+AG+C model at late times leads to posterior probabilities that are in agreement within \(1\sigma\) with the fit without this constant component (\(H_{0}=67.8^{+4.3}_{-4.2}\) km s\({}^{-1}\)Mpc\({}^{-1}\)), but helps to better fit the late time data. This shows that the model and the GW+AG+C results are robust. Instead, adding this constant flux component to the fit of GW+AG leads to more acceptable values of viewing angle, luminosity distance and Hubble constant: \(\theta_{v}=35.2^{+5.7}_{-6.2}\) deg, \(d_{L}=38.6^{+2.5}_{-3.0}\) Mpc and \(H_{0}=78.5^{+7.9}_{-6.4}\) km s\({}^{-1}\)Mpc\({}^{-1}\). The latter is compatible within \(1\sigma\) with the GW+AG+C fit.
Finally, it seems that the Hubble constant is not influenced by the assumption on the structure of the jet (either Gaussian or power law), at the present level of precision.
The best \(H_{0}\) precision reached with this method is \(4\,\mathrm{km}\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}\), in the case of the GW+AG+C fit. This is not yet good enough to prefer either the _Planck_ or the SH0ES \(H_{0}\). More events are needed to reach their level of precision. However, in the future, we do not expect many events that have coincident detections of GW, afterglow light curve and centroid motion. Using the GW simulations from Petrov et al. (2022), we generate the EM counterparts of more than a thousand binary neutron star events detectable in O4 (Singer, 2021) and O5 (Singer, 2021). With the VLBI image resolution and sensitivity, we estimate that, both for O4 and O5, the rate of joint GW, afterglow light curve and centroid motion detections is a fraction of an event per year.
To conclude, by introducing additional constraints based on astronomical observations, there is the potential to introduce systematic biases that could affect the standard sirens measurements (Govreen-Segal and Nakar, 2023). As we show in this work, the viewing angle in the EM modelling is affected by the type of data set used. For this reason, it is fundamental to include in the analysis all the messengers available: the higher their number, the more robust the results. At the moment, the uncertainty of the standard sirens method is still too large with respect to the early- or late-time Universe \(H_{0}\) measurements, but in the future attention should be paid to avoiding biases in the \(H_{0}\) measurement from the standard-sirens method. For example, in a GW170817-like case, measurements at very late times could confirm the milder decrease of the flux at late times or identify it as a systematic. Regarding jet centroid studies, measurements at both early and late times will be important to constrain its motion and its viewing angle. For these reasons, highly sensitive instruments are needed, like _Athena_(Piro et al., 2022) in the X-rays or SKA (Square Kilometre Array, Braun et al., 2019) in the radio band. In the distant future (mid 2030s), facilities like the Next Generation VLA (ngVLA) will reach a _mas_ resolution (or lower, Beasley et al., 2019), increasing the chances of detecting the motion of the relativistic jet.
## Acknowledgements
We thank Gabriele Bruni for the helpful discussion about radio telescopes present and future performances. We acknowledge support by the European Union horizon 2020 programme under the AHEAD2020 project (grant agreement number 871158). This work has been also supported by ASI (Italian Space Agency) through the Contract no. 2019-27-HH0. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the
Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.16345 | Influence of the interstellar magnetic field and 11-year cycle of solar
activity on the heliopause nose location | Context. The heliosphere is formed by the interaction between the solar wind
(SW) plasma emanating from the Sun and a magnetised component of local
interstellar medium (LISM) inflowing on the Sun. A separation surface called
the heliopause (HP) forms between the SW and the LISM. Aims. In this article,
we define the nose of the HP and investigate the variations in its location.
These result from a dependence on the intensity and direction of the
interstellar magnetic field (ISMF), which is still not well known but has a
significant impact on the movement of the HP nose, as we try to demonstrate in
this paper. Methods. We used a parametric study method based on numerical
simulations of various forms of the heliosphere using a time-dependent
three-dimensional magnetohydrodynamic (3D MHD) model of the heliosphere.
Results. The results confirm that the nose of the HP is always in a direction
that is perpendicular to the maximum ISMF intensity directly behind the HP. The
displacement of the HP nose depends on the direction and intensity of the ISMF,
with the structure of the heliosphere and the shape of the HP depending on the
11-year cycle of solar activity. Conclusions. In the context of the planned
space mission to send the Interstellar Probe (IP) to a distance of 1000 AU from
the Sun, our study may shed light on the question as to which direction the IP
should be sent. Further research is needed that introduces elements such as
current sheet, reconnection, cosmic rays, instability, or turbulence into the
models. | P. Bladek, R. Ratkiewicz | 2023-09-28T11:08:39Z | http://arxiv.org/abs/2309.16345v1 | Influence of the interstellar magnetic field and 11-year cycle of solar activity on the heliopause nose location
###### Abstract
Context:The heliosphere is formed by the interaction between the solar wind (SW) plasma emanating from the Sun and a magnetised component of local interstellar medium (LISM) inflowing on the Sun. A separation surface called the heliopause (HP) forms between the SW and the LISM.
Aims:In this article, we define the nose of the HP and investigate the variations in its location. These result from a dependence on the intensity and direction of the interstellar magnetic field (ISMF), which is still not well known but has a significant impact on the movement of the HP nose, as we try to demonstrate in this paper.
Methods:We used a parametric study method based on numerical simulations of various forms of the heliosphere using a time-dependent three-dimensional magnetohydrodynamic (3D MHD) model of the heliosphere.
Results:The results confirm that the nose of the HP is always in a direction that is perpendicular to the maximum ISMF intensity directly behind the HP. The displacement of the HP nose depends on the direction and intensity of the ISMF, with the structure of the heliosphere and the shape of the HP depending on the 11-year cycle of solar activity.
Conclusions:In the context of the planned space mission to send the Interstellar Probe (IP) to a distance of 1000 AU from the Sun, our study may shed light on the question as to which direction the IP should be sent. Further research is needed that introduces elements such as current sheet, reconnection, cosmic rays, instability, or turbulence into the models.
## 1 Introduction
The interaction of the supersonic solar wind (SW) with the local interstellar medium (LISM) leads to the formation of a cavity in the LISM called the heliosphere, which is filled with the SW plasma. The interaction between the SW plasma outflowing spherically and symmetrically from the Sun and the counter-flowing LISM plasma in a uniform rectilinear motion, and neglecting the magnetic fields in both media, results in the axisymmetric shape of the heliosphere. The axial symmetry of the heliosphere also holds when the direction of the interstellar magnetic field (ISMF) is the same as that of the LISM inflow, if the interplanetary magnetic field (IMF) is neglected. Mathematically speaking, the heliosphere is separated from the LISM by the heliopause (HP), a discontinuity surface that is the pressure equilibrium surface of both media. The supersonic SW slows down before the HP through a shock wave called the termination shock (TS). If the interstellar plasma is also supersonic, it slows down on the other side of the HP through a shock wave known as a bow shock (BS). The area between the TS and the HP is an inner heliosheath (IHS). A layer located between the HP and the BS is an outer heliosheath (OHS), if the BS exists. Otherwise, the OHS lies between the HP and the region where the LISM is disturbed by the heliosphere flowing around the HP. In this way, the SW plasma flows around the inner part of the HP, and the LISM plasma flows around the outer part of the HP (Fig. 1). In an axisymmetric heliosphere, a line running through the centre of the Sun and parallel to the velocity vector of the LISM intersects the surfaces of the TS, HP, and BS at points that are called the noses of the TS, HP, and BS (Fig. 1).
When we include the ISMF \(\mathbf{B_{in}}\), not parallel to the LISM velocity vector \(\mathbf{V_{in}}\), the noses of TS, HP, and BS change positions (Fig. 2). The noses of the TS and HP deflect in one direction and the nose of the BS deflects in the other direction (Fig. 2) (compare with Ratkiewicz et al. 1998, 2000).
The nose of the heliosphere is cited in various contexts in many publications. In some, the nose is identified with the stagnation point of the interstellar medium flow (e.g. Drake et al. (2010); Desai et al. (2015); Dayeh et al. (2019); McComas et al. (2020) and Shrestha et al. (2023)). Many articles refer to the HP nose as 'the nose region of the heliosheath', 'the nose of the heliosphere' (e.g. McComas & Schwadron (2006); Lee et al. (2009); Fisk & Gloeckler (2009, 2014, 2015); Galli et al. (2019); Kornbleuth et al. (2020, 2021a); Opher et al. (2017) and Shrestha et al. (2023)), the nose direction (e.g. Muller et al. (2008); Opher et al. (2013, 2015, 2017, 2021) and Zirstein et al. (2020)) or 'the upwind (nose) direction' (e.g. McComas et al. (2020) and Kornbleuth et al. (2021b)). However, relatively little attention has been paid to the consideration of the location of the heliosphere nose, more precisely the location of the noses of the TS, HP, and BS and the displacement of the nose. This issue was first addressed in the 1980s, using the so-called Newtonian approximation as a model of the heliosphere (see Fig. 2 and Fig. 3 Fahr et al. (1986), Fahr et al. (1988), Ratkiewicz & Banaszkiewicz (1987) and Banaszkiewicz & Ratkiewicz (1989)). Recently, this issue has been revisited in
(Ratkiewicz & Baraniecka, 2023), where the HP nose is defined in Fig. 2. The above articles show that the location of the HP nose depends upon the direction and intensity of the ISMF and always deflects in a direction quasi-perpendicular to the direction of the undisturbed ISMF lines (Ratkiewicz et al., 2000).
In this article, using the definition of the HP nose in Fig. 1 (compare with Fig. 2 in Ratkiewicz & Baraniecka, 2023), we discuss possible configurations of the HP nose, arising under the influence of various ISMF intensities and directions, for the SW velocity during the minimum and maximum of the 11-year cycle of solar activity, (Fig. 3). We show that the HP nose clearly deviates from the LISM inflow direction and is always directed towards the ISMF maximum, just behind the HP.
The paper is organised as follows: in Sect. 2, we describe the simulation method and three sets of boundary conditions; in Sect. 3, we present the results of our modelling; in Sect. 4, we present our conclusions.
## 2 Simulation method and boundary conditions
We used a three-dimensional (3D) magnetohydrodynamic (MHD) model of the interaction between the SW and LISM, as described by Ratkiewicz et al. (2002, 2008), with a number of revisions introduced, including: a magnetised SW, the boundary conditions adjusted to simulate solar cycle effects, and an improved modelling of the neutral hydrogen within
Figure 1: Schematic of the IHS and OHS in the parallel LISM velocity and magnetic field vectors (axisymmetric case). The TS, HP, and BS noses are on one line.
Figure 2: Schematic of the heliosphere and LISM for an ISMF not parallel to the LISM velocity vector. The TS and HP noses deviate in one direction (quasi-perpendicular to the undisturbed \(\mathbf{B_{u}}\)), and the BS nose in the opposite direction (Ratkiewicz et al., 2000).
the constant flux approximation, as described by Strumik & Ratkiewicz (2022). The set of MHD equations in Eq. 1 includes a source term \(S\) on the right-hand side to describe a resonance charge exchange with a constant flux of hydrogen, and a second source term, \(Q\), that maintains the divergence-free magnetic field (Ratkiewicz et al., 2002, 2008).
\[\frac{\partial\mathbf{U}}{\partial t}+\nabla\cdot\hat{\mathbf{F}}=\mathbf{Q}+\mathbf{S} \tag{1}\]
where \(\mathbf{U},\mathbf{Q}\), and \(\mathbf{S}\) are column vectors, and \(\hat{\mathbf{F}}\) is a flux tensor defined as
\[\mathbf{U}=\left[\begin{array}{c}\rho\\ \rho\mathbf{u}\\ \mathbf{B}\\ \rho E\end{array}\right],\quad\hat{\mathbf{F}}=\left[\begin{array}{c}\rho\mathbf{u}\\ \rho\mathbf{u}\mathbf{u}+\mathbf{I}\left(p+\frac{\mathbf{B}\cdot\mathbf{B}}{8\pi}\right)-\frac{\mathbf{B}\mathbf{B}}{4\pi}\\ \mathbf{u}\mathbf{B}-\mathbf{B}\mathbf{u}\\ \rho H\mathbf{u}-\frac{(\mathbf{u}\cdot\mathbf{B})\mathbf{B}}{4\pi}\end{array}\right],\]

\[\mathbf{Q}=-\left[\begin{array}{c}0\\ \frac{\mathbf{B}}{4\pi}\\ \mathbf{u}\\ \frac{\mathbf{u}\cdot\mathbf{B}}{4\pi}\end{array}\right]\nabla\cdot\mathbf{B},\quad\mathbf{S}=\rho\nu_{c}\left[\begin{array}{c}0\\ \mathbf{V_{H}}-\mathbf{u}\\ 0\\ \frac{1}{2}V_{H}^{2}+\frac{3k_{B}T_{H}}{2m_{H}}-\frac{1}{2}u^{2}-\frac{k_{B}T}{(\gamma-1)m_{H}}\end{array}\right].\]
Here, \(\rho\) is the ion mass density, \(p=2nk_{B}T\) is the pressure, \(n\) is the ion number density, \(T\) and \(T_{H}\) (\(T_{H}=const.\)) are the ion and hydrogen atom temperatures, \(\mathbf{u}\) and \(\mathbf{V_{H}}\) (\(\mathbf{V_{H}}=const.\)) are the ion and hydrogen atom velocity vectors, respectively, \(\mathbf{B}\) is the magnetic field vector, \(E=\frac{1}{\gamma-1}\frac{p}{\rho}+\frac{\mathbf{u}\cdot\mathbf{u}}{2}+\frac{\mathbf{B}\cdot\mathbf{B}}{8\pi\rho}\) is the total energy per unit mass, and \(H=\frac{\gamma}{\gamma-1}\frac{p}{\rho}+\frac{\mathbf{u}\cdot\mathbf{u}}{2}+\frac{\mathbf{B}\cdot\mathbf{B}}{4\pi\rho}\) is the enthalpy. The \(\gamma\) is the ratio of specific heats and \(\mathbf{I}\) is the 3 x 3 identity matrix. The charge exchange collision frequency is \(\nu_{c}=n_{H}\sigma u_{s}\), where \(n_{H}\) (\(n_{H}=\text{const}\)) is the hydrogen atom number density, \(\sigma\) is the charge exchange cross-section, and \(u_{s}=\sqrt{(\mathbf{u}-\mathbf{V_{H}})^{2}+\frac{128k_{B}(T+T_{H})}{9\pi m_{H}}}\) is the effective average relative speed of protons and hydrogen atoms, assuming a Maxwellian spread of velocities both for protons and hydrogen atoms. The flows are taken to be adiabatic, with \(\gamma=\frac{5}{3}\). The additional constraint of a divergence-free magnetic field, \(\nabla\cdot\mathbf{B}=0\), in the numerical simulations is accomplished by adding the source term \(\mathbf{Q}\) to the right-hand side of (Eq. 1), which is proportional to the divergence of the magnetic field. Adding \(\mathbf{Q}\) to the right-hand side of (Eq. 1) assures that any numerically generated \(\nabla\cdot\mathbf{B}\neq 0\) is advected with the flow, and allows one to limit the growth of \(\nabla\cdot\mathbf{B}\neq 0\).
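To make the charge-exchange coupling concrete, the minimal sketch below evaluates the collision frequency \(\nu_{c}\) and the momentum and energy entries of the source term \(\mathbf{S}\) exactly as written above, in SI units. The cross-section value and the example inputs are illustrative assumptions and do not correspond to the actual simulation code.

```python
import numpy as np

K_B = 1.380649e-23   # J/K
M_H = 1.6726e-27     # kg, proton / hydrogen atom mass
GAMMA = 5.0 / 3.0

def charge_exchange_source(rho, u, T, n_H, V_H, T_H, sigma=2.0e-19):
    """Momentum and energy source terms of Eq. (1) for one cell.
    `u` and `V_H` are 3-vectors in m/s; `sigma` (m^2) is an assumed
    charge-exchange cross-section."""
    du = np.asarray(V_H, float) - np.asarray(u, float)
    # effective average relative speed of protons and H atoms (Maxwellian spreads)
    u_s = np.sqrt(np.dot(du, du) + 128.0 * K_B * (T + T_H) / (9.0 * np.pi * M_H))
    nu_c = n_H * sigma * u_s                           # collision frequency, 1/s
    s_momentum = rho * nu_c * du                       # kg m^-2 s^-2
    s_energy = rho * nu_c * (0.5 * np.dot(V_H, V_H) + 1.5 * K_B * T_H / M_H
                             - 0.5 * np.dot(u, u) - K_B * T / ((GAMMA - 1.0) * M_H))
    return nu_c, s_momentum, s_energy

# Example: a supersonic SW parcel against the neutral LISM background
print(charge_exchange_source(rho=1.0e-22, u=[4.2e5, 0.0, 0.0], T=1.0e5,
                             n_H=0.11e6, V_H=[-2.64e4, 0.0, 0.0], T_H=6400.0))
```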
In order to thoroughly analyse the movement of the nose HP, we considered three cases:
Case one 'iso without IMF' - numerical simulations are carried out during the period of a maximum of the solar cycle activity for isotropic SW without the IMF.
Case two 'iso' - numerical simulations are carried out during the period of a maximum of the solar cycle activity for isotropic SW with the IMF (see Fig. 3b).
Case three 'anis' - numerical simulations are carried out during the period of a minimum of the solar cycle activity for slow and fast solar wind, taking into account the IMF (see Fig. 3a and 3c).
The LISM parameters are set at the outer boundary, \(5000AU\) from the Sun: velocities and temperatures of ionised and neutral LISM components are equal and \(V_{is}=26.4km/s\), \(T_{is}=6400K\), proton number density \(n_{p}=0.06cm^{-3}\), and neutral hydrogen number density \(n_{H}=0.11cm^{-3}\).
Figure 3: Adapted Fig. 1 (a-d) from McComas et al. (2008), original caption ’(a–c) Polar plots of the solar wind speed, colored by IMF polarity for Ulysses’ three polar orbits colored to indicate measured magnetic polarity. In each, the earliest times are on the left (nine o’clock position) and progress around counterclockwise. (d) Contemporaneous values for the smoothed sunspot number (black) and heliospheric current sheet tilt (red), lined up to match Figures 1a–1c. In Figures 1a–1c, the solar wind speed is plotted over characteristic solar images for solar minimum for cycle 22 (8/17/96), solar maximum for cycle 23 (12/07/00), and solar minimum for cycle 23 (03/28/06). From the center out, we blend images from the Solar and Heliospheric Observatory (SOHO) Extreme ultraviolet Imaging Telescope (Fe XII at 1950 nm), the Mauna Loa K coronameter (700–950 nm), and the SOHO C2 white light coronagraph’.
The SW parameters are set at the inner boundary, \(10AU\) from the Sun. In case one, the velocity is \(V_{sw}=420km/s\), the number density of SW protons is \(n_{p}=0.052cm^{-3}\), and the IMF is neglected. In case two, \(V_{sw}\) and \(n_{sw}\) are the same as in case one, but the IMF is set according to the Parker model as an Archimedean spiral, and its strength at \(1AU\) equals \(35.5\)\(\mu G\). In case three, for the slow SW, the velocity is \(V_{sw}=420km/s\) and the number density is \(n_{p}=0.052cm^{-3}\); for the fast SW, the velocity is \(V_{sw}=798km/s\) and the number density is \(n_{p}=0.027cm^{-3}\); the IMF for the slow and fast SW is the same as in case two.
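For reference, a minimal sketch of how an Archimedean-spiral (Parker) IMF of the kind described above can be scaled from its \(1AU\) value is given below. The functional form is the standard Parker model, but the equatorial geometry, the splitting of the quoted total strength of \(35.5\)\(\mu G\) into components, and the constants are illustrative assumptions rather than the exact implementation used in the simulations.

```python
import numpy as np

AU = 1.496e11          # m
OMEGA_SUN = 2.7e-6     # solar rotation rate, rad/s
V_SW = 420e3           # slow solar wind speed, m/s
B_TOTAL_1AU = 35.5     # total IMF strength at 1 AU, microGauss

def parker_field(r_au, colat=np.pi / 2.0):
    """Radial and azimuthal IMF components (microGauss) of an Archimedean
    spiral at heliocentric distance r_au and colatitude `colat`.
    The 1 AU normalisation is split so that |B(1 AU)| = B_TOTAL_1AU."""
    tan_psi_1au = OMEGA_SUN * AU * np.sin(colat) / V_SW   # spiral angle at 1 AU
    b_r_1au = B_TOTAL_1AU / np.sqrt(1.0 + tan_psi_1au**2)
    b_r = b_r_1au / r_au**2
    b_phi = -b_r * OMEGA_SUN * (r_au * AU) * np.sin(colat) / V_SW
    return b_r, b_phi

# Example: field at the 10 AU inner boundary of the computational domain
br, bphi = parker_field(10.0)
print(f"B_r = {br:.3f} uG, B_phi = {bphi:.3f} uG, |B| = {np.hypot(br, bphi):.3f} uG")
```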
To show the effect of the ISMF intensity and direction on the deviation of the HP nose from the direction of the LISM inflow, we considered different HP configurations for the ISMF intensities of \(2\mu G\), \(3\mu G\), and \(4\mu G\) and for the angle between the LISM velocity vector and the direction of the ISMF vector, called an inclination angle \(\alpha\), \(0^{\circ}\), \(30^{\circ}\), \(60^{\circ}\), and \(90^{\circ}\).
## 3 Results
### The behaviour of the HP in the case of LISM velocity parallel to the ISMF
The HP shape and HP nose for the three cases of intensity of the ISMF with the ISMF direction parallel to the LISM velocity vector are shown in Figs. 3(a), 3(b), and 3(c).
For the ISMF parallel to the LISM velocity, the heliosphere is axisymmetric, so in the x-y and x-z planes it looks the same and in the y-z plane, the isolines form circles with the Sun in the centre (compare with Figs. 2(a), 2(b), 2(c) Ratkiewicz et al. 2000). As shown in Figs. 3(a), 3(b), and 3(c), the characteristic feature for the various ISMF intensities is that the greater the ISMF intensity, the more the ISMF compresses the HP. The HP nose is in each case located at the intersection of the x-axis with the HP (blue dots). On the other hand, the greater the ISMF intensity, the farther the HP nose is from the Sun, so that, in the case of an ISMF parallel to the LISM velocity vector, the greater the ISMF intensity, the farther the HP nose is extended towards the direction of the interstellar medium.
### The behaviour of the HP in the case of LISM velocity perpendicular to the ISMF
The heliosphere for case one, iso without IMF, of the ISMF direction perpendicular to the LISM velocity is shown in the x-y, x-z, and y-z planes in Figs 4(a), 4(b), and 4(c) for an ISMF intensity of \(4\mu G\) to illustrate the shape of the HP and the nose location.
For an ISMF perpendicular to the LISM velocity, the heliosphere (in comparison to the parallel field described above) loses its axial symmetry. In the x-y plane, the nose of the HP, squeezed by the perpendicular lines of the ISMF field, approaches the Sun (see Fig. 4(a)), and the profile of the HP towards its tail increases its distance from the Sun. In the x-z plane, the HP is compressed along the z-axis (see Fig. 4(b)). In the y-z plane (see Fig. 4(c)), the HP has the shape of a flattened circle (compare with Fig. 4 Ratkiewicz et al. 2000). As can be seen in Figs. 4(a), 4(b), and 4(c), the maximum ISMF intensity is in the direction perpendicular to the Sun's line of sight (compare Ratkiewicz et al. 2000).
Figure 6 shows the same results as Fig. 4(a), except for the ISMF intensity, which in Fig. 4(a) is two times greater (\(4\mu G\)) than in Fig. 6 (\(2\mu G\)). This comparison shows the greater compression of the HP nose in Fig. 4(a), which is manifested by a shorter distance of the HP nose and the Sun and by greater distances between the heliosphere surface and the x-axis towards the tail.
Figure 4: HP shape and location of the noses of the HP. Velocity streamlines for inclination angle equal \(0^{\circ}\) are shown in the x-y plane
### The behaviour of the HP in the case of ISMF direction oblique to the LISM velocity
In numerical simulations, two inclination angles from the range \(0^{\circ}<\alpha<90^{\circ}\) are taken into account, namely, \(\alpha=30^{\circ}\) and \(\alpha=60^{\circ}\). In order to determine the offset of the HP nose, cases one through three (see Sect. 2) were first analysed for the angle \(\alpha=60^{\circ}\).
Figures (a)a, (b)b, and (c)c show the deviation of the HP nose from the direction of the LISM velocity and the shape of the HP for the ISMF intensity \(2\mu G\) using the streamlines. To better analyse the behaviour of the HP nose, comparisons with Figs. (a)a, (b)b, and (c)c and Figs. (a)a, (b)b, and (c)c were created, which show the same results as Figs. (a)a, (b)b, and (c)c, but with the use of magnetic fieldlines. It is easy to see that the HP for solar maximum, without an IMF (case one; see Figs. (a)a and (a)a) is greater than for case two with the IMF (see Figs. (b)b and (b)b). The heliosphere for the minimum 11-year cycle of solar activity, with the IMF taken into account, (case three; see Figs. (c)c and (c)c) differs from the previous examples (Figs. (b)b and (b)b). In all three cases, the nose of the HP clearly deviates from the direction of the LISM velocity.
The next sequence of figures ( (a)a, (b)b, (c)c and (a)a, (b)b, (c)c) describes the same results as Figs. (a)a, (b)b, and (c)c for an ISMF intensity of \(3\mu G\) and \(4\mu G\) for \(\alpha=60^{\circ}\). Figures (a)a, (a)a, and (a)a (case one) show that the greater the ISMF intensity, the closer the HP nose is to the Sun and the greater the distance of the heliosphere surface from the x-axis towards the tail. It is clear that case one, where an IMF is not included, is different from case two and case three. A comparison of Figs. (b)b with (c)c, (b)b with (c)c, and (b)b with (c)c reveals a different structure of the heliosphere regardless of the ISMF intensity. A comparison of Figs. (a)a, (b)b, (c)c with (a)a, (b)b, (c)c and with (a)a, (b)b, (c)c shows that the greater the ISMF intensity, the greater the distance of the heliosphere surface from the x-axis towards the tail. However, in each case, the nose of the HP clearly deviates from the direction of the LISM velocity perpendicular to the maximum of the ISMF intensity.
Figure 5: Velocity streamlines (white) and magnetic fieldlines (black) shown for the ISMF magnitude for the inclination angle equal to \(90^{\circ}\) and for ISMF intensity \(4\mu G\), without the IMF
Figure 6: Velocity streamlines (white) and magnetic fieldlines (black) shown for the ISMF magnitude for the inclination angle equal to \(90^{\circ}\) and for ISMF intensity \(2\mu G\), without the IMF, in the x-y plane
Figure 8: Magnetic fieldlines (black) shown for the magnetic field magnitude for the inclination angle \(\alpha=60^{\circ}\) and for ISMF intensity \(2\mu G\)
Figure 7: Velocity streamlines (white) shown for the magnetic field magnitude for the inclination angle \(\alpha=60^{\circ}\) and for ISMF intensity \(2\mu G\)
Figure 10: Magnetic fieldlines (black) shown for the magnetic field magnitude for the inclination angle \(\alpha=60^{\circ}\) and for ISMF intensity \(4\mu G\)
Figure 9: Magnetic fieldlines (black) shown for the magnetic field magnitude for the inclination angle \(\alpha=60^{\circ}\) and for ISMF intensity \(3\mu G\)
The next sequence of figures, (a)a, (b)b, and (c)c, describes case one (iso SW without the IMF) for the ISMF intensity of \(2\mu G\), \(3\mu G\), and \(4\mu G\) for \(\alpha=30^{\circ}\). Figures (a)a, (b)b, and (c)c show that the HP nose for the inclination angle \(\alpha=30^{\circ}\) deviates from the LISM velocity more than for the angle \(\alpha=60^{\circ}\) (compare Figs. (a)a, (a)a, and (a)a) regardless of the ISMF intensity (see Fahr et al. 1986, 1988). Furthermore, the heliosphere shape differs for various ISMF intensities. The next sequence of figures, (a)a and (b)b, describes case two (isotropic SW with the IMF) for the ISMF intensity of \(3\mu G\) and \(4\mu G\) for \(\alpha=30^{\circ}\). In case two, Figs. (a)a and (b)b, with the IMF and the isotropic SW for inclination angle \(\alpha=30^{\circ}\) show differences in the heliosphere structures for the ISMF intensities of \(3\mu G\) and \(4\mu G\). The next sequence of Figs. (a)a and (b)b describes case three (SW with the IMF, anis SW) for the ISMF intensities of \(2\mu G\) and \(4\mu G\) and the inclination angle of \(\alpha=30^{\circ}\). In case three, the Figs. (a)a and (b)b, with the IMF, and the anisotropic SW for an inclination angle of \(\alpha=30^{\circ}\), show even greater differences in the heliosphere structures for the ISMF intensities of \(2\mu G\) and \(4\mu G\).
## 4 Summary and conclusions
The three different models of the heliosphere were created using the 3D MHD numerical program for three different boundary conditions:
1. the interaction of the isotropic SW (maximum of an 11-year solar cycle) with the LISM, without considering the IMF,
2. the interaction of the isotropic SW with the LISM, considering the IMF,
3. the interaction of the anisotropic SW (minimum of an 11-year solar cycle) with the LISM, and considering the IMF.
The purpose of this article is to define the nose of the HP and investigate the differences in its location resulting from a dependence on the intensity and direction of the ISMF, which is still not well known, but which has a significant impact on the HP nose movement, as we have tried to demonstrate in this paper. We explored the differences and similarities between the three models, taking into account the different study setups. We analysed the ISMF direction parallel, perpendicular, and oblique to the LISM velocity vector.
For the ISMF parallel to the LISM velocity, the heliosphere is axisymmetric. What is characteristic in this case is that the greater the ISMF intensity, the more the ISMF compresses the HP. The HP nose is in each case at the intersection of the x-axis with the HP. Simultaneously, the greater the ISMF intensity, the farther the HP nose is from the Sun (see Figs. (a)a, (b)b, and (c)c).
For the ISMF perpendicular to the LISM velocity, the heliosphere loses its axial symmetry. In the x-y plane, the nose of the HP, squeezed by the perpendicular lines of the ISMF field, approaches the Sun, and the profile of the HP towards its tail increases its distance from the Sun. In the x-z plane, the HP is compressed along the z-axis. In the y-z plane, the HP has the shape of a flattened circle. The maximum ISMF intensity is in the direction perpendicular to the Sun's line of sight (see our Figs. (a)a, (b)b, (c)c and (d)d and compare with Ratkiewicz et al. (2000)).
In the case of the oblique ISMF direction to the LISM velocity, when \(\alpha=60^{\circ}\), the HP for solar maximum without an IMF (case one; see Figs. (a)a and (a)a) is greater than for case two with the IMF (see Fig. (b)b and (b)b). The heliosphere for the minimum 11-year cycle of solar activity with the IMF taken into account (case three, see Figs. (c)c and (c)c) differs from the previous cases
Figure 11: Magnetic fieldlines (black) shown for the magnetic field magnitude for the inclination angle \(\alpha=30^{\circ}\), without the IMF, and with iso SW
(Figs. (b)b and (b)b). In all three cases, the nose of the HP clearly deviates from the direction of the LISM velocity.
A comparison of Figs. (a)a, (a)a, and (a)a (case one) clearly shows that the greater the ISMF intensity, the closer the HP nose is to the Sun and the greater the distance of the heliosphere surface from the x-axis towards the tail. Figures (a)a, (b)b, and (c)c show that the HP nose for the inclination angle \(\alpha=30^{\circ}\) deviates from the LISM velocity more than for angle \(\alpha=60^{\circ}\), regardless of the ISMF intensity (see Fahr et al. 1986, 1988). Besides this, the heliosphere shape differs for various ISMF intensities.
The following considerations concern only cases two and three, in which the IMF is included. Comparisons of Figs. (b)b with (c)c, (b)b with (c)c show a different structure of the heliosphere regardless of the ISMF intensity. Comparisons of Figs. (a)a, (b)b, (c)c, with (a)a, (b)a, (b)a, and with (a)a, (b)b, (c)c show that the greater the ISMF intensity, the greater the distance of the heliosphere surface from the x-axis towards the tail. In each case, the nose of the HP clearly deviates from the direction of the LISM velocity perpendicular to the maximum of the ISMF intensity.
Figures (a)a and (b)b, with the IMF, and with isotropic SW (case two), for an inclination angle of \(\alpha=30^{\circ}\), show different structures of the heliosphere for the ISMF intensities of \(3\mu G\) and \(4\mu G\). Figures (a)a and (b)b, with the IMF, and with anisotropic SW (case three) for the ISMF intensities of \(2\mu G\) and \(4\mu G\) and an inclination angle of \(\alpha=30^{\circ}\) present even more different structures of the heliosphere.
The analysis carried out in this article for the three simplest cases of heliosphere models leads to the conclusion that the HP nose defined in the works of Fahr et al. (see 1986, 1988) and Ratkiewicz & Baraniecka (2023), and this article, is always located in the direction perpendicular to the maximum intensity of the ISMF. The displacement of the HP nose depends upon the direction and intensity of the ISMF, with the structure of the heliosphere and the shape of the HP, depending on the 11-year cycle of solar activity. The discussion on the nose of the HP also
Figure 12: Magnetic fieldlines (black) shown for the magnetic field magnitude for the inclination angle \(\alpha=30^{\circ}\), with the IMF, and with iso SW
Figure 13: Magnetic fieldlines (black) shown for the magnetic field magnitude for the inclination angle \(\alpha=30^{\circ}\), with the IMF, and with anis SW
revealed the richness of the structures and shapes of the heliosphere.
In the context of the planned space mission to send the IP to a distance of 1000\(AU\) from the Sun for the first time in human history (Brandt et al., 2022), our study may shed light on the question of which direction to send the IP. Based on these results, the conclusion arises that further research should continue through the introduction of issues such as current sheet, reconnection, cosmic rays, instability, or turbulence into the models (Kornbleuth et al. (2021); McComas & Schwadron (2006); McComas et al. (2008); Opher et al. (2017, 2021, 2023) and Strumik & Ratkiewicz (2022)).
We emphasise, however, that the constant velocity, temperature, magnetic field intensity and direction, and plasma and neutral-atom densities adopted so far as boundary conditions in the LISM are commonly accepted values, although their true values are not known with certainty (Strumik & Ratkiewicz, 2022). The results from all MHD heliosphere models are far from realistic, since the MHD approach has significant simplifications. In our opinion, taking into account the above statements, as well as the phenomena and processes mentioned earlier, and the fact that the interstellar field can be structured and heterogeneous, the task to be solved requires more sophisticated numerical programs based on the use of artificial intelligence techniques that we intend to apply in our future work.
## 5 Acknowledgments
The authors thank Marek Strumik for his support with numerical calculations. Calculations were carried out at the Centre of Informatics, Tricity Academic Supercomputer & Network. The authors also thank Hans Fahr for reviewing the article and for his constructive remarks and suggestions, which increased its quality.
|
2302.14620 | Pragmatical physics-based model for ladle lifetime prediction | In this paper we develop a physics-based model for the erosion of lining in
steel ladles. The model predicts the temperature evolution in the liquid slag,
steel, refractory bricks and outer steel casing. The flows of slag and steel is
due to forced convection by inert gas injection, vacuum treatment (extreme
bubble expansion), natural convection and waves caused by the gas stirring. The
lining erosion is due to dissolution of refractory elements into the steel or
slag. The mass and heat transfer coefficients inside the ladle, during gas
stirring, is modeled based on wall functions which take the distribution of
wall shear velocities as a critical input. The wall shear velocities are
obtained from CFD (Computational Fluid Dynamics) simulations for sample of
scenarios, spanning out the operational space, and using curve fitting a model
could be built. The model is capable of reproducing both thermal evolution and
erosion evolution well. Deviations between model predictions and industrial
data are discussed. The model is fast and has been tested successfully in a
"semi-online" application. The model source code is made available to the
public on https://github.com/SINTEF/refractorywear. | Stein Tore Johansen, Bjørn Tore Løvfall, Tamara Rodriguez Duran | 2023-02-28T15:01:23Z | http://arxiv.org/abs/2302.14620v1 | ## 1 Pragmatical physics-based model for ladle lifetime prediction
By Stein Tore Johansen\({}^{\star}\), Bjørn Tore Løvfall\({}^{\star}\) and Tamara Rodriguez Duran\({}^{\text{2}}\)
_Abstract_
In this paper we develop a physics-based model for the erosion of lining in steel ladles. The model predicts the temperature evolution in the liquid slag, steel, refractory bricks and outer steel casing. The flows of slag and steel is due to forced convection by inert gas injection, vacuum treatment (extreme bubble expansion), natural convection and waves caused by the gas stirring. The lining erosion is due to dissolution of refractory elements into the steel or slag. The mass and heat transfer coefficients inside the ladle, during gas stirring, is modelled based on wall functions which take the distribution of wall shear velocities as a critical input. The wall shear velocities are obtained from CFD (Computational Fluid Dynamics) simulations for sample of scenarios, spanning out the operational space, and using curve fitting a model could be built. The model is capable of reproducing both thermal evolutions and erosion evolution well. Deviations between model predictions and industrial data are discussed. The model is fast and has been tested successfully in a "semi-online" application. The model source code is made available to the public on [https://github.com/SINTEF/refractorywear](https://github.com/SINTEF/refractorywear).
## Introduction
In the steel industry ladles are frequently used to keep, process or transport steel. Ladles are typically designed to hold metal masses ranging from 80 to 300 tons (Figure 1). The melt typically consists of high temperature liquid steel and some slag, which, when interacting with the inner wall of the ladle, will harm the wall integrity and lead to significant wear. In order to reduce the wear, temperature resistant and chemically resistant refractory bricks are applied to build an inner barrier, typically three layers of wear bricks (inner lining), which should last for a long time in contact with the liquid steel and at the same time protect the ladle from showing hot areas. In this paper we will address the inner lining erosion at a Sidenor plant. In this case the ladle application is Secondary Metallurgy (SM).
During SM many processes may be going on. The SM ladles have a porous plug installed at the bottom. Gas (Ar or N) injected through the plug is responsible for liquid steel stirring. The rising flow of the liquid steel promotes the inclusion decantation from the steel to the slag, and homogenizes the temperature and chemical composition.
The main objective of the SM is to obtain the correct chemical composition and a sufficient temperature for the casting process. In addition, there are several important tasks which must be completed during the SM, as
for example inclusion and gas removal. In order to reach these objectives, Sidenor has a SM mill consisting of two Ladle furnaces (LF) and a Vacuum Degasser (VD). Each of the LFs has three electrodes, which are responsible for heating the slag, steel and ferro-additions. The ladle contains the steel and the slag throughout the production process, from the EAF to the end of the casting process. The liquid steel has a temperature of around 1700 K in the ladle, and it is covered with slag. The slag prevents contact between the steel and the atmosphere, has a lower density than steel and consists basically of lime and oxide elements. The slag conditioning can be improved during the SM by adding slag-formers.
In order to handle the liquid steel and slag with such high temperature, the ladle is built with a strong outer steel shell whose inside is covered with layers of insulating materials (refractory). The refractory is made of ceramic and its most important properties are:
1. the ability to handle the high temperature
2. favorable thermal properties
3. high resistance against erosion when in contact with liquid steel and slag
The inner layer of refractory bricks, which is in contact with the liquid steel, is eroded by the interaction with the hot metal and the slag. Each heat erodes away the refractory bricks, and after several heats, they are so eroded that it is not safe to use the ladle one more time. The refractory is visually checked after each heat and, depending on its state, the ladle may be used for one more heat, put aside for repair or demolished. In case of repair, the upper bricks of the ladle, which are more eroded due to slag chemical impact, will be
Figure 1: Left: Sketch of cross section of a typical steel ladle, with wear refractory bricks, permanent lining (between wear bricks and steel casing), steel casing, bottom bricks, bottom plug for bottom gas blowing and slide gate for transfer into casting undish. Right: Hot ladle that has been in use and is waiting for the next heat. Maximum steel capacity is around 150 tons.
replaced by new ones. Once the ladle is repaired, it is taken back into production. Later, based on continuing visual inspection, the ladle may be deemed ready for demolition. In this case, the entire inner lining is removed and relined with new bricks.
One important goal for Sidenor is to reduce the refractory costs by finding new methods for extending the refractory life. One of the key points is to use the same ladle for more heats without compromising safety, but another important issue is to better understand the mechanisms that drive the refractory erosion, in order to avoid the worst working practices as much as possible and thus extend the working life.
#### Target for the development
The main goal is to develop a model whose results can help to decide whether the ladle can work one more heat safely. The model should exploit both historic and current production data. Combined with the knowledge of the operators, the model could then be exploited and contribute cognitive elements to the decision.
In addition, the model should give information about which parameters dominates the ladle refractory erosion and give tips about which precautions may be taken to extend the refractory lifetime.
### Previous works on ladle lining erosion
In the past many works have been published dealing with properties of refractory bricks ((Mahato et al., 2014),(Wang et al., 2015)), advising on improvements to produce high quality bricks. A more general review of MgOC refractories was given by Kundu and Sarkar (Kundu and Sarkar, 2021).
The corrosion-erosion mechanisms have been studied in a few papers ((Kasimagawa et al., 2014), (Jansson, 2008), (Mattila et al., 2002), (Huang et al., 2013), (LMMGROUP, 2020), (Zhu et al., 2018)). In the opinion of these authors, the most thorough approach was given by Zhu et al. (Zhu et al., 2018). Bai et al. (Bai et al., 2022) investigated the impact of slag penetration into the MgOC bricks.
In order to predict erosion of the refractory, both temperatures, fluid compositions and mass transfer mechanisms must be in place. The heat balance was studied in some specialized works ((Camdali and Tunc, 2006),(Glaser et al., 2011), (Zimmer et al., 2008), (Duan et al., 2018), and (Zhang et al., 2009)). The effects of slag composition were studied in multiple works ((Bai et al., 2022; Jansson, 2008; Kasimagawa et al., 2014; Mattila et al., 2002; Sarkar et al., 2020; Sheshukov et al., 2016; Zhu et al., 2018)). A critical step in developing prediction models is the local mass transfer between lining and slag/metal. This mass transfer has thus far been treated by semiempirical models ((Wang et al., 2022), (Wang et al., 2022)). In one work 3D computational fluid dynamics was applied (Wang et al., 2022), where predictions seem to agree with observations. However, they did not report the diffusivities used in their model, and the underlying erosion-corrosion models were empirical and tuned to the data. It was found that these tuning factors would depend on the operating conditions.
In the industry the refractory wear is known to be a result of i) thermal stresses, ii) dissolution of the refractory bricks into slag/metal, and iii) dissolution of the binder materials into slag/metal. In addition, mechanical stresses imposed on the refractory during cleaning operations will impact erosion and lifetime. In addition to these multiple phenomena, several others are involved (WanHaoRefractory, 2023).
The impact of thermal stresses will be most severe at the bottom of the ladle when hot steel meets colder refractory. As the velocity of the metal at the moment of impact is high, this is where we expect the maximum thermal stresses. The colder the ladle wall is when it meets hot steel at high speed, the larger the risk of crack formation on the ladle wall.
It must be noted that the time between heats has significant effects on thermo-stress induced erosion. The temperature distribution on the ladle refractory wall at the filling time is an important parameter that can be predicted using the model to be presented below. However, the addition of a heating burner at the ladle waiting station is not included for now.
The pragmatism-based approach to a model for ladle lining erosion
We have previously defined a methodology "Pragmatism in Industrial Modelling" (J. Zoric et al., 2015; Johansen and Ringdalen, 2018; Johansen, Stein Tore et al., 2017; Zoric, Josip et al., 2015) which is especially suited for developing fast and sufficiently accurate industrial models. In a twin paper (Johansen et al., 2023) we have outlined the methodology that was applied in this work and the learning that may be exploited in future projects. Here we explain the details of the physics-based model.
The objective of the model is to be able to advise or support operators in assessing whether it is safe to use the refractory in one more heat. In such an application, the erosion state of the refractory must be updated from heat to heat and a simulation of the next virtual heat can be performed. The virtual heat should then contain as much information as possible about the next heat. The result of such a simulation, together with visual or optical inspection of the lining, would then lay the foundation for the final assessment.
#### Model simplifications and assumptions
The pragmatic model must be fast as we wish to simulate a transient ladle operation, lasting in the order of two hours, in less than a minute. This is critical as we wish to simulate all ladle operations within a year in a few hours in order to be applied directly in production, do tuning, or do parameter sensitivity analysis.
Figure 2 gives some ideas about the phenomena involved. The heating elements (electrodes) can be submerged in the slag, or work from above. They produce electric arcs that heat the liquid steel. The flow of the slag and liquid steel is not only a function of the gas flow rate applied for blowing, but also is influenced by several effects, such as the mass of steel and slag, vacuum pressure and the thermo-physical properties of the fluids.
The ladles are 3D objects, but due to speed requirements some overall model simplifications were done:
* Model is 2D (cylinder symmetrical) with the porous bottom plug placed in the center. As a consequence, we assume that the gas/steel/slag flows can be seen as rotationally symmetric
* The stirring gas is inert (only provides mixing)
* In the side walls only the radial heat balance is included
* In the bottom only vertical heat balance is included
* Solubility of MgO in the slag and solubility of C in the steel are assumed constant.
* The metal and the slag phases are stratified and are assumed to be internally perfectly mixed. The phases exchange mass and energy with each other and the refractory
* Above the slag energy is exchanged by radiation only
* Refractory erosion due to thermomechanical stresses is not considered
Volumes and mass balances
As the model will compute situations with different amounts of steel and slag in the ladle, we have to take into account all these possible situations. Now the total volume of the slag and metal is represented by
\[V_{tot}=V_{steel}+V_{slag}=\alpha_{steel}V_{tot}+\alpha_{slag}V_{tot} \tag{1}\]
Accordingly, the mass of liquids inside the ladle is:
Figure 2: Idealized, simplified ladle, with slag (red), metal(blue), gas bubbles, heating elements and refractory (brown)
\[M_{tot}=\rho_{steel}\alpha_{steel}V_{tot}+\rho_{slag}\alpha_{slag}V_{tot} \tag{2}\]
In our first approach, we neglect the volumes of the protruding impact element at the bottom and the volumes modified by eroded bricks. In this case, we have that the height of the metal-slag interface is positioned at height
\[H_{sm}=\alpha_{steel}V_{tot}\ /\left(\pi R^{2}\right), \tag{3}\]
and the thickness of the slag layer is:
\[H_{g-sm}=\alpha_{slag}V_{tot}\ /\left(\pi R^{2}\right) \tag{4}\]
The mass balance for the ladle must also be respected. That is for the slag
\[\frac{dM_{slag}}{dt}=\dot{M}_{slag,EAF}-\dot{M}_{slag,tapped}+\sum_{k=1}^{N_{slag}}\dot{m}_{slag,k} \tag{5}\]
Here \(\dot{M}_{slag,EAF}\) and \(\dot{M}_{steel,EAF}\) are the transient mass flow rates of slag and steel coming into the ladle during tapping from the EAF. \(\dot{m}_{slag,k}\) is the mass flow rate of added slag former of type k. Typically a slag former of type k, with total mass \(m_{slag,k}\), can be assumed to be added during one numerical time step, between time \(t^{n}\) and \(t^{n+1}\), such that
\[M_{slag}^{n+1}=M_{slag}^{n}+\Delta t\left(\dot{M}_{slag,EAF}-\dot{M}_{slag,tapped}\right)+m_{slag,k} \tag{6}\]
For the metal we have:
\[\frac{dM_{steel}}{dt}=\dot{M}_{steel,EAF}-\dot{M}_{steel,tapped}+\sum_{k=1}^{N_{alloy}}\dot{m}_{alloy,k} \tag{7}\]
\(\dot{M}_{slag,tapped}\) and \(\dot{M}_{steel,tapped}\) are the transient mass flow rates of slag and steel tapped out of the ladle. Similarly, \(\dot{m}_{alloy,k}\) is the mass flow rate of added alloy of type k. As for the slag, an alloy of type k, with total mass \(m_{alloy,k}\), can be assumed to be added during one numerical time step, between time \(t^{n}\) and \(t^{n+1}\), such that
\[M_{steel}^{n+1}=M_{steel}^{n}+\Delta t\left(\dot{M}_{steel,EAF}-\dot{M}_{steel,tapped}\right)+m_{alloy,k} \tag{8}\]
Based on the equations [5]-[8], the phase densities, the purge gas fractions present in each phase, and corrections for the eroded ladle radius, we can compute the transient interface position for the metal and slag interfaces.
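A minimal sketch of this bookkeeping, assuming an explicit update per time step as in equations [6] and [8] and the flat-interface expressions [3]-[4], could look as follows. The function and variable names are illustrative and the example numbers are placeholders, not plant data.

```python
import numpy as np

def advance_masses(m_slag, m_steel, dt,
                   mdot_slag_eaf=0.0, mdot_slag_tapped=0.0,
                   mdot_steel_eaf=0.0, mdot_steel_tapped=0.0,
                   added_slag_formers=0.0, added_alloys=0.0):
    """Explicit update of the slag and steel masses over one time step (eqs. [6], [8])."""
    m_slag_new = m_slag + dt * (mdot_slag_eaf - mdot_slag_tapped) + added_slag_formers
    m_steel_new = m_steel + dt * (mdot_steel_eaf - mdot_steel_tapped) + added_alloys
    return m_slag_new, m_steel_new

def interface_heights(m_slag, m_steel, rho_slag, rho_steel, radius):
    """Metal-slag interface height and slag layer thickness (eqs. [3]-[4]),
    neglecting the bottom impact pad and eroded-brick volume corrections."""
    area = np.pi * radius**2
    h_sm = (m_steel / rho_steel) / area     # steel column height
    h_gsm = (m_slag / rho_slag) / area      # slag layer thickness
    return h_sm, h_gsm

# Example: one 10 s step during tapping from the EAF into an empty ladle
m_slag, m_steel = advance_masses(0.0, 0.0, dt=10.0,
                                 mdot_slag_eaf=5.0, mdot_steel_eaf=500.0)
print(interface_heights(m_slag, m_steel, rho_slag=3000.0, rho_steel=7000.0, radius=1.5))
```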
Thermal model
_Ladle walls_
The ladle side wall is built with a number of radial layers, as shown in Figure 3. Next, we let the numerical grid, as seen in the figure, represent each vertical layer of wear bricks, and stack multiple layers on top of each other to represent the entire side wall of the ladle. The colors in Figure 3 represent different properties of the materials. The bottom part of the refractory is built of a stack of disks, which may also be represented by Figure 3, but now rotated 90 degrees clockwise.
In this manner, the numerical grid for the ladle wall and casing temperature will consist of one one-dimensional grid (here 7 cells) for the bottom and N one-dimensional grids for the vertical wall (Nx7 cells). For the horizontal and radial heat balance we have
\[\frac{\partial}{\partial t}\left(\rho C_{p}T^{w}\right)_{i}=\frac{\partial}{ r\partial r}\left(\lambda r\frac{\partial T^{w}}{\partial r}\right) \tag{9}\]
Equation [9] is discretized for each layer according to
\[\begin{array}{l}\underbrace{2\pi\Delta x_{i}r_{k}\Delta y_{k}}_{\Delta V}\left(\rho C_{p}\right)\frac{T_{i,k}^{w,n+1}-T_{i,k}^{w,n}}{\Delta t}=\\ 2\pi\Delta x_{i}\left(\lambda_{k}^{+}r_{k}^{+}\left(T_{i,k+1}^{w,n+1}-T_{i,k}^{w,n+1}\right)-\lambda_{k}^{-}r_{k}^{-}\left(T_{i,k}^{w,n+1}-T_{i,k-1}^{w,n+1}\right)\right)\end{array} \tag{10}\]
Above, + and - represent the values at the positive and negative sides of the cell face. \(\Delta x_{i}\) is the vertical height of the grid cell at level i, while \(r_{k}\) is the radial position of the cell. We use harmonic averages for the cell-face thermal conductivities \(\lambda_{k+1}^{-}\) and \(\lambda_{k}^{+}\)
\[\lambda_{k}^{+}=\lambda_{k+1}^{-}=\frac{2\lambda_{k+1}}{\Delta y_{k+1}}\frac{ 2\lambda_{k}}{\Delta y_{k}}/\left(\frac{2\lambda_{k+1}}{\Delta y_{k+1}}+\frac {2\lambda_{k}}{\Delta y_{k}}\right) \tag{11}\]
and where \(r_{k}\) is defined according to Figure 3:
\[r_{k}^{+}\equiv r_{k+1}^{-}=r_{k}+\Delta y_{k}/2\equiv r_{k+1}-\Delta y_{k+1}/2 \tag{12}\]
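A minimal sketch of the face quantities in equations [11]-[12] is given below; the names are illustrative only.

```python
def face_conductance(lam_k, lam_kp1, dy_k, dy_kp1):
    """Harmonic average of the half-cell conductances 2*lambda/dy on both
    sides of a cell face, as in equation [11]."""
    g_k = 2.0 * lam_k / dy_k
    g_kp1 = 2.0 * lam_kp1 / dy_kp1
    return g_k * g_kp1 / (g_k + g_kp1)

def face_radius(r_k, dy_k):
    """Radial position of the positive face of cell k, equation [12]."""
    return r_k + 0.5 * dy_k
```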
In the cell contacting the hot liquid steel and slag (k=1) we have
\[\underbrace{2\pi\Delta x_{i}r_{1}\Delta y_{1}}_{\Delta V}\left(\rho C_{p}\right)\frac{T_{i,1}^{w,n+1}-T_{i,1}^{w,n}}{\Delta t}=2\pi\Delta x_{i}\left(\begin{array}{c}\lambda_{1}^{+}r_{1}^{+}\left(T_{i,2}^{w,n+1}-T_{i,1}^{w,n+1}\right)-\\ r_{1}^{-}\left(\begin{array}{c}\alpha_{i,metal}\tilde{h}_{i}^{metal,inner}\left(T_{metal}-T_{i,1}^{w}\right)+\\ \alpha_{i,slag}\tilde{h}_{i}^{slag,inner}\left(T_{slag}-T_{i,1}^{w}\right)+\\ \alpha_{i,gas}\tilde{h}_{i}^{gas,inner}\left(T_{lid}-T_{i,1}^{w}\right)\end{array}\right)\end{array}\right)\] [13]
In eq. [13]\(\alpha_{i,metal}\), \(\alpha_{i,slag}\) and \(\alpha_{i,gas}\) are the local volume fractions of the phases contacting the element \(\Delta x_{i}\) at a given time.
\[\tilde{h}_{i}^{metal,inner}=\frac{2\lambda_{i,1}}{\Delta y_{1}}\tilde{h}_{i}^ {metal,flow}\ /\left(\frac{2\lambda_{i,1}}{\Delta y_{1}}+\tilde{h}_{i}^{metal, flow}\right)\] [14]
\[\tilde{h}_{i}^{slag,inner}=\frac{2\lambda_{i,1}}{\Delta y_{1}}\tilde{h}_{i}^{ slag,flow}\ /\left(\frac{2\lambda_{i,1}}{\Delta y_{1}}+\tilde{h}_{i}^{slag, flow}\right)\] [15]
\[\tilde{h}_{i}^{gas,inner}=\frac{2\lambda_{i,1}}{\Delta y_{1}}\tilde{h}_{i}^{ rad}\ /\left(\frac{2\lambda_{i,1}}{\Delta y_{1}}+\tilde{h}_{i}^{rad}\right)\approx \frac{2\lambda_{i,1}}{\Delta y_{1}}\] [16]
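The series combination in equations [14]-[16] may be written as a small helper; this is a sketch with our own naming, not the released implementation.

```python
def inner_htc(lam_1, dy_1, h_flow):
    """Effective inner heat transfer coefficient: the half-cell wall
    conductance 2*lambda/dy in series with a fluid-side coefficient,
    as in equations [14]-[16]."""
    g_wall = 2.0 * lam_1 / dy_1
    return g_wall * h_flow / (g_wall + h_flow)
```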
Here the external temperature is given by \(T_{EXT}\). The radiation heat transfer coefficient is given by
\[\tilde{h}_{i}^{rad}=\sigma\varepsilon_{i}\left(T_{i,1}^{w\,2}+T_{EXT}^{\,2}\right)\left(T_{i,1}^{w}+T_{EXT}\right)\] [17]
and where the wall temperature is further approximated by the temperature in the near-wall cell at the previous time step:
\[\tilde{h}_{i}^{rad}=\sigma\varepsilon_{i}\left((T_{i,1}^{w,n})^{2}+T_{EXT}^{\,2}\right)\left(T_{i,1}^{w,n}+T_{EXT}\right)\] [18]
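A possible helper for the linearized radiation coefficient of equations [17]-[18] is shown below (temperatures in kelvin; the names are ours):

```python
STEFAN_BOLTZMANN = 5.670e-8  # [W/(m^2 K^4)]

def h_rad(eps, t_wall, t_ext):
    """Linearized radiation heat transfer coefficient, equations [17]-[18].
    The wall temperature is taken from the previous time step, which keeps
    the coefficient explicit in the time integration."""
    return STEFAN_BOLTZMANN * eps * (t_wall ** 2 + t_ext ** 2) * (t_wall + t_ext)
```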
Figure 3: Element of the refractory where the transient thermal heat balance is addressed.
For the outer wall at \(\ y_{N}=y_{7}\) (steel casing) we have:
\[\underbrace{2\pi\Delta x_{i}r_{N}\Delta y_{N}}_{\Delta V}\left(\rho C_{p}\right)\frac{T_{i,N}^{w,n+1}-T_{i,N}^{w,n}}{\Delta t}=2\pi\Delta x_{i}\left(r_{N}^{+}\tilde{h}_{i}^{ext}\left(T_{EXT}-T_{i,N}^{w}\right)-\lambda_{N}^{-}r_{N}^{-}\left(T_{i,N}^{w,n+1}-T_{i,N-1}^{w,n+1}\right)\right) \tag{19}\]
Here the external heat transfer coefficient is estimated as a sum of natural convection and radiation. The convective external heat transfer coefficient \(h_{NC}\) is given by equation [104], using the properties of air. The dimension used in the convective model should be half the height of the ladle standing upright. The effective external heat transfer coefficient is then
\[\tilde{h}_{i}^{ext}=\sigma\varepsilon_{casing}\left(T_{i,N}^{w\,2}+T_{EXT}^{\,2}\right)\left(T_{i,N}^{w}+T_{EXT}\right)+h_{NC} \tag{20}\]
When the ladle is located inside a cabinet, i.e. kept inside a compartment with external walls, the effective heat emissivity in equation [20] can be multiplied by a factor of 0.5.
It should be noted that the external heat transfer coefficients must be adjusted to the situation the ladle experiences (melt refining, transport to the casting station, casting, transport to the waiting station, waiting). If the external heat transfer conditions vary between the different events, this must be handled in an appropriate manner such that we can tune the model to obtain a realistic thermal history for the ladle.
The model for the bottom energy is completely analogous to what is described above, but now with the discrete equation
\[\begin{split}&\underbrace{\pi R^{2}\Delta x_{m}}_{\Delta V}\left( \rho C_{{}_{P}}\right)\frac{T_{m}^{b,n+1}-T_{m}^{b,n}}{\Delta t}=\\ &\pi R^{2}\left(\lambda_{m}^{+}\left(T_{m+1}^{b,n+1}-T_{m}^{b,n+1 }\right)-\lambda_{m}^{-}\left(T_{m}^{b,n+1}-T_{m-1}^{b,n+1}\right)\right) \end{split} \tag{21}\]
Here R is the inner radius of the ladle. For the element close to the liquid steel (we assume that steel flows into the bottom at time = 0.0 sec) we have:
\[\begin{split}&\underbrace{\pi R^{2}\Delta x_{m=NM}}_{\Delta V}\left(\rho C_{p}\right)\frac{T_{m=NM}^{b,n+1}-T_{m=NM}^{b,n}}{\Delta t}=\\ &\pi R^{2}\left(\tilde{h}_{steelflow-bottom}\left(T_{steel}-T_{NM}^{b,n+1}\right)-\lambda_{NM}^{-}\left(T_{NM}^{b,n+1}-T_{NM-1}^{b,n+1}\right)\right)\end{split} \tag{22}\]
\[\tilde{h}_{steelflow-bottom}=\frac{2\lambda_{NM}}{\Delta x_{NM}}\tilde{h}^{metal,flow}\,/\left(\frac{2\lambda_{NM}}{\Delta x_{NM}}+\tilde{h}^{metal,flow}\right) \tag{23}\]
_For the bottom element (steel shell) we have:_
\[\underbrace{\pi R^{2}\Delta x_{m=1}}_{\Delta V}\left(\rho C_{p}\right)\frac{T_{m=1 }^{b,n+1}-T_{m=1}^{b,n}}{\Delta t}=\pi R^{2}\left(\lambda_{1}^{+}\left(T_{2}^{ b,n+1}-T_{1}^{b,n+1}\right)-\widetilde{h}_{bottom}\left(T_{1}^{b,n+1}-T_{ EXT}\right)\right)\] [24]
where we estimate
\[\widetilde{h}_{bottom}\approx 10.0\ \mathrm{W/(m^{2}K)}\] [25]
_Radiation - Wall temperatures and heat transfer above the slag/metal_
Above the liquid phase the refractory will only see the top lid, the other parts of the wall, and the metal surface. We will assume that the top lid is adiabatic, such that no energy is drained out through the lid. We now have to assess the radiation transfer between the different inner wear bricks and the top surface of the slag/metal. The radiation flux out from a surface with emissivity \(\varepsilon_{p}\) and temperature \(T_{p}\) is given by
\[q_{rad}=\varepsilon_{p}\sigma T_{p}^{4}\] [26]
The radiation heat flow from surface elements \(A_{1}\) to \(A_{2}\) is given by ("View factor," 2022)
\[\dot{Q}_{1\to 2}=\varepsilon_{1}\sigma T_{1}^{4}\int\limits_{A_{1}}\int\limits_{A_{2}}\frac{\cos\theta_{1}\cos\theta_{2}}{\pi s^{2}}\,dA_{2}\,dA_{1}\] [27]
The geometrical configuration is seen from Figure 4.
The radiation heat flow from \(A_{2}\) to \(A_{1}\) is then
\[\dot{Q}_{2\to 1}=\varepsilon_{2}\sigma T_{2}^{4}\int\limits_{A_{1}}\int\limits_{A_{2}}\frac{\cos\theta_{1}\cos\theta_{2}}{\pi s^{2}}\,dA_{2}\,dA_{1}\] [28]
The heat flow between the two surfaces A\({}_{1}\) and A\({}_{2}\) can be given by (Goodman, 1957):
\[\dot{Q}_{2\to 1}=-\dot{Q}_{1\to 2}=\frac{\left(T_{2}^{4}-T_{1}^{4}\right)}{1/\varepsilon_{2}+1/\varepsilon_{1}-1}\,\sigma\int\limits_{A_{1}}\int\limits_{A_{2}}\frac{\cos\theta_{1}\cos\theta_{2}}{\pi s^{2}}\,dA_{2}\,dA_{1}\] [29]
Based on equations [26]-[29], the surface normal vectors \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\), and the vector connecting the area elements \(dA_{1}\) and \(dA_{2}\), all radiation heat flows can be computed. These are \(\dot{Q}_{w,m\to slag-metal}\) (from brick number m to the slag-metal interface), \(\dot{Q}_{w,m\to ceiling}\) (from brick number m to the ceiling), and \(\dot{Q}_{slag-metal\to ceiling}\) (from the slag-metal interface to the ceiling). The direct radiation between bricks was ignored. The radiation from the slag-metal interface must respect that the slag only covers a fraction \(\alpha_{slag}\) of the total free surface area. Hence, the radiation temperature \(T_{slag-metal}^{4}\) is replaced by:
\[T_{{}_{\mathit{slag-metal}}}^{4}=\alpha_{{}_{\mathit{slag}}}T_{{}_{\mathit{ slag}}}^{4}+(1-\alpha_{{}_{\mathit{slag}}})T_{{}_{\mathit{metal}}}^{4}\] [30]
It is further assumed that the ceiling (ladle lid) is adiabatic and that the slag and metal are well mixed. However, for the refractory bricks the thermal conduction heat flux into the inner wall-surface brick and the net radiation flux must balance. The surface temperature of the wall bricks is then given as
\[T_{wall-surface,k}\approx\frac{\frac{2\lambda_{k}}{\Delta y_{k}}T_{wall,k}^{n}+T_{slag-metal}^{n}\,\Psi\left(T_{wall,k}^{2,n}+T_{slag-metal}^{2,n}\right)\left(T_{wall,k}^{n}+T_{slag-metal}^{n}\right)}{\Psi\left(T_{wall,k}^{2,n}+T_{slag-metal}^{2,n}\right)\left(T_{wall,k}^{n}+T_{slag-metal}^{n}\right)+\frac{2\lambda_{k}}{\Delta y_{k}}}\] [31]
This illustrates the fact that it is the surface temperature that communicates radiation, and not the volume-averaged temperature of the computational cell.
The factor \(\Psi\) is given by
\[\Psi_{{}_{k}}=\frac{\sigma}{1/\,\varepsilon_{{}_{\mathit{brick}}}+1/\, \varepsilon_{{}_{\mathit{slag-metal}}}-1}\frac{\left(R(1-1/\sqrt{2})\right) \max\left(0;x_{{}_{k}}-x_{{}_{\mathit{slag-metal}}}\right)\cdot\frac{R^{2}}{ 2}\Delta\theta}{\pi\left\{\left(R(1-1/\sqrt{2})\right)^{2}+\left(\max\left(0;x _{{}_{k}}-x_{{}_{\mathit{slag-metal}}}\right)\right)^{2}\right\}^{2}}\qquad,\] [32]
where \(R\Delta\theta\Delta x\) is the vertical area element (\(\frac{R^{2}}{2}\Delta\theta\)) on which the computation is made.
The heat flows may be converted to heat transfer coefficient by rewriting equation [29] as
\[\begin{split}\dot{Q}_{2\to 1}&=\left[\frac{\sigma}{1/\varepsilon_{2}+1/\varepsilon_{1}-1}\int\limits_{A_{2}}\frac{\cos\theta_{1}\cos\theta_{2}}{\pi s^{2}}\,dA_{2}\right]dA_{1}\left(T_{2}^{4}-T_{1}^{4}\right)\\ &=dA_{1}\left\{\frac{\sigma}{1/\varepsilon_{2}+1/\varepsilon_{1}-1}\int\limits_{A_{2}}\frac{\cos\theta_{1}\cos\theta_{2}}{\pi s^{2}}\,dA_{2}\left(T_{2}^{2}+T_{1}^{2}\right)\left(T_{2}+T_{1}\right)\right\}\left(T_{2}-T_{1}\right)\\ &\equiv dA_{1}\,\tilde{h}_{2\to 1}\left(T_{2}-T_{1}\right)\end{split}\] [33]
where \(\tilde{h}_{2\to 1}\) is the heat transfer coefficient expressed by the curly brackets \(\{\,\}\) above.
As the lid is adiabatic we have the following condition to fulfil:
\[\dot{Q}_{slag-metal\to ceiling}+\sum\limits_{w,m}\dot{Q}_{w,m\to ceiling}=0\] [34]
From [34] we compute the ceiling temperature.
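One possible way to evaluate the adiabatic-lid condition numerically is sketched below. It assumes that each heat flow toward the lid has already been linearized with the coefficients \(\tilde{h}\) of equation [33]; the names are illustrative and not taken from the released code.

```python
def ceiling_temperature(areas, htcs, temps):
    """Lid temperature from the adiabatic condition [34]: the net linearized
    heat flow into the lid from the wall bricks and from the slag/metal
    surface must vanish.  areas, htcs, temps are given per emitting surface."""
    numerator = sum(a * h * t for a, h, t in zip(areas, htcs, temps))
    denominator = sum(a * h for a, h in zip(areas, htcs))
    return numerator / denominator
```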
### Effective heat transfer coefficient
The effective heat transfer coefficient \(\tilde{h}_{liq}\), in the liquid steel and slag, may now be estimated based on three different contributions: 1. the wave-induced contribution \(\tilde{h}_{wave}\), elaborated in _Appendix E Wave induced heat transfer_; 2. the heat transfer due to bubble stirring \(\tilde{h}_{stirring}\), elaborated in _Appendix F Inner wall heat transfer coefficients due to forced convection by bubble stirring_; and 3. the heat transfer due to natural convection \(h_{NC}\), elaborated in _Appendix D Pure natural and effective convection heat transfer_:
\[\tilde{h}_{liq}=\tilde{h}_{wave}+\left(\tilde{h}_{stirring}^{\,1/2}+\tilde{h}_{NC}^{\,1/2}\right)^{2}\] [35]
### Heat balance for the slag
Due to the melting of additives (slag formers, refining substances, alloying elements) we have chosen to represent the energy by the specific enthalpy h.
First, we give the slag enthalpy by a simplified relation:
\[\begin{split} H_{slag}=M_{slag}h_{slag}(T)&=m_{slag,EX}h_{slag,EX}+\sum_{k=1}^{N_{slag}}m_{slag,k}h_{slag,k}\\ &=m_{slag,EX}C_{p,slag,EX}T+\sum_{k=1}^{N_{slag}}m_{slag,k}\left\{\begin{array}{ll}C_{p,s,k}^{slag}T;&T\leq T_{k,1}\\ C_{p,s,k}^{slag}T_{k,1}+\Delta h_{k}\,\frac{T-T_{k,1}}{T_{k,2}-T_{k,1}};&T_{k,1}<T<T_{k,2}\\ C_{p,s,k}^{slag}T_{k,1}+\Delta h_{k}+C_{p,l,k}^{slag}\left(T-T_{k,2}\right);&T\geq T_{k,2}\end{array}\right.\end{split}\] [36]
Here the enthalpy of the solids is represented by \(C_{p,s}T\), while for the liquids it is given by \(C_{p,s}T_{1}+\Delta h+C_{p,l}\left(T-T_{2}\right)\), where \(C_{p,l}\) is the liquid heat capacity and \(\Delta h\) is the heat of transforming the solid into the liquid state. The temperatures \(T_{1}\) and \(T_{2}\) are the temperatures where the phase transition (melting) starts and is completed, respectively.
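A direct transcription of the piecewise enthalpy relation for one addition (the bracketed term in equations [36] and [39]) could look as follows; the function and argument names are ours.

```python
def addition_enthalpy(temp, cp_solid, cp_liquid, dh_melt, t_start, t_end):
    """Specific enthalpy of a slag former or alloy addition: solid sensible
    heat below t_start, linear release of the melting enthalpy between
    t_start and t_end, and liquid sensible heat above t_end."""
    if temp <= t_start:
        return cp_solid * temp
    if temp < t_end:
        return cp_solid * t_start + dh_melt * (temp - t_start) / (t_end - t_start)
    return cp_solid * t_start + dh_melt + cp_liquid * (temp - t_end)
```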
The heat balance for the slag is then
\[\begin{split}\frac{d}{dt}\Big{(}M_{slag}h_{slag}\Big{)}&=\sum_{i=1}^{N}2\pi R\Delta x_{i}\alpha_{i,slag}\tilde{h}_{i}^{slag,inner}\left(T_{i,1}^{w}-T_{slag}\right)+\dot{Q}_{slag}\\ &+\dot{M}_{slag,EAF}C_{p,slag}T_{slag,EAF}-\dot{M}_{slag,tapped}h_{slag}\\ &+\pi R^{2}\tilde{h}_{i}^{slag,lid}\left(T_{lid}-T_{slag}\right)+\pi R^{2}\tilde{h}_{i}^{slag,metal}\left(T_{steel}-T_{slag}\right)\\ &+\sum_{k=1}^{N_{slag}}\dot{m}_{slag,k}h_{slag,k}\left(T_{feed}\right)\end{split} \tag{37}\]
Here \(\alpha_{i,slag}\) is the slag fraction contacting brick number i and varies with time. \(T_{feed}\) is the temperature of the materials at the time of feeding, typically less than 100 \({}^{\circ}\)C. \(\dot{M}_{slag,EAF}\) is the time-dependent mass flow of slag coming from the EAF. \(\tilde{h}_{i}^{slag,lid}\) is the heat transfer coefficient for the slag surface - top lid heat exchange, and \(\tilde{h}_{i}^{slag,metal}\) is the area-averaged heat transfer coefficient between the metal and slag. \(\dot{Q}_{slag}\) is the heating power supplied to the slag [W]. All these quantities are in general varying with time.
By applying the mass balance [5] into [37] we obtain:
\[\begin{split} M_{slag}\frac{dh_{slag}}{dt}&=\sum_{i= 1}^{N}2\pi R\Delta x_{i}\alpha_{i,slag}\tilde{h}_{i}^{slag,inner}\left(T_{i,1} ^{w}-T_{slag}\right)+\dot{Q}_{slag}\\ &\quad+\dot{M}_{slag,EAF}\left(C_{p,slag}T_{slag,EAF}-h_{slag} \right)\\ &\quad+\pi R^{2}\tilde{h}_{i}^{slag,lid}\left(T_{lid}-T_{slag} \right)+\pi R^{2}\tilde{h}_{i}^{slag,metal}\left(T_{steel}-T_{slag}\right)\\ &\quad+\sum_{k}^{N_{slag}}\dot{m}_{slag,k}\left\{h_{slag,k}\left( T_{feed}\right)-h_{slag}\right\}\end{split} \tag{38}\]
We may note that eq. [38] tells us that the slag components fed at a low temperature \(T_{feed}\) will lower the enthalpy of the slag, since \(h_{slag,k}\left(T_{feed}\right)-h_{slag}<0\).
#### Heat balance for the metal
As for the slag, the metal enthalpy \(H_{steel}\) can be expressed by the specific enthalpy \(h_{steel}\)
\[\begin{split}H_{steel}=M_{steel}h_{steel}(T)&=m_{steel}h_{steel}+\sum_{k=1}^{N_{alloy}}m_{alloy,k}h_{alloy,k}\\ &=m_{steel}C_{p,steel}T+\sum_{k=1}^{N_{alloy}}m_{alloy,k}\left\{\begin{array}{ll}C_{p,s,k}^{steel}T;&T\leq T_{k,1}\\ C_{p,s,k}^{steel}T_{k,1}+\Delta h_{k}\,\frac{T-T_{k,1}}{T_{k,2}-T_{k,1}};&T_{k,1}<T<T_{k,2}\\ C_{p,s,k}^{steel}T_{k,1}+\Delta h_{k}+C_{p,l,k}^{steel}\left(T-T_{k,2}\right);&T\geq T_{k,2}\end{array}\right.\end{split} \tag{39}\]
Similarly, for the metal (steel) we have
\[\begin{split}M_{steel}\frac{dh_{steel}}{dt}&=\sum_{i=1}^{N}2\pi R\Delta x_{i}\alpha_{i,steel}\tilde{h}_{i}^{steel,inner}\left(T_{i,1}^{w}-T_{steel}\right)\\ &+\sum_{j=1}^{N}2\pi r_{j}\Delta r_{j}\tilde{h}_{steelflow-bottom}\left(T_{NM,j}^{b}-T_{steel}\right)+\dot{Q}_{steel}\\ &+\dot{M}_{steel,EAF}\left(C_{p,steel}T_{steel,EAF}-h_{steel}\right)\\ &+\pi R^{2}\tilde{h}_{i}^{slag,metal}\left(T_{slag}-T_{steel}\right)\\ &+\sum_{k=1}^{N_{alloy}}\dot{m}_{alloy,k}\left\{h_{alloy,k}\left(T_{feed}\right)-h_{steel}\right\}\end{split} \tag{40}\]
The first RHS sum represents the heat transfer along the vertical ladle wall, while the second summation term represents the heat transfer between the steel and the bottom refractory. It is assumed that the bottom heat transfer is zero before steel has arrived in the ladle and becomes non-zero (activated) at the first arrival time.
\(\alpha_{i,steel}\) is the metal fraction contacting brick number i and varies with time. \(\dot{M}_{steel,EAF}\) is the time-dependent mass flow of steel coming from the EAF. \(\tilde{h}_{steelflow-bottom}\) is the heat transfer coefficient for the metal-bottom refractory heat exchange, and \(\tilde{h}_{i}^{slag,metal}\) is the area-averaged heat transfer coefficient between metal and slag. \(\dot{Q}_{steel}\) is the heating power supplied directly to the steel [W]. Again, these quantities are in general varying with time.
The heat sources \(\dot{Q}_{slag}\) and \(\dot{Q}_{steel}\) are related to the total power \(\dot{Q}_{net}\) supplied by the heating electrodes. \(\dot{Q}_{net}\) is the power logged at the plant. The heat entering the slag and metal will be lower. We introduce an overall heating efficiency \(\eta_{eff}\in[0,1]\) and a heat distribution coefficient \(\eta_{slag}\), such that
\[\dot{Q}_{slag}=\frac{\eta_{slag}M_{slag}C_{p,slag}}{\eta_{slag}M_{slag}C_{p,slag}+M_{steel}C_{p,steel}}\,\eta_{eff}\dot{Q}_{net} \tag{41}\]
\[\dot{Q}_{steel}=\frac{M_{steel}C_{p,steel}}{\eta_{slag}M_{slag}C_{p,slag}+M_{steel}C_ {p,steel}}\eta_{eff}\dot{Q}_{net}\] [42]
The coefficient \(\eta_{slag}=1.0\) means that slag and metal increase in temperature at the same rate. If \(\eta_{slag}=2.0\) the slag picks up temperature twice as fast as the steel. If \(\eta_{slag}=0.5\) the slag picks up temperature at half the rate of the steel. The introduction of the coefficient \(\eta_{slag}\) allows a more controlled way to distribute the heat addition between steel and slag.
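The power split of equations [41]-[42] is a one-liner per phase; the sketch below uses the 85 % heater efficiency mentioned later in the tuning section only as an illustrative default, and the names are ours.

```python
def split_heating_power(q_net, m_slag, m_steel, cp_slag, cp_steel,
                        eta_eff=0.85, eta_slag=1.0):
    """Distribute the logged electrode power q_net between slag and steel
    according to equations [41]-[42]. Returns (Q_slag, Q_steel)."""
    w_slag = eta_slag * m_slag * cp_slag
    w_steel = m_steel * cp_steel
    q_useful = eta_eff * q_net
    return (q_useful * w_slag / (w_slag + w_steel),
            q_useful * w_steel / (w_slag + w_steel))
```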
_Solution for the energy equations_
Based on previous temperatures the radiation flows and fluxes are computed (equations [26]-[31]). For the radial wall elements the discrete equations [10], [13] and [19] can be written as
\[\mathbf{A}_{i}\cdot\mathbf{T}_{i}^{w,n+1}=\mathbf{b}_{i}^{w}\] [43]
Here \(\mathbf{b}_{i}^{w}\) will contain reference to previous slag and metal temperatures, radiation fluxes and external temperatures. The solution is obtained by inverting the NJxNJ (here 7x7) matrix \(\mathbf{A_{i}}\):
\[\mathbf{T}_{i}^{w,n+1}=\mathbf{A}_{i}^{-1}\cdot\mathbf{b}_{i}^{w}\] [44]
We may notice that during the period when the ladle is in steady operation (no filling or tapping) the matrix \(\mathbf{A_{i}}\) is fixed. In this case the new wall temperatures are obtained by only updating \(\mathbf{b}_{i}^{w}\), which depends on values from previous time step, and then re-doing the matrix-vector operation in eq. [44]. This allows very fast solution of wall temperatures.
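A compact sketch of one implicit step for a single brick row is given below. It assembles the (tridiagonal) matrix \(\mathbf{A}_{i}\) and right-hand side \(\mathbf{b}_{i}^{w}\) of equations [43]-[44] from the face conductances of equation [11]; the assembly details and names are ours and may differ from the released code.

```python
import numpy as np

def solve_wall_row(t_old, rho_cp, dvol, g_face, a_face, dt,
                   h_in, t_in, a_in, h_out, t_out, a_out):
    """One implicit time step for a single horizontal row of wall cells
    (innermost wear brick first), following equations [10], [13] and [19].

    t_old  : previous cell temperatures                     (length NJ)
    rho_cp : per-cell heat capacity rho*C_p                  (length NJ)
    dvol   : per-cell volumes                                (length NJ)
    g_face : harmonic-mean face conductances 2*lam/dy, [11]  (length NJ-1)
    a_face : internal face areas                             (length NJ-1)
    h_in, t_in, a_in   : coefficient, temperature, area on the hot face
    h_out, t_out, a_out: coefficient, temperature, area on the casing face
    """
    t_old = np.asarray(t_old, float)
    cap = np.asarray(rho_cp, float) * np.asarray(dvol, float) / dt
    n = t_old.size
    A = np.diag(cap)
    b = cap * t_old
    for k in range(n - 1):                    # internal conduction faces
        g = g_face[k] * a_face[k]
        A[k, k] += g
        A[k, k + 1] -= g
        A[k + 1, k + 1] += g
        A[k + 1, k] -= g
    A[0, 0] += h_in * a_in                    # hot face (steel/slag/radiation)
    b[0] += h_in * a_in * t_in
    A[-1, -1] += h_out * a_out                # casing face (ambient)
    b[-1] += h_out * a_out * t_out
    return np.linalg.solve(A, b)              # eq. [44]
```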
The bottom part of the wall is solved identically to what is explained above.
_Discrete equations for the slag and metal energy_
The coupled discrete equations for slag and metal enthalpy can be solved analytically, provided the inner refractory wall temperatures are known. First, we need to establish the relation between temperatures and enthalpies. This is elaborated in Appendix B Temperature-enthalpy relations. As seen from Appendix C Discrete equations for the slag-metal heat balance, explicit expressions for the slag and metal enthalpies are given by equation [92]. Temperatures are then computed by equations [75] and [77].
### Erosion model
The erosion is primarily a result of dissolution and mass transfer from the refractory into the metal and slag; the erosion mechanism considered here is mass loss of refractory to the liquid by dissolution. In addition, we have mass losses due to thermal stresses. These may be addressed in a machine learning model, which may exploit the predicted difference between the refractory temperature and the incoming steel temperature.
_Refractory loss in the steel wetted region_
During periods with considerable agitation on the metal and slag (bubble driven convection, natural convection, electromagnetic stirring) the carbon binder of the MgO-C refractory may be dissolved into the steel. The mass flux of carbon into the steel is locally given by:
\[\overline{J}=-\alpha_{C}D_{C}\rho_{steel}\nabla x_{C}\] [45]
Here \(\alpha_{C}\) is the volume fraction of the refractory that is occupied by carbon, \(D_{C}\) is the diffusivity of carbon in steel, and \(x_{C}\) is the mass fraction of carbon in the steel.
By introducing the concept of a mass transfer coefficient, we may write [45] as
\[\overline{J}=\alpha_{C}k_{C,BL}\rho_{steel}\left(x_{C}^{eq}(T_{wall})-x_{C}^{ bulk}\right)\overline{n}\] [46]
Here \(k_{C,BL}\) is the mass transfer coefficient for the liquid-side boundary layer and \(x_{C}^{eq}(T_{wall})\) is the solubility of C in steel of the actual composition, where \(T_{wall}\) is the temperature at the inner ladle wall. The temperature is controlled by the steel temperature and the temperature in the refractory brick. As the steel and the refractory have comparable thermal conductivities, the wall temperature will depend on both temperatures.
For forced convection we may use the mass transfer coefficient suggested by (Scalo et al., 2012) and (Shaw and Hanratty, 1977), stating that the mass transfer coefficient for Schmidt number Sc > 20 can be approximated by
\[k_{C,BL}=0.09\cdot u_{r}\cdot Sc^{-0.7}\] [47]
Typical values for the shear velocities range from 0.0 to 0.1 m/s.
From equation [47] we learn that the erosion of the steel-wetted ladle wall will increase with the gas stirring flow rate and with increased temperature (increased C solubility and C diffusivity, decreased viscosity).
_Mass transfer resistance in the interface between MgO-C and steel_
At the inner surface of the MgO-C bricks the C binder will dissolve into the steel, while MgO may be considered inert. A sketch is provided in Figure 5. As the carbon binder is dissolved into the steel, the average transport length \(s_{pore}\) will stabilize around a typical MgO particle radius. If the MgO particles are small, the convection inside the pore space can be neglected, and the transport in the pore space may be given by pure diffusion. In that case we may write:
\[\overline{J}_{porespace}=\alpha_{C}\frac{D_{C}}{s_{pore}}\,\rho_{steel}\left(x_{C }^{eq}(T_{wall})-x_{C}^{IB}\right)\overline{n}\] [48]
Here \(x_{C}^{IB}\) is the C mass fraction at the wall, defined at the outer surface made up of the MgO particles protruding out of the C matrix. In this case the mass flow through the inner and outer layers must match, giving:
\[\overline{J}_{\mathit{eff}}=\alpha_{C}k_{C,BL}\rho_{\mathit{steel}}\left(x_{C}^{ IB}-x_{C}^{bulk}\right)\overline{n}=\alpha_{C}\,\frac{D_{C}}{s_{\mathit{pore}}}\, \rho_{\mathit{steel}}\left(x_{C}^{\mathit{eq}}(T_{\mathit{wall}})-x_{C}^{IB} \right)\overline{n}\quad, \tag{49}\]
And where the mass transfer coefficient is given by
\[k_{C,\mathit{eff}}=\frac{k_{C}D_{C}}{k_{C}s_{\mathit{pore}}+D_{C}} \tag{50}\]
The effective mass transport of C from the MgO-C brick to the steel is then given by
\[\overline{J}_{\mathit{eff}}=\alpha_{C}k_{C,\mathit{eff}}\rho_{\mathit{steel}} \left(x_{C}^{\mathit{eq}}(T_{\mathit{wall}})-x_{C}^{\mathit{bulk}}\right) \overline{n} \tag{51}\]
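The carbon loss model of equations [46]-[51] condenses into two small functions; the names are ours and the sketch is only meant to show how the two resistances combine.

```python
def k_forced_convection(u_shear, sc):
    """Boundary-layer mass transfer coefficient for Sc > 20, equation [47]."""
    return 0.09 * u_shear * sc ** -0.7

def carbon_flux(alpha_c, k_bl, d_c, s_pore, rho_steel, x_c_eq, x_c_bulk):
    """Effective carbon mass flux from an MgO-C brick into the steel:
    pore diffusion and boundary layer acting in series, equations [50]-[51]."""
    k_eff = k_bl * d_c / (k_bl * s_pore + d_c)                  # eq. [50]
    return alpha_c * k_eff * rho_steel * (x_c_eq - x_c_bulk)    # eq. [51]
```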
_Refractory loss in the slag wetted region_
The slag is collected in a relatively thin layer at the surface. Due to the bubble plume, caused by the stirring gas, the slag will be pushed away from the plume and will gather close to the refractory wall. As the bubble plume is asymmetrically placed, the slag thickness close to the refractory wall will vary along the ladle perimeter. We neglect these complexities and assume complete radial symmetry. The thickness \(\delta_{slag}\) of the slag layer that contacts the refractory can be estimated by:
Figure 5: Upper: MgO particle in a C matrix. The liquid flow of steel is on the LHS. Lower: Illustration of C that must diffuse through channels between MgO grains to reach the inner side of the flow boundary layer. The vertical arrow indicates the steel flow. Horizontal arrow indicates diffusion flux.
\[\delta_{slag}=\beta_{slag}M_{slag}\ /\ (\rho_{slag}\pi R(H_{steel})^{2})\] [52]
The slag layer will move vertically, according to waves generated by the bubble plume, as illustrated in Figure 6. The slag layer has thickness \(\delta_{slag}\) and wave amplitude \(a_{wave}\).
_Figure 6 Illustration of the slag layer, close to the refractory, moving vertically with wave amplitude \(a_{wave}\)._
The mass transfer from wall to slag layer can be analyzed by assuming a developing boundary layer. According to Schlichting (Schlichting, 1979) the mass transfer along a developing boundary layer can be given by
\[Sh_{x}=\frac{kx}{D_{MgO}}=0.339\cdot Sc^{1/3}\sqrt{\mathrm{Re}_{x}}\ \,\] [53]
where k is the mass transfer coefficient and x is the distance along the developing boundary layer.
_\(D_{MgO}\)_ is the diffusivity of MgO into the slag, and is related to the Schmidt number by
\[Sc=\frac{\nu_{slag}}{D_{MgO}}\] [54]
The explicit mass transfer coefficient is now:
\[k=\frac{\nu}{\sqrt{a_{wave}x}}0.339\cdot Sc^{-2/3}\sqrt{\mathrm{Re}_{a_{wave}}}\] [55]
By averaging the mass transfer k in equation [55] over the thickness of the slag layer we can obtain
\[\overline{k}=0.678\cdot\frac{\nu_{slag}}{\delta_{slag}}\,Sc^{-2/3}\sqrt{\frac {u_{wave}a_{wave}}{\nu_{slag}}}\] [56]
The wave velocity \(u_{wave}\) is now estimated by equation [108], and the swept distance (amplitude) \(a_{wave}\) can be represented by \(l_{w}\) in eq. [107]. It is possible to represent the distribution of mass transfer by a probability distribution. However, as a first approximation we may assume that the wave induced mass transfer applies to a region that extends over the thickness of the slag layer and a region that extends \(a_{wave}\) both above and below the slag layer. In this case we may estimate the mass transport to the slag to be given over height \(2a_{wave}+\delta_{slag}\), and where the average mass transfer coefficient for this layer is
\[k_{wave}=\overline{k}\,\frac{\delta_{slag}}{2a_{wave}+\delta_{slag}}=0.678\cdot \frac{\nu_{slag}}{2a_{wave}+\delta_{slag}}\,Sc^{-2/3}\sqrt{\frac{U_{wave}a_{ wave}}{\nu_{slag}}}\] [57]
In addition to the explicit wave contribution to mass transfer, the impact of the bubble driven flow (slag version of eq. [47]) must be added:
\[k_{eff}=k_{wave}+k_{MgO,BL}=0.09\cdot u_{r}\cdot Sc^{-0.7}+0.678\cdot\frac{\nu_{slag}}{2a_{wave}+\delta_{slag}}\,Sc^{-2/3}\sqrt{\frac{u_{wave}a_{wave}}{\nu_{slag}}}\] [58]
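As a sketch, the effective MgO mass transfer coefficient of equation [58] can be assembled as follows; the names are illustrative, and the wave velocity and amplitude are assumed to be supplied by the wave model of Appendix E.

```python
def k_mgo_effective(nu_slag, d_mgo, u_shear, u_wave, a_wave, delta_slag):
    """Effective MgO transfer coefficient in the slag-wetted band: the
    bubble-driven term (slag version of eq. [47]) plus the wave-averaged
    term of eq. [57], added as in eq. [58]."""
    sc = nu_slag / d_mgo                                   # eq. [54]
    k_stir = 0.09 * u_shear * sc ** -0.7
    k_wave = (0.678 * nu_slag / (2.0 * a_wave + delta_slag)
              * sc ** (-2.0 / 3.0)
              * (u_wave * a_wave / nu_slag) ** 0.5)
    return k_stir + k_wave
```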
_Overall refractory loss model_
We track both the MgO and C components of the refractory. We may note that bottom erosion is not included in the model for now. The bottom is included due to its impact on the thermal balance (heat storage).
It is assumed that when C is dissolved from the bricks in the steel region a corresponding amount of MgO is released and will end up in the slag. It is assumed that the density of bricks is related to C and the corresponding MgO volume fractions (\(\alpha_{C},\alpha_{MgO}\)) and phase densities (\(\rho_{C}\),\(\rho_{MgO}\)) by
\[\rho_{brick}=\alpha_{C}\rho_{C}+\alpha_{MgO}\rho_{MgO}\qquad,\] [59]
where \(\alpha_{C}+\alpha_{MgO}=1\). The MgO loss mass, \(M_{MgO}\), from a brick element during time dt, eroding a slice of thickness l, is
\[M_{MgO}=\dot{J}_{MgO}A\alpha_{MgO}\Delta t=lA\alpha_{MgO}\rho_{MgO}\] [60]
From the equations [60] and [61] we find that in the slag region the carbon flux out of the carbon part of the refractory wall is given by
\[\dot{J}_{C}^{\,slag}=l\,\frac{A\alpha_{C}\rho_{C}}{A\alpha_{C}\Delta t}=\frac{\dot{J}_{MgO}^{\,slag}A\alpha_{MgO}}{A\alpha_{MgO}\rho_{MgO}}\,\Delta t\,\frac{A\alpha_{C}\rho_{C}}{A\alpha_{C}\Delta t}=\dot{J}_{MgO}^{\,slag}\,\frac{\rho_{C}}{\rho_{MgO}} \tag{62}\]
According to [62] the volume fluxes of the carbon and the MgO are equal. However, the surface areas are different due to the actual volume fractions. The mass flow of carbon, per surface area, to the liquid in the slag region is then
\[\dot{J}_{C}^{\,slag}\alpha_{C}=\dot{J}_{MgO}^{\,slag}\alpha_{C}\,\frac{\rho_{C}}{\rho_{MgO}} \tag{63}\]
Similarly, the loss of MgO in the steel region due to carbon dissolution is
\[\dot{J}_{MgO}^{\,steel}\alpha_{MgO}=\dot{J}_{C}^{\,steel}\alpha_{MgO}\,\frac{\rho_{MgO}}{\rho_{C}} \tag{64}\]
#### Carbon balance
The C (carbon) is lost from the refractory by two mechanisms, depending on if we are in the steel wetted or slag wetted zone.
\[\begin{split}\frac{d}{dt}\Big{(}M_{steel}x_{C}^{steel}\Big{)}&=\sum_{i=1}^{Nt}\alpha_{i,steel}\alpha_{C}A_{i}k_{C,eff}\,\rho_{steel}\left(x_{C}^{eq,steel}\left(T_{wall}\right)-x_{C}^{steel}\right)\\ &+\sum_{i=1}^{Nt}\alpha_{i,slag}\alpha_{MgO}A_{i}\left(\alpha_{C}\,\frac{\rho_{C}}{\rho_{MgO}}\right)k_{MgO,eff}^{n}\,\rho_{slag}\left(x_{MgO}^{eq,slag}\left(T_{wall,i},x_{composition}^{slag}\right)-x_{MgO}^{slag,n}\right)\end{split} \tag{65}\]
The summation is over all the vertical refractory bricks. \(\alpha_{i,steel}\) is the local steel fraction (varies with height in the ladle) and \(\alpha_{C}\) is the carbon fraction in the refractory brick. \(A_{i}=2\pi R\Delta x_{i}\) is the local wall area.
#### MgO balance
The MgO is lost from the refractory by the same two mechanisms as above.
\[\begin{split}\frac{d}{dt}\Big{(}M_{slag}x_{MgO}^{slag}\Big{)}&=\left(1-\alpha_{C}\,\frac{\rho_{MgO}}{\rho_{C}}\right)\sum_{i=1}^{Nt}\alpha_{i,steel}\alpha_{C}k_{C,eff}\,A_{i}\rho_{steel}\left(x_{C}^{eq,steel}\left(T_{wall}\right)-x_{C}^{steel,n}\right)\\ &+\sum_{i=1}^{Nt}\left(1-\alpha_{C}\right)\alpha_{i,slag}^{*}k_{MgO,eff}^{n}\,A_{i}\rho_{slag}\left(x_{MgO}^{eq}\left(T_{wall,i},x_{composition}^{slag}\right)-x_{MgO}^{slag,n}\right)\end{split} \tag{66}\]
Here \(\alpha_{C}\) is the volume fraction of carbon in the brick, while \((1-\alpha_{C})\) is the MgO fraction. \(\alpha_{i,slag}^{*}\) is the wave-enhanced slag fraction in contact with the lining. As a first approach we used \(\alpha_{i,slag}^{*}=0.25\alpha_{i-1,slag}+0.5\alpha_{i,slag}+0.25\alpha_{i+1,slag}\).
The left-hand-side terms are split, and the effect of the total mass change is entered into the model. In the case of the slag we have:
\[\frac{d}{dt}\Big{(}M_{slag}X_{MgO}\Big{)}=M_{slag}\,\frac{d}{dt}\Big{(}X_{MgO} \Big{)}+X_{MgO}\,\frac{d}{dt}\Big{(}M_{slag}\Big{)}\,\,\,, \tag{67}\]
where the mass balance was given by eq. [5]. According to these equations we may write [66] as
\[\begin{split}M_{slag}\frac{d}{dt}\Big{(}x_{MgO}^{slag}\Big{)}&=\left((1-\alpha_{C})\frac{\rho_{MgO}}{\rho_{C}}\right)\sum_{i=1}^{N}\alpha_{i,steel}\alpha_{C}k_{C,eff}\,A_{i}\rho_{steel}\left(x_{C}^{eq,steel}\left(T_{wall}\right)-x_{C}^{steel,n}\right)\\ &+\sum_{i=1}^{N}(1-\alpha_{C})\alpha_{i,slag}^{*}k_{MgO,eff}^{n}\,A_{i}\rho_{slag}\left(x_{MgO}^{eq}\left(T_{wall,i},x_{composition}^{slag}\right)-x_{MgO}^{slag,n}\right)\\ &-x_{MgO}^{slag}\left(\dot{M}_{slag,EAF}+\sum_{k=1}^{N_{slag}}\dot{m}_{slag,k}\right)\end{split} \tag{68}\]
where it is assumed that there is no MgO in the slag arriving from the EAF.
Similarly, the mass balance for carbon becomes
\[\begin{split}\frac{d}{dt}\Big{(}M_{steel}x_{C}^{steel}\Big{)}&=\sum_{i=1}^{N}\alpha_{i,steel}\alpha_{C}A_{i}k_{C,eff}\,\rho_{steel}\left(x_{C}^{eq,steel}\left(T_{wall}\right)-x_{C}^{steel}\right)\\ &+\sum_{i=1}^{N}\alpha_{i,slag}\alpha_{MgO}A_{i}\left(\alpha_{C}\,\frac{\rho_{C}}{\rho_{MgO}}\right)k_{MgO,eff}^{n}\,\rho_{slag}\left(x_{MgO}^{eq,slag}\left(T_{wall,i},x_{composition}^{slag}\right)-x_{MgO}^{slag,n}\right)\\ &-\left(\dot{M}_{steel,EAF}+\sum_{k=1}^{N_{slag}}\dot{m}_{alloy,k}\right)x_{C}^{steel}+\dot{M}_{steel,EAF}x_{C}^{steel,EAF}\end{split} \tag{69}\]
The solubility of MgO in the slag is given (see Acknowledgements) by
\[\begin{array}{l}x_{MgO}^{eq,slag}\,(T_{wall,i},x_{composition}^{slag}) \approx x_{MgO}^{eq,slag}\,\big{(}T_{wall,i}\big{)}=\\ =0.1\cdot\min[(-4.34\cdot 10^{5}+514.3\cdot\widehat{T})\,/\,(1+100.74\cdot \widehat{T}-0.041\cdot\widehat{T}^{2});\\ 50.0\cdot(9.025-4.427\cdot 10^{-3}\cdot\widehat{T}-7.78\cdot 10^{6}\,/\, \widehat{T}^{2})+(-598.7+0.2927\cdot\widehat{T}+5.015\cdot 10^{8}\,/\,\widehat{T}^{2})] \end{array} \tag{70}\]
Here \(\widehat{T}\) is the temperature in \({}^{\circ}C\). As the slag composition is not known, we use a temperature dependency which is approximately valid for 50 wt% CaO, 10 wt% SiO2, 2.5 wt% FeO, with the remainder being Al2O3.
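Equation [70] transcribes directly into a small function (temperature in degrees Celsius; the function name is ours):

```python
def mgo_solubility(temp_c):
    """Equilibrium MgO mass fraction in the slag as a function of temperature,
    equation [70], valid for the fixed slag composition stated above."""
    t = temp_c
    branch_1 = (-4.34e5 + 514.3 * t) / (1.0 + 100.74 * t - 0.041 * t ** 2)
    branch_2 = (50.0 * (9.025 - 4.427e-3 * t - 7.78e6 / t ** 2)
                + (-598.7 + 0.2927 * t + 5.015e8 / t ** 2))
    return 0.1 * min(branch_1, branch_2)
```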
## Developing sub-models - a multi-scale approach
In the present approach we used CFD simulations (Johansen and Boysan, 1988) to obtain the shear stresses along the wall of the ladle. We did not include the effects of the slag. Using dynamic simulations with the slag present, more details could be added and, based on curve fitting or lookup tables, the data could have been plugged into the model. This would have improved the accuracy.
FACT SAGE calculations were performed for the solubility of MgO in the slag (see Acknowledgements). At the present time it was not possible to use this detailed information, as we have no information on the slag composition when the slag arrives in the ladle from the EAF (Electric Arc Furnace). Based on this it was possible to close the model equations and realize the model.
## Software
The model was coded in python 3, using libraries numpy, pandas, math, pickle, scipy, and we used matplotlib and vtk for plotting and visualization. The basic version of the model is available on
github.com, at address [https://github.com/SINTEF/refractorywear](https://github.com/SINTEF/refractorywear). The model is licensed under the open source MIT license ([https://opensource.org/licenses/MIT](https://opensource.org/licenses/MIT) ).
## Tuning the model
In the table below we show the physical and thermodynamical data that was used.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
 & Density (\(\rho\)) [kg/m\({}^{3}\)] & Kinematic viscosity (\(\nu\)) [m\({}^{2}\)/s] & Thermal conductivity (\(\lambda\)) [W/mK] & Specific heat capacity (\(C_{p}\)) [J/kg K] & Diffusivity (D) [m\({}^{2}\)/s] \\ \hline
Slag & 3400 & 0.2e-5 & 10 & 500 & - \\ \hline
Steel & \(8320-0.835(T-273.15)+(-832+8.35\cdot 10^{3}(T-273.15))X_{c}\) (Ceotto, 2013) & 1.0e-6 & 15 & \(C_{p}=821.0-0.434\cdot T+0.000232\cdot T^{2}\) [1] & - \\ \hline
Wear brick & 3540 (3040) & - & 6 & 1500 & - \\ \hline
Durable brick & 2900 & - & 2.7 & 1500 & - \\ \hline
Outer brick & 2500 & - & 2.0 & 718 & - \\ \hline
Insulation & 300 & - & 0.1 & 900 & - \\ \hline
Steel casing & 7100 & - & 12 & 450 & - \\ \hline
\end{tabular}
\end{table}
Table 1: Physical properties. Here \(T\) is the temperature in \({}^{\circ}\)C and \(X_{c}\) is the mass fraction of carbon dissolved in the steel.
Unfortunately, detailed geometrical data and process data will not be given due to company confidentiality. In order to apply the model to single heats, operational data from Sidenor was read. The static data included steel mass, time with steel in the ladle, and the temperature of the steel before leaving the EAF; the cyclic data included vacuum pressure, heating power, measured steel temperatures, gas flow rates, mass of additions and the additions' composition, all versus time. The simulation was initiated at the time when the ladle was filled with liquid steel from the EAF and was completed after 2 hours. Once the casting process was finished, the ladle was considered empty, but still losing heat.
As there is no data on the initial slag mass or composition, it was not possible to work with a changing slag composition in the model. The initial slag mass was therefore always assumed to be 500 kg. Another consequence was that we had to assume constant solubilities of C in the steel and a constant MgO fraction in the slag. As a result, the solubility of MgO in the slag only depends on temperature (see eq. [70]). Furthermore, all additions were assumed to contribute to the slag. This is acceptable if the alloy additions are of the same order as, or smaller than, the pure slag contribution. However, for special steels the addition levels are significant, and the model should be updated such that additions are transferred to the metal.
Different additions have different thermodynamic properties, such as melting temperature and melting enthalpy. As this information was by and large unknown, we used the same melting temperature and heat of melting for all additions.
First, we tuned the steel temperature, as a good thermal prediction was a prerequisite for the erosion model. At the beginning of each heat it was found that the initial temperature in many cases was a leftover from the previous heat, so we decided to use the temperature measured in the EAF, decreased by 50 K due to the heat loss during the tapping process. For the heats where the initial temperature was missing or resulted in large temperature residuals, the initial temperature was corrected in an iterative manner until the residual for the average relative temperature was below 20 K. The residual was computed from all measured values, except the first, which was not reliable. In both Figure 7 and Figure 8 we see successful simulations, showing zero-order residuals of 5 and 3 K, respectively. The first-order residuals (RMSE) are similarly 7 and 5 K. In both cases the initial temperature was optimized, but for heat 206217 the "measured" initial steel temperature was quite close to the optimized initial temperature. To obtain these results, the thermal efficiency of the heater was reduced to 85 % and the thermal conductivity of the refractory bricks and insulation was significantly increased (see Table 1 and Table 3).
In the second step, the erosion model was tuned. We decided to work with constant solubilities of C in steel (the soluble mass fraction was set to 0.1), while the MgO solubility in the slag is based on a fixed slag composition and only varies with temperature (see Table 2). As we decided to keep the solubility of C in the steel at a high, constant value, the only tuning parameter available is the pore diffusion length \(s_{pore}\) (see eq. [48] and Figure 5).
To do this tuning we did the following steps: i) start by simulating the preheating of the ladle; ii) look up the heat ID, then read the operational data for the heat and simulate temperature and erosion; iii) based on the erosion data, reduce the radial cell sizes for the three inner bricks (wear bricks); iv) account for the thermal history of the ladle until the next heat; v) redo step ii) for the next use of the specific ladle (next heat in the campaign, where the campaign number is unique for the wear lining, from relining until demolition), and accumulate the erosion of the bricks; vi) if the ladle was taken out for repair of some bricks, the repaired bricks are also repaired in the model, and after repair the temperature is again initialized; vii) redo step v) until the ladle is taken out for lining demolition. At this time the predicted erosion profiles are saved and compared to data from the demolition.
In the demolition data, the ladle is segmented in two halves, where "Left" is close to the porous plug while "Right" is away from the plug. In addition, the brick with most erosion in each half is registered. In this way, a maximum erosion is recorded and the average value for each brick row is not known. However, the 2D model can only be compared with the average of the two and should have some underprediction due to the above observation. For the selected tuning factor \(s_{{}_{\mathit{pore}}}\) we see that the prediction in Figure 9 is good, both qualitatively and quantitatively. The shape of the erosion in steel, below the slag line, is typical for all ladles and campaigns. We note that for bricks 36-40 the erosion level is quite high. This is above the liquid steel level and is a result of metal splashing, causing thermomechanical cracking, and disintegration due to the vacuum treatment (Jansson, 2008). In Figure 10 we see the prediction from a campaign where the erosion in the steel section (bricks 5-25) is underpredicted. This could be a result of the different steel qualities treated in this specific campaign or that for some reason the variation along the perimeter, at each brick layer, is larger than usual. As we have no data on the erosion from heat to heat, we cannot tell if this happened during specific heats in the campaign. Another interesting feature, seen in both Figure 9 and Figure 10, is the pronounced dip of erosion around brick 16 and 17. This may be a result of alloying materials addition when the ladle is approximately 1/3 full. Alloying elements and slag may stick to the colder wall long enough to protect the lining somewhat.
## Model performance against Sidenor operational data
The model was run with all available Sidenor data for 2019. The production campaigns that started in 2018 or ended in 2020 were omitted from the current data set, as those campaigns were not complete. Altogether, we analyzed 5216 heats, involving 11 different ladles and 61 campaigns. The erosion averaged over bricks 5-25 is compared in Figure 11. An outlier (ladle 8, campaign 76), marked A, is seen; the details were already shown in Figure 10. We compare the average erosion per heat in Figure 12, as distributed over the number of heats in each campaign. The model predicts a variation of \(\pm\)12 %, while the data has a variation of \(\pm\)18 %. The outlier A from Figure 11 is clearly seen.
It can be observed from Figure 9 and Figure 10 that a peak in erosion occurs close to the surface of the steel where the slag is located (around brick 35). The steel mass in the ladle varies from heat to heat, but in cases when the reported mass is low this may be due to operational challenges during casting. Therefore, the minimum steel mass is set to 110 tons. This introduces another uncertainty in the predictions. Now, it may seem that the erosion per heat does not change much from heat to heat, as indicated by Figure 12. However, we see in Figure 13 that the predictions show significant differences in the amount eroded, and in the erosion pattern, from heat to heat. Around brick 25 (steel-wetted region) the erosion for use number 17 is around twice as high as for use 69. This difference is mainly due to temperature, time under vacuum, gas flow rates and operational times. However, when averaged over a complete campaign, these variations are significantly reduced.
### Discussion
The model predicts a smooth increase in erosion rate, from the bottom and towards the slag. This is in very good agreement with some of the measured erosion profiles; Figure 9 shows one example. This is a result of the bubble-driven flow, enhanced by vacuum, the transport processes in the brick (represented by \(s_{pore}\)) and the flow boundary layer, as well as the solubility of carbon in the steel. We used an artificially high value for the saturated carbon mass fraction (\(X_{C}^{eq}=0.1\)). However, similar results as shown here may be obtained by another combination of \(s_{pore}\) and \(X_{C}^{eq}\).
We see above that the model performs quite well. At the same time there is room for improvements. The most obvious improvements are:
* Modeling of the slag composition and adding the solubility of MgO in the slag as function of composition. However, this requires knowledge of the composition of the slag coming from the EAF.
* Separating additions into slag formers and alloy elements, and in addition update the enthalpy-temperature relations to represent the true composition of slag and metal.
* Empirical slag temperature is needed to calibrate and validate the slag temperature predictions.
* Including the composition dependence of the solubility of carbon in the steel. Data for the steel composition is available, but the carbon solubility for the different compositions must be available.
Some features seen in the data, such as shown in Figure 14, cannot be reproduced by the model. The very high observed erosion rates close to the bottom seem impossible to explain with the available information about the operation. A possible explanation could have been that gas purging was done with a very low steel level and with slag present. Such issues belong to the group of abnormal operations. Other possibilities are excessive mass loss during ladle cleaning, or that the lining brick quality was not consistent for a period.
## Recommendations and conclusions
The presented model predicts the evolution of the lining erosion fairly well. Much better agreement between model and data is hard to obtain due to uncertainties in the operational data, in the physical data and in the measurements. The model primarily predicts lining erosion based on hydrodynamics and dissolution of lining elements in steel and slag. The contribution from thermomechanical cracking of the lining is not included in the model. However, the model predicts lining temperatures at the time of tapping metal into the ladle. This information can in the future be used to assess thermomechanical brick degradation. As this effect was not included, the model was tuned to predict less erosion than what is observed. Similarly, the lining degradation above the melt, particularly pronounced during vacuum treatment, was not included in the model. However, a hole in the lining this far up on the ladle wall has far less severe consequences than holes deep below the steel surface.
Model predictions, as presented above, will be an important support for the ladle operator when deciding whether the ladle can be used one more time or not. The model shows how the variation in steel level between heats impacts erosion. If all the heats were run with the same steel volume, this would have a negative impact on lining lifetime. On the other hand, the refractory life may be extended by running scheduled amounts of steel in the heats. When the operator is unsure about the ladle condition, the model prediction, together with previous experience from running the model, will help the operator make a good decision.
The source code is made available to the public from [https://github.com/SINTEF/refractorywear](https://github.com/SINTEF/refractorywear).
## Acknowledgements
We thank Dr. Kai Tang, SINTEF Industry for his assistance with FACT SAGE calculations of MgO solubility in slag. The simplified MgO solubility versus temperature was based on this work.
This research was funded by the H2020 COGNITWIN project, which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 870130.
## CRediT author statement
STJ: Conceptualization, Methodology, Writing - Original draft preparation ; BTL: Software, Data curation, Validation, Visualization, Writing- Reviewing and Editing; TRD: Resources, Investigation, Writing- Reviewing and Editing.
|
2306.17806 | Stay on topic with Classifier-Free Guidance | Classifier-Free Guidance (CFG) has recently emerged in text-to-image
generation as a lightweight technique to encourage prompt-adherence in
generations. In this work, we demonstrate that CFG can be used broadly as an
inference-time technique in pure language modeling. We show that CFG (1)
improves the performance of Pythia, GPT-2 and LLaMA-family models across an
array of tasks: Q\&A, reasoning, code generation, and machine translation,
achieving SOTA on LAMBADA with LLaMA-7B over PaLM-540B; (2) brings improvements
equivalent to a model with twice the parameter-count; (3) can stack alongside
other inference-time methods like Chain-of-Thought and Self-Consistency,
yielding further improvements in difficult tasks; (4) can be used to increase
the faithfulness and coherence of assistants in challenging form-driven and
content-driven prompts: in a human evaluation we show a 75\% preference for
GPT4All using CFG over baseline. | Guillaume Sanchez, Honglu Fan, Alexander Spangher, Elad Levi, Pawan Sasanka Ammanamanchi, Stella Biderman | 2023-06-30T17:07:02Z | http://arxiv.org/abs/2306.17806v1 | # Stay on topic with Classifier-Free Guidance
###### Abstract
Classifier-Free Guidance (CFG) [37] has recently emerged in text-to-image generation as a lightweight technique to encourage prompt-adherence in generations. In this work, we demonstrate that CFG can be used broadly as an inference-time technique in pure language modeling. We show that CFG (1) improves the performance of Pythia, GPT-2 and LLaMA-family models across an array of tasks: Q&A, reasoning, code generation, and machine translation, achieving SOTA on LAMBADA with LLaMA-7B over PaLM-540B; (2) brings improvements equivalent to a model with twice the parameter-count; (3) can stack alongside other inference-time methods like Chain-of-Thought and Self-Consistency, yielding further improvements in difficult tasks; (4) can be used to increase the faithfulness and coherence of assistants in challenging form-driven and content-driven prompts: in a human evaluation we show a 75% preference for GPT4All using CFG over baseline.
## 1 Introduction
In recent years large language models have exhibited strong generative capabilities to solve a diverse range of tasks [26; 15; 71]. "Prompting" is typically used to condition generation, with task instructions and context [64], or a small set of examples [15]. However, language generation, especially with smaller models, has been shown to struggle with issues such as hallucination [49], degradation [38] and meandering [76]. Various approaches have been proposed to address this, e.g.: instruction-finetuning [81; 70] and reinforcement learning [56; 4; 6]. These techniques are expensive and their compute and data cost may not be accessible to all users. In this paper we propose an _inference time_ methodology which, as shown in Figure 1, gives more importance to the user intent, expressed through the prompt. Our hypothesis in this paper is: _focusing more on the prompt at inference-time will result in generations that better align with expected behavior._
Text-to-image-generation, too, has been shown to suffer from similar problems [28]. Standard inference approaches can ignore parts of the prompt-conditioning, especially with specific or uncommon prompts [53]. Classifier Guidance [28],
Figure 1: A notional 2D projection of a textual latent space, showing how increasing the guidance weight \(\gamma\) increases the importance of the prompt “Today in France,”.
was proposed to enhance the generative quality of diffusion models, by using a separate classifier to encourage desired characteristics in the output image. Classifier-Free Guidance (CFG) [37] was later introduced, in which the classifier is removed and the generative model _itself_ is used as an implicit classifier.
Inspired by its effectiveness in the text-to-image-generation [68; 37; 46], we adapt CFG to unimodal text generation to increase the model alignment to the given prompt. While text-to-image models (which primarily utilize diffusion models) need to be specifically trained with conditioning dropout [37] to utilize CFG, we show that, in text generation, we can use CFG out-of-the-box in many situations. We demonstrate the effectiveness of CFG to improve alignment on a wide range of prompting approaches including zero-shot prompting, Chain-of-Thought prompting, long-form generative prompting and complex chatbot-style prompting (see Table 1).
We make the following contributions:
1. We devise a framework for using CFG in language modeling and show significant improvements across a range of standard benchmarks. These benchmarks capture a variety of different prompting techniques: basic prompting, chain-of-thought prompting, long-text prompting and chatbot-style prompting. Notably, we achieve SOTA on LAMBADA with LLaMA-7B over PaLM-540B.
2. We show that for the same inference cost, one can train a model that is half the size and obtain similar performance on those benchmarks;
3. By using a negative prompt, we demonstrate that we can have a more granular control over the aspects emphasized by CFG. In a blind human evaluation we show 75% preference for GPT4All using CFG over the vanilla sampling;
4. We provide interpretations for the impact that CFG on text generation both (1) qualitatively, by visualizing how CFG is upweighting words more related to the prompt (our visualization, we note, can be an integral part of effective prompt engineering) and (2) quantitatively, by showing that CFG decreases entropy in the sampling distribution.
## 2 Methodology
Autoregressive language models are trained to generate plausible continuations of sequences of text. Given a sequence of tokens \(w_{1},\cdots,w_{T}\), the model samples each subsequent token from the conditional probability distribution \(\text{P}_{\theta}(w|w_{t\leq T})\). It is now typical for some or all of the initial tokens to be considered a _prompt_, which specifies information about the task or how it is to be solved. In practice, prompts are syntactically and semantically distinct from the initial text to be continued.
However, standard generation methods for large language models do not differentiate between prompt text, \(w_{1}...w_{p}\) and subsequent generations \(w_{p+1},...w_{t-1}\). Directly sampling from \(\text{P}_{\theta}(w_{i+1}|w_{t\leq i})\) may result in continuations that lose adherence to the prompt (see Table 1, for example) over the course of the generation. Inspired by successes with diffusion models, we propose to address this problem by applying Classifier-Free Guidance [37] to the decoding process in autoregressive language models.
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt}} \hline
**Instruction:** **“Respond enthusiastically to the following user prompt.”** \\
**Prompt:** **“What was the Cambridge Analytica scandal?”** \\ \hline \hline
**Vanilla Sampling** & **Classifier Free Guidance-based Sampling** \\ \hline \hline The Cambridge Analytica scandal was a huge scandal in which it was revealed that Cambridge Analytica, a political consulting firm, had used personal data from Facebook to target and influence the 2016 US presidential elections. This scandal raised questions about the role of social media in political campaigns... & Oh my goodness! What a scandal! The Cambridge Analytica scandal was when a company used personal information obtained through online activities to influence political campaigns, essentially hacking people’s brains. It was a serious breach of trust and privacy, and rightfully so! It is a wake-up call for... \\ \hline \hline \end{tabular}
\end{table}
Table 1: Demonstration of CFG-guided generation for an _assistant-style_ prompt (using GPT4All with \(\gamma=5\)). The assistant has a system-level prompt (“Instructions”) that is potentially out-of-distribution (e.g. “_write an enthusiastic response_”) and a user-level prompt (“Prompt”). With vanilla sampling, the model ignores the system-level directive, but with CFG, the model adheres to both the system-level and the user-level prompt.
### Guidance in Text-to-Image Models
Let \(\mathsf{P}_{\theta}(x)\) be the unconditional generative model for an image \(x\) with parameters \(\theta\). During inference, we wish to condition the generation on a label or text description \(c\) in order to model \(\mathsf{P}(x|c)\). Generative models usually generate data from an abstract representation \(z\) in semantic space that is decoded into an actual sample (e.g. the latent vectors in GANs or the intermediate sampling steps in diffusion models). Controlling the generation usually involves guiding or adding constraints to that semantic representation. In **Classifier Guidance**[28], an auxiliary classifier \(\mathsf{P}_{\phi}(c|x)\) is introduced, which guides the sampling from \(\mathsf{P}_{\theta}(x)\) with the gradients \(\gamma\nabla_{z}\mathsf{P}_{\phi}(c|x)\) to increase the likelihood of \(c\) for generation \(x\). This modification results in approximate samples from the distribution:
\[\widehat{\mathsf{P}}(x|c)\propto\mathsf{P}_{\theta}(x)\cdot\mathsf{P}_{\phi}( c|x)^{\gamma} \tag{1}\]
where \(\gamma\) is called the guidance strength. This guidance results in a reweighting of the density according to the classifier likelihood. For \(\gamma=0\), it reduces to the unconditional generation, while \(\gamma=1\) reduces to the conditional generation. When \(\gamma>1\), \(\widehat{\mathsf{P}}\) overemphasizes the conditioning, which, as noticed by [28], results in a better inception score at the cost of diversity. This approach has been successfully used in a variety of works [32; 41; 22].
**Classifier-Free Guidance**, [37] observes that by using Bayes rule we can eliminate the necessity of an external classifier. By training the same model \(\mathsf{P}_{\theta}\) to support both conditional and unconditional generation with conditioning dropout, we can thus rewrite the second term in Equation 1 as \(\mathsf{P}_{\theta}(c|x)\propto\frac{\mathsf{P}_{\theta}(x|c)}{\mathsf{P}_{ \theta}(x)}\). Then, the sampling is performed according to the probability:
\[\widehat{\mathsf{P}_{\theta}}(x|c)\propto\frac{\mathsf{P}_{\theta}(x|c)^{ \gamma}}{\mathsf{P}_{\theta}(x)^{\gamma-1}}. \tag{2}\]
Modeling the diffusion process with \(\widehat{\mathsf{P}}_{\theta}(x|c)\) effectively means predicting the PDF of the sample noise \(\epsilon_{t}\) as
\[\log\widehat{\mathsf{P}_{\theta}}(\epsilon_{t}|x_{t+1},c)=\gamma\log\mathsf{ P}_{\theta}(\epsilon_{t}|x_{t+1},c)-(\gamma-1)\log\mathsf{P}_{\theta}(\epsilon_{t}|x_{t+1 }). \tag{3}\]
An important tool with diffusion models is **Negative Prompting**[29; 1; 23; 65]. We can rewrite Equation 3 as
\[\log\widehat{\mathsf{P}_{\theta}}(\epsilon_{t}|x_{t+1},c)=\log\mathsf{P}_{ \theta}(\epsilon_{t}|x_{t+1})+\gamma\big{(}\log\mathsf{P}_{\theta}(\epsilon_{ t}|x_{t+1},c)-\log\mathsf{P}_{\theta}(\epsilon_{t}|x_{t+1})\big{)} \tag{4}\]
Aside from its probabilistic interpretation, this equation also represents a vector arithmetic operation in latent space: we take a step of size \(\gamma\) away from the unconditional vector in the direction of the conditioning. Semantic vector linear arithmetic has proven to be effective in many situations in vision: striking examples have been generated by interpolations in GANs or diffusion models [47; 75; 14].
Moreover, the initial point does not have to be the unconditional latent, but any representation we want to move away from. We can introduce the "negative conditioning" or "negative prompt" \(\overline{c}\), as well as a generalized equation resulting in Equation 3 when \(\overline{c}=\varnothing\):
\[\log\widehat{\mathsf{P}_{\theta}}(\epsilon_{t}|x_{t+1},c,\overline{c})=\log \mathsf{P}_{\theta}(\epsilon_{t}|x_{t+1},\overline{c})+\gamma\big{(}\log \mathsf{P}_{\theta}(\epsilon_{t}|x_{t+1},c)-\log\mathsf{P}_{\theta}(\epsilon_{ t}|x_{t+1},\overline{c})\big{)} \tag{5}\]
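To make the latent-space arithmetic of Equations 4–5 concrete, the following minimal sketch combines a conditional and a negative-prompt (or unconditional) noise prediction into a guided one; the function name and array shapes are illustrative assumptions and are not tied to any particular diffusion library.

```python
import numpy as np

def cfg_noise(eps_cond: np.ndarray, eps_neg: np.ndarray, gamma: float) -> np.ndarray:
    """Classifier-free guidance step in latent/noise space (Eqs. 4-5).

    eps_cond: noise predicted with the conditioning c
    eps_neg:  noise predicted with the negative prompt (or the empty prompt)
    gamma:    guidance strength; gamma = 1 recovers the plain conditional prediction
    """
    return eps_neg + gamma * (eps_cond - eps_neg)

# toy example with random arrays standing in for a denoiser's outputs
rng = np.random.default_rng(0)
eps_c, eps_n = rng.normal(size=(4, 64)), rng.normal(size=(4, 64))
eps_guided = cfg_noise(eps_c, eps_n, gamma=7.5)
```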
### Classifier-Free Guidance of Language Models
To apply Classifier-Free Guidance to language models, we first have to define the semantic space to operate in. As demonstrated in [51; 60] and [27; 61], word embeddings and sentence embeddings have strong semantic structures. This makes the logits of token predictions a good choice for our latent space, due to their linear relationship with the last hidden layer. Using the logits avoids network editing [9] and is architecture-agnostic.
Next, we need to define what is considered conditioning, \(c\), in decoder-only language models. In common settings, a user provides a _prompt_ \(c\), which can be a context, an instruction, or the beginning of some text, and uses a language model to sample a sequence of continuation tokens \(w_{i}\) for the prompt \(c\). Since a good continuation is expected to correlate highly with the prompt, we consider the prompt as our conditioning.
Similarly to Classifier Guidance [24; 84; 76], we wish to generate a text \(w\) which has a high likelihood of starting with \(c\). We define the \(\gamma\)-reweighted distribution \(\widehat{\mathsf{P}}(w|c)\propto\mathsf{P}(w)\cdot\mathsf{P}(c|w)^{\gamma}\), and approximate it with CFG as \(\widehat{\mathsf{P}}(w|c)\propto\frac{\mathsf{P}(w|c)^{\gamma}}{\mathsf{P}(w)^{\gamma-1}}\).
In the case of autoregressive language models modeling \(\text{P}_{\theta}(w)=\prod_{i}^{N}\text{P}_{\theta}(w_{i}|w_{j<i})\), we can unroll the formulation and obtain Equation 2 again:
\[\widehat{\text{P}_{\theta}}(w|c)\propto\prod_{i=1}^{T}\widehat{\text{P}_{ \theta}}(w_{i}|w_{j<i},c)\propto\prod_{i=1}^{T}\frac{\text{P}_{\theta}(w_{i}|w_ {j<i},c)^{\gamma}}{\text{P}_{\theta}(w_{i}|w_{j<i})^{\gamma-1}}\propto\frac{ \text{P}_{\theta}(w|c)^{\gamma}}{\text{P}_{\theta}(w)^{\gamma-1}} \tag{6}\]
While conditioned diffusion models cannot predict unconditioned distributions without extra training, language models handle both \(\text{P}_{\theta}(w|c)\) and \(\text{P}_{\theta}(w)\) naturally, since they are trained on finite context windows and dropping the prefix \(c\) is therefore a natural feature. We can thus sample the \(i\)-th token \(w_{i}\) in logit space:
\[\log\widehat{\text{P}_{\theta}}(w_{i}|w_{j<i},c)=\log\text{P}_{\theta}(w_{i}|w_{j<i})+\gamma\big{(}\log\text{P}_{\theta}(w_{i}|w_{j<i},c)-\log\text{P}_{\theta}(w_{i}|w_{j<i})\big{)} \tag{7}\]
This formulation can be extended to accommodate "negative prompting", as in Equation 5; negative prompting in autoregressive LMs is addressed further in Section 3.4. In the next section, we introduce our experiments and explore the effects of CFG on different variations of prompting.
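As a concrete illustration of Equation 7, the sketch below performs CFG-guided greedy decoding with a Hugging Face–style causal LM. The model choice, the greedy decoding loop, and the detail of starting the unconditional context at the last prompt token (the implementation used in Section 3.1) are illustrative assumptions; any autoregressive LM exposing next-token logits could be substituted.

```python
# Minimal sketch of CFG sampling per Equation 7 (assumes transformers + torch are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def cfg_generate(prompt: str, gamma: float = 1.5, max_new_tokens: int = 40) -> str:
    cond = tok(prompt, return_tensors="pt").input_ids   # prompted (conditional) context
    uncond = cond[:, -1:]                               # unconditional context starts at the last prompt token
    for _ in range(max_new_tokens):
        logp_c = torch.log_softmax(model(cond).logits[:, -1], dim=-1)
        logp_u = torch.log_softmax(model(uncond).logits[:, -1], dim=-1)
        guided = logp_u + gamma * (logp_c - logp_u)     # Equation 7 in log-space
        next_id = guided.argmax(dim=-1, keepdim=True)   # greedy for brevity; sampling also works
        cond = torch.cat([cond, next_id], dim=-1)
        uncond = torch.cat([uncond, next_id], dim=-1)
    return tok.decode(cond[0], skip_special_tokens=True)
```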
## 3 Experiments
In this section we show that Classifier-Free Guidance reliably boosts performance across a variety of common prompting approaches. In Section 3.1 we show that CFG boosts zero-shot performance on a variety of standard NLP benchmarks, including achieving state-of-the-art performance on LAMBADA with LLaMa-7B. In Section 3.2 we apply CFG to _Chain-of-Thought prompts_[55; 82], an approach that allows the model to reason before answering the question. Next, we test the performance of CFG on _text-to-text generation prompts_ in Section 3.3. Finally, we show in Section 3.4 that CFG can be applied to _assistant_ prompts (i.e. prompts with system-instructions).
### Basic Prompting: Zero-Shot Prompts
To test _basic, zero-shot prompting_, we consider a suite of zero-shot benchmarks implemented in the Language Model Evaluation Harness [33], which includes closed-book QA [5; 39], common-sense reasoning tasks [85; 69; 18; 12; 20; 8; 19], and sentence-completion tasks [58]. In these settings, the desired completions are short (often 1-2 tokens), so risks of meandering [76] or degradation [38] are low. We hypothesize that the main impact of CFG in these settings will be to reduce variance in output choices, as we explore more in Section 5.
We evaluate the GPT-2 model family [62], the Pythia model family [11] and the LLaMA model family [78] using different guidance strengths across a range of standard NLP benchmarks using EleutherAI's Language Model Evaluation Harness [33], and implement CFG by starting the unconditional prompt at the last token of the initial prompt. The results are shown in Table 2. For better visualization, the charts for the GPT2 models, the Pythia models and the LLaMA models over the standard benchmarks are also shown in Figures 8, 9, and 10, respectively. We observe that, except for ARC (challenge) and Winogrande, the performance boost from CFG is nontrivial and consistent. The reasons for these discrepancies are still unknown.
Furthermore, we note that even the smallest LLaMA 7B model achieves \(81\%\) accuracy on the Lambada (OpenAI) zero-shot benchmark with \(\gamma=1.5\), outperforming the current zero-shot SOTA of PaLM-540B (\(77.9\%\)). Although CFG almost doubles the computation during inference, the comparison is still noteworthy given that other models with comparable performance on Lambada (OpenAI) have many more parameters and would still require more compute than LLaMA 7B with CFG. Taken together, we show that CFG significantly increases performance in basic prompting settings.
### Deliberative Prompting: Chain-of-Thought
A variation on _basic prompting_ has emerged recently called _Chain-of-Thought (CoT) prompting_[82]. In this setting, the model is prompted to generate a series of reasoning steps before giving an answer to the task: i.e. \(p(w_{cot},w_{a}|w_{p})\), where \(w_{cot}=w_{p+1}...w_{c-1}\) and \(w_{a}\) is the answer. \(w_{cot}\) is designed to mimic the human reasoning or deliberation process. CoT has been shown to perform well in complex reasoning tasks that cannot be fully addressed by model- or data-scaling [63]; however, as observed by [82], long reasoning chains can diverge and either fail to produce correct answers or fail to follow the expected result structure given by the prompt.
This setting poses a variation on the prior _base-case_ setting: now, the continuation \(w_{c}=[w_{cot},w_{a}]\) is expected to be longer than 1-2 tokens. We hypothesize that compared to basic zero-shot prompting explored in Section 3.1, CFG will _also_ be able to enforce better reasoning chains with less drift.
We evaluate the effectiveness of our proposed CFG method with respect to chain-of-thought prompting on two arithmetic reasoning tasks: GSM8K [21] and AQuA [48]. We follow the few-shot prompt and parsing setting of [80], applied to two open-source LLMs: WizardLM-30B [83] and Guanaco-65B [25]. As can be seen in Figures 3 and 15, using CFG increases the percentage of CoT chains that end in a valid, parseable answer. For low guidance strengths, this boosts model performance. For large values, however, although the model returns more valid results, the quality of the chains is also impacted, and overall performance degrades. A qualitative comparison is provided in Tables 14 and 15.
Figure 2: Results of general natural language benchmarks. In each cell, the first value is the result for \(\gamma=1\) (baseline) and the second value is the result for \(\gamma=1.5\) (ours). LLaMA 7B with CFG on Lambada zero-shot already outperforms vanilla PaLM 540B, Chinchilla 70B, and GPT-3 175B, tops the SOTA leaderboard for Lambada zero-shot as of June 26th, 2023
We have only scratched the surface of exploring CFG's interactions with CoT; for instance, instead of upweighting just \(w_{p}\), we might upweight \(w_{p},w_{cot}\), or other variations. We anticipate in future work being able to more fully test variations of CFG-weighting on different parts of the CoT process.
### Text-to-Text Prompts: Generation
In contrast to _basic prompting_ and _CoT-prompting_, where we ultimately expect a short answer, \(w_{a}\), many settings require lengthier continuations. In this section, we study a prompt setting where the quality of answers is highly dependent on the ability to stay on target over long sequences of text (both the prompt, \(w_{p}\), and the continuation, \(w_{c}\)). Here we focus on code generation, and in Appendix D.1 we report results on machine translation. In contrast to Sections 3.1 and 3.2, these tasks require longer-form completions, which test Classifier-Free Guidance's effectiveness in enforcing adherence to many different parts of the prompt.
#### 3.3.1 Program synthesis evaluations
Computer programs represent an important language-modeling case, as formal language differs from natural language in many ways including the use of well-defined structures. Testing Classifier-Free Guidance on code-related tasks improves the robustness of our hypothesis over different distributions of data. In exploratory experiments, we prompt GPT-J [79] and CodeGen-350M-mono [54] for small-scale code generation and observe positive results (see Appendix D.2). We then perform a thorough evaluation on the HumanEval benchmark [16].
#### 3.3.2 HumanEval benchmark
To systematically investigate the impact of Classifier-Free Guidance on code completion abilities, we evaluate models using different CFG strengths on the HumanEval benchmark [16]. The HumanEval benchmark contains \(164\) coding tasks in Python whose prompts are given by a function signature and a docstring. The model generates continuations of the prompt, and the resulting programs are tested against a set of per-task unit tests that evaluate their correctness. We choose CodeGen-350M-mono, CodeGen-2B-mono and CodeGen-6B-mono [54], which are specialized in Python program synthesis.1
Footnote 1: _Note: CodeGen-16B-mono is omitted due to the compute constraint._
Various CFG strengths 2 are tested on \(3\) different temperatures \(0.2,0.6,0.8\) with the evaluation metrics being pass@\(k\) for \(k=1,10,100\)3. Here we show the results for temperature\(=0.2\) in Table 2. The full results are summarized in Appendix C.3 in Table 5, 6 and 7 and Figure 12, 13 and 14.
Footnote 2: \(\gamma=1.0,1.1,1.25,1.5,1.75,2.0\)
Footnote 3: The definition of pass@\(k\) according to [16]: “\(k\) code samples are generated per problem, a problem is considered solved if any sample passes the unit tests, and the total fraction of problems solved is reported.”
We observe that low CFG (\(\gamma\leq 1.5\)) increases the pass@\(1\) rate uniformly4. High CFG (\(\gamma\geq 1.5\)) leads to a deterioration of performance. We also note that the improvement from CFG diminishes or harms performance at pass@\(k\) at high \(k\).
Footnote 4: Note that the effect of low CFG on the pass@\(1\) rate is consistent with the results of the general benchmarks in the previous section.
To further investigate the effect of CFG, we break down the pass@\(1\) evaluations on CodeGen-350M-mono for \(\gamma=1,1.25\) task-by-task 5. We notice that the number of tasks where CFG outperforms is greater than the number where it underperforms, for all temperatures \(0.2,0.6,0.8\) (see Table 4).
Figure 3: CFG impact on chain-of-thought prompting on the GSM8K dataset. For small CFG values, using CFG increases the percentage of chains which end in a valid answer structure while increasing model accuracy. For large values the invalid percentage remains small but the accuracy drops.
We also find that many tasks which exhibit small nonzero passing rates without CFG drop to a \(0\%\) rate with CFG. This explains the decreasing improvement of CFG in pass@\(k\) for large \(k\), as larger \(k\) significantly boosts the passing rate of difficult tasks where the rates are low but nonzero.
Overall, the consistent improvement on pass@\(1\) rates and the reduced effect on pass@\(100\) rates support our hypothesis that CFG strengthens the adherence to the prompt at the small cost of reduced variability and creativity.
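For reference, the pass@\(k\) numbers above are usually computed with the unbiased estimator introduced alongside HumanEval [16]; the short sketch below assumes \(n\) samples are drawn per task, of which \(c\) pass the unit tests.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k), computed as a stable running product."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# e.g. 200 samples for a task, 13 of which pass the unit tests
print(pass_at_k(200, 13, 1), pass_at_k(200, 13, 100))
```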
### Negative Prompting: Improving Assistants
Finally, we explore an addition to Classifier-Free Guidance called _negative prompting_. With negative prompting, the user specifies what they do _not_ want in the output (e.g. "low resolution", "bad hands, bad anatomy, amateur drawing" in text-to-image), which is then used to improve generation quality.
We explore this idea in the context of chatbots. Chatbots give us a setting where the _prompt_ is expanded into a _multi-stage prompt6_. In chatbots, the language model is prompted with a two-part prompt: (1) the instruction, \(w_{s}\) (sometimes called "system prompt") which may give contextual information (e.g. the "current date"), or behavioral guidelines (e.g. style, alignment, persona, etc.); and (2) \(w_{p}\), the user-prompt, or the user's query. See Table 1 for an example. Adherence becomes an even greater challenge, as our initial explorations show. We observe that systems like Alpaca [77, 59, 3] often ignore changes to their default system-prompt, and this may even expose models to attacks like prompt injection [36].
Footnote 6: We note that this extension to _basic-prompting_ stands as a mirror to _CoT-prompting_’s extension (Section 3.2). In _CoT-prompting_, the _continuation_ is expanded to a _multi-stage completion_; here, the _prompt_ is expanded.
We explore CFG with negative prompting to increase the success rate of different system prompts. We set the negative prompt \(\overline{c}\) to be the default system-prompt for the models we use (i.e. "The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.") and set \(c\) to be the edited prompt (e.g. "The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write _a sad_ response."). This approach not only makes the sampling more prompt-aware in general, but directly emphasizes the difference between _our_ system-prompt and the model's default system-prompt.
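A minimal sketch of how the two system-prompts enter the guided log-probabilities, following Equation 5; the `logprobs` helper, the newline-joined prompt format, and the edited system-prompt string are illustrative assumptions standing in for a single forward pass of the chat model (as in the earlier decoding sketch).

```python
# Hypothetical helper following Equation 5: guide away from the model's default
# system-prompt (negative conditioning) and towards the edited one (positive).
default_sys = ("The prompt below is a question to answer, a task to complete, or a "
               "conversation to respond to; decide which and write an appropriate response.")
edited_sys = default_sys.replace("an appropriate response", "a sad response")

def guided_logprobs(logprobs, user_prompt: str, continuation: str, gamma: float = 3.0):
    pos = logprobs(edited_sys + "\n" + user_prompt + continuation)   # conditioning c
    neg = logprobs(default_sys + "\n" + user_prompt + continuation)  # negative prompt c-bar
    return neg + gamma * (pos - neg)                                 # Equation 5 in log-space
```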
To test this approach with chatbots, we generate system-prompts, \(n_{c}=25\), and user-prompts, \(n_{p}=46\), and sample \(1740\) random combinations of them. An example is shown in Table 1 (in Appendix G we include the full list of \(c\) and \(p\) we use). We use GPT4All-J v1.3-jazzy to generate two completions for each sampled combination: the first is sampled without CFG, and the second is sampled with CFG, with a guidance strength randomly chosen from \(\{1,2,3,4,5,6\}\). Our hypothesis is that CFG increases system-prompt following, ideally without hurting the relevance to the user input.
We run a human preference study on our sampled continuations, where participants are shown both, blindly, and asked to assess two things: A. which output better follows the system-prompt \(c\), and B. which output better follows the user-prompt \(p\). Our results in Figure 5 show compelling evidence that CFG emphasizes the difference between \(c\) and \(\overline{c}\) more than sampling with \(c\) alone. There is a clear peak at \(\gamma=3\), with a 75% system-prompt-following preference over \(\gamma=1\) and undegraded user-prompt relevance (52%).
## 4 Computational Cost Analysis
In the previous section we showed improvements across a wide array of benchmarks and contexts. However, since classifier-free guidance requires two passes through the network, users who are compute-constrained rather than VRAM constrained might wonder if CFG is interesting to them at all, and if they should not run a model twice as big instead.
To answer this question, we calculate the FLOPs for each of the benchmark experiments that we ran in Section 3.1. We then compare across model sizes, with and without CFG. We conclude with the surprising finding that, across 5 out of 9 tasks, there is a statistically _insignificant difference_ between using CFG and using vanilla prompting with a model of twice the size at \(p=.01\), according to ANCOVA regression analysis [67]. Of the significantly different tasks, 2 favor CFG and 2 favor vanilla. See Appendix C.2, specifically Figure 11, for more details.
In other words, and most significantly, this indicates that, overall, a model using CFG can generally perform just as well as a model twice as large. This has enormous implications for training budgets and inference latency due to limited VRAM usage, which we seek to explore in future work.
## 5 Explaining the Success of Classifier-Free Guidance
In this section, we try to derive insights on the impact of Classifier-Free Guidance on generation, both quantitatively and qualitatively. We sample a dataset of \(32,902\) datapoints from the P3 dataset [70] and use the Falcon-7b-Base model family [2] as an exploratory model. Our goal is to analyze the logit distributions - we describe how in the following sections. Many of our comparisons are done with reference to an instruction-tuned model, for which we use the Falcon-7b-Instruct version. We replicate our findings on other models and datasets as well: the Open-Assistant Dataset [42] and Redpajama-3b model family7.
Footnote 7: [https://www.together.xyz/blog/redpajama](https://www.together.xyz/blog/redpajama)
### Classifier-Free Guidance's Effect on Sampling Entropy
We suspect that CFG, by focusing \(\text{P}(y|x)\) on the prompt, will reduce the entropy of the logit distribution. The CFG entropy distribution is significantly lower across generation time-steps than that of vanilla prompting, with a mean of 4.7 vs. 5.4 (see Figure 5(a)). The effect of this is to restrict the number of tokens in the top-p=90% of the vocabulary distribution (see Figure 5(b)). We observe qualitatively, as shown in Section 5.3, that the top tokens do not shift too much, but they do re-order to some extent, which shows that CFG is not simply having the same effect as the temperature parameter.
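Both statistics used here, the next-token entropy and the size of the top-p=0.9 token set, can be computed directly from the per-step logits; in the sketch below the logits tensor stands in for either the vanilla or the CFG-combined distribution.

```python
import torch

def entropy_and_top_p_size(logits: torch.Tensor, p: float = 0.9):
    """logits: (vocab_size,) next-token logits.
    Returns the distribution's entropy and the number of tokens in its top-p set."""
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum()
    sorted_probs, _ = probs.sort(descending=True)
    top_p_size = int((sorted_probs.cumsum(dim=0) < p).sum().item()) + 1  # smallest set with mass >= p
    return entropy.item(), top_p_size
```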
### CFG's Relation to Instruction Tuning
Our next question: _how_ is Classifier-Free Guidance affecting the vocabulary distribution? We attempt to answer this question quantitatively, hypothesizing that CFG has similar effects to instruction-tuning, which we assume trains a model to focus on the prompt. We find that both CFG and Instruction-Tuned model variants have similar entropies
across generation samples. However, as shown in Figure 5(b) the vocabulary distributions across our samples are largely not overlapping.
We find that, overall, our hypothesis about the similarity is wrong: CFG is not having a similar effect on the vocabulary logits as instruction-tuning. To explore, we seek to derive insight from edge-cases where it does. We look for characteristics to explain when CFG _is_ similar to Instruction-Tuning (in terms of top-p overlap). One case pops out: when the prompt is longer, CFG agrees more; we observe a significant Spearman correlation of \(r_{s}=.05\) between prompt length and Instruction/CFG agreement. We also observe small but significant correlations between perplexity and agreement. As shown in Table 7, harder phrases for Instruction-Tuned models are typically where CFG and Instruction-Tuned models align. We conclude that CFG is altering the model in ways that might complement instruction-tuning, opening the door to future explorations.
### Visualizing Classifier-Free Guidance
Finally, we provide qualitative insights into the reordering of the vocabulary after Classifier-Free Guidance is applied. We note that the guidance equation can be rewritten as
\[\log\text{P}_{\gamma}(w_{t}|w_{<t},c)=\log\text{P}(w_{t}|w_{<t},\overline{c})+\gamma\big{(}\log\text{P}(w_{t}|w_{<t},c)-\log\text{P}(w_{t}|w_{<t},\overline{c})\big{)} \tag{8}\]
We propose, at each timestep, to visualize the vocabulary ranked by the difference \(\log\text{P}(w_{t}|w_{<t},c)-\log\text{P}(w_{t}|w_{<t},\overline{c})\). This shows the impact of the method, qualitatively, by revealing the tokens that are encouraged or discouraged the
Figure 6: We show how CFG alters the logit distribution of the vanilla prompted model, \(\text{P}(y|x)\). CFG lowers the entropy to a level roughly similar to the instruction-tuned model variant. CFG shares roughly 50% of the tokens in top-p=\(0.9\) with the vanilla \(\text{P}(y|x)\) model.
Figure 7: We seek to identify _when_ CFG is similar to instruction-tuning. Models mostly agree on the difficulty of input sentences, and in cases where they do not, CFG and Instruction-tuning have similar top-p overlaps.
most. In Figure 3, we prompt a model with \(c=\)"The dragon flew over Paris, France", \(\overline{c}=\emptyset\) and observe that tokens about dragons and Paris get upweighted while tokens about other locations ("Queensland"), dates ("1913"), or topics ("hostages", "voyages") get downweighted. This confirms our initial assumptions, as we observe CFG encouraging tokens related to the prompt and discouraging tokens unrelated to it.
We find this visualization approach to be a useful prompt engineering tool, by using the new prompt under testing as \(c\) and setting \(\overline{c}\) as the current baseline prompt. The visualization shows the differential impact over the whole vocabulary on the next token prediction, in an interpretable way.
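A sketch of the visualization itself: at a given decoding step, rank the vocabulary by the difference between the conditional and negative-prompt (or unconditional) log-probabilities; the tokenizer argument and the choice of \(k=5\) mirror the table layout and are assumptions for illustration.

```python
import torch

def most_shifted_tokens(logp_cond: torch.Tensor, logp_neg: torch.Tensor, tokenizer, k: int = 5):
    """Rank the vocabulary by log P(w|context, c) - log P(w|context, c_bar) at one step
    and return the k most upweighted and k most downweighted tokens."""
    diff = logp_cond - logp_neg                      # the quantity CFG scales by gamma
    top = torch.topk(diff, k).indices.tolist()
    bottom = torch.topk(-diff, k).indices.tolist()
    decode = lambda ids: [tokenizer.decode([i]) for i in ids]
    return decode(top), decode(bottom)
```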
## 6 Conclusion
We have shown that Classifier-Free Guidance, which was originally conceived of in text-to-image applications, can be an effective way of increasing adherence to the prompt in autoregressive language modeling. In contrast to text-to-vision, CFG in autoregressive language modeling works out-of-the-box, without the need to further train the model. We have shown that CFG can boost performance across an array of canonical benchmarks in NLP that involve variations of the prompt: _basic prompting_, _chain-of-thought prompting_, _text-to-text prompting_ and _chatbot prompting_. Finally, we sought to explain the effects of CFG by showing it decreased sampling entropy, but not in the same ways that Instruction-tuned models do. Ultimately, we leave for future work the exact effects that CFG is having, but we propose qualitative visualizations that confirm our intuitions around prompt adherence.
Our work also integrates into a growing body of inference techniques aimed at perturbing the logit distributions of an LM [45, 73]. We demonstrate that doubling the inference FLOPs with CFG brings the performance of a model about twice the size. This allows training smaller models, which can be run on smaller hardware and are cheaper to train.
Our work faces the following limitations: CFG requires tweaking and exploration, since \(\gamma\) values that work in one context (e.g. long-form generation) might be poorly suited for another. It is also possible that CFG might be misused. We have not tested the effects of CFG when used in conjunction with malicious strategies for hacking language models, including but not limited to prompt injection and prompts aimed at overriding alignment. It is possible that there are unforeseen effects induced by an increased adherence to parts of the prompt. We tried to explore this at length, both quantitatively and qualitatively, and we designed tasks that might reveal such behavior. However, we cannot conclude this method is risk-free. We advocate for standardized benchmarks aimed more squarely at language-model risk (including, possibly, pairs of models along with known prompt injections). Such standardized benchmarks could help us unit-test an advancement like CFG before releasing it into the wild.
[Table: per-token CFG visualization for the prompt "The dragon flew over Paris, France". For each token of the generated continuation (left column), the five most upweighted (top1 to top5) and five most downweighted (bottom1 to bottom5) vocabulary tokens are listed; dragon- and Paris-related tokens are upweighted, while tokens about unrelated locations, dates, and topics are downweighted.]
#### Acknowledgements
We are grateful to Stability and CoreWeave for providing the compute to run the evaluations.
We also thank the volunteers who took part in the GPT4All experiment.
Alexander Spangher would like to thank Bloomberg News for a 4 year PhD fellowship that generously funds his research.
|
2309.06156 | $D_{(s)}-$ mesons semileptonic form factors in the 4-flavor holographic
QCD | We investigate semileptonic form factors of $D_{(s)}$ meson from a modified
soft-wall 4-flavor holographic model. The model successfully reproduces the
masses and decay constants of various mesons, including $\rho$, $K^*$, $D^*$,
$D_s^*$, $a_1$, $K_1$, $f_1$, $D_1$,$D_{s1}$, $\pi$, $K$, $\eta$, $D$, and
$D_s$. Moreover, we study the semileptonic decay processes $D^{+} \to (\pi, K,
\eta) l^{+} \nu_{l}$ and $D_{s}^{+} \to ( K, \eta) l^{+} \nu_{l}$, associated
with the vector meson exchange, as well as $D_{(s)}^{+} \to K^{} l^{+}
\nu_{l}$, associated with the vector and axial vector meson exchange. The form
factors $f_{+}(q^{2})$ for $D \to\pi$ and $D_{(s)}\to K$ decays agree
excellently with experimental and lattice data, outperforming other theoretical
approaches. The $f_{+}(q^{2})$ form factor for $D^{+} \to \eta $ is compatible
with experimental data, while a slight discrepancy is observed for $D_{s}^{+}
\to \eta $ at large $q^{2}$. Additionally, we predict the vector form factors
$V(q^{2})$ and $A_{1}(q^{2})$ for $D \to K^{*}$ and $D_{s} \to K^{*}$ decays,
respectively. The results agree well with other approaches and lattice data at
maximum recoil ($q^{2}=0$). | Hiwa A. Ahmed, Yidian Chen, Mei Huang | 2023-09-12T11:58:28Z | http://arxiv.org/abs/2309.06156v1 | # \(D_{(s)}-\) mesons semileptonic form factors in the 4-flavor holographic QCD
###### Abstract
We investigate semileptonic form factors of the \(D_{(s)}\) meson from a modified soft-wall 4-flavor holographic model. The model successfully reproduces the masses and decay constants of various mesons, including \(\rho\), \(K^{*}\), \(D^{*}\), \(D^{*}_{s}\), \(a_{1}\), \(K_{1}\), \(f_{1}\), \(D_{1}\), \(D_{s1}\), \(\pi\), \(K\), \(\eta\), \(D\), and \(D_{s}\). Moreover, we study the semileptonic decay processes \(D^{+}\to(\pi,K,\eta)l^{+}\nu_{l}\) and \(D^{+}_{s}\to(K,\eta)l^{+}\nu_{l}\), associated with the vector meson exchange, as well as \(D^{+}_{(s)}\to K^{*}l^{+}\nu_{l}\), associated with the vector and axial vector meson exchange. The form factors \(f_{+}(q^{2})\) for \(D\to\pi\) and \(D_{(s)}\to K\) decays agree excellently with experimental and lattice data, outperforming other theoretical approaches. The \(f_{+}(q^{2})\) form factor for \(D^{+}\to\eta\) is compatible with experimental data, while a slight discrepancy is observed for \(D^{+}_{s}\to\eta\) at large \(q^{2}\). Additionally, we predict the vector form factors \(V(q^{2})\) and \(A_{1}(q^{2})\) for \(D\to K^{*}\) and \(D_{s}\to K^{*}\) decays, respectively. The results agree well with other approaches and lattice data at maximum recoil (\(q^{2}=0\)).
## I Introduction
Semileptonic weak decays of mesons play a vital role in our comprehension of the standard model (SM), as they provide the most direct way to determine the Cabibbo-Kobayashi-Maskawa (CKM) matrix [1; 2] elements from experimental data. In particular, semileptonic \(D_{(s)}\) meson decays offer a valuable avenue for investigating interactions within the charm sector: by measuring the decay rates, it becomes possible to directly determine the CKM matrix elements \(|V_{cd}|\) and \(|V_{cs}|\). For instance, the values of \(|V_{cd}|\) and \(|V_{cs}|\) are found from measurements of the decays \(D\to\pi l\nu_{l}\) and \(D\to Kl\nu_{l}\), respectively, by the Belle [3], BaBar [4; 5], CLEO [6], and BESIII [7] collaborations. It is worth noting that extracting the CKM matrix elements is not straightforward; rather, it involves the nonperturbative strong effects appearing in the transition from the initial state to the final state, which are parameterized by the hadronic invariant form factors. More recently, the BESIII collaboration has reported several semileptonic weak decays, such as \(D^{+}\to K^{-}\pi^{+}e^{+}\nu_{e}\)[8], \(D^{+}_{s}\to K^{0}e^{+}\nu_{e}\) and \(D^{+}_{s}\to K^{*0}e^{+}\nu_{e}\) in Ref. [9], \(D^{+}_{s}\to\eta^{(\prime)}e^{+}\nu_{e}\) in Ref. [10], and \(D^{+}\to\eta\mu^{+}\nu_{\mu}\) in [11]. Since the semileptonic decays involve nonperturbative hadronic form factors, one cannot carry out the calculation directly in quantum chromodynamics (QCD) perturbation theory, and a nonperturbative method is needed; see Ref. [12] for a list of theoretical approaches.
Apart from other nonperturbative approaches, holographic QCD models have been applied to describe the structure of hadrons. Based on the anti-de Sitter/conformal field theory (AdS/CFT) correspondence discussed in Refs. [13; 14], a bottom-up holographic QCD model at low energy was established in the works of Refs. [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. These works started from QCD and constructed a five-dimensional dual with the features of dynamical chiral symmetry breaking. In the two-flavor system, the masses of the up and down quarks are small, and an \(SU(2)\) flavor symmetry is preserved. However, when the model is extended to three flavors [29] and four flavors [30; 31; 32; 33; 34], the flavor symmetry is broken, especially when the charm quark is included. The first attempt to study semileptonic decays was made in Ref. [29], where the \(K_{l3}\) form factors that describe the decays \(K\to\pi l\nu_{l}\) were calculated. More recently, the semileptonic \(D\) meson decays to vector, axial vector, and scalar mesons were investigated in the hard-wall holographic approach [32].
In the present work, we use a 4-flavor bottom-up holographic framework to study the semileptonic decays. In the original soft-wall holographic model [16], the quark condensate is proportional to the quark mass, which is in contradiction with QCD; to overcome this issue, a higher-order potential is added to the 5D action [35]. Therefore, we adopt the modified 4-flavor soft-wall model [33] instead of the original soft-wall model. We then calculate the ground-state masses and decay constants of the \(\pi\), \(K\), \(\eta\), \(D\), \(D_{s}\), \(\rho\), \(K^{*}\), \(\omega\), \(D^{*}\), \(D^{*}_{s}\), \(a_{1}\), \(K_{1}\), \(f_{1}\), \(D_{1}\), and \(D_{s1}\) mesons. Furthermore, we compute the form factors of the semileptonic decays \(D^{+}\to(\pi,K,\eta,K^{*})l^{+}\nu_{l}\) and \(D^{+}_{s}\to(K,\eta,K^{*})l^{+}\nu_{l}\), which are induced by the decay of the charm quark into a light quark, \(c\to d(s)l\nu_{l}\). Since the maximum-recoil form factors are essential for extracting the CKM matrix elements, and are also observable in experiment, we compare our determined values with the experimental and lattice QCD data.
This work is organized as follows. In section II, we revisit the formalism of the modified soft-wall holographic QCD model for \(N_{f}=4\) flavors and derive the equations of motion. In section III, we describe the three-point interactions and deduce the semileptonic form factors from the three-point functions obtained from the cubic-order 5D action. A detailed comparison of the numerical results with the experimental data, lattice QCD, and other theoretical approaches is provided in section IV. Finally, we briefly conclude our work in section V.
## II The 5D action and equations of motion
In this section, we revisit the formalism of the four-flavor soft-wall holographic QCD model [33; 34]. The five-dimensional metric of the AdS space is given by
\[ds^{2}=g_{MN}dx^{M}dx^{N}=\frac{1}{z^{2}}\left(\eta_{\mu\nu}dx^{\mu}dx^{\nu}+dz^ {2}\right), \tag{1}\]
where \(\eta_{\mu\nu}=\text{diag}\left[-1,1,1,1\right]\) is the four-dimensional Minkowski metric, and \(z\) is the fifth dimension, with units of inverse energy. Note that the Latin indices \(M\) and \(N\) run over \(0,1,2,3,4\), and the Greek indices are defined as \(\mu\), \(\nu=0,1,2,3\). According to the holographic dictionary, there is a correspondence between 4D operators and 5D gauge fields [15]. The operators and corresponding gauge fields incorporated in the chiral dynamics are defined by
\[\begin{split}& J^{a}_{R/L\mu}=\bar{\psi}_{qR/L}\gamma_{\mu}t^{a} \psi_{qR/L}\to R^{a}_{\mu}/L^{a}_{\mu}\\ & J^{S}=\bar{\psi}_{qL}\psi_{qR}\to X.\end{split} \tag{2}\]
where \(J^{a}_{R/L\mu}\) are the right/left-handed currents, which correspond to the \(R^{a}_{\mu}\) and \(L^{a}_{\mu}\) gauge fields, and the quark bilinear \(\bar{\psi}_{qL}\psi_{qR}\) corresponds to the complex scalar field \(X\). Note that \(t^{a}\) with \(a=1,2,...,N_{f}^{2}-1\) are the generators of the \(SU(N_{f})\) group. The general five-dimensional action is written as
\[\begin{split} S_{M}&=-\int_{\epsilon}^{z_{m}}d^{5}x\sqrt{-g}e^{-\phi}\operatorname{Tr}\left\{\left(D^{M}X\right)^{\dagger}\left(D_{M}X\right)+M_{5}^{2}|X|^{2}-\kappa|X|^{4}\right.\\ &\left.+\frac{1}{2g_{5}^{2}}\left(V^{MN}V_{MN}+A^{MN}A_{MN}\right)\right\},\end{split} \tag{3}\]
where \(D^{M}X=\partial_{M}X-i\left[V_{M},X\right]-i\{A_{M},X\}\) is the covariant derivative of the scalar field \(X\), \(M_{5}^{2}=(\Delta-p)(\Delta+p-4)=-3\) is obtained by taking the conformal dimension of the scalar field operator \(\Delta=3\) and \(p=0\), \(\kappa\) is a dimensionless parameter to be determined, and \(\epsilon\) and \(z_{m}\) are the UV and IR cutoffs of the model. The coupling constant \(g_{5}\) is related to the number of colors and is fixed to \(g_{5}=2\pi\) for \(N_{c}=3\) [15]. The gauge field strengths \(V_{MN}\) and \(A_{MN}\) are defined by
\[\begin{split}& V_{MN}=\partial_{M}V_{N}-\partial_{N}V_{M}-i\left[V_{M},V_{N}\right]-i\left[A_{M},A_{N}\right],\\ & A_{MN}=\partial_{M}A_{N}-\partial_{N}A_{M}-i\left[V_{M},A_{N} \right]-i\left[A_{M},V_{N}\right],\end{split} \tag{4}\]
where the vector and axial vector fields are written in terms of the right- and left-handed gauge fields as \(V_{M}=\frac{1}{2}(L_{M}+R_{M})\) and \(A_{M}=\frac{1}{2}(L_{M}-R_{M})\), respectively. The fields \(V_{M}\) and \(A_{M}\) can be expanded as \(V_{M}^{a}t^{a}\) and \(A_{M}^{a}t^{a}\), respectively, and the generators satisfy \(Tr(t^{a}t^{b})=\frac{1}{2}\delta^{ab}\). The vector, axial vector, and pseudoscalar fields are described by \(4\times 4\) matrices,
\[V=V^{a}t^{a}=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}\frac{\rho^{0}}{ \sqrt{2}}+\frac{\omega^{\prime}}{\sqrt{6}}+\frac{\psi}{\sqrt{12}}&\rho^{+}&K^{*+ }&\bar{D}^{*0}\\ \rho^{-}&-\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega^{\prime}}{\sqrt{6}}+\frac{ \psi}{\sqrt{12}}&K^{*0}&D^{*-}\\ K^{*-}&\bar{K}^{*0}&-\sqrt{\frac{2}{3}}\omega^{\prime}+\frac{\psi}{\sqrt{12}}&D _{s}^{*-}\\ D^{*0}&D^{*+}&D_{s}^{*+}&-\frac{3}{\sqrt{12}}\psi\end{array}\right), \tag{5}\]
\[A=A^{a}t^{a}=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}\frac{a_{1}^{0}}{\sqrt{2 }}+\frac{f_{1}}{\sqrt{6}}+\frac{x_{c1}}{\sqrt{12}}&a_{1}^{+}&K_{1}^{+}&\bar{D}_{1 }^{0}\\ a_{1}^{-}&-\frac{a_{1}^{0}}{\sqrt{2}}+\frac{f_{1}}{\sqrt{6}}+\frac{x_{c1}}{ \sqrt{12}}&K_{1}^{0}&D_{1}^{-}\\ K_{1}^{-}&\bar{K}_{1}^{0}&-\sqrt{\frac{2}{3}}f_{1}+\frac{x_{c1}}{\sqrt{12}}&D_{ s1}^{-}\\ D_{1}^{0}&D_{1}^{+}&D_{s1}^{+}&-\frac{3}{\sqrt{12}}\chi_{c1}\end{array}\right), \tag{6}\]
\[\pi=\pi^{a}t^{a}=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}\frac{\pi^{0}}{ \sqrt{2}}+\frac{\eta}{\sqrt{6}}+\frac{\eta_{c}}{\sqrt{12}}&\pi^{+}&K^{+}&\bar{D }^{0}\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}+\frac{\eta_{c}}{ \sqrt{12}}&K^{0}&D^{-}\\ K^{-}&\bar{K}^{0}&-\sqrt{\frac{2}{3}}\eta+\frac{\eta_{c}}{\sqrt{12}}&D_{s}^{- }\\ D^{0}&D^{+}&D_{s}^{+}&-\frac{3}{\sqrt{12}}\eta_{c}\end{array}\right). \tag{7}\]
Additionally, the complex scalar field in Eq. (3) is expressed as
\[X=e^{i\pi^{a}t^{a}}X_{0}e^{i\pi^{a}t^{a}} \tag{8}\]
where \(X_{0}=\frac{1}{2}\operatorname{diag}\left[v_{l}(z),v_{l}(z),v_{s}(z),v_{c}(z)\right]\), with \(v_{l,s,c}(z)\) the vacuum expectation values, and \(\pi^{a}\) is the pseudoscalar field. Finally, the dilaton field \(\phi\) in Eq. (3) depends only on the fifth dimension \(z\); its explicit form is discussed later in this section.
The equations of motion for each field can be obtained from varying the action in Eq. (3) with respect to the corresponding field. In order to find the vacuum expectation value, one needs to remove all the fields and keep only the background. The zeroth order of the action for the background field is given by
\[\begin{split} S^{(0)}=&-\frac{1}{4}\int_{\epsilon}^{z _{m}}d^{5}x\left\{\frac{e^{-\phi(z)}}{z^{3}}\left(2v_{l}^{\prime}(z)v_{l}^{ \prime}(z)+v_{s}^{\prime}(z)v_{s}^{\prime}(z)+v_{c}^{\prime}(z)v_{c}^{\prime }(z)\right)-\right.\\ &\left.\frac{e^{-\phi(z)}}{z^{5}}\left(3\left(2v_{l}(z)^{2}+v_{s}( z)^{2}+v_{c}(z)^{2}\right)-\frac{\kappa}{4}\left(2v_{l}(z)^{4}+v_{s}(z)^{4}+v_{c}(z)^{4 }\right)\right)\right\}.\end{split} \tag{9}\]
The equation of motion for the scalar vacuum expectation value \(v_{l,s,c}(z)\) is obtained as
\[-\frac{z^{3}}{e^{-\phi}}\partial_{z}\frac{e^{-\phi}}{z^{3}}\partial_{z}v_{q}(z )-\frac{3}{z^{2}}v_{q}(z)-\frac{\kappa}{2z^{2}}v_{q}^{3}(z)=0, \tag{10}\]
where \(q=l,s,c\). The solution for the scalar vacuum expectation value \(v_{l,s,c}(z)\) that preserves the UV and IR asymptotic behavior is provided and justified in Ref. [35]
\[v(z)=az+bz\tanh\left(cz^{2}\right), \tag{11}\]
with the definitions for the parameters \(a\), \(b\), and \(c\) as
\[a=\frac{\sqrt{3}m_{q}}{g_{5}},\quad b=\sqrt{\frac{4\mu^{2}}{\kappa}}-a,\quad c =\frac{g_{5}\sigma}{\sqrt{3}b},\]
where \(m_{q}\) is the quark mass and \(\sigma\) is the chiral condensate. It is worth noting that the UV and IR asymptotic behaviour of \(v(z)\) can be obtained by expanding Eq. (11) at small and large \(z\) as
\[v(z\to 0)=az+bcz^{3}+\mathcal{O}(z^{5}), \tag{12}\]
\[v(z\rightarrow\infty)=(a+b)z=\sqrt{\frac{4\mu^{2}}{\kappa}}z. \tag{13}\]
In the initial soft wall model [16], the dilaton field was originally characterized by the expression \(\phi(z\rightarrow\infty)=\mu^{2}z^{2}\). Here, the parameter \(\mu\) is connected to the Regge slope, establishing the mass scale for the meson spectrum and
ensuring the presence of linear mass trajectories. Moreover, one can find the dilaton profile by substituting equation (11) into equation (10) and solving for the \(\phi\) field [35]. However, in this approach the profile of the dilaton field depends on the quark flavor and differs for each value of \(v_{q}\). While this flavor dependence of the dilaton field poses no issue when exclusively considering light quarks, it becomes evident and inevitable when addressing heavy quarks like the charm quark [33]. In Ref. [36] a modified dilaton profile was proposed, with a negative quadratic dilaton in the UV and a positive quadratic dilaton in the IR, which is different from the one obtained in Refs. [33; 35], where a positive quadratic dilaton is required at both UV and IR. In our present study, focusing solely on the IR asymptotic behavior of the \(\phi\) field suffices for the numerical computations, thereby obviating the need to address the flavor dependence of the dilaton profile.
The equations of motion for the vector, axial vector, and pseudoscalar mesons can be obtained from the expansion of the action in Eq. (3) up to second order,
\[\begin{split} S^{(2)}=&-\int d^{5}x\left\{\eta^{MN} \frac{e^{-\phi(z)}}{z^{3}}\left((\partial_{M}\pi^{a}-A_{M}^{a})\left(\partial _{N}\pi^{b}-A_{N}^{b}\right)M_{A}^{ab}-V_{M}^{a}V_{N}^{b}M_{V}^{ab}\right) \right.\\ &\left.+\frac{e^{-\phi(z)}}{4g_{5}^{2}z}\eta^{MP}\eta^{NQ}\left( V_{MN}^{a}V_{PQ}^{b}+A_{MN}^{a}A_{PQ}^{b}\right)\right\},\end{split} \tag{14}\]
where \(\eta^{MN}\) is the metric in 5-D Minkowski space, \(V^{a}(A^{a})_{MN}=\partial_{M}V^{a}(A^{a})_{N}-\partial_{N}V^{a}(A^{a})_{M}\). The mass terms in the action \(M_{A}^{ab}\) and \(M_{V}^{ab}\) are defined by
\[\begin{split} M_{A}^{ab}\delta^{ab}&=Tr\left(\{t^{a},X_{0}\}\{t^{b},X_{0}\}\right),\\ M_{V}^{ab}\delta^{ab}&=Tr\left([t^{a},X_{0}][t^{b},X_{0}]\right),\end{split} \tag{15}\]
where \(M_{V}^{ab}\) is zero for \(a,b=1,2,3,8,15\). The vector field in Eq. (14) satisfies the following equation of motion,
\[-\partial^{M}\frac{e^{-\phi}}{g_{5}^{2}z}V_{MN}^{a}-\frac{e^{-\phi}}{z^{3}}\left(M_{V}^{aa}V_{N}^{a}\right)=0. \tag{16}\]
The gauge choice for the vector field is set to \(V_{z}^{a}=0\) and \(\partial^{\mu}V_{\mu\perp}^{a}=0\) where \(V_{\mu\perp}^{a}\) is the transverse part of the vector field \(V_{\mu}^{a}=V_{\mu\perp}^{a}+V_{\mu\parallel}^{a}\). Considering the gauge fixing and then applying the 4D Fourier transformation, Eq. (16) reduces to the following
\[\left(-\frac{z}{e^{-\phi}}\partial_{z}\frac{e^{-\phi}}{z}\partial_{z}-\frac{2 g_{5}^{2}M_{V}^{aa}}{z^{2}}\right)V_{\mu\perp}^{a}(q,z)=-q^{2}V_{\mu\perp}^{a} (q,z). \tag{17}\]
where \(V_{\mu\perp}^{a}(q,z)\) is the 4D Fourier transform of \(V_{\mu\perp}^{a}(x,z)\). According to the AdS/CFT correspondence, the transverse part of the vector field can be written in terms of the bulk-to-boundary propagator and its boundary value in the UV, which acts as the Fourier transform of the source of the 4D conserved vector current operator, \(V_{\mu\perp}^{a}(q,z)=V_{\mu\perp}^{0a}(q)\mathcal{V}^{a}(q^{2},z)\). The bulk-to-boundary propagator \(\mathcal{V}^{a}(q^{2},z)\) satisfies the equation of motion (17) with the boundary conditions \(\mathcal{V}^{a}(q^{2},\epsilon)=1\) and \(\partial_{z}\mathcal{V}^{a}(q^{2},z_{m})=0\). Moreover, the bulk-to-boundary propagator can be written as a sum over the meson poles
\[\mathcal{V}^{a}(q^{2},z)=\sum_{n}\frac{-g_{5}f_{V^{n}}^{a}\psi_{V^{n}}^{a}(z)} {q^{2}-m_{V^{n}}^{a^{2}}}, \tag{18}\]
where \(\psi_{V^{n}}(z)\) is a wavefunction which satisfies Eq. (17) with the boundary conditions \(\psi_{V^{n}}(\epsilon)=0\) and \(\partial_{z}\psi_{V^{n}}(z_{m})=0\), and normalized as \(\int dz\frac{e^{-\phi}}{z}\psi_{V}^{n}(z)\psi_{V}^{n}(z)=\delta^{nm}\), and \(f_{V^{n}}^{a}=|\partial_{z}\psi_{V^{n}}^{a}(\epsilon)/(g_{5}\epsilon)|\) is the decay constant of the \(n^{th}\) mode of the vector meson [15].
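To make the numerical procedure behind Eqs. (17)-(18) concrete, the sketch below diagonalizes the equivalent Schrödinger problem for the \(\rho\) channel (\(M_{V}^{aa}=0\)), assuming the simple quadratic dilaton \(\phi=\mu^{2}z^{2}\) rather than the modified profile of the text, and pushing the IR boundary to large \(z\) with Dirichlet conditions instead of imposing \(\partial_{z}\psi(z_{m})=0\). In this limit the spectrum reduces to the known pure soft-wall trajectory \(m_{n}^{2}=4\mu^{2}(n+1)\), so the code serves only as a benchmark, not as an implementation of the modified model.

```python
import numpy as np

# Pure soft-wall benchmark for Eq. (17) in the rho channel (M_V^{aa} = 0), assuming phi = mu^2 z^2.
# Substituting psi = exp((phi + ln z)/2) chi turns Eq. (17) into -chi'' + V chi = m^2 chi
# with V = mu^4 z^2 + 3/(4 z^2), whose exact spectrum is m_n^2 = 4 mu^2 (n + 1).
mu = 0.430                                   # GeV, Regge-slope parameter of Table 1
N, zmax = 2000, 14.0                         # radial grid in GeV^-1
h = zmax / (N + 1)
z = h * np.arange(1, N + 1)                  # interior points; Dirichlet ends at z = 0 and z = zmax
V = mu**4 * z**2 + 0.75 / z**2

# finite-difference Hamiltonian -d^2/dz^2 + V(z)
H = np.diag(2.0 / h**2 + V)
H += np.diag(-np.ones(N - 1) / h**2, k=1)
H += np.diag(-np.ones(N - 1) / h**2, k=-1)

m2 = np.linalg.eigvalsh(H)[:3]
print("m_n^2 (GeV^2):", m2)                              # expect roughly 0.74, 1.48, 2.22
print("ground-state mass (MeV):", 1e3 * np.sqrt(m2[0]))  # ~2*mu = 860 MeV in this limit
```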
Similar to the vector field, the axial vector field \(A_{\mu}^{a}\) can be decomposed into transverse and longitudinal parts, \(A_{\mu}^{a}=A_{\mu\perp}^{a}+A_{\mu\parallel}^{a}\), where the longitudinal part \(A_{\mu\parallel}^{a}=\partial_{\mu}\phi^{a}\) contributes to the pseudoscalar mesons. The equation of motion derived from Eq. (3) is given by
\[\left(-\frac{z}{e^{-\phi}}\partial_{z}\frac{e^{-\phi}}{z}\partial_{z}+\frac{2 g_{5}^{2}M_{A}^{aa}}{z^{2}}\right)A_{\mu\perp}^{a}(q,z)=-q^{2}A_{\mu\perp}^{a}(q,z), \tag{19}\]
with the gauge conditions \(A_{z}^{a}=0\) and \(\partial^{\mu}A_{\mu\perp}^{a}=0\), respectively. The bulk-to-boundary propagator of the axial vector field \(\mathcal{A}^{a}(q^{2},z)\) satisfies the boundary conditions \(\mathcal{A}^{a}(q^{2},\epsilon)=0\) and \(\partial_{z}\mathcal{A}^{a}(q^{2},z_{m})=0\) in the UV and IR regions, and can also be written as
\[\mathcal{A}^{a}(q^{2},z)=\sum_{n}\frac{-g_{5}f_{A^{n}}^{a}\psi_{A^{n}}^{a}(z)}{ q^{2}-m_{A^{n}}^{a^{2}}}, \tag{20}\]
with the wavefunction \(\psi_{A^{n}}^{a}(z)\), and decay constant of the axial vector mesons \(f_{A^{n}}^{a}=|\partial_{z}\psi_{A^{n}}^{a}(\epsilon)/(g_{5}\epsilon)|\).
Last but not least, the mass spectra of the pseudoscalar mesons can be obtained by solving the coupled equations of motion for the pseudoscalar field \(\pi\) and the longitudinal part of the axial vector field \(\phi\),
\[\begin{split}& q^{2}\partial_{z}\varphi^{a}(q,z)+\frac{2g_{5}^{2} M_{A}^{aa}}{z^{2}}\partial_{z}\pi^{a}(q,z)=0,\\ &\frac{z}{e^{-\phi}}\partial_{z}\left(\frac{e^{-\phi}}{z} \partial_{z}\varphi^{a}(q,z)\right)-\frac{2g_{5}^{2}M_{A}^{aa}}{z^{2}}\left( \varphi^{a}(q,z)-\pi^{a}(q,z)\right)=0,\end{split} \tag{21}\]
with the boundary conditions \(\pi^{a}(q^{2},\epsilon)=\phi^{a}(q^{2},\epsilon)=0\) and \(\partial_{z}\pi^{a}(q^{2},z_{m})=\partial_{z}\phi^{a}(q^{2},z_{m})=0\). The bulk-to-boundary propagators for the longitudinal part of the axial vector field, \(\phi(q^{2},z)\), and the pseudoscalar field, \(\pi(q^{2},z)\), are written as
\[\begin{split}&\phi(q^{2},z)=\sum_{n}\frac{g_{5}m_{\pi^{n}}^{2}f_{ \pi^{n}}\phi^{n}(z)}{q^{2}-m_{\pi^{n}}^{2}},\\ &\pi(q^{2},z)=\sum_{n}\frac{g_{5}m_{\pi^{n}}^{2}f_{\pi^{n}}\pi^{n }(z)}{q^{2}-m_{\pi^{n}}^{2}},\end{split} \tag{22}\]
where \(f_{\pi^{n}}=|\partial_{z}\phi^{n}(\epsilon)/(g_{5}\epsilon)|\) is the decay constant of the \(n^{th}\) mode of the pseudoscalar meson.
## III Three-point interactions and semileptonic form factors
In this section, the semileptonic form factors of \(D_{(s)}\to(P,V)l^{+}\nu_{l}\) are derived in the soft-wall holographic model. The Feynman diagram of the semileptonic decay process of \(D_{(s)}\) to a pseudoscalar or a vector meson is shown in Fig. 1, where the charm quark goes through the process \(c\to d(s)W^{+}\to d(s)l^{+}\nu_{l}\). The matrix elements of the semileptonic decays of the \(D_{(s)}\) meson within the SM are defined by [37]
\[\mathcal{M}\left(D_{(s)}\to(P,V)l^{+}\nu_{l}\right)=\frac{G_{F}}{\sqrt{2}}V_{cq }^{*}\left<(P,V)|\bar{q}\gamma^{\mu}(1-\gamma_{5})c|D_{(s)}\right>\bar{\nu}_{l }\gamma^{\mu}(1-\gamma_{5})l, \tag{23}\]
where \(G_{F}\) is the Fermi constant, \(V_{cq}^{*}\) is the relevant CKM matrix element, and the hadronic and leptonic currents are given by the terms \(\left<(P,V)|\bar{q}\gamma^{\mu}(1-\gamma_{5})c|D_{(s)}\right>\) and \(\bar{\nu}_{l}\gamma^{\mu}(1-\gamma_{5})l\), respectively. The hadronic current can be parameterized in terms of invariant form factors, which depend on the momentum transfer squared (\(q^{2}\)). For the case of the pseudoscalar
Figure 1: Feynman diagram for the semileptonic decay of \(D_{(s)}\) into a pseudoscalar P (vector V) and \(l^{+}\nu_{l}\).
mesons in the final state, only the vector current (\(\bar{q}\gamma^{\mu}c\)) contributes to the form factors. The transition form factors are defined by [38]
\[\left\langle P\left(p_{2}\right)\left|V^{\mu}\right|D_{\left(s\right) }\left(p_{1}\right)\right\rangle= F_{+}\left(q^{2}\right)\left[P^{\mu}-\frac{M_{1}^{2}-M_{2}^{2}}{q ^{2}}q^{\mu}\right]+F_{0}\left(q^{2}\right)\frac{M_{1}^{2}-M_{2}^{2}}{q^{2}}q^ {\mu}\] \[\left\langle V\left(p_{2},\epsilon_{2}\right)\left|V^{\mu}-A^{\mu} \right|D_{\left(s\right)}\left(p_{1}\right)\right\rangle= -\left(M_{1}+M_{2}\right)\epsilon_{2}^{*\mu}A_{1}\left(q^{2} \right)+\frac{\epsilon_{2}^{*}\cdot q}{M_{1}+M_{2}}P^{\mu}A_{2}\left(q^{2}\right) \tag{24}\] \[+2M_{2}\frac{\epsilon_{2}^{*}\cdot q}{q^{2}}q^{\mu}\left[A_{3} \left(q^{2}\right)-A_{0}\left(q^{2}\right)\right]+\frac{2i\varepsilon_{\mu\nu \rho\sigma}\epsilon_{2}^{*\nu}p_{1}^{\rho}p_{2}^{\sigma}}{M_{1}+M_{2}}V\left(q ^{2}\right),\]
where \(P=p_{1}+p_{2}\), \(q=p_{1}-p_{2}\), \(M_{1}\) and \(M_{2}\) are the masses of the mesons in the initial and final state, respectively, and \(\epsilon_{2}\) is the polarization vector of the final vector meson. The \(A_{3}(q^{2})\) form factor is not independent and can be written as a combination of \(A_{1}(q^{2})\) and \(A_{2}(q^{2})\). For the present study, we only consider the form factors associated with vector meson exchange, \(F_{+}(q^{2})\) and \(V(q^{2})\), and with axial vector meson exchange, \(A_{1}(q^{2})\), since these are the most important form factors in the limit of zero lepton mass.
Using the holographic QCD approach, the semileptonic form factors can be deduced from the three-point functions [29; 32]. The cubic term of the 5D action used to find \(F_{+}(q^{2})\) is \(S(V\pi\pi)\), while those used for \(V(q^{2})\) and \(A_{1}(q^{2})\) are \(S(VV\pi)\) and \(S(VA\pi)\), respectively. The expansion of the 5D action (3) to cubic order is given by
\[\begin{split} S^{(3)}=&-\int d^{5}x\left\{\eta^{MN}\frac{e^{-\phi(z)}}{z^{3}}(2\left(A_{M}^{a}-\partial_{M}\pi^{a}\right)V_{N}^{b}\pi^{c}g^{abc}+V_{M}^{a}\left(\partial_{N}\left(\pi^{b}\pi^{c}\right)-2A_{N}^{b}\pi^{c}\right)h^{abc}\right.\\ &-V_{M}^{a}V_{N}^{b}\pi^{c}k^{abc})+\frac{e^{-\phi(z)}}{2g_{5}^{2}z}\eta^{MP}\eta^{NQ}(V_{MN}^{a}V_{P}^{b}V_{Q}^{c}+V_{MN}^{a}A_{P}^{b}A_{Q}^{c}+A_{MN}^{a}V_{P}^{b}A_{Q}^{c}\\ &\left.+A_{MN}^{a}A_{P}^{b}V_{Q}^{c})f^{bca}\right\}\end{split} \tag{25}\]
with the following definitions for \(g^{abc}\), \(h^{abc}\), and \(k^{abc}\),
\[\begin{split} g^{abc}&=iTr\left(\{t^{a},X_{0}\}[t^{b },\{t^{c},X_{0}\}]\right),\\ h^{abc}&=iTr\left([t^{a},X_{0}]\{t^{b},\{t^{c},X_{0 }\}\}\right),\\ k^{abc}&=-2Tr\left([t^{a},X_{0}][t^{b},\{t^{c},X_{0 }\}]\right).\end{split} \tag{26}\]
In the present work, we are interested in the three-point interactions \(V\pi\pi\), \(VV\pi\), and \(VA\pi\). The parts of the action corresponding to these three-point interactions are
\[\begin{split} S_{V\pi\pi}=&-\int_{\epsilon}^{z_{m}}d^{5}x\left\{\eta^{MN}\frac{e^{-\phi(z)}}{z^{3}}\left(2\left(A_{M}^{a}-\partial_{M}\pi^{a}\right)V_{N}^{b}\pi^{c}g^{abc}+V_{M}^{a}\left(\partial_{N}\left(\pi^{b}\pi^{c}\right)-2A_{N}^{b}\pi^{c}\right)h^{abc}\right)\right.\\ &\left.+\frac{e^{-\phi(z)}}{2g_{5}^{2}z}\eta^{MP}\eta^{NQ}\left(V_{MN}^{a}A_{P}^{b}A_{Q}^{c}\right)f^{abc}\right\}\end{split} \tag{27}\]
\[S_{VV\pi}=\int d^{5}x\frac{e^{-\phi\left(z\right)}}{z^{3}}\eta^{MN}\left(V_{M}^ {a}V_{N}^{b}\pi^{c}\right)k^{abc} \tag{28}\]
\[S_{VA\pi}=-\int_{\epsilon}^{z_{m}}d^{5}x\left\{2\eta^{MN}\frac{e^{-\phi\left(z \right)}}{z^{3}}A_{M}^{a}V_{N}^{b}\pi^{c}\left(g^{abc}-h^{bac}\right)+\frac{e^ {-\phi\left(z\right)}}{2g_{5}^{2}z}\eta^{MP}\eta^{NQ}\left(V_{MN}^{a}A_{P}^{b} A_{Q}^{c}\right)f^{abc}\right\} \tag{29}\]
Similar to the derivation of the electromagnetic form factors using the three-point function [33], and of the semileptonic form factors in the works of Refs. [29; 32], one can obtain \(F_{+}(q^{2})\), \(V(q^{2})\), and \(A_{1}(q^{2})\) as follows,
\[F_{+}(q^{2})=\int dz\frac{e^{-\phi\left(z\right)}}{z}\left(f^{abc}\partial_{z} \phi^{a}\mathcal{V}^{b}(q^{2},z)\partial_{z}\phi^{c}-\frac{2g_{5}^{2}}{z^{2}}( \pi^{a}-\phi^{a})\mathcal{V}^{b}(q^{2},z)(\pi^{c}-\phi^{c})(g^{abc}-h^{bac}) \right), \tag{30}\]
\[V(q^{2})=\frac{(M_{1}+M_{2})g_{5}^{2}}{2}\int dz\frac{e^{-\phi(z)}}{z^{3}}k^{ abc}V^{a}(z)\mathcal{V}^{b}(q^{2},z)\pi^{c}(z), \tag{31}\]
\[A_{1}(q^{2})= \int dz\frac{e^{-\phi(z)}}{z}\left(\frac{M_{1}^{2}+M_{2}^{2}-q^{2} }{2(M_{1}+M_{2})}\right)f^{bac}\mathcal{A}^{a}(q^{2},z)V^{b}(z)\phi^{c}(z) \tag{32}\] \[-\int dz\frac{e^{-\phi(z)}}{z^{3}}\frac{2g_{5}^{2}}{(M_{1}+M_{2}) }\mathcal{A}^{a}(q^{2},z)V^{b}(z)\pi^{c}(z)(g^{abc}-h^{bac}).\]
## IV Results
In this section, we show the numerical results for the ground-state masses and decay constants of the vector, axial vector, and pseudoscalar mesons, and for the form factors of the semileptonic decays of \(D_{(s)}\) mesons to pseudoscalar or vector mesons, within the framework of \(N_{f}=4\) holographic QCD.
Let us first set the parameters of the model. The parameters that are determined by fitting to experimental data are \(\mu\), \(m_{u}\), \(m_{s}\), \(m_{c}\), \(\sigma_{u}\), \(\sigma_{s}\), \(\sigma_{c}\), \(\kappa\) and \(z_{m}\). The value of \(\mu\) is found to be 430 MeV from the fit to the experimental masses of the ground and higher excited states of the \(\rho\) meson. Since the pion decay constant and pion mass are related to the light quark mass and condensate by the Gell-Mann-Oakes-Renner (GOR) relation, \(f_{\pi}^{2}m_{\pi}^{2}=2m_{q}\sigma\), the measured pion decay constant \(f_{\pi}=92.4\) MeV and pion mass \(m_{\pi}=139.6\) MeV were used to adjust the up quark mass and condensate. Similarly, we use the GOR relation to fix the values of \(m_{s}\) and \(\sigma_{s}\) from the measured mass and decay constant of the kaon. After fixing \(\mu\), \(m_{u}\), and \(\sigma_{u}\), one can use the experimental mass of the \(a_{1}\) meson to determine the value of \(\kappa\). For the parameters of the charm sector, the charm quark mass \(m_{c}\) and condensate \(\sigma_{c}\) are found by fitting the model to the experimental masses \(m_{\eta_{c}}\) and \(m_{\chi_{c1}}\). Following the work of Refs. [33; 34], the value of \(z_{m}\) is fixed at 10 GeV\({}^{-1}\). The numerical values of the parameters are provided in Table 1.
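As a quick arithmetic check of this fitting procedure, the GOR relation quoted above fixes the light-quark condensate once \(f_{\pi}\), \(m_{\pi}\), and \(m_{u}\) are chosen; a short sketch using the Table 1 inputs:

```python
# GOR relation f_pi^2 m_pi^2 = 2 m_q sigma, solved for the light-quark condensate (MeV units).
f_pi, m_pi, m_u = 92.4, 139.6, 3.2
sigma_u = (f_pi**2 * m_pi**2) / (2.0 * m_u)
print("sigma_u^(1/3) =", round(sigma_u ** (1.0 / 3.0), 1), "MeV")  # ~296 MeV, cf. Table 1
```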
Using the parameters in Table 1, one can obtain the ground-state masses and decay constants of the vector, axial vector, and pseudoscalar mesons. Table 2 presents the results for the masses and decay constants. It is worth noting that the \(SU(4)\) flavor symmetry is explicitly broken by the different values of the quark masses and condensates. A consequence of the flavor symmetry breaking is that the masses of the strange and charmed mesons differ from those of the light-flavor mesons. However, in the vector sector the mass term \(M_{V}^{aa}\) in Eq. (17) is zero for \(a=1,2,3,8,15\), which yields the same masses for the \(\rho\), \(\omega\), and \(J/\Psi\) mesons. This issue is solved for the \(J/\Psi\) meson by adding an auxiliary heavy field to the action, which only includes the contribution of the charm quark, to explicitly break \(SU(4)_{V}\) down to \(SU(3)_{V}\)[34]. Since the contributions of the \(\omega\) and \(J/\Psi\) mesons are not important for the scope of the current work, we did not include the auxiliary field in the 5D action.
Furthermore, we investigate the form factors of the following semileptonic decay processes: \(D^{+}\rightarrow(\pi,K,\eta,K^{*})l^{+}\nu_{l}\) and \(D_{s}^{+}\rightarrow(K,\eta,K^{*})l^{+}\nu_{l}\). From the experimental point of view, the semileptonic decays are important for determining the elements of the CKM matrix. For that reason, it is important to determine the maximum-recoil values \(F_{+}(q^{2}=0)\) for \(D_{(s)}^{+}\rightarrow(\pi,K,\eta)l^{+}\nu_{l}\), and \(V(q^{2}=0)\) and \(A_{1}(q^{2}=0)\) for \(D_{(s)}^{+}\to K^{*}l^{+}\nu_{l}\). Regarding the vector form factor for \(D_{(s)}^{+}\to K^{*}l^{+}\nu_{l}\), it is more convenient to take the ratio between \(V(q^{2}=0)\) and \(A_{1}(q^{2}=0)\), \(r_{v}=V(0)/A_{1}(0)\)[9]. The comparison of the maximum-recoil values at \(q^{2}=0\) with the experimental data, lattice QCD, and other theoretical approaches, e.g., light-cone sum rules (LCSR), the light-front quark model (LFQM), the constituent quark model (CQM), and the covariant confined quark model (CCQM), is presented in Table 3.
For the case of the pion in the final state, the form factor \(f_{+}^{D\rightarrow\pi}(0)\) is consistent with the experimental data and lattice QCD, with small discrepancies of 6.75% and 9%, respectively. Meanwhile, to compare our full form factor with the others qualitatively, we normalize the form factors by the maximum-recoil values \(F_{+}(q^{2}=0)\). The result of the form factor for \(D^{+}\rightarrow\pi l^{+}\nu_{l}\) is shown in Fig. 2, where we compare our calculation with the experimental data [7], lattice QCD data [42], and different theoretical approaches such as LCSR, LFQM, CQM, CCQM and heavy-light chiral
\begin{table}
\begin{tabular}{c c c} \hline \hline \(m_{u}=3.2\) & \(\sigma_{u}=(296.2)^{3}\) & \(\mu=430\) \\ \(m_{s}=142.3\) & \(\sigma_{s}=(259.8)^{3}\) & \(\kappa=30\) \\ \(m_{c}=1597.1\) & \(\sigma_{c}=(302)^{3}\) & \(z_{m}=10000\) \\ \hline \end{tabular}
\end{table}
Table 1: The values of the free parameters, in units of MeV.
perturbation theory (HL\(\chi\)PT) (see Ref. [43] and the references therein). The result for \(F_{+}(q^{2})\) is in excellent agreement with the experimental and lattice QCD data and reproduces them better than the other theoretical approaches.
In the case of \(D_{(s)}\to K\), the form factor at zero momentum shows a larger discrepancy than \(D\to\pi\), of about 20%. This can be related to the fact that the mass of the kaon is not well reproduced in the model, as shown in Table 2. However, as shown in Fig. 3, the normalized form factor \(F_{+}(q^{2})\) aligns very well with the experimental and lattice QCD data and outperforms other theoretical approaches, such as LCSR, LFQM, CQM, CCQM, HL\(\chi\)PT and large energy effective theory (LEET) (see the caption of Fig. 3 for the references).
The experimental form factors of \(D^{+}\to\eta^{(\prime)}l^{+}\nu_{l}\) are reported by the BESIII collaboration in Refs. [10; 11]. In the current analysis, we only study \(D^{+}\to\eta l^{+}\nu_{l}\); treating the \(\eta^{\prime}\) in holographic QCD would require including the \(U(1)_{A}\) axial anomaly [55]. The compatibility of the form factors of the \(D^{+}\to\eta\) decay with the experimental data [11] and other theoretical frameworks can be seen in Fig. 4. However, the result for \(D^{+}_{s}\to\eta\) shows some discrepancy with the experimental data [10] and grows faster at large \(q^{2}\). The discrepancy for \(D^{+}_{s}\to\eta\) can also be seen in Table 3 for \(f_{+}^{D_{s}\to\eta}(0)\). It is worth noting that a similar incompatibility with the experimental data has also been reported by other approaches such as LCSR, LFQM, CQM, and CCQM, and even lattice QCD shows a discrepancy of 25%.
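The percentage discrepancies quoted in this section can be recomputed directly from the maximum-recoil values collected in Table 3. The short Python check below (with the relative difference taken with respect to the reference experimental or lattice value, which is an assumption about the convention used) approximately reproduces the quoted 6.75%, 9%, 20%, and 25% figures.

```python
# Maximum-recoil values taken from Table 3 (hQCD vs. experiment / lattice QCD).
pairs = {
    "D->pi   (hQCD vs Exp.)":  (0.58, 0.622),
    "D->pi   (hQCD vs LQCD)":  (0.58, 0.64),
    "D->K    (hQCD vs Exp.)":  (0.57, 0.725),
    "Ds->eta (LQCD vs Exp.)":  (0.564, 0.45),
}
for label, (value, reference) in pairs.items():
    print(f"{label}: {100.0 * abs(value - reference) / reference:.2f}%")
```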
Finally, we predict the form factors associated with vector meson exchange, \(V(q^{2})\), and axial vector meson exchange, \(A_{1}(q^{2})\). As mentioned before, it is more interesting to compare their ratio at maximum recoil. From the experimental side, the form factors of \(D\to K^{*}\) and \(D_{s}\to K^{*}\) are not reported over the full range of momentum transfer; only the ratios \(r_{V}^{D\to K^{*}}\) and \(r_{V}^{D_{s}\to K^{*}}\) are measured. Meanwhile, the lattice QCD community has calculated the \(D\to K^{*}\) form factors. As shown in Table 3, our results for \(r_{V}^{D\to K^{*}}\) and \(r_{V}^{D_{s}\to K^{*}}\) agree well with the experimental data. The results for the \(D\to K^{*}\) and \(D_{s}\to K^{*}\) form factors are shown in Fig. 5 and Fig. 6, respectively. From Fig. 5, we can see that at low values of \(q^{2}\) our results are within the range of the other approaches and consistent with the lattice data [52]. However, at high \(q^{2}\), the form factors \(V(q^{2})\) and \(A_{1}(q^{2})\) rise faster than in the other approaches. A similar feature can be seen for \(D_{s}\to K^{*}\) in Fig. 6, especially for \(V(q^{2})\). This can be regarded as a signal that some information may be missing in the description of the form factors \(V(q^{2})\) and \(A_{1}(q^{2})\) within the holographic QCD model.
for the \(\rho\) meson with experimental data, revealing a discrepancy of approximately 16%. However, in the axial vector sector, the decay constant of the \(a_{1}\) meson exhibited excellent agreement with experimental data. Moreover, we successfully reproduced the decay constants of the pion and kaon in our model, comparing them with experimental data.
Figure 3: Results of \(F_{+}(q^{2})\) for the decay of \(D_{(s)}\) to a kaon. Left: our result for \(D\to Kl^{+}\nu_{l}\) (solid red line), the experimental data (blue square) [7], lattice data (cyan triangle) [42], LFQM (purple triangle) [44], LEET (orange triangle) [47], LCSR (green triangle) [45], and HL\(\chi\)PT (yellow triangle) [46]. Right: \(D_{s}\to Kl^{+}\nu_{l}\) form factor (solid red line) compared to the experimental data (blue) [9], LFQM (purple) [44], LCSR (green) [48], CCQM (magenta) [12], and CQM (orange) [49].
Figure 2: The semileptonic form factor \(F_{+}(q^{2})\) for \(D\to\pi l^{+}\nu_{l}\). Our result (solid red line) is compared with the experimental data (blue square)[7], lattice data (cyan triangle) [42], LFQM (purple triangle) [44], LCSR (green triangle)[45], and HL\(\chi\)PT (yellow triangle)[46].
data, while for \(D\) and \(D_{s}\) mesons, we compared our results with lattice data. Moreover, in our model, the flavor symmetry is explicitly broken due to the different values of the quark masses and condensates.
Furthermore, using three-point functions, we studied the form factors \(f_{+}(q^{2})\) of the semileptonic decay processes \(D^{+}\to(\pi,K,\eta)l^{+}\nu_{l}\) and \(D_{s}^{+}\to(K,\eta)l^{+}\nu_{l}\), which are associated with the exchange of a vector meson, and \(V(q^{2})\) and \(A_{1}(q^{2})\) of the \(D_{(s)}^{+}\to K^{*}l^{+}\nu_{l}\) decays, associated with vector and axial vector meson exchange, respectively. The form factor \(f_{+}(q^{2})\) for \(D^{+}\to\pi l^{+}\nu_{l}\) shows excellent agreement with the experimental data and is comparable with lattice QCD and other theoretical approaches. Likewise, the normalized form factor \(f_{+}(q^{2})\) for the \(D_{(s)}\)-to-kaon transitions is very consistent with the experimental and lattice data and reproduces them better than other theoretical approaches; however, there is a 20% discrepancy for \(D_{(s)}\to K\) at zero momentum compared to
Figure 5: Comparison of the form factors \(V(q^{2})\) (red) and \(A_{1}(q^{2})\) (red) for \(D\to K^{*}\) with different theoretical approaches. Lattice data (cyan) from Ref. [52], LEV\({}_{\chi}\)QM (yellow) from Ref. [43], LFQM (purple) from Ref. [44], and HL\(\chi\)PT (yellow) from Ref. [46]
Figure 6: \(D_{s}\to V\) form factors \(V(q^{2})\) and \(A_{1}(q^{2})\). The references for V\({}_{\chi}\)QM, LFQM, and HL\(\chi\)PT are similar to the one mentioned in Fig. 5. LCSR is taken from Ref. [48].
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline FFs & hQCD & LCSR [45] & LCSR [48] & LCSR [50] & LCSR [51] & LFQM [44] & CQM [49] & CCQM [12] & LQCD & Exp. \\ \hline \(f_{+}^{D\to\pi}(0)\) & 0.58 & 0.65 & 0.635 & - & - & 0.66 & 0.69 & 0.63 & 0.64 [42] & 0.622 [7] \\ \hline \(f_{+}^{D\to K}(0)\) & 0.57 & 0.76 & 0.661 & - & - & 0.79 & 0.78 & 0.78 & 0.73 [42] & 0.725 [7] \\ \hline \(f_{+}^{D_{s}\to K}(0)\) & 0.57 & - & 0.820 & - & - & 0.66 & 0.72 & 0.60 & 0.77 [53] & 0.72 [9] \\ \hline \(f_{+}^{D\to\eta}(0)\) & 0.31 & - & 0.556 & 0.552 & 0.429 & 0.71 & - & 0.67 & - & 0.39 [11] \\ \hline \(f_{+}^{D_{s}\to\eta}(0)\) & 0.66 & - & 0.611 & 0.520 & 0.495 & 0.76 & 0.78 & 0.78 & 0.564 [54] & 0.45 [10] \\ \hline \(r_{V}^{D\to K^{*}}\) & 1.40 & - & 1.385 & - & - & 1.36 & 1.56 & 1.22 & 1.468 [52] & 1.41 [8] \\ \hline \(r_{V}^{D_{s}\to K^{*}}\) & 1.53 & - & 1.309 & - & - & 1.55 & 1.82 & 1.40 & - & 1.67 [9] \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of the maximum-recoil values of the form factors with the different theoretical approaches, lattice QCD, and experimental data.
experimental data. Another semileptonic decay process is \(D_{(s)}^{+}\to\eta l^{+}\nu_{l}\); similar to the form factors of the pion and kaon, the normalized \(f_{+}(q^{2})\) for \(D^{+}\to\eta\) is compatible with the data, while a small deviation from the experimental data can be seen for \(D_{s}^{+}\to\eta\). Finally, we predicted the form factors \(V(q^{2})\) and \(A_{1}(q^{2})\) for the decays \(D\to K^{*}\) and \(D_{s}\to K^{*}\). Our results agree well with other approaches and with lattice data at maximum recoil (\(q^{2}=0\)), but they increase rapidly at high momentum transfers, particularly for \(D_{s}\to K^{*}\). These results suggest that some dynamics might be missing at high momentum transfers, and these decay channels deserve a deeper investigation in the future.
In the future, it would be interesting to extend the calculation to the semileptonic form factors of \(B\) mesons, which contain the bottom quark, using the holographic QCD model. Finally, we think that the model can be further improved by using an explicit expression for the dilaton profile that respects linear confinement and spontaneous symmetry breaking. We hope to address these topics in future work.
###### Acknowledgements.
This work is supported in part by the National Natural Science Foundation of China (NSFC) Grant Nos: 12235016, 12221005, 12147150, 12305136 and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No XDB34030000, the start-up funding from University of Chinese Academy of Sciences(UCAS), the start-up funding of Hangzhou Normal University under Grant No. 4245C50223204075, and the Fundamental Research Funds for the Central Universities. H. A. A. acknowledges the "Alliance of International Science Organization (ANSO) Scholarship For Young Talents" for providing financial support for the Ph.D. study.
|
2309.09265 | Higher-order interactions induce anomalous transitions to synchrony | We analyze the simplest model of identical coupled phase oscillators subject
to two-body and three-body interactions with permutation symmetry. This model
is derived from an ensemble of weakly coupled nonlinear oscillators by phase
reduction. Our study indicates that higher-order interactions induce anomalous
transitions to synchrony. Unlike the conventional Kuramoto model, higher-order
interactions lead to anomalous phenomena such as multistability of full
synchronization, incoherent, and two-cluster states, and transitions to
synchrony through slow switching and clustering. Phase diagrams of the
dynamical regimes are constructed theoretically and verified by direct
numerical simulations. We also show that similar transition scenarios are
observed even if a small heterogeneity in the oscillators' frequency is
included. | Iván León, Riccardo Muolo, Shigefumi Hata, Hiroya Nakao | 2023-09-17T13:01:44Z | http://arxiv.org/abs/2309.09265v2 | # Higher-order interactions induce anomalous transitions to synchrony
###### Abstract
We analyze the simplest model of identical coupled phase oscillators subject to two-body and three-body interactions with permutation symmetry. This model is derived from an ensemble of weakly coupled nonlinear oscillators by phase reduction. Our study indicates that higher-order interactions induce anomalous transitions to synchrony. Unlike the conventional Kuramoto model, higher-order interactions lead to anomalous phenomena such as multistability of full synchronization, incoherent, and two-cluster states, and transitions to synchrony through slow switching and clustering. Phase diagrams of the dynamical regimes are constructed theoretically and verified by direct numerical simulations. We also show that similar transition scenarios are observed even if a small heterogeneity in the oscillators' frequency is included.
**Synchronization is a ubiquitous emergent phenomenon in which many coupled units behave in unison. Given the pervasiveness of synchronization, understanding how it is achieved is a fundamental question. In particular, the nature of the interactions among oscillators has strong consequences on the transition to synchronization. To tackle this issue, it is convenient to consider phase models in which each oscillator is described solely in terms of a phase variable. According to phase reduction theory, the phase model captures the dynamics completely when the coupling among the oscillators is sufficiently weak. If one considers only pairwise interactions, the synchronization transition is described by the Kuramoto-type model. In recent years, however, it has been noted that higher-order (many-body) interactions are crucial to fully capture real-world systems. In this paper, we seek to improve the understanding of the impact of higher-order interactions on the synchronization transition. With such a goal, we consider an ensemble of globally coupled identical phase oscillators subject to two-body and three-body interactions derived through phase reduction. We show that the higher-order interactions induce anomalous coexistence of distinct dynamical regimes and transitions to synchrony even in the presence of small heterogeneity. Given that the phase model is derived from phase reduction, its dynamics could be observed in a wide variety of ensembles of coupled nonlinear oscillators.**
+
Footnote †: preprint: APS/123-QED
## I Introduction
In nature, we recurrently observe the emergence of collective behaviors in systems of interacting dynamical units. One remarkable example of such self-organization is the synchronization of coupled oscillators, observed in circadian rhythms, neuronal dynamics, Josephson junctions, or electric grids, to name a few [1; 2; 3]. Hence, the prediction, control, and understanding of the dynamics of coupled oscillators is a fundamental problem in multiple research fields.
In order to understand collective synchronization, a common approach is to consider simple phase-oscillator models, where each oscillator is solely described by one degree of freedom, i.e., the phase. The dynamics of the phase models are equivalent to more general systems of nonlinear oscillators, given that the coupling among the oscillators is sufficiently weak, as stated by phase reduction theory [3; 4].
Despite the power and ductility of such an approach, the classical theory of synchronization is solely based on pairwise interactions, while, in many natural systems, the interactions are intrinsically higher-order (many-body) rather than pairwise [5; 6]. From ecology [7] to neuroscience [8; 9], many examples show that a pairwise description is not sufficient to match the theory with observations and, additionally, higher-order interactions appear naturally when phase reduction is performed up to higher orders [10; 11; 12]. Moreover, theoretical studies on consensus [13], random walks [14], synchronization [15; 16], Turing pattern formation [17], and social contagion [18], to name a few, showed that higher-order interactions can dramatically affect the global behavior of the system. In particular, it was shown that extensions of the Kuramoto model including higher-order interactions exhibit an explosive transition to synchrony or collective chaos [19; 20; 10; 21; 12; 13; 14; 15; 16; 17; 18; 19; 22].
The goal of this work is to analyze the collective dynamics of the simplest minimal extension of the Kuramoto-type phase model for identical globally coupled oscillators subject to two- and three-body interactions with permutation symmetry. The simplicity of the model allows us to perform a complete analysis of the phase diagram, evidencing that higher-order interactions induce anomalous transitions to synchrony. Due to the three-body coupling, synchronization is not achieved as in the conventional two-body Kuramoto model or any of its extensions. Instead, we observe either multistability of three states, namely, full synchronization, incoherence, and two
cluster states, or a route to synchronization involving slow switching between two clusters. Moreover, we confirm that most of these behaviors, induced by the three-body interactions, are robust with respect to heterogeneity in the oscillators' natural frequencies. Because the model is derived from phase reduction, we expect similar scenarios to be exhibited by a wide variety of systems.
## II Phase model
In the present work, we consider an ensemble of \(N\) identical phase oscillators globally coupled through two- and three-body interactions with permutation symmetry:
\[\dot{\theta}_{j}=\omega+\frac{K_{1}}{N}\sum_{k=1}^{N}\sin(\theta_{k}-\theta_{j}+\alpha)+\frac{K_{2}}{N^{2}}\sum_{k,l=1}^{N}\sin(\theta_{k}+\theta_{l}-2\theta_{j}+\beta) \tag{1}\]
for \(j=1,...,N\), where \(K_{1}\) and \(K_{2}\) measure the strength of the two- and three-body interactions, while \(\alpha\) and \(\beta\) are phase lags of the interactions, respectively. Given the symmetry of the model, it is enough to consider only the case with \(K_{1}>0\).
The phase model (1) is a straightforward extension of the conventional two-body Kuramoto-type model to include three-body interactions that preserve permutation symmetry. This model can be exactly derived by performing phase reduction on the ensemble of Stuart-Landau oscillators with two- and three-body interactions [23]. The derivation can be found in Appendix A. Moreover, it can also be obtained through phase reduction of ensembles of general limit-cycle oscillators if additional harmonics in the phase-coupling functions are neglected.
Phase models similar to (1) have been extensively studied in the literature, so we briefly discuss them in what follows in order to highlight our findings. If the three-body interactions are removed, i.e., \(K_{2}=0\), we obtain the paradigmatic Kuramoto-Sakaguchi model displaying a transition to synchrony [3; 24]. On the other hand, if only the three-body interactions are considered, i.e., \(K_{1}=0\), the phase model studied in [25] is recovered. In this case, the system exhibits bistability between incoherent and two-cluster states, where the latter represents a state in which each oscillator takes either of two possible phases. The model described by Eq. (1) with an additional three-body interaction was studied and derived through second-order phase reduction in [10; 12], although, in that case, the strength of higher-order interactions was considered to be much smaller than that of the pairwise interactions. Finally, synchronization of model (1) has been recently studied in the presence of noise [26], bimodally distributed frequencies [27], or with inertia [28]. We remark that studies of the three-body interactions with a phase lag in Eq. (1) are scarce, since the latter are not analyzable within the well-established frameworks of Watanabe-Strogatz [29; 30] or Ott-Antonsen theory [31; 32].
Equation (1) takes a simpler form if, without loss of generality, we fix the frequency to zero, \(\omega=0\), by choosing an appropriate rotating frame of reference, and rescale time as \(t\to K_{1}t\). Additionally, we define the Kuramoto order parameter \(Re^{i\psi}=\frac{1}{N}\sum_{k}e^{i\theta_{k}}\), through which the oscillators interact. This allows us to rewrite Eq. (1) as:
\[\dot{\theta}_{j}=R\sin(\psi-\theta_{j}+\alpha)+KR^{2}\sin(2\psi-2\theta_{j}+\beta), \tag{2}\]
where \(K=K_{2}/K_{1}\) measures the ratio between the three- and two-body interactions. In what follows, we will see that three-body (higher-order) interactions give rise to anomalous transitions to synchrony, multistability, and other dynamics that are absent in the original Kuramoto model or its extensions.
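Equation (2) is straightforward to integrate numerically. The self-contained Python sketch below, in which the system size, time step, total integration time, and random seed are illustrative choices rather than values prescribed by the analysis, evolves the ensemble with a fixed-step RK4 scheme and reports the final Kuramoto order parameter.

```python
import numpy as np

def order_parameter(theta):
    z = np.mean(np.exp(1j * theta))
    return np.abs(z), np.angle(z)

def rhs(theta, alpha, beta, K):
    # Right-hand side of Eq. (2)
    R, psi = order_parameter(theta)
    return (R * np.sin(psi - theta + alpha)
            + K * R**2 * np.sin(2.0 * psi - 2.0 * theta + beta))

def simulate(N=1000, alpha=-2.0, beta=0.0, K=0.45, dt=0.05, T=2000.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)          # random initial phases
    for _ in range(int(T / dt)):                      # classical RK4 steps
        k1 = rhs(theta, alpha, beta, K)
        k2 = rhs(theta + 0.5 * dt * k1, alpha, beta, K)
        k3 = rhs(theta + 0.5 * dt * k2, alpha, beta, K)
        k4 = rhs(theta + dt * k3, alpha, beta, K)
        theta = (theta + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)) % (2.0 * np.pi)
    return theta

theta = simulate()
R, _ = order_parameter(theta)
print(f"final order parameter R = {R:.3f}")
```

With these defaults (\(\alpha=-2\), \(K=0.45\), \(\beta=0\)), uniformly random initial phases typically remain incoherent, whereas initial phases concentrated in a narrow arc converge to full synchronization, illustrating the bistability discussed in the next section.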
## III Anomalous transitions to synchrony
In this section, we perform numerical simulations of the rescaled model (2), evidencing the presence of anomalous transitions to synchrony. Nevertheless, before analyzing the model, let us recall the results for pure pairwise interactions, i.e. \(K=0\). In this case, Eq. (2) reduces to the Kuramoto-Sakaguchi model of identical oscillators. It is known that the global attractors of the identical Kuramoto-Sakaguchi model are either the incoherent state, where oscillators are distributed yielding \(R=0\), or full synchronization, where oscillators form a point cluster achieving \(R=1\). The abrupt transition between those states occurs at \(\alpha=\pm\pi/2\).
In Fig. 1 (a), we present the bifurcation diagram of the Kuramoto-Sakaguchi model, where the value of the Kuramoto order parameter \(R\), obtained from numerical simulations, is depicted for multiple values of \(\alpha\). For visual clarity, yellow diamonds and blue stars are used to indicate the incoherent state and full synchronization, respectively. Additionally, we include snapshots of the oscillator states. As predicted by the theory, the incoherent state becomes unstable at \(\alpha=-\pi/2\), giving rise to full synchronization. We note that this transition is different from the smooth second-order transition of the classical Kuramoto model with inhomogeneous frequencies because in our case all oscillators are identical.
Let us now add the three-body interactions, i.e., \(K\neq 0\). Note that, since \(K_{1}>0\), the sign of \(K\) depends only on the three-body interactions. In Fig. 1 (b,c), we plot the order parameter versus \(\alpha\), measured in the steady state sufficiently long after the initial transient, for fixed \(N=1000\) and \(\beta=0\), choosing the ratio \(K=0.45\) in (b) and \(K=-0.45\) in (c). These two cases are representative of the dynamics for positive and negative \(K\).
First, we focus on the dynamics for \(K=0.45\), presented in Fig. 1 (b). For any \(\alpha<-\pi/2\), we observe that the incoherent state is stable, as in the case without three-body interactions. Nevertheless, the effect of the three-body interactions is remarkable for \(\alpha\in(-2.67,-2.35)\), where they induce antagonistic multistability of the incoherent state, full synchronization, and the two-cluster state (red circles). In other words, for the same parameter values, any of these states can be reached depending on the initial conditions. When \(\alpha\simeq-2.35\), the numerical simulations indicate that the two-cluster state loses its stability and the system exhibits bistability between full synchronization and the incoherent state. Finally, for \(\alpha>-\pi/2\), full synchronization is the only
attractor of the system. For larger values of \(K\), it is also possible to find regions with bistability between full synchronization and the two-cluster state (not shown). The multistability of the system throughout the transition implies that different hysteresis loops can be detected depending on how the parameters are varied.
The dynamics for \(K=-0.45\) are completely different, as we show in Fig. 1(c). For \(\alpha<-\pi/2\), the dynamics are similar to the Kuramoto-Sakaguchi model, since the incoherent state is the only attractor. Nonetheless, the system does not achieve full synchronization for \(\alpha>-\pi/2\); instead, the order parameter \(R\) displays oscillations whose period increases with time. This dynamical state is known as slow-switching [33; 34], where the system approaches a heteroclinic cycle formed by saddle two-cluster states. The snapshot captures the switching between those saddle two-cluster states. If \(\alpha\) is further increased, one of the two-cluster states becomes the only attractor of the system. Finally, we observe that full synchronization is achieved when \(\alpha\simeq-0.4\).
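A minimal version of the protocol behind Fig. 1 (b,c) is sketched below: for each value of \(\alpha\), Eq. (2) is integrated from three classes of initial conditions (uniformly random, nearly synchronized, and two nearly antiphase clusters) and the final order parameter is recorded. The Euler scheme, integration time, and specific initial conditions are illustrative choices; note also that \(R\) alone does not distinguish a balanced antiphase two-cluster state from incoherence, for which the second-harmonic order parameter \(|\langle e^{2i\theta_{j}}\rangle|\) can additionally be monitored.

```python
import numpy as np

def final_R(theta0, alpha, K, beta=0.0, dt=0.05, T=1500.0):
    """Euler-integrate Eq. (2) from theta0 and return the final Kuramoto order parameter."""
    theta = theta0.copy()
    for _ in range(int(T / dt)):
        z = np.mean(np.exp(1j * theta))
        R, psi = np.abs(z), np.angle(z)
        theta += dt * (R * np.sin(psi - theta + alpha)
                       + K * R**2 * np.sin(2.0 * psi - 2.0 * theta + beta))
    return np.abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(1)
N, K = 1000, 0.45
for alpha in np.linspace(-3.0, -1.2, 7):
    uniform = rng.uniform(0.0, 2.0 * np.pi, N)                 # probes the incoherent branch
    point = 0.05 * rng.standard_normal(N)                      # probes the synchronized branch
    two_cluster = np.where(rng.random(N) < 0.5, 0.0, np.pi) + 0.05 * rng.standard_normal(N)
    Rs = [final_R(x, alpha, K) for x in (uniform, point, two_cluster)]
    print(f"alpha={alpha:+.2f}  R: uniform={Rs[0]:.2f}  point={Rs[1]:.2f}  two-cluster={Rs[2]:.2f}")
```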
Let us remark that the anomalous transition to synchrony, the multistability, and the slow switching detected in Fig. 1 (b,c) are all caused by the three-body interactions. Although previous studies showed that higher-order interactions are responsible for a wider variety of dynamics [10; 12; 16; 19; 20; 21; 22], the emergence of multistability and the anomalous transitions presented in Fig. 1 (b,c) had not been reported.
## IV Analytical stability analysis
In order to explain the numerical results, we perform an analytical stability analysis of the dynamical regimes displayed by model (2). For mathematical convenience, we consider the thermodynamic limit, \(N\rightarrow\infty\), although, with slight modifications, the same analysis could be performed for finite \(N\).
First of all, we study the stability of full synchronization. This state is characterized by all oscillators being located in a point cluster that rotates with frequency \(\Omega=\sin\alpha+K\sin\beta\), and thus \(R=1\). Although full synchronization is always a solution, linear stability analysis indicates that it changes stability at
\[\cos\alpha+2K\cos\beta=0, \tag{3}\]
depicted in Fig. 2 (a,b) with a blue line.
Next, we analyze the stability of the incoherent state in which \(R=0\). The linear stability of the incoherent state is the same as in the Kuramoto-Sakaguchi model, because the three-body interaction, proportional to the term \(R^{2}\), vanishes when linearized around \(R=0\). Thus, the incoherent state becomes unstable at
\[\cos\alpha=0, \tag{4}\]
depicted by black lines in Fig. 2 (a,b). The independence of Eq. (4) on \(K\) explains why the incoherent state is always stable for \(\alpha<-\pi/2\) in the numerical simulations. Additionally, we remark that the bistability between the incoherent state and full synchronization is caused by the fact that the stability of full synchronization depends on \(K\) while the stability of the incoherent state does not.
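The two analytic conditions above are easily scanned over the \((\alpha,K)\) plane. In the Python sketch below, the stable sides of the boundaries are taken as \(\cos\alpha+2K\cos\beta>0\) for full synchronization and \(\cos\alpha<0\) for incoherence, a sign convention inferred from Eqs. (3) and (4) and from the numerics of Fig. 1; the two-cluster and slow-switching regions require the criteria of Appendices B and C and are not included here.

```python
import numpy as np

beta = 0.0
alpha = np.linspace(-np.pi, np.pi, 721)
K = np.linspace(-1.5, 1.5, 301)
A, KK = np.meshgrid(alpha, K)

sync_stable = np.cos(A) + 2.0 * KK * np.cos(beta) > 0.0   # boundary given by Eq. (3)
incoh_stable = np.cos(A) < 0.0                            # boundary given by Eq. (4)

print("fraction of the grid where full synchronization is stable:", round(float(sync_stable.mean()), 2))
print("fraction of the grid where incoherence is stable:         ", round(float(incoh_stable.mean()), 2))
print("fraction with sync/incoherence bistability:               ", round(float((sync_stable & incoh_stable).mean()), 2))
```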
When the incoherent state becomes unstable, quasi-periodic partial synchrony (QPS) generically appears [35]. QPS is a state in which the Kuramoto order parameter rotates uniformly while individual oscillators behave quasi-periodically. This state is not shown in Fig. 1 because it is an unstable saddle for the present model. However, because the saddle QPS is weakly unstable, during the initial transient, the system spends a large amount of time close to this state.
Figure 1: Synchronization transitions with pure two-body coupling (a) and with three-body (higher-order) coupling (b,c). Kuramoto order parameter vs. phase lag \(\alpha\) for \(N=1000\), \(\beta=0\), and \(K=0.45\) (b), and \(K=-0.45\) (c). The yellow (diamonds), blue (stars), red (circles), and green (triangles) indicate the incoherent state, full synchronization, two-cluster state, and slow switching, respectively. A typical snapshot of each state is shown in the inset to ease understanding of the dynamics.
Finally, we analyze the stability of the two-cluster state following Refs. [33; 34; 10]. First, we note that the two-cluster state is not a single configuration of oscillators but a family of configurations; each two-cluster state is determined by the fraction \(p\) of the oscillators in the first cluster and the distance (phase difference) \(\Delta\) between the first and second clusters. We can obtain the evolution equation for \(\Delta\) for each value of \(p\), whose fixed points correspond to the possible two-cluster states. To obtain the stability of these solutions, it is enough to consider the evolution of three types of variations: the variation of \(\Delta\) and the variations of one oscillator in the first or second cluster. If those variations decay, the two-cluster state with the given \(\Delta\) and \(p\) is stable. The region where the two-cluster state is stable is then determined and depicted in red in Fig. 2 (a,b). See Appendix B for thorough details of the stability analysis.
Moreover, when the two-cluster state is unstable, it is possible to find another dynamical state: slow switching [36; 34]. In this state, two saddle two-cluster states form a heteroclinic cycle and the system approaches this cycle while switching between both unstable two-cluster states. The conditions for such slow switching to be stable were derived in [34], see also Appendix C. The analysis indicates that slow switching can only be realized when all other states are unstable.
In Fig. 2 (a), we depict the phase diagram for \(\beta=0\). In the blue, yellow, red, and green regions, full synchronization, the incoherent state, the two-cluster state, and slow switching are stable, respectively. The hatched regions indicate multistability, following the same color code. This figure evidences that the system displays multistability over wide parameter ranges. Additionally, we remark that, given the intricate phase diagram, different paths through the parameter space will give rise to different anomalous transitions to synchrony. We emphasize that the present phase diagram has been computed analytically, explaining the anomalous transitions observed in Fig. 1. The phase diagram obtained by direct numerical simulations is in excellent agreement with Fig. 2 (a), as reported in Appendix D.
For the sake of completeness, we have also analyzed the model for other values of \(\beta\). Although some quantitative changes are observed, the dynamics and bifurcations are qualitatively the same. As a particular example, in Fig. 2 (b), we depict the phase diagram for \(\beta=1\). By comparing it with the case \(\beta=0\), we observe that the stability boundaries are shifted and the stability regions for the two-cluster state are deformed, but no new dynamical regimes or transitions arise. We highlight here that, although the dynamics are equivalent, the value of \(\beta\) might be important in applications. For example, the wide region of bistability between full synchronization and the two-cluster state for negative \(K\) in Fig. 2 (b) is barely appreciable for \(\beta=0\) in Fig. 2 (a).
## V Effects of heterogeneity
We have so far assumed all the oscillators to be identical. Although a complete analysis of the effect of heterogeneity is beyond the scope of this work, one may ask whether the above results are robust against small heterogeneity. In order to answer this question, we consider the natural frequency of each oscillator to be drawn from a normal distribution with zero mean and standard deviation \(\sigma=0.05\), \(\mathcal{N}(0,0.05)\), and perform a numerical study analogous to the above one.
The first consequence of the heterogeneity is that full synchronization is no longer possible since the heterogeneity prevents all oscillators from forming a point cluster. Nevertheless, the system evolves to partial synchrony, where some oscillators are synchronized while others are drifting.
For positive values of \(K\), the addition of small heterogeneity produces some quantitative differences in the stability boundaries; however, it is still possible to find the regions with multistability of partial synchrony, incoherent state, and two-cluster state. This means that higher-order interactions
Figure 2: Phase diagrams of Eq. (2) for \(\beta=0\) (a) and \(\beta=1\) (b). In the blue, yellow, red, and green regions, full synchronization, incoherent state, two-cluster state, and slow switching are stable, respectively. The hatching indicates that more than one state is stable using the same color code.
promote anomalous transitions to synchrony and give rise to wide regions of multistability even in the presence of small heterogeneity.
The effect of heterogeneity when \(K<0\) is more noticeable. In fact, in the regions where we observed slow switching in the identical case, the system now displays partial synchrony. This means that the heterogeneity induces partial synchrony due to the three-body interactions. This phenomenon can be understood as the heterogeneity stabilizing the saddle QPS, similar to the stabilization of QPS by noise observed in [37].
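In the simulations, including this heterogeneity amounts to adding an oscillator-dependent frequency term to the right-hand side of Eq. (2). A minimal Python sketch is given below; the parameter values, Euler scheme, and seed are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(2)
N, alpha, beta, K = 1000, -1.0, 0.0, -0.45
omega = rng.normal(0.0, 0.05, N)              # heterogeneous natural frequencies, N(0, 0.05)
theta = rng.uniform(0.0, 2.0 * np.pi, N)

dt, T = 0.05, 2000.0
for _ in range(int(T / dt)):                  # simple Euler integration of Eq. (2) plus omega_j
    z = np.mean(np.exp(1j * theta))
    R, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + R * np.sin(psi - theta + alpha)
                   + K * R**2 * np.sin(2.0 * psi - 2.0 * theta + beta))

print("final R =", round(float(np.abs(np.mean(np.exp(1j * theta)))), 3))
```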
## VI Conclusions
In this work, we have studied the simplest general model of globally coupled identical oscillators subject to pairwise and higher-order interactions with permutation symmetry. The considered model is a natural extension of the Kuramoto model with an additional three-body interaction that can be derived from an ensemble of Stuart-Landau oscillators with higher-order interactions. As we have shown, the three-body coupling plays a crucial role in the dynamics of the system, giving rise to anomalous transitions to synchrony and promoting the multistability of synchronous, incoherent, and two-cluster states. These results have been obtained numerically and corroborated analytically through a stability analysis.
We stress that the anomalous transitions to synchrony and multistability are not degenerate scenarios caused by the fact that all oscillators are identical. In fact, we numerically observed the system displaying similar behaviors even when small heterogeneity is included. Thus, given that the phase model we analyzed is derived from a general system of coupled oscillators through phase reduction, we believe that the complex dynamical scenarios described in this study can be achieved in a wide variety of systems.
The results included in this work are a fundamental step forward in our understanding of the effect of higher-order interactions, which find applications in many research fields. We believe that our analysis paves the way for future studies involving the presence of noise, heterogeneity, or different higher-order network topologies [38; 39; 40; 41].
## Acknowledgements
I.L. and H.N. acknowledge JSPS KAKENHI JP22K11919, JP22H00516, and JST CREST JP-MICR1913 for financial support. The work of R.M. is supported by an FRIA-FNRS Fellowship, funded by the Walloon Region, Grant FC 33443. R.M. also acknowledges the Erasmus+ and the Mobility Out program of the FNRS for funding his visit in the group of H.N.
## Data Availability
The data that support the findings of this study are available within the article.
## Appendix A Phase reduction
This section is devoted to showing how the phase model (1) is obtained by performing phase reduction on an ensemble of Stuart-Landau oscillators. We consider \(N\) Stuart-Landau oscillators globally coupled with two- and three-body interactions:
\[\dot{W}_{j} = (1+i\tilde{\omega})W_{j}-(1+ic_{2})|W_{j}|^{2}W_{j}+\frac{\kappa_{1}(1+ic_{1})}{N}\sum_{k=1}^{N}(W_{k}-W_{j})+\frac{\kappa_{2}(1+ic_{3})}{N^{2}}\sum_{k=1}^{N}\sum_{l=1}^{N}(W_{k}W_{l}W_{j}^{*}-|W_{j}|^{2}W_{j}), \tag{A1}\]
where \(W_{j}\) is the oscillator's complex variable, \(\tilde{\omega}-c_{2}\) is the frequency of the oscillator, \(c_{2}\) is the non-isochronicity parameter, \(\kappa_{1}\) and \(\kappa_{2}\) are the strengths of the two- and three-body interactions, while \(c_{1}\) and \(c_{3}\) are the 'reactivities' of the two- and three-body coupling, respectively. We note that the three-body interaction in the above model is the simplest case that satisfies the following conditions: (i) symmetric with respect to permutations of the interacting oscillators \(k\) and \(l\), (ii) symmetric with respect to rotation of all oscillators on the complex plane, i.e., \(W_{j}\to W_{j}e^{i\chi}\) where \(\chi\) is a real number, and (iii) vanishing when all oscillators synchronize.
The Stuart-Landau oscillator is the normal form of the Hopf bifurcation, and thus model (A1) approximately describes the dynamics of a population of oscillators close to a Hopf bifurcation. The model can be obtained by the center-manifold reduction of a general model of nonlinear oscillators [3], where all the terms and parameters considered naturally appear as a result of the reduction. We remark that model (A1) contains the simplest higher-order interaction that preserves the permutation symmetry of three oscillators, that is, the interaction is invariant if the subindices \(k\) and \(l\) are exchanged. Additionally, because non-resonant terms are eliminated in the center-manifold reduction, the model presents rotational symmetry.
In this appendix, we perform phase reduction following the standard phase-reduction theory [3; 4], applicable to any oscillator. However, for the specific model (A1), we could also follow Ref. [23], where the symmetries are required to obtain the phase-reduced model. In order to perform phase reduction, it is convenient to change to Cartesian coordinates, \(W_{j}=x_{j}+iy_{j}\), and rewrite Eq. (A1) in the form:
\[\dot{\mathbf{X}}_{j} = \mathbf{F}(\mathbf{X}_{j})+\frac{\kappa_{1}}{N}\sum_{k=1}^{N}\mathbf{p}_{1}(\mathbf{X}_{k},\mathbf{X}_{j})+\frac{\kappa_{2}}{N^{2}}\sum_{k,l=1}^{N}\mathbf{p}_{2}(\mathbf{X}_{k},\mathbf{X}_{l},\mathbf{X}_{j}), \tag{A2}\]
where \(\mathbf{X}_{j}=(x_{j},y_{j})\) and \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\) represent two- and three-body interactions, respectively. Phase reduction theory states that the evolution of the phase \(\theta_{j}\) of the oscillator \(j\) is represented by the natural frequency plus the product of the phase response function [3; 4] and the interaction functions evaluated on the limit cycle. This gives
\[\dot{\theta}_{j} = \tilde{\omega}-c_{2}+\frac{\kappa_{1}}{N}\sum_{k=1}^{N}\mathbf{Z}( \theta_{j})\cdot\mathbf{p}_{1}(\theta_{k},\theta_{j}) \tag{10}\] \[+\frac{\kappa_{2}}{N^{2}}\sum_{k,l=1}^{N}\mathbf{Z}(\theta_{j})\cdot \mathbf{p}_{2}(\theta_{k},\theta_{l},\theta_{j}),\]
where \(\mathbf{Z}(\theta)=(-\sin\theta-c_{2}\cos\theta,\cos\theta-c_{2}\sin\theta)\) is the phase response function of the Stuart-Landau oscillator and we have evaluated the oscillator states \(\mathbf{X}_{i}\) in \(\mathbf{p}_{1,2}\) on the limit cycle \(\mathbf{X}_{i}=(\cos\theta_{i},\sin\theta_{i})\) where \(i=k,l,j\).
Evaluating Eq. (10) yields the phase model described by Eq. (1), where the constants \(\alpha\), \(\beta\), \(K_{1}\), \(K_{2}\), and \(\omega\) take the values
\[\alpha = \arg[1+c_{1}c_{2}+(c_{1}-c_{2})i], \tag{11}\]
\[\beta = \arg[1+c_{3}c_{2}+(c_{3}-c_{2})i], \tag{12}\]
\[K_{1} = \kappa_{1}\sqrt{(1+c_{1}^{2})(1+c_{2}^{2})}, \tag{13}\]
\[K_{2} = \kappa_{2}\sqrt{(1+c_{3}^{2})(1+c_{2}^{2})}, \tag{14}\]
\[\omega = \tilde{\omega}-c_{2}-\kappa_{1}(c_{1}-c_{2})-\kappa_{2}(c_{3}-c_{2}). \tag{15}\]
We note that averaging is not necessary, since (10) contains only resonant terms due to the rotational symmetry of the model (A1).
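The reduced two-body coupling can be checked numerically: \(\mathbf{Z}(\theta_{j})\cdot\mathbf{p}_{1}(\theta_{k},\theta_{j})\) should equal \(K_{1}\sin(\theta_{k}-\theta_{j}+\alpha)\) plus the constant shift \(-\kappa_{1}(c_{1}-c_{2})\) that is absorbed into \(\omega\) in Eq. (15). The short Python sketch below performs this check at randomly drawn phases and parameters (the random-sampling test itself is an illustration, not part of the derivation); the three-body term can be verified in the same way with \(c_{3}\), \(\beta\), and \(K_{2}\).

```python
import numpy as np

rng = np.random.default_rng(0)
c1, c2 = rng.uniform(-2.0, 2.0, 2)
kappa1 = rng.uniform(0.1, 2.0)
theta_j, theta_k = rng.uniform(0.0, 2.0 * np.pi, 2)

# Phase response function of the Stuart-Landau oscillator, as given in the text.
Z = np.array([-np.sin(theta_j) - c2 * np.cos(theta_j),
               np.cos(theta_j) - c2 * np.sin(theta_j)])

# Two-body coupling kappa1 (1 + i c1)(W_k - W_j) evaluated on the limit cycle W = e^{i theta}.
p1_complex = kappa1 * (1.0 + 1j * c1) * (np.exp(1j * theta_k) - np.exp(1j * theta_j))
p1 = np.array([p1_complex.real, p1_complex.imag])

lhs = Z @ p1
K1 = kappa1 * np.sqrt((1.0 + c1**2) * (1.0 + c2**2))          # Eq. (13)
alpha = np.angle(1.0 + c1 * c2 + 1j * (c1 - c2))              # Eq. (11)
rhs = K1 * np.sin(theta_k - theta_j + alpha) - kappa1 * (c1 - c2)

print(lhs, rhs)
assert np.isclose(lhs, rhs)   # the two expressions agree for any phases and parameters
```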
## Appendix B Stability of two-cluster state
We devote this section to performing stability analysis of the two-cluster state displayed by the phase model (2), following Refs. [33; 10; 34]. First of all, we rewrite the model in the compact form:
\[\dot{\theta}_{j}=\frac{1}{N}\sum_{k=1}^{N}\Gamma(\theta_{k}-\theta_{j})+\frac {1}{N^{2}}\sum_{k,m=1}^{N}g_{2}(\theta_{k}+\theta_{m}-2\theta_{j}), \tag{16}\]
where \(\Gamma(x)=\sin(x+\alpha)\) and \(g_{2}(x)=K\sin(x+\beta)\).
We can characterize the two-cluster state by the fraction \(p\) of oscillators forming cluster \(A\) at the phase \(\theta_{A}\), and the remaining fraction \((1-p)\) forming cluster \(B\) at the phase \(\theta_{B}\). The evolution of the phases \(\theta_{A}\) and \(\theta_{B}\) of the clusters obeys
\[\dot{\theta}_{A} = p\Gamma(0)+(1-p)\Gamma(\theta_{B}-\theta_{A})+p^{2}g_{2}(0)\] \[+ 2p(1-p)g_{2}(\theta_{B}-\theta_{A})+(1-p)^{2}g_{2}(2\theta_{B}-2 \theta_{A}),\] \[\dot{\theta}_{B} = (1-p)\Gamma(0)+p\Gamma(\theta_{A}-\theta_{B})+(1-p)^{2}g_{2}(0)\] \[+ 2p(1-p)g_{2}(\theta_{A}-\theta_{B})+p^{2}g_{2}(2\theta_{A}-2 \theta_{B}).\]
If the two-cluster state is stable, the distance between the clusters, \(\Delta=\theta_{A}-\theta_{B}\), is constant. The evolution of the distance \(\Delta\) is given by
\[\dot{\Delta}= (2p-1)\Gamma(0)+(1-p)\Gamma(-\Delta)-p\Gamma(\Delta)\] \[+(2p-1)g_{2}(0)+2p(1-p)[g_{2}(-\Delta)-g_{2}(\Delta)]\] \[+(1-p)^{2}g_{2}(-2\Delta)-p^{2}g_{2}(2\Delta). \tag{17}\]
The pairs of \((p,\Delta)\), with \(p\in(0,1)\) and \(\Delta\in[0,2\pi)\), such that the right hand side of (17) is zero are the possible two-cluster states. However, this condition only implies the existence of the two-cluster states, not their stability. The stability of the two-cluster state can be computed by decomposing small variations from the two-cluster state into three orthogonal modes [34; 42]; one mode corresponds to the phase locking of the two clusters while the other two modes capture the disintegration of clusters \(A\) and \(B\), respectively. The decay of these modes is characterized by eigenvalues \(\lambda_{L}\), \(\lambda_{A}\), and \(\lambda_{B}\), respectively.
We first consider the stability of the phase locking of the two clusters by studying the variation of their distance (phase difference) \(\Delta\). Linearizing (17) around the fixed point \((p,\Delta)\), such variation grows with the exponent
\[\lambda_{L}=-(1-p)\Gamma^{\prime}(-\Delta)-p\Gamma^{\prime}(\Delta)-2p(1-p)[g_{2}^{\prime}(-\Delta)+g_{2}^{\prime}(\Delta)]\] \[-2(1-p)^{2}g_{2}^{\prime}(-2\Delta)-2p^{2}g_{2}^{\prime}(2\Delta), \tag{18}\]
where \({}^{\prime}\) indicates the derivative.
To analyze the stability of the cluster \(A\) against disintegration, we compute the evolution of the variation of a single oscillator from the cluster \(A\). This variation will grow with the exponent
\[\lambda_{A}=-p\Gamma^{\prime}(0)-(1-p)\Gamma^{\prime}(-\Delta)-2p ^{2}g_{2}^{\prime}(0)\\ -4p(1-p)g_{2}^{\prime}(-\Delta)-2(1-p)^{2}g_{2}^{\prime}(-2\Delta). \tag{19}\]
The exponent associated with the disintegration of cluster \(B\) is obtained by changing \(p\rightarrow(1-p)\) and \(\Delta\rightarrow-\Delta\) in Eq. (19). When all three eigenvalues are negative, the corresponding two-cluster state is stable.
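These expressions translate directly into a numerical recipe: fix \((p,\alpha,\beta,K)\), locate the zeros \(\Delta^{*}\) of Eq. (17), and evaluate the three exponents at each zero. The Python sketch below implements Eqs. (17)-(19) and finds the fixed points by a simple sign-change scan; the example parameter values are arbitrary choices.

```python
import numpy as np

def gamma(x, a):   return np.sin(x + a)          # Gamma(x) = sin(x + alpha)
def dgamma(x, a):  return np.cos(x + a)
def g2(x, b, K):   return K * np.sin(x + b)      # g_2(x)   = K sin(x + beta)
def dg2(x, b, K):  return K * np.cos(x + b)

def delta_dot(D, p, a, b, K):
    """Right-hand side of Eq. (17) for the inter-cluster distance Delta."""
    return ((2*p - 1)*gamma(0, a) + (1 - p)*gamma(-D, a) - p*gamma(D, a)
            + (2*p - 1)*g2(0, b, K) + 2*p*(1 - p)*(g2(-D, b, K) - g2(D, b, K))
            + (1 - p)**2 * g2(-2*D, b, K) - p**2 * g2(2*D, b, K))

def lam_L(D, p, a, b, K):                         # Eq. (18)
    return (-(1 - p)*dgamma(-D, a) - p*dgamma(D, a)
            - 2*p*(1 - p)*(dg2(-D, b, K) + dg2(D, b, K))
            - 2*(1 - p)**2 * dg2(-2*D, b, K) - 2*p**2 * dg2(2*D, b, K))

def lam_A(D, p, a, b, K):                         # Eq. (19)
    return (-p*dgamma(0, a) - (1 - p)*dgamma(-D, a) - 2*p**2*dg2(0, b, K)
            - 4*p*(1 - p)*dg2(-D, b, K) - 2*(1 - p)**2*dg2(-2*D, b, K))

def lam_B(D, p, a, b, K):                         # p -> 1-p, Delta -> -Delta in Eq. (19)
    return lam_A(-D, 1 - p, a, b, K)

# Example: locate the zeros Delta* of Eq. (17) by a sign-change scan and print the exponents.
alpha, beta, K, p = -2.5, 0.0, 0.45, 0.5
D = np.linspace(1e-3, 2.0*np.pi - 1e-3, 4000)
f = delta_dot(D, p, alpha, beta, K)
roots = D[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
for r in roots:
    print(f"Delta*={r:.3f}  lamL={lam_L(r, p, alpha, beta, K):+.3f}  "
          f"lamA={lam_A(r, p, alpha, beta, K):+.3f}  lamB={lam_B(r, p, alpha, beta, K):+.3f}")
```

Scanning \(p\in(0,1)\) and collecting, for each \(p\), all coexisting fixed points \(\Delta^{*}\) together with their exponents also provides what is needed to test the slow-switching conditions listed in Appendix C.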
## Appendix C Stability of slow switching
In this section, we explain the explicit conditions for slow switching to be stable. As previously stated, slow switching is a dynamical state in which the system approaches a heteroclinic cycle formed by two unstable two-cluster states. This state is stable if the following conditions hold [34]:
* There is at least one value of \(p\) such that three different two-cluster states exist. The distances \(\Delta_{1,2,3}\) between the clusters in those states are ordered as \(0<\Delta_{1}<\Delta_{2}<\Delta_{3}<2\pi\) and their eigenvalues are denoted as \(\lambda_{L,A,B}^{1,2,3}\).
* Full synchronization is unstable.
* \(\lambda_{L}^{2}>0\) while \(\lambda_{L}^{1}<0\) and \(\lambda_{L}^{3}<0\).
* \(\lambda_{A}^{1}>0\) and \(\lambda_{B}^{1}<0\) while \(\lambda_{A}^{3}<0\) and \(\lambda_{B}^{3}>0\).
These conditions ensure the two-cluster states characterized by \(\Delta_{1}\) and \(\Delta_{3}\) form a stable heteroclinic cycle. For the model studied in the main text, these conditions are only satisfied when full synchronization, incoherence, and two-cluster states are unstable.
## Appendix D Comparison between numerical and analytical phase diagram
In this section, we compare the numerically obtained phase diagram of model (2) with the analytical results.
In Fig. 11 (a), we depict the phase diagram obtained by direct numerical simulations with \(N=1000\) oscillators and \(\beta=0\). We initialized the system close to each of the possible states and plotted a blue, yellow, red, or white point if the incoherent state, full synchronization, the two-cluster state, or slow switching was stable, respectively. If two or more states were stable, a violet, green, or dark blue point was used to denote, respectively, bistability of full synchronization and the two-cluster state, bistability of incoherence and full synchronization, and multistability of incoherence, synchrony, and the two-cluster states.
In Fig. 11 (b), we have replotted the analytically obtained phase diagram of Fig. 2 (a) with the color code of Fig. 11 (a). The comparison of Fig. 11 (a) and (b) provides evidence of the excellent agreement between the numerical and analytical results.
|
2301.00305 | The functorial semantics of Lie theory | Ehresmann's introduction of differentiable groupoids in the 1950s may be seen
as a starting point for two diverging lines of research, many-object Lie theory
(the study of Lie algebroids and Lie groupoids) and sketch theory. This thesis
uses tangent categories to build a bridge between these two lines of research,
providing a structural account of Lie algebroids and the Lie functor.
To accomplish this, we develop the theory of involution algebroids, which are
a tangent-categorical sketch of Lie algebroids. We show that the category of
Lie algebroids is precisely the category of involution algebroids in smooth
manifolds, and that the category of Weil algebras is precisely the classifying
category of an involution algebroid. This exhibits the category of Lie
algebroids as a tangent-categorical functor category, and the Lie functor via
precomposition with a functor $\partial: \mathsf{Weil}_1 \to
\mathcal{T}_{\mathsf{Gpd}},$ bringing Lie algebroids and the Lie functor into
the realm of functorial semantics. | Benjamin MacAdam | 2022-12-31T23:04:30Z | http://arxiv.org/abs/2301.00305v1 | # The functorial semantics of Lie theory
###### Abstract
|
2309.14552 | Tactile Estimation of Extrinsic Contact Patch for Stable Placement | Precise perception of contact interactions is essential for fine-grained
manipulation skills for robots. In this paper, we present the design of
feedback skills for robots that must learn to stack complex-shaped objects on
top of each other (see Fig.1). To design such a system, a robot should be able
to reason about the stability of placement from very gentle contact
interactions. Our results demonstrate that it is possible to infer the
stability of object placement based on tactile readings during contact
formation between the object and its environment. In particular, we estimate
the contact patch between a grasped object and its environment using force and
tactile observations to estimate the stability of the object during a contact
formation. The contact patch could be used to estimate the stability of the
object upon release of the grasp. The proposed method is demonstrated on
various pairs of objects that are used in a very popular board game. | Kei Ota, Devesh K. Jha, Krishna Murthy Jatavallabhula, Asako Kanezaki, Joshua B. Tenenbaum | 2023-09-25T21:51:48Z | http://arxiv.org/abs/2309.14552v2 | # Tactile Estimation of Extrinsic Contact Patch for Stable Placement
###### Abstract
Precise perception of contact interactions is essential for the fine-grained manipulation skills for robots. In this paper, we present the design of feedback skills for robots that must learn to stack complex-shaped objects on top of each other (see Fig. 1). To design such a system, a robot should be able to reason about the stability of placement from very gentle contact interactions. Our results demonstrate that it is possible to infer the stability of object placement based on tactile readings during contact formation between the object and its environment. In particular, we estimate the contact patch between a grasped object and its environment using force and tactile observations to estimate the stability of the object during a contact formation. The contact patch could be used to estimate the stability of the object upon the release of the grasp. The proposed method is demonstrated on various pairs of objects that are used in a very popular board game.
## I Introduction
Humans can perform very complex and precise manipulation tasks effortlessly. Consider, for example, gently stacking two lightweight objects on top of each other without looking at them as shown in Fig. 1. Successful execution of this task requires that the object not topple upon release of the grasp. In these scenarios, stability is not directly observable; it must be implicitly inferred from tactile signals that entangle both _intrinsic_ (direct) contact between the end effector and the grasped object and _extrinsic_ (indirect) contact between the grasped object and the environment. For example, it is difficult to distinguish the stability of the configuration on the left from the right by looking at it visually (as seen in Fig. 1). This work is motivated by how humans are able to disentangle a composite tactile signal to determine the nature of extrinsic contact; and are able to further predict whether a given stack configuration is stable. We present a closed-loop system that similarly reasons about object stability using tactile signals that arise out of extrinsic contacts.
Object stability could be estimated from the contact forces experienced by an object during placement. The stability of an object is governed by the relative location of the environmental contact and the center of mass location of the object. The forces observed by the force-torque (F/T) sensor mounted on the wrist of the robot as well as the deformation observed by the tactile sensors co-located at the gripper fingers depend on the contact patch between the object and its environment, as well as the geometric and physical properties of the object. As a simplification, we assume that the geometry of the objects is fixed, so the robot works with known pieces. Under this assumption, the problem of estimating the stability of placement from tactile observations is simplified. With this understanding, we try to estimate the contact patch between the object and the environment using tactile signals. However, the estimation of contact patches from a single tactile observation is a partially observable problem. Thus, it is not possible to get a perfect estimation of contact from a single interaction.
To solve the partial observability problem, we present a
method for aggregating information from multiple observations. The proposed method collects tactile observations by interacting with the environment multiple times and updates its belief of the underlying contact formation. We show that we are able to monotonically improve our estimate of the contact formation between the environment and the grasped object. This estimate is used to move the object towards a stable configuration so that it can be released in a stable pose. This is demonstrated using several pairs of objects from a popular board game where the objective is to incorporate a new block on an existing tower without destabilizing the tower. We also perform ablations to understand which sensing modality, the F/T sensor or the vision-based tactile sensors is helpful in understanding the phenomena during the considered contact phenomena.
**Contributions:** In summary, our contributions are the following.
1. We present a method to estimate extrinsic contact patches from end-effector tactile signals that compose both intrinsic and extrinsic contacts.
2. Our probabilistic filtering approach for use in a feedback control loop can stably stack a set of extremely challenging real-world objects using solely tactile sensing.
## II Related Work
**Block stacking.** Block stacking is one of the most widely studied problems in robotics. Several studies have addressed the problem of robot stacking through various approaches. These include learning to schedule auxiliary tasks for reinforcement learning (RL) [1], combining demonstrations and RL [2, 3], employing sim-to-real transfer [2, 4, 5], and using task-and-motion planning [6]. The focus of these works primarily revolves around stacking simple cubes. Lee _et al._[7] propose a benchmark that introduces relatively irregular rectangles generated by deforming cubes. However, these objects still maintain convexity and simplicity. Furrer _et al._[8] and Yifang _et al._[9] have explored the stacking of irregular stones. Another related work that reasons about vision-based contact support could be found in [10]. But this assumed access to the geometry of the object and was indeed reasoning about the relative placement between blocks given the object geometries. Nonetheless, these studies make assumptions regarding knowledge of geometry and assume that objects possess wide support and high friction, simplifying the problem and enabling basic pick-and-place strategies. Most importantly, these works do not reason about stability using contact information but rather perform placement using open-loop controllers. However, the pick-and-place-based stacking would not work if there is ambiguity in the location of the environment (for example, the scenario shown in Figure 1). To address this problem, our proposed method considers the local contact phenomenon in which the object can topple and fall, if not placed with proper support. Moreover, we remove assumptions regarding the geometry of the underlying objects, necessitating the estimation of stability through interactions.
**External contact localization** Prior works represent contacts as a set of points [11, 12] and lines [13, 14]. Although line contacts give us more information compared to point contacts, they require active exploration involving changes in the orientation of the gripper [13, 14], making it difficult to apply them in our setting where the tower is very unstable. The closest work to ours is the neural contact fields (NCF) of Higuera et al. [15], where the authors estimate the contact patch between a grasped object and its environment. While NCF is evaluated on a simulation and a limited number of objects, we tested our method on unknown geometries of the environment which can be used for an appropriate downstream task.
## III Problem Statement
We are interested in performing stable placement in environments where the object might have partial support for placement. Consider, for example, the scenario shown in Figure 1, where it is not enough to establish contact with the bottom piece but rather estimate stability of the object in the resulting contact formation. Thus, we consider the problem of estimating the stability of an object when in contact with its environment, in an attempt to release and place the object in a stable pose during a task. This happens to be a partially observable task, as we cannot observe the full state of the system, and thus stability needs to be estimated from sensor observations. We assume that the robot has access to tactile sensors co-located at the gripper fingers as well as a Force/Torque (F/T) sensor at the wrist. A certain contact formation is stable if the object can remain stable after being released from the grasp.
The stability of a contact formation depends on the relative position of the center of mass of the object and the contact patch between the object and the environment. However, this cannot be directly observed during a contact formation, and thus leads to partial-observability. A robot can usually observe force-torque signals and/or tactile images during interaction. The observed signals depend not only on the contact formation but also on the geometry and physical parameters of the grasped object. Thus, although these data have a lot of information, these are all entangled and thus it is very difficult to extract specific information, e.g. estimate contact patch. The stability estimation problem in its full scope requires reasoning about the sensor observations while considering the geometric information of the objects. To simplify the estimation problem, we make the following assumptions to limit the scope of current study:
1. Geometry and physical parameters of the grasped objects are fixed.
2. All objects are rigid and have flat surfaces.
It is important to emphasize that the robot is unfamiliar with the shape of the underlying objects and needs to explore a stable configuration through several probing attempts. These assumptions restrict the use of our proposed method to known objects. A full, in-depth study of the problem is left as a future exercise.
## IV Method
The main idea here is to estimate the contact patch between an object and its environment using force and tactile measurements. This is based on the fact that sensor observations are generated by the contact formation between the object and its environment. We propose a framework consisting of four key components. First, the robot estimates the contact patch between the grasped object and its environment from an observation obtained by interacting with the environment. Then, it assesses stability based on the estimated contact patch; and releases the grasped object if it believes the current configuration is stable; otherwise, it aggregates information from multiple estimated contact patches to predict the belief map, which gives us a sense of the contact surface of the environment. Then it can assess the stability. Finally, the robot selects an action that moves the object towards a position that it believes can improve stability. In this section, we describe more details of these four modules.
### _Contact Patch Estimation_
Given the observed tactile image \(o^{\text{Tac}}\) and F/T measurements \(o^{\text{FT}}\), our objective is to learn a model that generates a probabilistic contact patch \(\hat{S}\), which consists of a set of probabilities indicating which part of the grasped object is in contact.
**Contact representation.** To estimate the contact patch, we discretize the contact surface of the grasped object \(S\) into \(N\) points as \(S\approx\{s_{1},...,s_{N}\}\), each of which corresponds to a specific location on the contact surface of the grasped object (see Fig. 3 right). For each point \(s_{j}\), we predict the probability \(p(s_{j})\) of it being in contact or remaining uncontacted. Consequently, we represent the probabilistic contact patch \(\hat{S}\) as a set of probabilities \(\hat{S}=\{p(s_{1}),...,p(s_{N})\}\).
**Data collection by interaction.** During a duration of \(T\) seconds, the robot applies a downward force along the negative Z axis for \(d\) mm, while collecting \(o^{\text{Tac}},o^{\text{FT}}\) from tactile and force-torque sensors at a frequency of \(10\) Hz. Specifically, \(o^{\text{Tac}}=\{o^{\text{Tac}}_{t}\}_{t=0}^{T}\), where \(o^{\text{Tac}}_{t}\in\mathbb{R}^{252}\) with \(252=2\times 2\times 7\times 9\), where we use two tactile sensors mounted on each finger and measure marker displacements on the \(XY\) axis in the tactile image, and \(7\times 9\) is the number of markers in column and row (see Fig. 2), which can be obtained by post-processing the tactile image \(I^{\text{Tac}}_{t}\). Similarly, \(o^{\text{FT}}=\{o^{\text{FT}}_{t}\}_{t=0}^{T}\), \(o^{\text{FT}}_{t}\in\mathbb{R}^{6}\) is the F/T measurement. We use a suitable impedance control to prevent the object from falling by using excessive force. In the data collection process, we add displacements in the \(XY\) plane such as \(x\sim\{x_{\texttt{min}},x_{\texttt{max}}\}\) and \(y\sim\{y_{\texttt{min}},y_{\texttt{max}}\}\) whose origin is the center position of the contact surface of the lower object \(O\) (see Fig. 3), and the minimum and maximum range are defined to ensure contact between the flat surfaces of the upper and lower objects. We use known geometries and displacements to generate ground-truth contact patches for training a model.
**Training.** Finally, we train a contact patch estimation model \(f^{\text{contact}}\) that takes observation \(o^{\text{Tac}},o^{\text{FT}}\) and learns to generate a probabilistic contact surface \(\hat{S}\) as:
\[\hat{S}=f^{\text{contact}}(o^{\text{Tac}},o^{\text{FT}}). \tag{1}\]
This model is trained by minimizing the binary cross-entropy loss for each data point \(s_{j}\). To capture patterns in time series data, we use LSTM to build the model.
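To make the estimator concrete, the following is a minimal PyTorch sketch of such a model. The input dimensions (252 tactile features and 6 F/T channels) follow the description above, while the hidden size, the number of surface points \(N\), and the optimizer settings are illustrative assumptions rather than the exact architecture used in this work.

```python
import torch
import torch.nn as nn

class ContactPatchLSTM(nn.Module):
    """Sketch of f^contact: maps a sequence of tactile + F/T readings
    to per-point contact probabilities on the grasped surface."""
    def __init__(self, n_points, tac_dim=252, ft_dim=6, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(tac_dim + ft_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_points)

    def forward(self, o_tac, o_ft):
        # o_tac: (batch, T, 252), o_ft: (batch, T, 6)
        x = torch.cat([o_tac, o_ft], dim=-1)
        _, (h, _) = self.lstm(x)                 # h: (num_layers, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # (batch, n_points)

# One training step with the per-point binary cross-entropy loss.
model = ContactPatchLSTM(n_points=100)           # N = 100 is an assumed discretisation
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
o_tac, o_ft = torch.randn(8, 20, 252), torch.randn(8, 20, 6)
target = torch.randint(0, 2, (8, 100)).float()   # ground-truth contact mask
opt.zero_grad()
loss = loss_fn(model(o_tac, o_ft), target)
loss.backward()
opt.step()
```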
### _Stability Estimation_
We utilize the estimated contact patch \(\hat{S}\) to estimate the stability of the current configuration. To do so, we first construct the convex hull \(C=\text{Convex}(\hat{S})\) (see Fig. 2 (b)) of the points whose associated probability exceeds a predefined threshold \(\delta\); we use \(\delta=0.9\) in our experiments. Subsequently, we check whether the convex hull contains the position of the center of mass of the grasped object. If it does, the gripper releases the grasped object. Otherwise, the robot aggregates information and moves towards a stable position using the action selection strategy described in the following sections.
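A minimal sketch of this stability check is given below. The threshold \(\delta=0.9\) is taken from the text, while the use of SciPy's Delaunay triangulation as the point-in-hull test and the 2D layout of the surface points are implementation assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def is_stable(points_xy, probs, com_xy, delta=0.9):
    """Build the convex hull of the points predicted to be in contact
    (p > delta) and check whether the projected centre of mass lies inside."""
    contact_pts = points_xy[probs > delta]
    if len(contact_pts) < 3:              # not enough points to span a 2D hull
        return False
    hull = Delaunay(contact_pts)          # triangulation covering the hull region
    return bool(hull.find_simplex(np.asarray(com_xy)) >= 0)

# points_xy: (N, 2) discretised bottom surface of the grasped object,
# probs: output of f^contact, com_xy: projection of the centre of mass.
points_xy = np.random.rand(100, 2)
probs = np.random.rand(100)
print(is_stable(points_xy, probs, np.array([0.5, 0.5])))
```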
### _Aggregating Information from Multiple Interactions_
Since the contact patch estimation from tactile signals is a partially observable task, i.e., multiple different contact patches can yield similar tactile signals, it is difficult to reliably estimate the contact patch from a single interaction. Therefore, we aggregate information from multiple interactions to disambiguate the estimate.

Fig. 2: **Pipeline**: Our method comprises four components. First, a robot probes the environment to establish contact between the grasped object and the target object upon which it must be stacked. During this probing phase, we acquire a sequence of force/torque measurements and tactile images. We then estimate the extrinsic contact patch and, in turn, the potential stability of the resultant configuration. Subsequently, we aggregate the information from multiple interactions to update the belief map of the contact state. We pick the action that maximizes the contact patch between the objects.
We denote the aggregated contact patch at time step \(i\) as \(\hat{S}_{i}^{B}\), which again represents a probabilistic contact surface of the _bottom_ object, \(\hat{S}_{i}^{B}=\{p(s_{1,i}^{B}),...,p(s_{M,i}^{B})\}\), where \(M\) is the number of discrete points. Following [16], the probability of contact \(p(s_{i}^{B})\) (note that we drop the point index \(m\) for simplicity) given past observations and actions can be formulated as
\[\begin{split} p(s_{i}^{B}|& o_{1:i-1},a_{1:i-1})\\ &=\int p(s_{i}^{B}|s_{i-1}^{B},a_{i-1})p(s_{i-1}^{B}|o_{1:i-1},a_ {1:i-1}),\end{split} \tag{2}\]
where the first term is \(1\) as we assume deterministic dynamics, and the second term is initialized with the prior distribution and can be obtained through recursion. The posterior can be computed as:
\[p(s_{i}^{B}|o_{1:i},a_{1:i-1})\propto p(o_{i}|s_{i}^{B})p(s_{i}^{B}|o_{1:i-1}, a_{1:i-1}), \tag{3}\]
where the first term is given by the contact patch estimation model \(f^{\text{contact}}\) and the second term can be computed from Eq.(2). Specifically, we initialize the probability with \(p(s_{0}^{B})=\text{Bernoulli}(0.5)\) since we do not know whether the specific point is in contact or not before interaction.
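One possible reading of this recursion, treating the \(M\) surface points as independent and folding Eqs. (2)-(3) into an elementwise Bayes update, is sketched below. How the per-interaction estimate is shifted into the bottom-object frame by the action \(a_{i-1}\) is abstracted away here and is an assumption of this sketch.

```python
import numpy as np

def update_belief(prior, likelihood):
    """Elementwise Bayes update of the per-point contact belief:
    fuse the new per-point estimate from f^contact with the running belief,
    treating successive interactions as conditionally independent."""
    post = likelihood * prior
    norm = post + (1.0 - likelihood) * (1.0 - prior)   # contact vs. no-contact
    return post / np.clip(norm, 1e-9, None)

M = 200
belief = np.full(M, 0.5)            # Bernoulli(0.5) prior before any interaction
for _ in range(5):                  # aggregate several probing interactions
    # per-point estimate from f^contact, already expressed in the
    # bottom-object frame (the shift by a_{i-1} is assumed to be done upstream)
    likelihood = np.random.rand(M)
    belief = update_belief(belief, likelihood)
print(belief[:5])
```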
### _Action Selection_
To realize a stable configuration, we design a policy that maximizes the contact surface area in the next step. The policy begins by calculating the central position of the convex hull of the aggregated contact patch, \(s_{C^{B}}=\frac{\sum_{i\in C^{B}}s_{i}^{B}}{|C^{B}|}\), where \(C^{B}\) is again the convex hull of the aggregated contact map, and subsequently directs the robot to move from its current position toward this central position. Additionally, to mitigate large movements at each step, we clip the movement to \(d^{\text{move}}\) mm if its norm exceeds \(d^{\text{move}}\). We specifically set \(d^{\text{move}}=3\) mm.
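A minimal sketch of this action selection rule is shown below, assuming positions are expressed in millimetres in the plane of the bottom object.

```python
import numpy as np

def select_action(hull_points, current_xy, d_move=3.0):
    """Step from the current position toward the centroid of the
    aggregated contact hull, clipped to at most d_move millimetres."""
    centroid = hull_points.mean(axis=0)
    step = centroid - current_xy
    norm = np.linalg.norm(step)
    if norm > d_move:
        step *= d_move / norm
    return step

step = select_action(np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]]),
                     np.array([0.0, 0.0]))
print(step)   # a 3 mm step toward the hull centroid
```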
## V Experiments
### _Settings_
**Tactile sensor.** We use a commercially available GelSight Mini [20] tactile sensor, which provides 320x240 compressed RGB images at a rate of approximately 25 Hz, with a field of view of 18.6 x 14.3 millimeters. We use gels that have 63 tracking markers.
**Robot platform.** The MELFA RV-5AS-D Assista robot, a collaborative robot with 6 DoF, is used in this study. The tactile sensor is mounted on the WSG-32 gripper (see Fig. 2). We use a Force-Torque (F/T) sensor, which is mounted on the wrist of the robot. The F/T sensor is used two-fold. First, we collect force observations which are used as input to the contact patch estimation model \(f^{\text{contact}}\). Second, it is used for the stiffness control of the position-controlled robot.
**Bandu.** We use pieces from _Bandu_ for our experiment. Bandu is a toy game that involves stacking objects onto a base plate. The players take turns stacking these objects and compete to see who can stack the most. Each piece has a highly irregular shape, requiring robots to estimate stable placements based on the shape of the objects. Figure 4 illustrates the Bandu pieces used in our experiments. The challenge in the game is to accommodate an irregular piece into an existing tower without destabilizing it.
### _Data Collection_
**Settings.** We first show the distribution of the observed tactile signals to understand the difficulties of the task. We collect \(2000\) tactile signals for each pair of top-bottom objects to train the contact patch estimation model \(f^{\text{contact}}\) by interacting with a 3D printed board as shown in Fig. 4 (a), resulting in \(6000\) training samples. During data collection, we add random displacements on the \(XY\) axis as defined
Fig. 4: The 3D printed board and Bandu pieces used in our experiments. (a) We use the 3D printed board for training data collection. The board includes small and large circles with diameters of \(15\) and \(25\) mm and one square whose length is \(15\) mm. (b) The first two pieces on the left serve as the bottom objects (or the environment), while the subsequent three on the right are designated as the grasped (top) objects. These pieces have been assigned the following names: _Short_, _Long_, _Mushroom_, _Barrel_, and _Pot_ from left to right.
Fig. 3: Definition of the **probabilistic contact patch.** (Left) The displacement \((x,y)\) is added from the origin of the bottom object \(O\) during data collection. This displacement and known contact surfaces of the two objects give the ground-truth contact surface \(S\). (Right) The discretized contact patch \(\hat{S}\) consists of a set of probabilities \(p(s_{j})\) that represents whether a specific position \(s_{j}\) of the contact surface of the grasped object is in contact or not.
in Fig. 3, and let the robot go down for \(d=1.5\) mm after establishing contact with the bottom object for \(T=2\) seconds using the stiffness controller whose gain parameter is \((K_{x},K_{y},K_{z})=(30,30,15)\) [N/mm]. We use the grasping force of \(10\) [N].
**Results and Analysis.** Fig. 5 shows the data distribution (left) and example contact patches (right). From the first to the fourth columns, we can observe the inherent difficulties of the estimation task. We do not observe any symmetric distribution of \(o_{x}^{\text{Tac}}\), \(o_{y}^{\text{Tac}}\) and the moment measurements \(T_{x},T_{y}\) about \(X=0\) or \(Y=0\). This could possibly be attributed to the inaccuracy in the 3D printing of the board or the slip of the object in the grasp during the contact interaction. Fig. 5 (b) shows three contact patches sampled from the star positions in each row. While tactile signals near the star positions are very similar, the resulting contact patches are very different. This highlights the partial observability of the underlying contact formation, indicating that a single tactile observation may not be sufficient to localize the contact formation. This ambiguity makes training of a machine learning model very difficult because similar inputs (i.e., tactile observations) can lead to totally different outputs (i.e., contact patches).
### _Contact Patch Estimation_
**Settings.** Next, we compare the performance of the contact patch estimation on different input modalities. We train the model \(f^{\text{contact}}\) for each top object using the dataset collected in Sec. V-B, and we evaluate the model using the intersection-over-union (IoU) and binary classification metric. We compare the performance with three different input modalities, an F/T sensor, tactile sensors, and the combination of the two, denoted as _FT_, _Tac_, and _FT+Tac_, respectively. The evaluation is carried out using two _unseen_ Bandu pieces (see Fig. 4), which we denote as _Short_ and _Long_. We used a 3D-printed jig to ensure that the robot always grasps the same position of the top object and collected \(400\) interactions with random displacements.
**Results and Analysis.** The results are presented in Table I. When comparing the three modalities, we can clearly see that the combination of tactile sensors and the F/T sensor (_FT+Tac_) yields the best performance. Consequently, for our subsequent experiments, we will utilize both of these modalities. However, it should be noted that the model is not confident enough to estimate the contact patch. This is because the same tactile signals can lead to different contact patches, as discussed in Sec. V-B. Therefore, in the next experiment, we will aggregate information from multiple interactions and compare performance in stability estimation.
Fig. 5: **Distribution of contact patches**: (a) Training data distribution with _Pot_ as the grasped object and three different 3D printed shapes as the bottom objects (see Fig. 4). Each row shows the data obtained from different primitive shapes and each column shows the distribution of different data types: tactile displacements on the \(XY\) axes, moments on the \(XY\) axis, and force \(F_{z}\). The horizontal and vertical axes show the displacements randomly added during data collection (see Fig. 3), and the black circle or rectangle in each graph shows the contour of the bottom object. (b) Example contact patch sampled from the star points (\(\bigstar\)) in the left distributions. Although these contact patches are very different, the tactile signals look quite similar as seen in the data around the star point, showing the difficulty of the task; i.e., similar tactile signals can lead to very different contact patches.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{Mushroom} & \multicolumn{2}{c}{Barrel} & \multicolumn{2}{c}{Pot} \\ & & S & L & S & L & S & L \\ \hline \multirow{3}{*}{IoU} & FT & \(27.4\) & \(37.7\) & \(29.6\) & \(\mathbf{44.5}\) & \(23.8\) & \(53.2\) \\ & Tac & \(33.6\) & \(22.2\) & \(31.5\) & \(42.6\) & \(16.5\) & \(37.5\) \\ & FT+Tac & \(\mathbf{38.4}\) & \(\mathbf{50.7}\) & \(\mathbf{31.9}\) & \(41.2\) & \(\mathbf{24.8}\) & \(\mathbf{54.8}\) \\ \hline \multirow{3}{*}{Acc} & FT & \(67.4\) & \(65.5\) & \(75.3\) & \(73.1\) & \(68.0\) & \(71.9\) \\ & Tac & \(\mathbf{72.9}\) & \(60.4\) & \(76.5\) & \(\mathbf{77.5}\) & \(66.9\) & \(71.7\) \\ \cline{1-1} & FT+Tac & \(\mathbf{77.9}\) & \(\mathbf{73.8}\) & \(\mathbf{77.0}\) & \(75.4\) & \(\mathbf{67.3}\) & \(\mathbf{74.9}\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison of the contact patch estimation performance on different input modalities measured by IoU and binary classification accuracy. Bold numbers show the best results among the three different input modalities. The \(S\) and \(L\) of the bottom objects correspond to the _Short_ and _Long_ objects, respectively (see Fig. 4).
### _Stability Estimation_
**Settings.** Next, we assess the stability estimation performance of the proposed method. We reuse the same data as used in the previous experiments with an additional binary label indicating whether the current configuration is stable by checking whether the geometric center of the bottom surface of the grasped object (i.e., the projection of the center of mass of the grasped object on the bottom surface) lies inside the contact patch. We compare our method with a baseline model that directly produces the stability probability by replacing the final layer of \(f^{\text{contact}}\) with a fully connected layer with a single unit and sigmoid activation. We name it _Implicit_ because it implicitly estimates stability, while our framework explicitly predicts it through the estimated contact patch.
**Results and Analysis.** Table II shows the quantitative results. A single interaction leads to poor performance, as seen in the results of the baseline (_Implicit_) as well as our method with a single interaction (_Ours_\(n=1\)). However, by aggregating the estimates of multiple interactions, the stability estimation performance improves significantly, leading to an average accuracy of \(90\)%. Figure 6 shows how the probability of a contact patch changes during interactions. It shows that the method corrects the initial inaccurate estimate and improves accuracy with additional interactions, and the method finally reconstructs the contact surface of the bottom object with reasonable accuracy.
### _Stacking_
**Settings.** Finally, we evaluate the stacking performance of the method. We always initialize the first interaction from an unstable contact state (i.e., the object would topple upon release of grasp). We run the method \(10\) times for each piece and evaluate whether the robot successfully places the piece in a stable configuration. Furthermore, we also test the method in a harder scenario, where the _Long_ piece is already stacked onto the _Short_ piece (see Fig. 4 for the definition of the pieces), and we stack a top piece on top of these two objects. We compare our method with a _Pick & Place_ baseline, where it releases the piece without estimating the stability.
**Results and Analysis.** Table III shows the results. The pick-and-place baseline fails in all trials. The proposed method improves performance by predicting the contact patch at each iteration and aggregating information to improve the estimation accuracy. Although the success rate drops when the number of bottom objects is increased, the method can still succeed with a success rate of around \(60\)%. Figure 7 shows a qualitative result of how it moves to the more stable position.
## VI Conclusion
Designing systems that can interpret and disentangle useful contact information from observed tactile measurements is the key to performing precise and fine manipulation. We proposed a framework for estimating extrinsic contact patches from tactile and force-torque measurements. The contact-patch estimation allows us to estimate the stability of the placement of several different objects in novel and unstable environments. We tested the proposed approach for the placement of several pieces of the game of Bandu, which is known to be a difficult stacking task. In the future, we would like to improve the performance by training on a wider variety of objects and relaxing the assumption of the known geometry so that the trained model can be used for the stacking task with arbitrary objects.
|
2310.00444 | FragQC: An Efficient Quantum Error Reduction Technique using Quantum
Circuit Fragmentation | Quantum computers must meet extremely stringent qualitative and quantitative
requirements on their qubits in order to solve real-life problems. Quantum
circuit fragmentation techniques divide a large quantum circuit into a number
of sub-circuits that can be executed on the smaller noisy quantum hardware
available. However, the process of quantum circuit fragmentation involves
finding an ideal cut that has exponential time complexity, and also classical
post-processing required to reconstruct the output. In this paper, we represent
a quantum circuit using a weighted graph and propose a novel classical graph
partitioning algorithm for selecting an efficient fragmentation that reduces
the entanglement between the sub-circuits along with balancing the estimated
error in each sub-circuit. We also demonstrate a comparative study over
different classical and quantum approaches of graph partitioning for finding
such a cut. We present {\it FragQC}, a software tool that cuts a quantum
circuit into sub-circuits when its error probability exceeds a certain
threshold. With this proposed approach, we achieve an increase of fidelity by
14.83\% compared to direct execution without cutting the circuit, and 8.45\%
over the state-of-the-art ILP-based method, for the benchmark circuits. | Saikat Basu, Arnav Das, Amit Saha, Amlan Chakrabarti, Susmita Sur-Kolay | 2023-09-30T17:38:31Z | http://arxiv.org/abs/2310.00444v1 | # FragQC: An Efficient Quantum Error Reduction Technique using Quantum Circuit Fragmentation
###### Abstract
Quantum computers must meet extremely stringent qualitative and quantitative requirements on their qubits in order to solve real-life problems. Quantum circuit fragmentation techniques divide a large quantum circuit into a number of sub-circuits that can be executed on the smaller noisy quantum hardware available. However, the process of quantum circuit fragmentation involves finding an ideal cut that has exponential time complexity, and also classical post-processing required to reconstruct the output. In this paper, we represent a quantum circuit using a weighted graph and propose a novel classical graph partitioning algorithm for selecting an efficient fragmentation that reduces the entanglement between the sub-circuits along with balancing the estimated error in each sub-circuit. We also demonstrate a comparative study over different classical and quantum approaches of graph partitioning for finding such a cut. We present _FragQC_, a software tool that cuts a quantum circuit into sub-circuits when its error probability exceeds a certain threshold. With this proposed approach, we achieve an increase of fidelity by \(14.83\%\) compared to direct execution without cutting the circuit, and \(8.45\%\) over the state-of-the-art ILP-based method, for the benchmark circuits.
keywords: Quantum circuit fragmentation, Hybrid quantum systems, Quantum error, Graph partitioning, Genetic algorithm, Circuit cutting, Quantum annealing. The code for _FragQC_ is available at [https://github.com/arnavdas88/FragQC](https://github.com/arnavdas88/FragQC).
## 1 Introduction
Over the past two decades, with the advance of quantum computing technology, researchers have been developing quantum algorithms to achieve asymptotic improvements [1]. Through qubits and quantum gates, a quantum algorithm may be implemented as a quantum circuit. Despite the enormous potential of quantum algorithms, the current generation of quantum computers is highly error-prone, which restricts their capacity to solve computational problems [2].
The use of quantum error correcting codes (QECCs) [3; 4; 5; 6; 7; 8; 9; 10; 11] to eliminate noise-related errors opens the door to fault-tolerant quantum computation [12]. However, in reality, implementing quantum error correction entails a significant cost due to the high number of qubit requirements, which is still outside the scope of current near-term devices. Numerous error reduction approaches have been developed as a result of the typical error rate of present near-term technologies; one of them is quantum error mitigation (QEM) [13; 14]. QEM does not use additional quantum resources; instead, it uses a variety of techniques, including extrapolation, probabilistic error cancellation, quantum subspace expansion, symmetry verification, machine learning, etc., to improve the accuracy of estimating the outcome in a particular quantum computational problem slightly. According to the most recent research, QEM is constrained to quantum circuits with a small number of qubits and a small depth because of the significant overhead of classical computational time complexity [13].
Fragmentation of quantum circuits [15; 16; 17] can be a helpful strategy for overcoming the technical difficulties of QEM since it partitions a quantum circuit into smaller sub-circuits, with fewer qubits and smaller depth. The short coherence periods of noisy intermediate-scale quantum (NISQ) processors must thus be handled by the sub-circuits. When running on a NISQ device, each sub-circuit experiences less noise. On a small quantum computer, a larger quantum system is primarily simulated through the fragmentation of a quantum circuit. In [18; 19], the authors suggested fragmenting a quantum circuit to reduce the exponential post-processing cost. Researchers also
investigated for the first time in another work [20] how such fragmentation impacts the various quantum noise models. Circuit fragmentation was later examined in another work as a way to lessen the impacts of noise [21]. Although the main goal of the earlier research was to break up complex circuits so that they could be implemented, the issue of noise that might cause false results was never appropriately addressed.
**Motivation:** While splitting a quantum circuit into smaller sub-circuits can lessen the impact of noise, fragmentation drives up post-processing costs and overall computational costs. In _i-QER_[22], error reduction for a specific circuit is accomplished by predicting the error in the circuit while keeping the amount of fragmentation to a minimum. Predicting the error in a quantum circuit is not simple; in _i-QER_, a machine learning-based strategy is employed to train the system to accurately predict errors in quantum circuits by taking into account their characteristics. However, there are several problems with _i-QER_: (i) the machine learning-based approach has a scalability issue when the circuit size is large, especially for accurate training, (ii) being a machine learning approach, it is always hardware-dependent, and (iii) the choice of an appropriate machine learning model from among the many available is still an open question.
The aim of this paper is to address all the above-mentioned issues by proposing a generalized circuit fragmentation approach by optimizing both the success probability and the cut size in a graph-theoretic manner without machine learning.
**Our major contributions** are:
* An error influenced balanced circuit bi-partitioning algorithm is proposed, which not only balances the error in the two partitions but also reduces the inter-partition communication.
* We provide a tool named _FragQC_ to efficiently fragment a quantum circuit with different approaches for achieving higher fidelity of the output quantum state.
The structure of this paper is as follows. Section 2 briefly presents the preliminary concepts of quantum circuit fragmentation, graph partitioning algorithms, and available quantum hardware. Section 3 describes the tool _FragQC_. Section 4 briefly discusses the experimental results of the proposed methodology. Section 5 captures our conclusions.
## 2 Background
This section briefly explains quantum circuit fragmentation and its challenges followed by a few relevant existing heuristic and approximate approaches for solving graph partitioning problems. Lastly, we shed some light on existing quantum hardware and their error rates.
### Quantum circuit fragmentation
A theoretical overview of quantum circuit fragmentation and an illustrative example are presented.
#### 2.1.1 The idea
Each qubit line of a quantum circuit represents a sequence of single- and multi-qubit gate operations. Time flows from left to right in the quantum circuit diagram. Quantum circuit fragmentation cuts these notional qubit wires vertically.
In [16], the authors demonstrated mathematically that the idea to cut a qubit wire is based on the notion that, if we have multiple copies of an experimentally generated single qubit state with a density matrix \(\rho\), then the set \(\{I/\sqrt{2},X/\sqrt{2},Y/\sqrt{2},Z/\sqrt{2}\}\) forms an orthonormal set of matrices with respect to the Hilbert-Schmidt inner product. So \(\rho\) may be expanded as:
\[\rho=\frac{Tr(\rho)I+Tr(\rho X)X+Tr(\rho Y)Y+Tr(\rho Z)Z}{2}. \tag{1}\]
In order to run on quantum computers, the Pauli matrices can be further decomposed into their eigenbases [18] as follows:
\[\rho=\frac{\rho_{1}+\rho_{2}+\rho_{3}+\rho_{4}}{2} \tag{2}\]
where
\[\rho_{1} = \left[Tr(\rho I)+Tr(\rho Z)\right]\left|0\right\rangle\left\langle 0\right|\] \[\rho_{2} = \left[Tr(\rho I)-Tr(\rho Z)\right]\left|1\right\rangle\left\langle 1\right|\] \[\rho_{3} = Tr(\rho X)[2\left|+\right\rangle\left\langle+\right|-\left|0 \right\rangle\left\langle 0\right|-\left|1\right\rangle\left\langle 1\right|]\] \[\rho_{4} = Tr(\rho Y)[2\left|+i\right\rangle\left\langle+i\right|-\left|0 \right\rangle\left\langle 0\right|-\left|1\right\rangle\left\langle 1\right|]\]
Physically, the trace operators are equivalent to measuring the qubits in one of the Pauli bases \(\sigma_{i}\in\{I,X,Y,Z\}\), and the density matrices correspond to physically initializing the qubits to one of the eigenstates.
If we assume that a qubit wire connecting \(A\) and \(B\), two vertices representing gates, is cut then we can reconstruct the sub-circuits by summing over the four pairs of measurement circuits added to \(A\) and an initialization circuit added to \(B\). After that, the overall circuit output is reconstructed by adding the four pairs of Kronecker products between the sub-circuit outputs. As a result, there are \(4^{k}\) Kronecker products to be computed for a cut size of \(k\).
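The decomposition in Eq. (2) can be checked numerically for a single qubit. The short NumPy sketch below draws a random density matrix \(\rho\) and verifies that the four measure-and-prepare terms reconstruct it exactly.

```python
import numpy as np

# Pauli matrices and the measurement / preparation projectors used in Eq. (2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
ket0 = np.array([[1.0], [0.0]], dtype=complex)
ket1 = np.array([[0.0], [1.0]], dtype=complex)
ketp = (ket0 + ket1) / np.sqrt(2)           # |+>
keti = (ket0 + 1j * ket1) / np.sqrt(2)      # |+i>
P0, P1 = ket0 @ ket0.conj().T, ket1 @ ket1.conj().T
Pp, Pi = ketp @ ketp.conj().T, keti @ keti.conj().T

# A random single-qubit density matrix rho
A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = A @ A.conj().T
rho /= np.trace(rho)

tr = lambda M: np.trace(rho @ M)            # the "measurement" terms Tr(rho * sigma)
rho1 = (tr(I2) + tr(Z)) * P0
rho2 = (tr(I2) - tr(Z)) * P1
rho3 = tr(X) * (2 * Pp - P0 - P1)
rho4 = tr(Y) * (2 * Pi - P0 - P1)
print(np.allclose(rho, (rho1 + rho2 + rho3 + rho4) / 2))   # True
```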
_Example of quantum circuit fragmentation._ Let us consider a quantum circuit with five qubits and four two-qubit _CNOT_ gates shown in Figure 1(a). All the qubits are initialized to \(|0\rangle\). In order to implement quantum circuit fragmentation, first we need to construct a graph from the given circuit, where the vertices are the two-qubit gates and there is an edge between two vertices if the corresponding two-qubit gates have at least one qubit in common. Thus, for a given quantum circuit \(C\), we have a graph \(G(V,E)\). The task is to find a cut that can separate the vertices into more than one disjoint set as shown in Figure 1(b).
The circuit \(C\) can be partitioned into two sub-circuits, as shown in Figure 1(b). Here, the two partitions \(\{A,B\}\) & \(\{C,D\}\) are separated by the dashed arrow line. The number of edges between the two partitions can be defined as the cut size (\(k\)). Considering that only one qubit, i.e., the third qubit (q[2]) is connecting the partitions, the value of \(k\) is 1. Therefore, each sub-circuit can be executed on a 3-qubit quantum hardware instead of a 5-qubit one. We need to take measurements on the \(3^{rd}\)-qubit after the _CNOT_ (node \(B\)). The initialization of the sub-circuit \(\{C,D\}\) has to be done based on the measurements after \(B\). Therefore, conventionally in the classical post-processing step, the complete probability distribution for the entire circuit can be reconstructed by taking the corresponding outputs of the two smaller sub-circuits, running four pairs of Kronecker products, and adding them together.

Figure 1: Example of quantum circuit fragmentation: (a) a quantum circuit \(C\) with 5 qubits and 4 two-qubit gates; (b) the corresponding graph \(G\) of \(C\); (c) the two quantum sub-circuits after a fragmentation.
In [18; 17], the authors have demonstrated efficient ways for the classical reconstruction method. In [23], the authors have proposed maximum-likelihood fragment tomography (MLFT) as an improved circuit fragmentation technique, with a limited number of qubits to run the quantum sub-circuits on quantum hardware. MLFT further finds the most likely probability distribution for the output of a quantum circuit, with the measurement data obtained from the circuit's fragments, along with minimizing the classical computing overhead of circuit fragmentation methods. Hence, they showed that circuit fragmentation as a standard tool can be used for running the sub-circuits on quantum devices by estimating the outcome of a partitioned circuit with higher fidelity as compared to the full circuit execution.
#### 2.1.2 Challenges of quantum circuit fragmentation
In spite of this immense potential, quantum circuit fragmentation faces a few formidable challenges when it is applied to large quantum circuits. Finding an efficient cut location is the first difficult task. Quantum circuits can always be divided into smaller sub-circuits, but choosing an efficient cut is critical for reducing the amount of classical post-processing and the effects of noise. Partitioning a large quantum circuit into sub-circuits often requires multiple edges or qubit wires to be cut. In such cases, all the possible measurement and initialization combinations have to be evaluated. Hence, the number of Kronecker products required is \(4^{k}\), with \(k\) being the number of edges cut. Thus, quantum circuits with \(n\) edges have a combinatorially explosive search space of \(\mathcal{O}(n!)\) to find an efficient cut.
Additionally, if we consider the effects of noise and look to improve the fidelity of a quantum circuit using the quantum circuit fragmentation approach, the problem of finding an efficient cut becomes even more complex. Section 3.1 addresses this problem with different classical as well as quantum annealing-based approaches. Before that, let us describe a few popular graph partitioning algorithms very briefly.
### Graph partitioning algorithms
In this paper, we consider a balanced bi-partition problem for quantum circuit fragmentation. In balanced bi-partitioning a graph, the goal is to partition the graph into two subgraphs with nearly equal disjoint sets of vertices,
while minimizing the capacity of the edges between the two subgraphs. For the sake of completeness, we first describe the popular heuristic algorithms for graph partitioning such as Kernighan-Lin (KL) algorithm [24] and Fiduccia-Mattheyses (FM) algorithm [25] in brief. We also discuss the _h-METIS_, one of the most popular hypergraph partitioning methods. Further, we also describe a genetic algorithm and a quantum annealing-based method.
_Kernighan-Lin algorithm._ The KL algorithm is a greedy heuristic that tries to identify the optimal bi-partition of a graph. Given a graph, it starts with an initial bi-section, and swaps an equal number of vertices between the two partitions, aiming to improve the cut size over the initial partition. This process is iterated in search of an optimal partition.
While it is a widely used graph partitioning algorithm, it also has some limitations
* For large graphs with multiple optimal partitions, this algorithm tends to converge to any of the optimal solutions, depending on the initial state. Hence, instead of converging into the global optima, it may get stuck in the local optima.
* The quality of the initial partition significantly influences the final partition obtained by the KL algorithm. A poorly chosen initial partition may lead to sub-optimal results or longer convergence time. Finding a suitable initial partition can be difficult for large graphs.
* Each pass of the KL algorithm takes \(\mathcal{O}(n^{3})\) time, which makes it computationally expensive for large graphs. Thus it lacks scalability as well.
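For reference, the plain KL heuristic is available off the shelf, e.g. in NetworkX. The sketch below bisects the small gate-interaction graph of Figure 1(b) (with assumed unit edge weights) and reports the resulting cut size; note that this is the vanilla heuristic, not the error-influenced variant proposed later in this paper.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Gate-interaction graph of Fig. 1(b): vertices are the CNOTs A-D,
# edges are shared qubit wires (unit weights assumed here).
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("C", "D", 1)])

part_a, part_b = kernighan_lin_bisection(G, weight="weight", seed=0)
cut = nx.cut_size(G, part_a, part_b, weight="weight")
print(part_a, part_b, "cut size:", cut)   # ideally {A, B} vs {C, D} with cut size 1
```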
_Fiduccia-Mattheyses (FM) algorithm._ The FM algorithm is an improved version of the KL algorithm which tries to overcome the limitations of the KL algorithm. Unlike the KL algorithm, the FM algorithm does not swap two vertices among the partitions. It calculates the gain of each vertex in its initial partition and moves the vertex with the highest gain to the other partition. A doubly linked list implementation of the algorithm makes each pass run in linear time. However, in order to achieve a lower cut size, the FM algorithm allows imbalanced partitions to a certain degree. Neither algorithm explicitly enforces any constraint to ensure balanced partitioning.
_\(h\)-METIS._ \(h\)-METIS, proposed by G. Karypis et al. [26; 27], partitions large hypergraphs, such as those generated during circuit design. The concept of \(h\)-METIS is based on multilevel graph partitioning, as explained in [28; 29]. Unlike other graph partitioning algorithms, \(h\)-METIS does not perform partitioning operations on the original graph. It takes a repeated coarsening approach, where the vertices and edges of the given graph are collapsed to reduce it to a smaller graph. It then performs the partitioning operation on the small graph. In the next phase, it performs an uncoarsening on the two subgraphs obtained, along with refinements to the partitioning. In this manner, \(h\)-METIS can quickly produce high-quality partitions for a large variety of hypergraphs. Experiments performed in [27] on a large number of hypergraphs show that \(h\)-METIS produces consistently better partitions than those by other widely used algorithms, such as KL, FM, etc. We use this well-established tool for circuit partitioning as a baseline for comparison in our experiments.
_Genetic algorithm._ A genetic algorithm (GA) is a metaheuristic inspired by natural selection. GAs, which rely on biologically inspired operators such as selection, crossover, and mutation, are often employed to develop near-optimal solutions to optimization and search problems. A GA initializes with an initial set of solutions or an initial population, which evolves into different populations with each iteration or generation. After multiple generations, the algorithm returns the best member of the population as the solution to the given problem.
In each generation, two members of the population are chosen and then combined to create offspring using a crossover operator. The mutation operator further modifies an offspring with a very low probability, to include more diversity in the population.
In [30], the authors have applied a GA for the graph bi-partitioning problem. They also claim that the GA performs comparably to or better than KL, FM, and simulated annealing algorithms. Thus, we plan to use the basic idea of the GA to construct our classical approach for solving graph partitioning problem.
_Quantum annealing._ The Hamiltonian of a system represents the total energy of the system. If the Hamiltonian of a system is very slowly evolved from an initial state to a final state, then the adiabatic theorem [31] states that, if the system is in the \(n^{th}\) eigenstate of the initial Hamiltonian, it evolves as the \(n^{th}\) eigenstate of the final Hamiltonian.
In [32], the authors proposed a quantum annealing algorithm that leverages this adiabatic evolution theorem to solve different combinatorial optimization problems. In the quantum annealing (QA) algorithm, we encode the solution to the problem in the ground state of a Hamiltonian. Therefore, we choose the initial state of the system to be the ground state of a simple Hamiltonian. This initial Hamiltonian is then slowly evolved to the final Hamiltonian whose ground state encodes the solution to the problem. Therefore, if the evolution is slow enough, then according to the adiabatic theorem, the system remains in its ground state throughout the evolution, and we have the solution to our problem in the final state.
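For concreteness, the balanced bi-partitioning objective can be written as a QUBO (the input format of a quantum annealer): a cut term plus a penalty that discourages unequal partition sizes. The sketch below builds such a QUBO for a toy graph and brute-forces its ground state; on an annealer the same matrix \(Q\) would instead be handed to the sampler (e.g. via D-Wave's Ocean tools). This is a standard encoding rather than necessarily the exact one used in this paper, and the penalty weight \(\alpha\) is a hand-picked assumption.

```python
import numpy as np
import itertools

# Toy weighted graph (upper-triangular adjacency) and balance penalty alpha.
W = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
n, alpha = 4, 2.0

# QUBO matrix Q: cut term  w_ij * (x_i + x_j - 2 x_i x_j)
# plus balance term  alpha * (sum_i x_i - n/2)^2  (constant offset dropped).
Q = np.zeros((n, n))
for i, j in zip(*np.nonzero(W)):
    Q[i, i] += W[i, j]; Q[j, j] += W[i, j]; Q[i, j] += -2 * W[i, j]
for i in range(n):
    Q[i, i] += alpha * (1 - n)
    for j in range(i + 1, n):
        Q[i, j] += 2 * alpha

# Brute-force the ground state of x^T Q x over x in {0,1}^n.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)   # a balanced bi-partition minimising the weighted cut
```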
If we can encode the objective function of a graph partitioning problem into a Hamiltonian, then QA can find the optimal partitioning strategy. In this paper, the graph partitioning algorithm is used to split a quantum circuit into two quantum sub-circuits while balancing the error between them. Thus, before describing the proposed methodology, it is important to discuss the available quantum hardware and its error rates.
### Quantum hardware and its error rate
Superconducting quantum devices [33], quantum dots [34], ion traps [35], and neutral atoms [36] are currently the most popular quantum technologies for constructing qubits and quantum gates. For hardware compatibility, the quantum logic gates in a quantum circuit, such as _CNOT, Hadamard, S, T, X, Y, Z_, must be decomposed using primitive gate operations of a specific quantum hardware or NISQ (Noisy Intermediate-Scale Quantum) [2] device. Further, each quantum device has a dedicated qubit connectivity topology as each is built with its own unique set of physical qubits, coupling strengths, and control mechanisms. These variations can result in different noise characteristics. Some devices, for example, may contain qubits that are closer to one another, resulting in stronger interactions and perhaps greater crosstalk between qubits. Thus quantum hardware has hardware-specific properties such as gate error rates, readout errors, etc. In this paper, we consider IBM's superconducting-based hardware for further experiments. Figure 2 depicts the error map of \(ibm\_nairobi\). It shows the qubit connectivity layout as well as the error rates for different gate operations on the qubits.
Along with error rates of different gate operations, there are a few critical hardware-specific properties such as relaxation time, coherence time, etc. The relaxation time \(T_{1}\) is a crucial parameter which is a measure of how long a qubit can remain in a superposition state or maintain its quantum
information before decohering into a classical state. The coherence time \(T_{2}\) represents the duration during which a quantum system, can preserve the quantum phase information, that enables quantum computations.
There are thus multiple quantum devices available on which to perform our experiments, although the experiments can be performed on any other available quantum hardware as well. We perform our experiments on a quantum device with a sufficient number of qubits so that we can easily execute medium to large quantum circuits. Thus we choose \(ibm\_Sherbrooke\), which is a 127-qubit quantum system with median \(T_{1}\) and \(T_{2}\) of 295.33 \(\mu\)s and 166.02 \(\mu\)s respectively.
## 3 Our Proposed Tool: _FragQC_
An overview of _FragQC_, our technique for reducing quantum errors, is provided below. A flowchart of the suggested tool containing the essential modules is given as Figure 3.
The tool accepts a quantum circuit as an input and calculates its potential success probability while considering the noise profile of a specific hardware. A novel error influenced balanced bi-partitioning algorithm is executed to partition the circuit into two sub-circuits whenever the projected success probability is below a certain user-specified threshold. This bi-partitioning is continued recursively until the sub-circuits can be run with a reasonable chance of success. For the sake of simplicity, we consider the success probability threshold of each sub-circuit akin to the success probability threshold of the overall circuit, which is given by the user. After the sub-circuits are implemented on the hardware, their outputs (probability distributions) are combined appropriately to generate the output of the entire circuit. We have adopted the output reconstruction method from [18].

Figure 2: Error Map of 7-qubit \(ibm\_nairobi\) device.
### Proposed technique on circuit fragmentation
The proposed method of circuit fragmentation, shown in Figure 4, has two main parts, namely (i) constructing a doubly weighted graph for a given quantum circuit, and (ii) error influenced balanced partitioning algorithm for circuit fragmentation.
#### 3.1.1 Graph representation of a quantum circuit
First, we represent the given quantum circuit \(C\) as a graph \(G_{C}\). We denote each two-qubit gate in \(C\) as a vertex, and an edge between two vertices indicates that the corresponding two-qubit gates share one or two qubits. The weight of the edge is either 1 or 2 depending on the number of qubits shared by the two two-qubit gates.
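A minimal sketch of this graph construction is given below. It consumes an ordered list of two-qubit gates (each given by the pair of qubit indices it acts on) and links consecutive gates that share a qubit wire, leaving a placeholder for the error-based vertex weight computed later; the qubit assignments in the toy example are assumptions chosen to reproduce a graph like that of Figure 1(b), with gate indices 0-3 standing in for the nodes A-D.

```python
import networkx as nx

def gate_graph(two_qubit_gates):
    """Vertices are two-qubit gates (in program order); an edge joins
    consecutive gates sharing a qubit wire, with weight equal to the
    number of shared wires (1 or 2)."""
    G = nx.Graph()
    last_gate_on = {}                      # qubit index -> last gate acting on it
    for g, qubits in enumerate(two_qubit_gates):
        G.add_node(g)
        for q in qubits:
            if q in last_gate_on:
                prev = last_gate_on[q]
                w = G[prev][g]["weight"] + 1 if G.has_edge(prev, g) else 1
                G.add_edge(prev, g, weight=w)
            last_gate_on[q] = g
        G.nodes[g]["error"] = 0.0          # vertex weight, filled in later from Eq. (6)
    return G

# Assumed qubit pairs for the four CNOTs of Fig. 1(a), all sharing q[2].
gates = [(0, 2), (1, 2), (2, 3), (2, 4)]
G = gate_graph(gates)
print(list(G.edges(data=True)))            # edges A-B, B-C, C-D, each of weight 1
```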
Our goal is to cut the quantum circuit in such a way that it not only decreases the interaction between the two partitions but also balances the error probability between the two sub-circuits; since the sub-circuits are executed separately, this reduces the impact of noise. In order to ensure this, we intend to store the error probability information of the circuit for specific quantum hardware as the vertex weight of the graph. Before the details of the calculation of the vertex weight from the error probabilities are given, we briefly discuss the quantum error model.

Figure 3: Flowchart of the proposed tool _FragQC_.
_Quantum error model._ Quantum errors can be broadly categorized into two major types: errors due to noisy gate operations and errors due to idle qubits. The noise model can be expressed in terms of the Kraus operators [37]. Let us consider a pure state \(\psi\), and its density matrix \(\sigma=\left|\psi\right\rangle\left\langle\psi\right|\). The evolution of the state \(\psi\) in a quantum channel can be given by a function \(\xi\) of its density matrix \(\sigma\), given as
\[\xi(\sigma)=\sum_{i}K_{i}\sigma K_{i}^{\dagger}, \tag{3}\]
where \(K_{i}\) is a Kraus operator and \(K_{i}^{\dagger}\) is the complex conjugate transpose of \(K_{i}\). The evolution of a noisy quantum system can also be represented by Eqn. 3. If we consider depolarizing noise channels, then the Kraus operators are the (scaled) Pauli matrices.
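As a small illustration of Eqn. 3, the sketch below applies a single-qubit depolarizing channel (Kraus operators proportional to the Pauli matrices, with an assumed error probability \(p\)) to a density matrix using NumPy.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Eqn. 3: evolve a density matrix through the channel given by its Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Depolarizing channel: with probability p the state is hit by X, Y or Z.
p = 0.1
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kraus = [np.sqrt(1 - p) * I2,
         np.sqrt(p / 3) * X, np.sqrt(p / 3) * Y, np.sqrt(p / 3) * Z]

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
rho_noisy = apply_channel(rho, kraus)
print(np.real(np.diag(rho_noisy)))                # population leaks from |0> to |1>
```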
_1. Gate operation error:_ The possible error model for a quantum system with one qubit can be expressed as:
\[\xi(\sigma)=\sum_{i\in\{0,1\}}\sum_{j\in\{0,1\}}p_{i,j}(X^{i}Z^{j})\sigma(X^{i}Z^{j})^{\dagger}. \tag{4}\]

where \(X\) and \(Z\) are Pauli operators, and the probability of the corresponding Kraus operator is denoted by \(p_{i,j}\).

Figure 4: Block diagram of our error efficient cut searcher.
Hence, the possible quantum error channels are
if \(i=0\) and \(j=0\) then \(X^{0}Z^{0}=I\), i.e., no error
if \(i=0\) and \(j=1\) then \(X^{0}Z^{1}=Z\), i.e., phase flip error
if \(i=1\) and \(j=0\) then \(X^{1}Z^{0}=X\), i.e., bit flip error
if \(i=1\) and \(j=1\) then \(X^{1}Z^{1}=XZ\), i.e., both bit and phase flip errors
In [38], the authors represented a noisy gate operation by the ideal gate operation followed by a set of Pauli operators \(\{X,Y,Z\}\) with probability \(p_{ex}\), \(p_{ey}\) and \(p_{ez}\) respectively.
_2. Amplitude damping error:_ When a qubit in an open quantum system is kept idle, it can absorb or dissipate energy and change its state spontaneously over time. Let us assume that a qubit can dissipate and absorb energy with a probability \(p\) and \((1-p)\) respectively. This noise channel is called an amplitude damping channel and it can also be defined using Kraus operators. The Kraus operators for energy dissipation or state change \(|1\rangle\rightarrow|0\rangle\) are written as:
\[K_{0}=\sqrt{p}\begin{bmatrix}1&0\\ 0&\sqrt{1-\lambda}\end{bmatrix},\quad K_{1}=\sqrt{p}\begin{bmatrix}0&\sqrt{\lambda}\\ 0&0\end{bmatrix}.\]
The Kraus operators for energy absorption or state change \(|0\rangle\rightarrow|1\rangle\) are written as:
\[K_{2}=\sqrt{1-p}\begin{bmatrix}\sqrt{1-\lambda}&0\\ 0&1\end{bmatrix},\quad K_{3}=\sqrt{1-p}\begin{bmatrix}0&0\\ \sqrt{\lambda}&0\end{bmatrix}.\]
Here, \(\lambda\propto e^{-\tau/T_{1}}\), where \(T_{1}\) is called the energy relaxation time of the quantum system and \(\tau\) is the time duration of the quantum system or the time duration for which the quantum circuit is operational.
_3. Phase damping error:_ Phase damping is a unique quantum mechanical noise model that describes the loss of quantum coherence without loss of energy. The Kraus operators for the dephasing channel can be given as
\[K_{p0}=\sqrt{p}\begin{bmatrix}1&0\\ 0&\sqrt{1-\lambda}\end{bmatrix},\ K_{p1}=\sqrt{p}\begin{bmatrix}0&0\\ 0&\sqrt{\lambda}\end{bmatrix}.\]
Here, \(\lambda\propto e^{-\tau/T_{2}}\), where \(T_{2}\) is called coherence time of the quantum system and \(\tau\) denotes the time duration.
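To make the channel action concrete, the sketch below applies the dissipative amplitude damping pair \(K_{0},K_{1}\) to the state \(|1\rangle\langle 1|\) via Eqn. 3; the values of \(p\) and \(\lambda\) are assumed for illustration and are not calibration data.

```python
# Sketch: apply a Kraus channel (Eqn. 3) to a single-qubit density matrix.
import numpy as np

def apply_channel(sigma, kraus_ops):
    return sum(K @ sigma @ K.conj().T for K in kraus_ops)

p, lam = 1.0, 0.1                                    # dissipation-only example values
K0 = np.sqrt(p) * np.array([[1, 0], [0, np.sqrt(1 - lam)]])
K1 = np.sqrt(p) * np.array([[0, np.sqrt(lam)], [0, 0]])

sigma = np.array([[0, 0], [0, 1]], dtype=complex)    # density matrix of |1>
print(apply_channel(sigma, [K0, K1]))                # population leaks toward |0><0|
```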
_Computing the vertex weights._ In [39], the authors have given a linear-time algorithm to trace errors in a quantum circuit, and in [40, 41], the authors have computed the probability of success of a quantum circuit considering errors due to noisy gate operations and amplitude damping. Inspired by these works, we first compute the error probability of a quantum circuit \(C\). We consider the most common error model for quantum systems, namely gate errors and idle errors [42], the latter modeled with amplitude and phase damping. Let us assume that \(C\) has \(k_{1}\) single-qubit gates and \(k_{2}\) two-qubit gates, and that the error probabilities of a single-qubit gate and a two-qubit gate are \(p_{1}\) and \(p_{2}\) respectively. The error probability of the circuit due to the noisy gate operations can be written as
\[p_{GE}=1-\{(1-p_{1})^{k_{1}}(1-p_{2})^{k_{2}}\}. \tag{5}\]
The error due to the amplitude damping and phase damping along with the gate error can therefore be expressed as
\[P_{Error}=1-\{(1-p_{1})^{k_{1}}(1-p_{2})^{k_{2}}e^{-(\tau/T_{1}+\tau/T_{2})}\}. \tag{6}\]
The weight of a vertex is computed from Eqn. 6: it combines the error of the corresponding two-qubit gate with the errors of the sequences of single-qubit gates operating on the qubits responsible for its edges. We also account for the idle error on those edges, i.e., decoherence prior to that two-qubit gate. We normalize the calculated error probability to improve accuracy and assign it to the vertex as its weight.
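A direct implementation of Eqns. 5-6 could look as follows; the gate counts and device parameters in the example call are assumed values for illustration only, not measured backend properties.

```python
# Sketch: error probability of a (sub-)circuit following Eqns. 5-6.
import math

def circuit_error_probability(k1, k2, p1, p2, tau, T1, T2):
    # success = gate-level success times the decoherence factor exp(-(tau/T1 + tau/T2))
    success = (1 - p1) ** k1 * (1 - p2) ** k2 * math.exp(-(tau / T1 + tau / T2))
    return 1 - success

# e.g. 6 single-qubit gates and 3 CNOTs feeding one vertex (assumed numbers)
print(circuit_error_probability(k1=6, k2=3, p1=2e-4, p2=8e-3,
                                tau=5e-7, T1=1.2e-4, T2=9e-5))
```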
_Example._ Let us illustrate with a quantum circuit \(C\) having 8 qubits and 14 two-qubit _CNOT_ (_cx_) gates, as shown in Figure 5(a). In the corresponding doubly-weighted graph \(G_{C}\), each vertex represents a _CNOT_ (_CX_) gate and each edge denotes a qubit connecting two _CNOT_ gates. Let us calculate the weight of the vertex \(CX13\), which has an edge from \(CX11\) and from \(CX12\). While computing the vertex weight, we have to consider:
1. the error rate of \(CX13\),
2. error due to all the single qubit gates applied between \(CX11\) and \(CX13\),
3. error for all the single qubit gates operated between \(CX12\) and \(CX13\), and
4. amplitude and phase damping error.
Thus the weight of \(CX13\) is \(0.070\) as shown in Figure 5(b). Similarly, we compute the weights for all the vertices as portrayed in Figure 5(b).
Figure 5: An example circuit and its corresponding doubly-weighted graph.
#### 3.1.2 Error influenced balanced bi-partitioning of circuit
First, we present the objective function of the optimization problem we intend to solve.
Objective function.Let us assume that we have a cut \(c\), which partitions the graph \(G_{C}\) into two sub-graphs \(G_{1}\) and \(G_{2}\). Let an indicator variable \(y_{v}\) be associated with each vertex \(v\in V\) such that,
\[y_{v}=\begin{cases}0&if\;\;v\in G_{1}\\ 1&if\;\;v\in G_{2}\end{cases} \tag{7}\]
If \(e_{i,j}\) is the edge connecting the vertices \(v_{i}\) and \(v_{j}\) having edge weight \(w_{i,j}\), then the cut size (\(K_{c}\)) for the specific cut \(c\), can be written as
\[K_{c}=\sum_{e_{i,j}\in E}w_{i,j}(y_{v_{i}}-y_{v_{j}})^{2}. \tag{8}\]
Let the sum of vertex weights for the partitions \(G_{1}\) and \(G_{2}\) be \(\Omega_{G_{1}}\) and \(\Omega_{G_{2}}\) respectively. Hence, the overall cost for a cut \(c\) can be written as
\[Cost_{c}=K_{c}(\frac{1}{\Omega_{G_{1}}}+\frac{1}{\Omega_{G_{2}}}). \tag{9}\]
Our aim is to find a cut \(c\) for which this \(Cost_{c}\) is minimum. Thus, the cost given by Eqn. 9 is our objective function to be minimized. In this paper, we apply both classical and quantum approaches to solve this optimization problem in order to establish the most suitable method. We start with our proposed genetic algorithm (GA)-based classical approach, followed by a quantum annealing-based approach.
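A minimal sketch of this objective is given below, assuming \(G_{C}\) is stored as a `networkx` graph whose edges carry the weights \(w_{i,j}\) and whose vertices carry the error weights of Section 3.1.1; the guard against an empty partition is our addition.

```python
# Sketch of Eqns. 8-9: cut size weighted by the inverse error mass of each half.
# y maps each vertex to 0 (partition G1) or 1 (partition G2), as in Eqn. 7.
def cut_cost(G, y):
    K_c = sum(d["weight"] * (y[u] - y[v]) ** 2 for u, v, d in G.edges(data=True))
    omega1 = sum(G.nodes[v]["weight"] for v in G if y[v] == 0)
    omega2 = sum(G.nodes[v]["weight"] for v in G if y[v] == 1)
    if omega1 == 0 or omega2 == 0:        # degenerate cut: one side is empty
        return float("inf")
    return K_c * (1.0 / omega1 + 1.0 / omega2)
```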
Proposed classical approach for finding an error-balanced min-cut.We propose a GA-based approach to minimize our objective function in Eqn. 9. Algorithm 1 outlines our proposed approach for identifying the minimum cut \(c\) in a graph \(G_{C}\) while maintaining a balance with respect to errors.
We initiate the algorithm with a random cut effectively dividing the graph into two sub-graphs. We obtain a partition vector, essentially a string containing the values of \(y_{v}\) for all \(N\) vertices of \(G_{C}\). Next, the cost of this initial partition is computed by Algorithm 2, which implements Eqn. 9. The procedure, as outlined in Algorithm 2, includes determining the weights of vertices within each partition and employing Algorithm 3 (referred to as the Cut Size
```
Require: N, initialPartitionVector                ▷ N = number of vertices
Ensure: MinPartitionVector, MinCost
  MinVector = initialPartitionVector + 1
  MinCost = CostCalculator(initialPartitionVector)
  CostFlag = 0
  while CostFlag <= c_2 do
      for i = 1 to N do
          partitionVector[i] = initialPartitionVector[i] XOR 1
          cost[i] = CostCalculator(partitionVector)
          if cost[i] < MinCost then
              MinCost = cost[i]
              MinVector = partitionVector
              CostFlag = 0
          else
              CostFlag = CostFlag + 1
          end if
      end for
      initialPartitionVector = crossover(MinVector, initialPartitionVector)
  end while
  return MinCost, MinVector
```
**Algorithm 1** Finding an Error-balanced Min-Cut
```
Require: cutSize, partitionVector, vertexWeight, N    ▷ N = number of vertices
Ensure: Cost
  function CostCalculator(partitionVector)
      totalWeight = 0
      WeightP2 = 0
      for i = 1 to N do
          totalWeight = totalWeight + vertexWeight[i]
          WeightP2 = WeightP2 + vertexWeight[i] * partitionVector[i]
      end for
      WeightP1 = totalWeight - WeightP2
      Cost = CutSize(partitionVector) * (1/WeightP1 + 1/WeightP2)
      return Cost
  end function
```
**Algorithm 2** Calculating Cost of a Partition
Calculator), to compute the cut size. Once the cost of the initial partition vector is computed, it is stored as \(MinCost\).
The subsequent phases of the process iteratively flip each bit in the partition vector and update \(MinCost\) whenever the cost of the new partition vector is lower. This process is repeated until all bits have been flipped, which marks the completion of one iteration, or pass, of Algorithm 1. Once a pass is completed, Algorithm 1 applies a crossover operation (Algorithm 4) to the lowest-cost partition vector found so far (with cost \(MinCost\)) and the current initial partition vector, generating the new initial partition vector for the next pass.
```
Require: partitionVector, edges, edgeWeight
Ensure: cutSize
  /* partitionVector is '0' or '1' for vertices (v_1, v_2, .., v_N) in Partition1 or
     Partition2 respectively; edge e_{k,l} has endpoints v_k and v_l; edgeWeight is
     the weight of each edge */
  function CutSize(partitionVector)
      cutSize = 0
      while edges != NULL do                ▷ iterate over all the edges
          cutSize = cutSize + edgeWeight[e_{i,j}] * (partitionVector[i] - partitionVector[j])^2
                                            ▷ e_{i,j} is the edge between vertex v_i and v_j
      end while
      return cutSize
  end function
```
**Algorithm 3** Calculating the cut size
In summary, the algorithm calculates partition costs, iteratively improves the partition by flipping bits, and uses a crossover operation to create a new initial partition vector for the next iteration, all with the aim of optimizing the partition. This process is repeated until no further improvement in cost is observed. After reaching this point, the algorithm undergoes a few more iterations (a total of \(c_{2}\) times) before ultimately returning the final \(MinCost\) and the corresponding \(MinVector\). The overall complexity of our proposed algorithm is \(O(n\cdot e)\), where \(n\) is the number of vertices and \(e\) is the number of edges of the graph.
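An executable rendering of this loop, reusing the `cut_cost` sketch above, might look as follows; since Algorithm 4 is not reproduced in the text, the single-point crossover used here is an assumption rather than the paper's exact operator.

```python
# Sketch of the bit-flip-and-crossover loop of Algorithm 1 (GA-style improvement).
import random

def min_cut_ga(G, c2=5, seed=0):
    rng = random.Random(seed)
    nodes = list(G)
    current = {v: rng.randint(0, 1) for v in nodes}      # random initial partition
    best, best_cost = dict(current), cut_cost(G, current)
    stall = 0
    while stall <= c2:
        for v in nodes:                                  # flip each bit in turn
            trial = dict(current)
            trial[v] ^= 1
            c = cut_cost(G, trial)
            if c < best_cost:
                best, best_cost, stall = dict(trial), c, 0
            else:
                stall += 1
        point = rng.randrange(1, len(nodes))             # assumed single-point crossover
        current = {v: (best[v] if i < point else current[v])
                   for i, v in enumerate(nodes)}
    return best, best_cost
```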
Quantum Annealing Based Approach.We incorporate quantum annealing as the quantum approach in _FragQC_ to solve the proposed optimization problem defined in Eqn. 9. In [43], the authors demonstrated graph partitioning using quantum annealing on the D-Wave system. We also use the D-Wave system for our experiments. To minimize our objective function in Eqn. 9 on a D-Wave system, we need to express it as the following Ising objective function
\[\min\left(\sum_{i}h_{i}s_{i}+\sum_{i<j}J_{ij}s_{i}s_{j}\right). \tag{10}\]
where \(s_{i}\in\{+1,-1\}\) are spin variables subject to local fields \(h_{i}\) and pairwise interactions with coupling strengths \(J_{ij}\).
The quadratic unconstrained binary optimization (QUBO) representation is often preferred with its \(0/1\)-valued variables over the Ising \(-1/+1\)-valued variables because it is more natural. The QUBO objective function is
\[\min\left(\sum_{i}Q_{ii}x_{i}+\sum_{i<j}Q_{ij}x_{i}x_{j}\right). \tag{11}\]
where \(Q_{ii}\) is analogous to the Ising \(h_{i}\), as are \(Q_{ij}\) and \(J_{ij}\). The Ising and QUBO models are related through the transformation \(s=2x-1\).
Fortunately, the D-Wave machine accepts both the Ising and QUBO forms, hence we describe our objective function as an Ising formulation. Our Ising
formulation for the proposed balanced bi-partitioning of the doubly weighted graph can be derived from Eqn. 9 as follows:
\[\min\left(\sum_{i,j}^{n}w_{ij}^{2}\frac{1-s_{i}s_{j}}{2}+2\sum_{i<j}^{n}v_{i}v_{j }s_{i}s_{j}+\sum_{i=1}^{n}{v_{i}}^{2}+\left(\sum_{i=1}^{n}v_{i}-\frac{n}{2} \right)^{2}\right). \tag{12}\]
where \(w_{ij}\) is the weight of an edge between vertex \(i\) and \(j\) and \(v_{i}\) is the weight of a vertex \(i\) in \(G_{C}\). The total number of vertices of the graph is \(n\). We minimize this objective function through quantum annealing on a D-Wave system to find an optimal cut in a quantum circuit.
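As a sketch, the \(s\)-dependent part of Eqn. 12 can be collected into \((h,J)\) coefficient dictionaries as below; constant terms are dropped since they do not change the minimizer, and passing \((h,J)\) to a sampler's `sample_ising(h, J)` method (e.g. in the D-Wave Ocean SDK) is one possible backend, stated here as an assumption rather than a prescription.

```python
# Sketch: Ising coefficients (h, J) for the s-dependent terms of Eqn. 12.
# G is the doubly weighted networkx graph from Section 3.1.1.
def ising_from_graph(G):
    h = {v: 0.0 for v in G}                  # no linear terms arise from Eqn. 12
    J = {}
    verts = list(G)
    for a in range(len(verts)):
        for b in range(a + 1, len(verts)):
            u, v = verts[a], verts[b]
            # 2 v_i v_j s_i s_j penalizes heavy vertices on the same side (balance term)
            coeff = 2.0 * G.nodes[u]["weight"] * G.nodes[v]["weight"]
            if G.has_edge(u, v):
                # w_ij^2 (1 - s_i s_j)/2 contributes -w_ij^2 / 2 (constant dropped)
                coeff -= G.edges[u, v]["weight"] ** 2 / 2.0
            J[(u, v)] = coeff
    return h, J
```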
## 4 Experimental Evaluation
We present here the notable results obtained when we fed different benchmark quantum circuits from [44; 45] to our tool _FragQC_. We have applied both the classical and the quantum approach, i.e., the GA-based classical approach and the quantum annealing-based approach implemented on D-Wave's quantum annealing device. We have compared both results with _h-METIS_, a popular state-of-the-art tool for circuit partitioning.
All experiments have been performed in Python 3.11.1 on a system with an AMD EPYC 7B13 (x86_64) octa-core processor on KVM, a 2.5 GHz CPU, 62.8 GiB usable RAM, and the x86_64 Ubuntu 22.04.2 LTS operating system.
Figure 6(a) provides the cost of the cut selected by the specified algorithms, and Figure 6(b) the cut sizes. The green bar represents _h-METIS_, the orange one is for quantum annealing using the D-Wave systems, and the blue one is for the proposed classical GA-based approach. It is evident from these results that the cut selected by _h-METIS_ has a much higher cost and cut size, than the other two approaches. However, there is no clear winner between the classical GA-based approach and the quantum annealing-based approach. Hence, for further experiments, we have used GA-Based and QA-based approaches for our circuits and _FragQC_ reports the cut that has a lower cost associated with it.
Let us consider the circuit shown in Figure 5(a). Its corresponding doubly weighted graph is given in Figure 5(b). In order to apply our tool _FragQC_ on this circuit, the error-balanced Min-cut finder reports the partition as shown in Figure 7. In this case, the blue vertices form one sub-circuit, while the
red vertices form the other. For this cut, only 2 edges, each having weight 1, are cut, thus the cut size is 2 and the overall cost for the cut is 8.239. The two sub-circuits are shown in Figure 8. These two sub-circuits can then be executed in the quantum hardware and the final outcome can be obtained through classical reconstruction.
Despite the fact that _FragQC_ is a hardware-sensitive tool, it can easily be used with gate-based quantum hardware of any technology and size. We have used our tool _FragQC_ to cut the quantum circuit whenever the success probability was below a certain user-defined threshold and executed the smaller sub-circuits on quantum hardware. We have calculated the fidelity compared to the ideal simulation. For the purpose of a comparative study, we have leveraged _CutQC_[18] through the circuit knitting toolbox. We have also directly executed the circuits on \(IBM\_Sherbrooke\) without cutting the circuits. The results obtained are shown in the Table 1. For all the circuits, we observed better fidelity for _FragQC_ over _CutQC_ and direct execution without cutting. On average, we got 14.83% better fidelity compared to direct execution without cutting the circuit and 8.45% fidelity gain over _CutQC_.
Since quantum circuit fragmentation and knitting involve several measurement operations, we wanted to eradicate errors that occur due to noisy measurement operations. For this purpose, we have used IBM's measurement error mitigation process to reduce the readout error in the probability
Figure 6: Cost and cut size for the selected cut by _h-METIS_ (green), Proposed QA (blue), and Proposed GA-based approach (orange) for different benchmark circuits.
Figure 8: Two sub-circuits corresponding to two sub-graphs of Figure 7 produced by our error balanced Min-cut finding algorithm.
Figure 7: Balance bi-partitioning result for the graph shown in Figure 5(b) using error balanced min-cut finding algorithm. The vertices of the subgraphs for the two partitions are marked in red and green.
distribution output generated by each sub-circuit after hardware execution. Table 2 displays the fidelity of the benchmark circuits when executed on _FragQC_ along with measurement error mitigation. We have also integrated measurement error mitigators with _CutQC_ and compared their corresponding fidelity. The fidelity obtained using _FragQC_ is approximately 8.99% better than the _CutQC_ with error mitigation.
As mentioned earlier, the success probability threshold of _FragQC_ for the benchmark circuits is specified by the user. However, we show through numerical analysis that setting a very high threshold does not guarantee that the fidelity of the benchmark circuits will always increase. We have taken four example benchmark circuits and analyzed their fidelity using _FragQC_ while varying the success probability threshold. In Figure 9, we portray the numerical results, from which we conclude that increasing the success probability threshold does not ensure an increase in fidelity.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & & & \multicolumn{3}{c|}{Fidelity} \\ \cline{3-6} Benchmark circuit & Width & Depth & Without cut & _CutQC_ & _FragQC_ \\ \hline Efficient SU & 8 & 12 & 0.849 & 0.864 & 0.879 \\ \hline ghz\_n10 & 10 & 11 & 0.738 & 0.735 & 0.799 \\ \hline Adder n\_10 & 10 & 108 & 0.735 & 0.734 & 0.738 \\ \hline bv\_n19 & 20 & 22 & 0.497 & 0.534 & 0.598 \\ \hline cat\_n24 & 24 & 25 & 0.394 & 0.491 & 0.558 \\ \hline \end{tabular}
\end{table}
Table 1: Fidelity for benchmark circuits.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & & & \multicolumn{2}{c|}{Fidelity} \\ \cline{3-6} Benchmark circuit & Width & Depth & _CutQC_ & _FragQC_ \\ \hline Efficient SU & 8 & 12 & 0.874 & 0.898 \\ \hline ghz\_n10 & 10 & 11 & 0.951 & 0.986 \\ \hline Adder n\_10 & 10 & 108 & 0.813 & 0.957 \\ \hline bv\_n19 & 20 & 22 & 0.861 & 0.963 \\ \hline cat\_n24 & 24 & 25 & 0.8997 & 0.989 \\ \hline \end{tabular}
\end{table}
Table 2: Fidelity for benchmark circuits with Measurement Error Mitigation.
## 5 Conclusion
In this paper, we have proposed a method for balanced bi-partitioning of a doubly weighted graph for quantum circuit fragmentation that considers both the hardware noise and the cut size. We have also shown, through our proposed tool _FragQC_, that the fidelity of the benchmark circuits improves significantly when the partitioning problem is solved with either the GA-based method or the quantum annealing method within it, in comparison with existing circuit fragmentation methods. Therefore, _FragQC_ provides a hybrid quantum computing strategy. It is also robust and scalable, as the method can be implemented on any gate-based hardware. Since there may be multiple sub-circuits, these may be implemented and run on different hardware in parallel [46; 47; 48] to minimize the run-time of our tool. This may be explored in the future, along with the trade-off in the time needed to reconstruct the results of the sub-circuits.
## Declaration of competing interest
The authors declare that there is no conflict of interest. All authors have contributed equally to this manuscript.
Figure 9: Fidelity obtained by using _FragQC_ on four benchmark circuits, with varying thresholds of success probability. |
2309.04999 | Hyperbolicity of Alternating Links in Thickened Surfaces with Boundary | Let $F$ be a compact orientable surface with nonempty boundary other than a
disk. Let $L$ be a link in $F \times I$ with a connected weakly prime cellular
alternating projection to $F$. We provide simple conditions that determine
exactly when $(F \times I) \setminus N(L)$ is hyperbolic. We also consider
suitable embeddings of $F \times I$ in an ambient manifold $Y$ with boundary
and provide conditions on links $L \subset F \times I$ which guarantee
tg-hyperbolicity of $Y \setminus N(L)$. These results provide many examples of
hyperbolic links in handlebodies and fiber bundles. It also provides many
examples of staked links that are hyperbolic. | Colin Adams, Joye Chen | 2023-09-10T11:23:33Z | http://arxiv.org/abs/2309.04999v1 | # Hyperbolicity of alternating links in thickened surfaces with boundary
###### Abstract.
Let \(F\) be a compact orientable surface with nonempty boundary other than a disk. Let \(L\) be a link in \(F\times I\) with a connected weakly prime cellular alternating projection to \(F\). We provide simple conditions that determine exactly when \((F\times I)\setminus N(L)\) is hyperbolic. We also consider suitable embeddings of \(F\times I\) in an ambient manifold \(Y\) with boundary and provide conditions on links \(L\subset F\times I\) which guarantee hyperbolicity of \(Y\setminus N(L)\). These results provide many examples of hyperbolic links in handlebodies and other manifolds. They also provide many examples of staked links that are hyperbolic.
## 1. Introduction and Statement of Theorems
A link \(L\) in a compact 3-manifold \(Y\) is called _hyperbolic_ if the complement \(M=Y\setminus N(L)\) admits a complete metric of constant sectional curvature \(-1\). Hyperbolicity has proven useful in studying links in \(S^{3}\), giving rise to many powerful invariants, in particular volume. Hence, the problem of determining which links are hyperbolic is of key interest. Thurston proved that the complement of a link in a compact orientable 3-manifold is hyperbolic if it contains no essential properly embedded spheres, disks, tori or annuli.
In [16], Menasco used this to prove that all non-split prime alternating links in \(S^{3}\) which are not 2-braid links are hyperbolic. This result was extended by Adams et al in [4], where they proved that all prime, cellular (all complementary regions of the projection are disks) alternating links in thickened closed surfaces of positive genus are hyperbolic. In [13], Howie and Purcell obtained a more general result using angled chunks, proving that under certain conditions, a link \(L\) in an arbitrary compact, orientable, irreducible 3-manifold with a weakly prime, cellular alternating projection onto a closed projection surface is hyperbolic.
A natural next step is to consider projection surfaces with boundary and determine when links which are alternating with respect to these projection surfaces are hyperbolic in manifolds containing these surfaces. Throughout, we denote a projection surface by \(F\), which we require to be connected, orientable, and compact with nonempty boundary. We are interested in links
\(L\subset F\times I\) and the corresponding 3-manifold \(M=(F\times I)\setminus N(L)\), where \(N(\cdot)\) denotes a closed regular neighborhood and \(I=[0,1]\) denotes a closed interval.
Since the manifolds \(M\) we are interested in often have higher genus boundary (that is, boundary components with genus at least 2), we would like to have the stronger notion of _tg-hyperbolicity_.
**Definition 1.1**.: A compact orientable 3-manifold \(N\) is _tg-hyperbolic_ if, after capping off all spherical boundary components with 3-balls and removing torus boundaries, the resulting manifold admits a complete hyperbolic metric such that all higher genus boundary components are totally geodesic in the metric.
Ultimately, we would like to use hyperbolic volume to study links in various manifolds, and requiring tg-hyperbolicity allows us to associate a well-defined finite volume to a manifold with higher genus boundary. We also require our links to be prime in \(F\times I\) and have cellular alternating projections on \(F\).
**Definition 1.2**.: Let \(F\) be a projection surface with boundary, and let \(L\subset F\times I\) be a link with projection diagram \(\pi(L)\). We say \(L\) is _prime_ in \(F\times I\) if every 2-sphere in \(F\times I\) which is punctured twice by \(L\) bounds, on one side, a 3-ball intersecting \(L\) in precisely one unknotted arc.
We say \(\pi(L)\) is _cellular alternating_ on \(F\) if it is alternating on \(F\) and, after every boundary component of \(F\) is capped off with a disk to obtain a closed orientable surface \(F_{0}\) with diagram \(\pi(L)\), every complementary region of \(F_{0}\setminus\pi(L)\) is an open disk.
When \(\pi(L)\) is a _reduced diagram_, there is an easy way to check whether or not \(L\) is prime in \(F\times I\).
**Definition 1.3**.: Let \(\pi(L)\) be a link projection on a surface \(F\) with boundary. We say \(\pi(L)\) is _reduced_ if there is no circle in \(F\) bounding a disk in \(F\) and which intersects \(\pi(L)\) transversely in exactly one (double) point.
Note that when a projection is not reduced, we can reduce it by flipping that portion of the projection inside the circle and lower the number of crossings.
**Definition 1.4**.: Let \(\pi(L)\) be a reduced link projection on a surface \(F\) with boundary. We say a link projection \(\pi(L)\subset F\) is _weakly prime_ if every disk \(D\subset F\) which has its boundary \(\partial D\) intersect \(\pi(L)\) transversely in exactly two points contains no crossings of the projection in its interior.
We prove the following extensions of Theorem 2 from [4] and Theorem 1(b) from [16] to allow projection surfaces with boundary.
**Proposition 1.5**.: _Let \(F\) be a projection surface with nonempty boundary, and let \(L\subset F\times I\) be a link with a connected, reduced, cellular alternating projection diagram \(\pi(L)\subset F\times\{1/2\}\). Then \(L\) is prime in \(F\times I\) if and only if \(\pi(L)\) is weakly prime on \(F\times\{1/2\}\)._
### A criterion for hyperbolicity
Our first main result characterizes when alternating links on projection surfaces with boundary are hyperbolic.
**Theorem 1.6**.: _Let \(F\) be a projection surface with nonempty boundary which is not a disk, and let \(L\subset F\times I\) be a link with a connected, reduced, alternating projection diagram \(\pi(L)\subset F\times\{1/2\}\) with at least one crossing. Let \(M=(F\times I)\setminus N(L)\). Then \(M\) is tg-hyperbolic if and only if the following four conditions are satisfied:_
1. \(\pi(L)\) _is weakly prime on_ \(F\times\{1/2\}\)_;_
2. _the interior of every complementary region of_ \((F\times\{1/2\})\setminus\pi(L)\) _is either an open disk or an open annulus;_
3. _if regions_ \(R_{1}\) _and_ \(R_{2}\) _of_ \((F\times\{1/2\})\setminus\pi(L)\) _share an edge, then at least one is a disk;_
4. _there is no simple closed curve_ \(\alpha\) _in_ \(F\times\{1/2\}\) _that intersects_ \(\pi(L)\) _exactly in a nonempty collection of crossings, such that for each such crossing,_ \(\alpha\) _bisects the crossing and the two opposite complementary regions meeting at that crossing that do not intersect_ \(\alpha\) _near that crossing are annuli._
We often refer to \(F\times\{1/2\}\) by \(F\) when there is no ambiguity. We exclude the case where \(F\) is a disk, since this case is covered by [16]. Note that in that case, one must also exclude a cycle of bigons. In our case, a cycle of bigons is excluded by conditions (ii) and (iv). Note that condition (ii) implies the link is cellular alternating.
For an alternative formulation, we may start with a closed projection surface \(F_{0}\) with a link \(L\subset F_{0}\times I\). Choose a set of distinct points \(\{x_{i}\}\in F_{0}\setminus\pi(L)\). Then \(F:=F_{0}\setminus\bigcup_{i=1}^{n}\mathring{N}(x_{i})\) is a surface with boundary and we may consider \(L\) as a link in \(F\times I\). In this setting, Theorem 1.6 can be rephrased to say that if the \(\{x_{i}\}\) are chosen such that: (i) \(\pi(L)\) is weakly prime on \(F\), (ii) every region of \(F_{0}\setminus\pi(L)\) contains at most one of the \(x_{i}\), (iii) no two adjacent regions of \(F_{0}\setminus\pi(L)\) both contain an \(x_{i}\), and (iv) there is no simple closed curve on \(F_{0}\) intersecting \(\pi(L)\) exactly in a nonempty set of crossings such that it bisects each crossing and each of the two regions it does not pass through at each such crossing contain an \(x_{i}\), then \(M\) is tg-hyperbolic. Any other choice of \(\{x_{i}\}\) ensures \(M\) will not be tg-hyperbolic.
This formulation fits well with the notion of a staked link, which we discuss in the final section. However, the advantage to the statement of Theorem 1.6 is it avoids reference to an initial closed surface \(F_{0}\).
The conditions (i)-(iv) of Theorem 1.6 are necessary. Indeed, if (i) does not hold, then there is an essential twice-punctured sphere. If (ii) does not hold then a region with genus greater than 0 produces an essential annulus by taking \(\alpha\times I\) for any nontrivial non-boundary-parallel simple closed curve \(\alpha\) in the region. A planar region with more than one boundary produces an essential disk as in Figure 2(a). If (iii) does not hold, then there is an essential annulus as in Figure 2(b). If (iv) does not hold, then we will show in Lemma 3.8 that there is an essential annulus, an example of which appears in Figure 3. So our main task will be to prove that these conditions imply tg-hyperbolicity, which we do in Section 3.
Note that if \(F\) is a disk, then \(M\) is the complement of a link in a 3-ball. Capping off the spherical boundary with a 3-ball yields a link complement in \(S^{3}\). Then Corollary 2 of [16] characterizes when \(M\) is hyperbolic. Similarly,
Figure 1. Four examples of \(F\) (shaded) and \(\pi(L)\) satisfying conditions (i), (ii) and (iii) of Theorem 1.6. Examples (a) and (b) also satisfy condition (iv) so the corresponding manifolds \((F\times I)\setminus N(L)\) are tg-hyperbolic. Examples (c) and (d) fail condition (iv) and neither is tg-hyperbolic. A problematic simple closed curve appears in red.
if \(F\) is closed and of genus at least 1, Theorem 1 of [4] characterizes when \(M\) is tg-hyperbolic.
As an application of Theorem 1.6, note that \(F\times I\) is always a handlebody. Specifically, if \(F\) is an orientable genus \(g\) surface with \(k\) boundary components, then \(F\times I\) is a genus \(2g+(k-1)\) handlebody. Hence, if \(L\) is a link in a handlebody and we can find a way to represent this handlebody as a thickened projection surface \(F\) such that \(L\) is cellular alternating on \(F\), we can determine if \(L\) is hyperbolic. In particular, there are examples of links in
Figure 3. An essential annulus is present when the red curve does not satisfy condition (iv).
Figure 2. Conditions (ii) and (iii) are necessary for Theorem 1.6: on the left is a local portion of \(\pi(L)\) on \(F\) and on the right is the corresponding portion of \(M\). In (a) we exhibit an essential disk \(\alpha\times I\) when a complementary region has two or more boundary components, and in (b) we exhibit an essential annulus \((\alpha\times I)\setminus N(L)\) when adjacent regions both have boundary.
handlebodies which do not have a closed projection surface satisfying the hypotheses of Theorem 1.1 from [13], but do have a projection surface with boundary satisfying the hypotheses of Theorem 1.6. We give an example in Section 5.
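The genus count above follows from an Euler characteristic computation: \(F\times I\) deformation retracts onto \(F\), and a genus \(h\) handlebody has Euler characteristic \(1-h\), so
\[1-h=\chi(F\times I)=\chi(F)=2-2g-k,\qquad\text{and hence}\qquad h=2g+(k-1).\]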
Relatively few links in handlebodies with tg-hyperbolic complements were previously known. One was proved to be so in [1]. There are a finite number given in [10]. Each of [11] and [17] gives an explicit infinite set, one for each genus. Theorem 1.1 of [13] does generate many examples when each compressing disk on the boundary of the handlebody is crossed at least four times by an appropriate alternating link. Theorem 1.6, particularly when conjoined with a form of "composition", as described in Theorem 2.1 of [7], further increases the number of such examples known.
### Generalizing to additional ambient manifolds
We now consider a compact orientable 3-manifold with boundary \(Y\) that contains a properly embedded orientable surface \(F\) with boundary that is both incompressible and \(\partial\)-incompressible and that intersects all essential annuli and tori in \(Y\). We show that if an appropriate link is removed from \(F\times I\subset Y\), the link complement in \(Y\) is tg-hyperbolic.
First, we state a similarly easy way to check whether or not \(L\) is prime in \(Y\). Let \(\pi(L)\) be a projection of a link \(L\) to \(F\).
**Proposition 1.7**.: _Let \(F\) be an orientable incompressible \(\partial\)-incompressible surface with nonempty boundary properly embedded in a compact orientable irreducible \(\partial\)-irreducible 3-manifold \(Y\), and let \(L\) be a link in \(Y\) with a connected, reduced, cellular alternating projection to \(F\). Then \(L\) is prime in \(Y\) if and only if \(\pi(L)\) is weakly prime on \(F\)._
Then we have the following theorem:
**Theorem 1.8**.: _Let \(Y\) be an orientable irreducible \(\partial\)-irreducible 3-manifold with \(\partial Y\neq\emptyset\). Let \(F\) be a properly embedded orientable incompressible and \(\partial\)-incompressible connected surface with boundary in \(Y\). Suppose all essential tori and annuli that exist in \(Y\) intersect \(F\). Let \(L\) be a link in a regular neighborhood \(N=F\times I\) of \(F\) that has a connected reduced alternating projection \(\pi(L)\) to \(F\) with at least one crossing and that satisfies the following conditions:_
1. \(\pi(L)\) _is weakly prime on_ \(F\)_;_
2. _the interior of every complementary region of_ \(F\setminus\pi(L)\) _is either an open disk or an open annulus;_
_Then \(Y\setminus N(L)\) is tg-hyperbolic._
Compare to Theorem 1.6; in particular, here, the previous condition (iii) that adjacent regions cannot both be annuli and the previous condition (iv)
are no longer necessary. In either of these cases, the annulus that is generated when the condition fails does not extend to an annulus in \(Y\). Also, note again that condition (ii) implies the link is cellular alternating. The requirement that \(F\) be connected is not essential, as the theorem can be repeated for additional surface components and tg-hyperbolicity is maintained.
Examples for \(Y\) include any finite-volume hyperbolic 3-manifold with cusps and/or totally geodesic boundary of genus at least 2. In these cases, there are no essential annuli or tori to worry about and there is always an incompressible \(\partial\)-incompressible surface that can play the role of \(F\) (see for instance Lemma 9.4.6 of [15]). A simple example for \(F\) would be a minimal genus Seifert surface in a hyperbolic knot complement \(Y\) in \(S^{3}\).
In Figure 4(a), we see an example where \(Y\) is the complement of the trefoil knot. There is an essential annulus; however, it intersects the shaded Seifert surface playing the role of \(F\). Hence the link complement shown is hyperbolic by Theorem 1.8.
Examples also include any surface bundle \(F\tilde{\times}S^{1}\) over a compact orientable surface \(F\) with nonempty boundary other than a disk or annulus. We pick the incompressible \(\partial\)-incompressible surface to be a fiber. Such a manifold does contain essential annuli and/or tori but they all intersect \(F\). See Figure 4(b) for an example.
### Organization and further directions
In Section 2, we introduce the notion of _bubbles_, first defined by Menasco in [16]. We also adapt various lemmas from [4] concerning intersection curves. In Section 3, we prove Theorem 1.6 as well as Proposition 1.5, and in Section 4, we prove Theorem 1.8 and Proposition 1.7. Finally, in Section 5, we discuss some applications of our results. One motivation for Theorem 1.6 is that it gives a large class of
Figure 4. Examples of manifolds that are tg-hyperbolic by Theorem 1.8.
hyperbolic links in handlebodies, which we may naturally view as thickened surfaces-with-boundary. Besides being interesting objects in their own right, they appear naturally in the study of _knotoidal graphs_, as defined in [5]. In that paper, the authors construct a map \(\phi_{\Sigma}^{D}\) from the set of knotoidal graphs to the set of spatial graphs in 3-manifolds. Hence, there is a well-defined notion of hyperbolicity and hyperbolic volume for knotoidal graphs which may be used to distinguish them.
In particular, _staked links_, as defined in [5], are obtained by adding vertices, called isolated poles, to the complementary regions of a link projection. As in the case of endpoints of knotoids, we do not allow strands of the projection to pass over or under these poles. A subclass of knotoidal graphs, these staked links are mapped by \(\phi_{\Sigma}^{D}\) into the set of links in handlebodies. Then it is interesting to determine which staked links are hyperbolic and to compute their volumes. Theorem 1.6 gives the answer to the first question in the case of alternating staked links. Furthermore, we can show that certain staked links which are "close to alternating" in some sense are also hyperbolic, using results from [7]. Theorem 1.6 is used in [5] to prove that every link in \(S^{3}\) can be staked to be hyperbolic.
In future work, it would be interesting to obtain volume bounds for alternating links in handlebodies (and such results would immediately apply to alternating staked links). Also, while we work only with orientable 3-manifolds, we suspect similar results hold when \(F\) is a nonorientable surface with boundary, or when we allow for orientation-reversing self-homeomorphisms of \(F\) in the construction of \(F\tilde{\times}S^{1}\).
### Acknowledgements
The research was supported by Williams College and NSF Grant DMS-1947438 supporting the SMALL Undergraduate Research Project. We are grateful to Alexandra Bonat, Maya Chande, Maxwell Jiang, Zachary Romrell, Daniel Santiago, Benjamin Shapiro and Dora Woodruff, who are the other members of the knot theory group of the 2022 SMALL REU program at Williams College, for many helpful discussions and suggestions.
## 2. Bubbles and Intersection Curves
Let \(F\) be a compact, connected, orientable surface with boundary, and let \(L\subset F\times I\) be a link with connected, cellular alternating projection \(\pi(L)\subset F\times\{1/2\}=F\). By Thurston's Hyperbolization Theorem, proving that \(M\) has no essential spheres, tori, disks, or annuli is sufficient to conclude that \(M\) is tg-hyperbolic.
Throughout this section, \(\Sigma\) is a properly embedded essential surface in \(M=(F\times I)\setminus N(L)\) with boundary on \(\partial(F\times I)\). Note that when \(\Sigma\) is a disk, we may always assume it has boundary on \(\partial(F\times I)\). Indeed, suppose \(\Sigma\) has
boundary on \(\partial N(K)\), where \(K\) is a component of \(L\). Then \(\partial(N(K)\cup N(\Sigma))\) is an essential sphere. Hence, if we can eliminate essential spheres, then we have eliminated such disks.
As in [16], arrange \(L\) to lie in \(F=F\times\{1/2\}\) away from the crossings, and at each crossing place a 3-ball \(B\) which we call a _bubble_. Arrange the overstrands and understrands so that they lie in the upper hemisphere \(\partial B_{+}\) and lower hemisphere \(\partial B_{-}\) of the bubble, respectively. We may isotope \(\Sigma\) to intersect the bubbles in saddle-shaped disks, by first isotoping \(\Sigma\) to intersect the vertical axis of each bubble transversely, then pushing \(\Sigma\) radially outward from the axis. See Figure 5.
Let \(F_{+}\) (resp. \(F_{-}\)) be the surface obtained from \(F\times\{1/2\}\) by removing each equatorial disk where \(F\times\{1/2\}\) intersects a bubble \(B\) and replacing it with the upper hemisphere \(\partial B_{+}\) (resp. lower hemisphere \(\partial B_{-}\)). The desired contradictions come from analyzing the intersection curves between \(\Sigma\) and the \(F_{\pm}\), which may be closed or properly embedded arcs since \(\partial\Sigma\subset\partial(F\times I)\) and therefore, can be perturbed to intersect the \(F_{\pm}\) transversely.
In the remainder of this section, we state and prove various lemmas for \(\Sigma\cap F_{+}\), noting that all results and arguments apply to \(\Sigma\cap F_{-}\) as well.
**Lemma 2.1**.: _There is at least one intersection curve in \(\Sigma\cap F_{+}\)._
Proof.: Suppose for contradiction that \(\Sigma\cap F_{+}=\emptyset\). Without loss of generality, \(\Sigma\) can be isotoped to lie in \(F\times(1/2,1]\), a handlebody, and if \(\Sigma\) has boundary, then \(\partial\Sigma\) lies in \((F\times\{1\})\cup(\partial F\times(1/2,1])\). If \(\Sigma\) is a sphere or torus, then it is compressible, and if \(\Sigma\) is a disk or annulus, then it is \(\partial\)-parallel.
We would like to simplify \(\Sigma\cap F_{+}\) as much as possible. Assign to each embedding of \(\Sigma\) an ordered pair \((s,i)\), where \(s\) is the number of saddle disks in the intersection between \(\Sigma\) and the bubbles and \(i\) is the number of intersection curves in \(\Sigma\cap F_{+}\). For the remainder of the section, we
Figure 5. A surface \(\Sigma\) intersecting a bubble in a saddle disk.
may assume that our choice of an embedding of \(\Sigma\) minimizes \((s,i)\) under lexicographical ordering.
To this end, we can show that \(\Sigma\cap F_{+}\) cannot contain any intersection curves which are _trivial_ on both \(\Sigma\) and \(F_{+}\).
**Definition 2.2**.: We say a simple closed curve on a surface is _trivial_ if it bounds a disk in the surface. We say a properly embedded arc on a surface is _trivial_ if it cuts a disk from the surface.
To eliminate curves trivial on both \(\Sigma\) and \(F_{+}\), we define the notion of _meridional (in)compressibility_, first introduced in [16]. Eventually, in Section 3, we show that essential surfaces in \(M\) cannot be meridionally incompressible nor meridionally compressible, thus eliminating them.
**Definition 2.3**.: Let \(Y\) be a compact 3-manifold containing a link \(L\), and let \(\Sigma\) be a properly embedded surface in \(Y\setminus N(L)\). We say \(\Sigma\) is _meridionally incompressible_ if for every disk \(D\) in \(Y\) such that \(D\cap\Sigma=\partial D\) and \(D\) is punctured exactly once by \(L\), there is another disk \(D^{\prime}\) in \(\Sigma\cup N(L)\) such that \(\partial D^{\prime}=\partial D\) and \(D^{\prime}\) is punctured by \(L\) exactly once. Otherwise, we say \(\Sigma\) is _meridionally compressible_ and we call \(D\) a _meridional compression disk_. We refer to surgery on \(\Sigma\) along \(D\) as a _meridional compression_.
Throughout this section we take \(Y=F\times I\). We remark that if \(\Sigma\) is essential in \(M\) and has a meridional compression disk \(D\), then the (not necessarily connected) surface \(\Sigma^{\prime}\) resulting from meridionally compressing \(\Sigma\) along \(D\) is also essential.
The following lemma tells us that when \(\Sigma\) is meridionally incompressible, all of the closed intersection curves are nontrivial in some sense.
**Lemma 2.4**.: _Suppose \(\Sigma\) is meridionally incompressible and \((s,i)\) is minimized. Then no closed intersection curve in \(\Sigma\cap F_{+}\) can be trivial in both \(\Sigma\) and \(F_{+}\)._
In order to prove this, we first prove several other lemmas. Note that the upper hemisphere of each bubble \(B\) is separated by the overstrand into two sides. We observe that because \(\pi(L)\) is an alternating diagram, any intersection curve in \(\Sigma\cap F_{+}\) must alternate between entering bubbles on the left side of the overstrand and entering on the right side of the overstrand. See Figure 6 for a local picture. The alternating property places strong restrictions on the appearance of the intersection curves in \(F_{+}\), as the next two lemmas demonstrate.
**Lemma 2.5**.: _A closed intersection curve \(\alpha\) in \(\Sigma\cap F_{+}\) which is trivial in \(F_{+}\) cannot intersect a bubble twice on the same side._
Proof.: Let \(B\) be a bubble and for convenience, let \(B^{L}\) and \(B^{R}\) denote the two halves of \(B\) obtained by slicing along a vertical plane containing the
overstrand. Without loss of generality, suppose \(\alpha\) meets \(B^{L}\) in more than one arc, and let \(D\) be the disk in \(F_{+}\) bounded by \(\alpha\). Let \(\{\alpha_{i}\}\) be the set of arcs in \(\alpha\cap B^{L}\) and observe that each corresponds to a distinct saddle disk in \(\Sigma\cap B\). Furthermore, we may assume that there exists a pair of arcs, \(\alpha_{0}\) and \(\alpha_{1}\), which are adjacent on \(B^{L}\). Indeed, if there is another intersection curve \(\alpha^{\prime}\) in \(\Sigma\cap F_{+}\) which meets \(B^{L}\) in between \(\alpha_{0}\) and \(\alpha_{1}\), then it does so in at least two arcs between \(\alpha_{0}\) and \(\alpha_{1}\) because \(\alpha^{\prime}\) must be contained in \(D\). We can continue finding these nested loops until we are able to choose an adjacent pair of arcs belonging to the same intersection curve.
As in [2], let \(\mu\) be an arc running along \(\alpha\) in \(\Sigma\) connecting \(\alpha_{0}\) and \(\alpha_{1}\). Then we may isotope \(\Sigma\) to remove the two saddles corresponding to \(\alpha_{0}\) and \(\alpha_{1}\) by pushing a regular neighborhood of \(\mu\) across the disk \(D\), through \(B\), and downwards past \(F_{+}\). See Figure 7. This reduces \(s\) by \(2\), contradicting minimality of \((s,i)\).
**Lemma 2.6**.: _Suppose \(\Sigma\) is meridionally incompressible and suppose \(\alpha\) is a closed intersection curve in \(\Sigma\cap F_{+}\) which is trivial in \(F_{+}\) and bounds a disk containing the overstrand of some bubble \(B\). Then \(\alpha\) cannot intersect \(B\) on both sides._
Proof.: Suppose for contradiction that a curve \(\alpha\) satisfying the hypotheses intersects \(B\) on both sides in arcs \(\alpha_{0}\) and \(\alpha_{1}\). Let \(D\) be a disk in \(F_{+}\) bounded by \(\alpha\).
First, observe that \(\alpha_{0}\) and \(\alpha_{1}\) must be connected by an arc \(\gamma\) as in Figure 8(a). Otherwise, suppose \(\gamma\) appears as in Figure 8(b). Then because \(\alpha\) bounds a disk, the endpoint of \(\alpha\) cannot escape the enclosed region along a handle and hence must exit the enclosed region by passing over \(B\) again. However, whichever side of \(B\) it passes over, this contradicts Lemma 2.5.
So now we can assume we are in a situation as depicted in Figure 8(a). Of the intersection curves which are trivial and intersect \(B\) on both sides, we
Figure 6. The alternating property: \(\alpha\) alternates sides as it encounters each bubble.
Figure 8. The two possibilities when an intersection curve intersects the opposite sides of a bubble. We rule out 8(b) by contradiction with Lemma 2.5.
Figure 7. Isotoping \(\Sigma\) to remove two saddles. We visualize \(\Sigma\) as folding back over itself to intersect \(B^{L}\) twice. Then imagine using your finger to push the fold into the bubble and downwards past \(F\), smoothing it out and removing \(\alpha_{0}\) and \(\alpha_{1}\). For simplicity, we only draw \(\Sigma\) where it lies above \(F\).
may choose \(\alpha\) to intersect closest to the overstrand of \(B\) on one of its sides. We claim that the two arcs of \(\alpha\) which intersect \(B\) closest to the overstrand on either side belong to the same saddle disk of \(\Sigma\cap B\). Indeed, suppose \(\alpha^{\prime}\) is another intersection curve such that it meets \(B\) between an arc of \(\alpha\cap B\) and the overstrand. Since \(\alpha\) bounds a disk, \(\alpha^{\prime}\) must intersect \(B\) at least twice on one side of \(B\). Since \(D\) contains the overstrand of \(B\), \(\alpha^{\prime}\) is contained in \(D\) and hence, is trivial. This contradicts Lemma 2.5.
Let \(\sigma\subset\Sigma\cap B\) be the saddle disk which contains \(\alpha_{0}\) and \(\alpha_{1}\) in its boundary. The remainder of the proof more or less follows the proof of Lemma 7 of [4] or Lemma 1 of [16]. Let \(\mu\) be an arc in \(\Sigma\) running parallel to \(\alpha\) which connects \(\alpha_{0}\) and \(\alpha_{1}\). Then there is an arc \(\gamma\) in \(\sigma\) such that \(\mu\cup\gamma\) is a circle in \(\Sigma\) which bounds a disk punctured once by \(L\) in \(M\). See Figure 9. But this yields a meridional compression disk, a contradiction.
To prove Lemma 2.4 and subsequent lemmas, the notion of innermost (closed) curves and outermost arcs is very useful.
**Definition 2.7**.: Let \(\alpha\) be a closed intersection curve in \(\Sigma\cap F_{+}\). If \(\alpha\) is trivial on \(\Sigma\) (resp. trivial on \(F_{+}\)), we say \(\alpha\) is _innermost_ on \(\Sigma\) (resp. on \(F_{+}\)) if \(\alpha\) bounds a disk \(D\) in \(\Sigma\) (resp. \(F_{+}\)) which does not contain any intersection curves of \(\Sigma\cap F_{+}\) in its interior. Similarly, suppose \(\alpha\) is an intersection arc in \(\Sigma\cap F_{+}\). If \(\alpha\) is trivial on \(\Sigma\) (resp. \(F_{+}\)), we say \(\alpha\) is _outermost_ on \(\Sigma\) (resp. on \(F_{+}\)) if \(\alpha\), together with an arc of \(\partial\Sigma\) (resp. \(\partial F_{+}\)), bounds a disk \(D\) in \(\Sigma\)
Figure 9. The curve \(\mu\cup\gamma\) bounds a meridional compression disk for \(\Sigma\).
(resp. \(F_{+}\)) which does not contain any intersection curves of \(\Sigma\cap F_{+}\) in its interior.
Proof of Lemma 2.4.: Let \(D\subset F_{+}\) and \(D^{\prime}\subset\Sigma\) be disks bounded by \(\alpha\), and of the (closed) intersection curves contained in \(D\), let \(\beta\) be an innermost such curve on \(F_{+}\). Then \(\beta\) bounds a disk \(E\subset D\subset F_{+}\) and by Lemmas 2.5 and 2.6, \(\beta\) cannot intersect any bubbles. By iterating this argument, we find that none of the intersection curves contained in \(D\) can intersect bubbles and hence, neither can \(\alpha\). Then \(D\cup D^{\prime}\) is a 2-sphere which is not punctured by the link. By irreducibility of \(F\times I\), \(D\cup D^{\prime}\) bounds a 3-ball and we may isotope \(D^{\prime}\) through the 3-ball, pushing it slightly past \(F_{+}\) to remove \(\alpha\) and any intersection curves contained in \(D\). This contradicts minimality of \((s,i)\).
We conclude this section by using Lemma 2.4 to prove several more lemmas restricting the appearance of the intersection curves when \(\Sigma\) is meridionally incompressible.
**Lemma 2.8**.: _Suppose \(\Sigma\) is meridionally incompressible, and let \(\alpha\subset\Sigma\cap F_{+}\) be an intersection curve. Then \(\alpha\) is trivial in \(\Sigma\) if and only if one of the following is true:_
1. \(\alpha\) _is trivial in_ \(F_{+}\)_; or_
2. \(\alpha\) _is an arc bounding a_ \(\partial\)_-compression disk for_ \(F_{+}\) _in_ \(M\)_._
Proof.: Suppose \(\alpha\) is nontrivial in \(\Sigma\) and trivial in \(F_{+}\). When \(\Sigma\) is a sphere or disk, this is clearly impossible. Of all intersection curves which are nontrivial in \(\Sigma\) and trivial in \(F_{+}\), choose \(\alpha\) to be an innermost closed curve or an outermost arc on \(F_{+}\). Let \(D\) be a disk in \(F_{+}\) bounded by \(\alpha\) (and possibly an arc of \(\partial F_{+}\) if \(\alpha\) is an arc). Then any intersection curve in \(D\) is trivial in both \(\Sigma\) and \(F_{+}\), so by Lemma 2.4, we may assume that all of these curves are arcs.
In particular, if \(\Sigma\) is a torus, or \(\Sigma\) is an annulus and \(\alpha\) is a closed curve, then all intersection curves contained in \(D\) are eliminated, and \(\alpha\) bounds a compression disk for \(\Sigma\), a contradiction. If \(\Sigma\) is an annulus and \(\alpha\) is an arc, then all closed intersection curves in \(D\) are eliminated. If all intersection curves in \(D\) are closed, then \(\alpha\) bounds a compression disk for \(\Sigma\). Hence, there is at least one intersection arc contained in \(D\). But then the outermost such arc bounds a \(\partial\)-compression disk for \(\Sigma\), contradicting essentiality.
Conversely, suppose \(\alpha\) is trivial in \(\Sigma\) and nontrivial in \(F_{+}\). Fill in \(N(L)\) to work in the handlebody \(F\times I\). We show that \(\Sigma\) cannot intersect \(F_{+}\) in such an \(\alpha\) in \(F\times I\), much less in \((F\times I)\setminus N(L)\). Of all the curves which are trivial in \(\Sigma\) and nontrivial in \(F_{+}\), choose \(\alpha\) to be an innermost closed curve or outermost arc on \(F_{+}\), and let \(D^{\prime}\) be a disk in \(\Sigma\) bounded by \(\alpha\) (and possibly an arc of \(\partial\Sigma\) if \(\alpha\) is an arc). Every intersection curve contained in
\(D^{\prime}\) is trivial in \(\Sigma\) and in \(F_{+}\). Of these curves, suppose at least one is closed and choose \(\beta\) to be an innermost such curve on \(\Sigma\). Let \(E\) and \(E^{\prime}\) be disks bound by \(\beta\) in \(F_{+}\) and \(\Sigma\) respectively. Push the interior of \(E\) slightly off \(F_{+}\) so \(E\cup E^{\prime}\) is a 2-sphere in \(M\). By irreducibility of \(F\times I\), \(E\cup E^{\prime}\) bounds a 3-ball. Isotope \(\Sigma\) through this 3-ball to remove \(\beta\) as before, and by iterating this process, we can remove all closed intersection curves contained in \(D\). If \(\alpha\) is a closed curve, then all intersection curves in \(D\) are eliminated, and \(\alpha\) bounds a compression disk for \(F_{+}\) in \(F\times I\), a contradiction. If \(\alpha\) is an arc, it is either trivial in \(F_{+}\) or \(D^{\prime}\) is a \(\partial\)-compression disk for \(F_{+}\) in \(F\times I\), hence in \(M\).
Henceforth, we say an intersection curve is trivial if it is trivial on \(\Sigma\) and \(F_{+}\). When \(\Sigma\) is meridionally incompressible we may assume such curves are arcs by Lemmas 2.4 and 2.8.
**Lemma 2.9**.: _Suppose \(\Sigma\) is meridionally incompressible, and let \(\alpha\subset\Sigma\cap F_{+}\) be an intersection arc. Then \(\alpha\) intersects at least one bubble._
Proof.: Suppose for contradiction that \(\alpha\) does not intersect any bubble. Then the arc \(\alpha\) is properly embedded in a complementary region \(R\) of \(F_{+}\setminus\pi(L)\) which is homeomorphic to an annulus, and furthermore, both endpoints of \(\alpha\) lie on the same boundary component of \(R\). In particular, \(\alpha\) is trivial in \(F_{+}\) and \(\Sigma\), using Lemma 2.8.
Let \(D\) be a disk in \(R\subset F_{+}\) bounded by \(\alpha\) and an arc \(\beta\) of \(\partial F_{+}\), and let \(D^{\prime}\) be a disk in \(\Sigma\) bounded by \(\alpha\) and an arc \(\gamma\) of \(\partial\Sigma\subset\partial(F\times I)\). Then \(D\cup D^{\prime}\) is a disk with boundary \(\beta\cup\gamma\) on \(\partial(F\times I)\). Furthermore, since \(\alpha\) is outermost on \(\Sigma\), \(\gamma\) does not intersect \(\partial F_{+}\) away from its endpoints. Hence, without loss of generality, \(\beta\cup\gamma\) lies in \((F\times\{1\})\cup(\partial F\times[1/2,1])\). But \(D\cup D^{\prime}\) cannot be a compression disk for \(F\times[1/2,1]\), so \(\beta\cup\gamma\) must bound a disk \(E\) in \((F\times\{1\})\cup(\partial F\times[1/2,1])\). Then \(D\cup D^{\prime}\cup E\) is a 2-sphere in \(F\times[1/2,1]\) bounding a 3-ball. Isotope \(D^{\prime}\) to \(D\) through the 3-ball and slightly past \(F_{+}\), removing \(\alpha\) and any other intersection curves contained in \(D\). This reduces \(i\) without affecting \(s\), a contradiction.
## 3. Proof of Theorem 1.6
For now, we assume the statement Proposition 1.5 so we may regard \(L\) as prime in \(F\times I\). We prove that \(M\) has no essential spheres, tori, disks, or annuli in that order. To use the lemmas from the previous section, we assume that \(\partial\Sigma\subset\partial(F\times I)\): later on we eliminate essential annuli with at least one boundary component on \(\partial N(L)\) using different methods.
We will be explicit about which of the conditions we use from Theorem 1.6 so that we can also use the appropriate lemmas for the proof of Theorem 1.8.
**Lemma 3.1**.: _Let \(\Sigma\) be a meridionally incompressible essential torus or an essential annulus with both boundary components on \(\partial(F\times I)\) such that \(L\) satisfies the hypotheses of Theorem 1.6 with the possible exception of conditions (iii) and (iv). Then at least one intersection curve in either \(\Sigma\cap F_{+}\) or \(\Sigma\cap F_{-}\) is trivial on \(\Sigma\)._
Proof.: Consider \(\Sigma\) with all of the intersection curves in \(\Sigma\cap F_{+}\) and \(\Sigma\cap F_{-}\) projected onto it along with saddles corresponding to quadrilaterals. After shrinking down the saddles to vertices, we obtain a (not necessarily connected) 4-valent graph together with some circles without vertices.
These circles correspond to nontrivial closed intersection curves which do not meet any bubbles and lie in an annular region of \(F_{+}\setminus\pi(L)\). Denote the graph together with the circles without vertices by \(\Gamma\); we call \(\Gamma\) the _intersection graph_ of \(\Sigma\) (see Figure 10). Each complementary region of \(\Sigma\setminus\Gamma\) lies entirely in one of the components of \(M\setminus F\), and the boundary of a region is precisely an intersection curve in either \(\Sigma\cap F_{+}\) or \(\Sigma\cap F_{-}\), depending on which side of \(F\) the region lies in. Hence, our goal is to show that at least one region of \(\Sigma\setminus\Gamma\) is a disk.
First, suppose there are no circles without vertices in \(\Gamma\), so \(\Gamma\) is just a 4-valent properly embedded graph on \(\Sigma\). Let \(V\), \(E\), and \(F\) be the number of saddles, edges, and regions of \(\Sigma\setminus F\) respectively, including endpoints of intersection arcs on the boundary of \(\Sigma\) as vertices. The Euler characteristic \(\chi(\Sigma)\) is 0 when \(\Sigma\) is a torus or annulus. From Lemmas 2.1 and 2.9, we know there are vertices from saddles. If no regions are disks, then each has nonpositive Euler characteristic contribution.
Figure 10. Possible intersection graphs for \(\Sigma\).
On the torus, \(E=2V\), and we then have
\[\chi(\Sigma)\leq V-E=V-2V=-V<0.\]
Hence, when there are no circles without vertices, there is a disk region of \(\Sigma\setminus\Gamma\), and its boundary will be an intersection curve in \(\Sigma\cap F_{+}\) or \(\Sigma\cap F_{-}\) which is trivial on \(\Sigma\).
On the annulus, let \(V_{I}\) be the number of interior vertices corresponding to saddles and \(V_{B}\) be the number of boundary vertices. Each interior vertex is 4-valent and each boundary vertex meets three edges, so \(E\geq\frac{4V_{I}+3V_{B}}{2}\), and hence
\[\chi(\Sigma)\leq V_{I}+V_{B}-\frac{4V_{I}+3V_{B}}{2}=-V_{I}-\frac{V_{B}}{2}<0.\]
Again, since \(\chi(\Sigma)=0\), this forces there to be a region with positive Euler characteristic contribution, meaning there is a disk region.
Now suppose that \(\Gamma\) has at least one circle without vertices and at least one vertex. Note that the circles without vertices must be parallel to one another. Hence, the circles cut \(\Sigma\) into annuli, and at least one annulus \(A\) contains a nonempty 4-valent graph \(\Gamma^{\prime}\). Then the same Euler characteristic argument shows that at least one region of \(A\setminus\Gamma^{\prime}\) is a disk.
It remains to show that \(\Sigma\) must meet the bubbles in at least one saddle. Suppose \(\Sigma\) does not meet any bubbles for contradiction, and consider the projection of the intersection curves in \(\Sigma\cap F\) onto \(F\). Then the intersection curves must be contained in one annular region of \(F\setminus\pi(L)\) and encircle the same boundary component of \(F\). First consider the case where \(\Sigma\) is a torus. Then there are an even number of curves since \(F\) is separating in \(M\) (in particular, there are at least two). Take a pair of intersection curves \(C_{1}\) and \(C_{2}\) which are adjacent in \(\Sigma\) (note they might not be adjacent in \(F_{+}\)). They bound an annulus \(A\) in \(\Sigma\) and an annulus \(A^{\prime}\) in \(F_{+}\).
After pushing \(A^{\prime}\) slightly off \(F_{+}\), we find a torus \(T=A\cup A^{\prime}\) contained in one component of \(M\setminus F\), both components of which are homeomorphic to a handlebody \(F\times I\). Then observe that \(T\) must have a compressing disk, as \(\pi_{1}(T)\cong\mathbb{Z}^{2}\) cannot inject into \(\pi_{1}(F\times I)\cong\mathbb{Z}^{*g}\). Compress \(T\) along some compressing disk to obtain a 2-sphere, which bounds a 3-ball. Gluing back in the compressing disk yields a solid torus bounded by \(T\). It cannot yield a knot exterior because of the fact \(A^{\prime}\) lies in the boundary of the handlebody. Note that the boundary curves of the two annuli must intersect the boundary of the compressing disk once. So we can isotope \(\Sigma\) through this solid torus to remove intersection curves \(C_{1}\) and \(C_{2}\), contradicting minimality of \((s,i)\).
If \(\Sigma\) is an annulus and there are an even number of intersection curves, the same argument goes through. Otherwise, we may assume \(\Sigma\) intersects \(F_{+}\) in a single intersection curve. Then we claim \(\Sigma\) is \(\partial\)-parallel.
Let \(\gamma_{0}\) and \(\gamma_{1}\) denote the boundary circles of \(\Sigma\): since \(\Sigma\cap F_{+}\) contains a single curve \(\gamma_{\frac{1}{2}}\), we know that \(\gamma_{1}\) must be in \((\partial F\times(\frac{1}{2},1])\cup(F\times\{1\})\) and
\(\gamma_{0}\) must be in \((\partial F\times[0,\frac{1}{2}))\cup(F\times\{0\})\) (note that both of these spaces are homeomorphic to \(F\)). Viewed on \(F\), the \(\gamma_{i}\) for \(i=0,\frac{1}{2},1\) are all homotopic to the same component of \(\partial F\). This means that \(\partial\Sigma\) cuts an annular component \(A\) from \(\partial(F\times I)\). As above, \(\Sigma\) must be parallel to \(A\) making it \(\partial\)-parallel, a contradiction to its being essential.
Now we are ready to eliminate essential spheres and tori.
**Proposition 3.2**.: _Let \(M=(F\times I)\setminus N(L)\) as in Theorem 1.6 with the possible exception of conditions (iii) and (iv). Then \(M\) has no essential spheres or tori._
Proof.: Suppose \(\Sigma\) is meridionally compressible. If \(\Sigma\) is a 2-sphere, then a meridional compression yields two 2-spheres in \(F\times I\), each punctured by \(L\) exactly once, which cannot occur. If \(\Sigma\) is a torus, then a meridional compression yields a 2-sphere which is twice-punctured by the link. By Proposition 1.5, \(L\) is prime in \(F\times I\). Then \(\Sigma\) must be \(\partial\)-parallel, contradicting the assumption that it is essential.
Hence, \(\Sigma\) must be meridionally incompressible. By Lemmas 3.1 and 2.8, we may assume without loss of generality that \(\Sigma\cap F_{+}\) contains some intersection curve \(\alpha\) which is trivial in \(\Sigma\). But by Lemma 2.8, \(\alpha\) is also trivial in \(F_{+}\), and this contradicts Lemma 2.4. Let \(D\) be a disk in \(F_{+}\) bounded by \(\alpha\), and choose \(\beta\) to be the innermost intersection curve contained in \(D\). Let \(D^{\prime}\) be a disk in \(F_{+}\) bounded by \(\beta\). Since \(\beta\) is trivial and closed, it intersects at least two bubbles (counted with multiplicity). Because of the alternating property, \(D^{\prime}\) contains the overstrand of some bubble \(B\) intersected by \(\beta\), and since \(D^{\prime}\) contains no other intersection curve, we conclude that \(\beta\) must intersect \(B\) on both sides (see Figure 11). But this contradicts Lemma 2.6.
Figure 11. If \(\Sigma\) is a meridionally incompressible essential sphere or torus, then there must be some curve \(\beta\) bounding \(D^{\prime}\subset F_{+}\) appearing in this configuration at some bubble.
As previously remarked, this also eliminates the possibility of an essential disk whose boundary lies on \(\partial N(K)\) for some component \(K\) of \(L\). To eliminate essential disks in general, we introduce the useful notion of _forks_ (which are similar to forks as defined in [3] but here they only have two prongs). Consider an essential disk \(\Sigma\) with the intersection curves in \(\Sigma\cap F_{+}\) projected onto it. After shrinking down the saddles to vertices, we obtain a 4-valent intersection graph \(\Gamma\) on \(\Sigma\).
**Definition 3.3**.: A _fork_ of \(\Gamma\) is a vertex with at least two non-opposite edges ending on \(\partial\Sigma\).
See Figure 12. Note that the endpoints of the two edges need not be adjacent on the boundary of \(\Sigma\).
**Proposition 3.4**.: _Let \(M=(F\times I)\setminus N(L)\) as in Theorem 1.6 with the possible exception of condition (iv). Then \(M\) has no essential disks._
Proof.: First, note that \(\Sigma\) cannot be meridionally compressible, as a meridional compression would generate a disk with boundary on a meridian of \(N(L)\), that is, a 2-sphere once-punctured by the link, which cannot occur.
Hence, \(\Sigma\) must be meridionally incompressible, and by Lemma 2.9, the intersection graph \(\Gamma\) contains at least one vertex. As in [3], we can show there is at least one fork in \(\Gamma\) as follows. Because there are no closed intersection curves using Lemmas 2.4 and 2.8, every complementary region on the disk must intersect the boundary of the disk. If we discard all edges that touch the boundary to obtain a new graph \(\Gamma^{\prime}\) on \(\Sigma\), we are left with a collection of trees. If a tree has two or more vertices, then it must have two or more leaves, each of which has three edges on \(\Gamma\) that end on the boundary. So we have a fork.
If the tree is only a single vertex, then there are four edges leaving it that end on the boundary of the disk and we again have a fork.
Figure 12. A portion of the intersection graph on \(\Sigma\) corresponding to a fork.
Let \(\alpha\) and \(\beta\) denote two adjacent edges of the fork. Note that \(\alpha\) and \(\beta\) cannot have endpoints on the same component of \(\partial F\); otherwise, \(\alpha\cup\beta\) together with an arc in \(\partial F\) and an arc of the saddle bound a disk in \(F_{+}\) which has exactly one intersection point with \(\pi(L)\), which is impossible since the strand of the link entering that disk would have nowhere to go. Since \(\alpha\) and \(\beta\) don't meet other bubbles, they lie in adjacent complementary regions. But this implies that two adjacent regions are homeomorphically annuli, contradicting condition (iii) in the statement of Theorem 1.6.
We remark that condition (iii) (or perhaps some weaker variation thereof) is necessary to show that no disks exist: for example, if all four complementary regions meeting at a crossing bubble are annuli, then there is an essential disk \(D\) as pictured in Figure 13.
It remains to show that there are no essential annuli. Suppose \(\Sigma\) is an essential annulus. There are three cases to consider:
1. \(\Sigma\) has both boundary components on \(\partial(F\times I)\);
2. \(\Sigma\) has one boundary component on \(\partial(F\times I)\) and one boundary component on \(\partial N(L)\);
3. \(\Sigma\) has both boundary components on \(\partial N(L)\).
**Lemma 3.5**.: _Let \(M=(F\times I)\setminus N(L)\) as in Theorem 1.6 with the possible exception of conditions (iii) and (iv). Then if \(M\) has an essential annulus of type (2) or type (3), it has an essential torus or an essential annulus of type (1)._
Figure 13. There is an essential disk if we allow all four complementary regions meeting at a crossing bubble to be annuli: it meets exactly one bubble in exactly one saddle disk.
Proof.: Let \(A\) be an essential annulus of type (3), so both of its boundaries are on \(\partial N(L)\). If there exists a single component \(K\) such that \(\partial A\subset\partial N(K)\), then let \(T_{1}\) and \(T_{2}\) be the two tori that form the boundary of \(N(A\cup N(K))\).
The boundaries of \(A\) cannot be meridians on \(\partial N(K)\), as that would contradict primeness. The annulus \(A\) cuts \((F\times I)\setminus N(K)\) into two components, one of which contains \(\partial(F\times I)\). Choose \(T_{1}\) to be the torus that separates \(K\) from \(\partial(F\times I)\). We prove that \(T_{1}\) is either essential or there is an essential type (2) annulus.
If \(T_{1}\) is boundary-parallel to the side containing \(K\), then it must be boundary-parallel to \(\partial N(K)\). But then \(A\) would have been boundary-parallel to \(\partial N(K)\), a contradiction.
If \(T_{1}\) is boundary-parallel to the side not containing \(K\), then the manifold outside \(T_{1}\) is \(T\times I\) with \(T\times\{0\}=T_{1}\) and \(T\times\{1\}=\partial(F\times I)\). Then \(\partial(F\times I)\) is a torus, so \(F\) must be an annulus. Since \(T_{1}\) can be isotoped to \(\partial(F\times I)\), there is an essential annulus with one boundary component on \(\partial N(K)\) and the other on \(\partial(F\times I)\). This is an essential type (2) annulus.
The torus \(T_{1}\) is incompressible to the side containing \(K\), as \(K\) prevents a compressing disk other than one with boundary parallel to that of \(\partial A\), and that cannot exist by incompressibility of \(A\). If there is a compressing disk to the other side of \(T_{1}\), compressing along it yields a sphere, which must bound a ball to that side, leaving no place for \(\partial(F\times I)\).
Similar arguments work in the case the two boundaries of \(A\) are on the boundaries of regular neighborhoods of two different components of \(L\). So we can restrict to the case of an essential type (2) annulus.
Suppose that \(A\) is such an annulus with one boundary on \(\partial(F\times I)\) and one on \(\partial N(K)\) for some component \(K\) of \(L\). Let \(A^{\prime}\) be the boundary of \(N(A\cup K)\). It is now a type (1) annulus. That it is incompressible is immediate from the fact \(A\) is incompressible. And similar to the previous cases, it cannot be boundary-parallel to the side containing \(K\), and it cannot be boundary-parallel to the other side, because of the presence of \(\partial(F\times I)\) to that side.
Thus, whenever there is an essential type (2) or (3) annulus present, there is an essential torus or an essential type (1) annulus present.
We have already eliminated essential tori in Proposition 3.2. So, we now eliminate essential type (1) annuli. But, for this case, we need all four of the conditions from Theorem 1.6.
**Lemma 3.6**.: _Let \(M=(F\times I)\setminus N(L)\) as in Theorem 1.6. Then \(M\) has no essential type (1) annuli._
Proof.: Suppose \(\Sigma\) is an essential type (1) annulus in \(M\), and suppose it is meridionally compressible. Then a meridional compression of \(\Sigma\) yields two disks with boundary on \(\partial(F\times I)\), each punctured once by the link. Let \(\Sigma^{\prime}\) be one of these once-punctured disks; similar to the proof of Proposition 3.4, consider the projection of the intersection curves in \(\Sigma^{\prime}\cap F_{\pm}\) and saddle disks to \(\Sigma^{\prime}\), which is homeomorphic to an annulus. We call the boundary component in \(\partial(F\times I)\) the _outer boundary_ and we call the boundary component in \(\partial N(L)\) the _inner boundary_. After shrinking down the saddle disks, we obtain a 4-valent graph, possibly with some circles without vertices. We denote their union by \(\Gamma\) which we call the intersection graph on \(\Sigma^{\prime}\).
Since \(\Sigma\) is punctured exactly once by \(L\), it is punctured away from the crossings. In particular, \(L\) lies in \(F\) near where it punctures \(\Sigma\). Furthermore, \(\Sigma^{\prime}\) was obtained via a meridional compression on \(\Sigma\), so the core curve of \(\Sigma^{\prime}\) is isotopic in \(M\) to a meridian \(\mu\) of \(L\). Hence, there are exactly two intersection arcs of \(\Gamma\) with an endpoint on the inner boundary; all other intersection arcs must have both endpoints on the outer boundary, and in particular, they are trivial on \(\Sigma^{\prime}\).
Observe that the intersection graph \(\Gamma\) has no circles without vertices. Indeed, suppose \(\gamma\) were such a circle in \(\Sigma^{\prime}\cap F_{\pm}\). Then it is isotopic via \(\Sigma^{\prime}\) to \(\mu\). We know \(\gamma\) cannot be trivial in \(F_{\pm}\); otherwise \(\mu\) would be trivial in \(M\). But then \(\gamma\) represents a nontrivial element in \(\pi_{1}(F\times I)\subset\pi_{1}(M)\) and hence cannot be homotopic to \(\mu\). In particular, this shows that \(\Gamma\) is a genuine 4-valent graph on \(\Sigma^{\prime}\), and every complementary region of \(\Sigma^{\prime}\setminus\Gamma\) intersects \(\partial\Sigma^{\prime}\) in its boundary.
Next, we determine when \(\Gamma\) contains a fork. As in the proof of Proposition 3.4 and following remark, the existence of a fork with both edges ending on the outer boundary immediately gives us the desired contradiction by condition (iii).
Consider the subgraph \(\Gamma^{\prime}\) obtained by throwing away any edges of \(\Gamma\) which meet either boundary component of \(\Sigma^{\prime}\). If there are no cycles in \(\Gamma^{\prime}\), then \(\Gamma^{\prime}\) is a collection of trees. If one of those trees has two or more vertices, then there are at least two vertices of the tree which appear at the end of leaves. Since in \(\Gamma\), every vertex is 4-valent, each of these vertices has at least three edges ending on \(\partial\Sigma^{\prime}\). Since there are exactly two edges in \(\Gamma\) with endpoint on the inner boundary, we know there is at least one fork with both endpoints on the outer boundary.
If there is only one tree with exactly one vertex, then all four of its edges have an endpoint on \(\partial\Sigma^{\prime}\), and in particular, exactly two (adjacent) edges have endpoint on the outer boundary. This is a fork. Finally, if there are no vertices, then the two arcs with an endpoint each on the inner boundary have their other endpoint on the outer boundary. Denote these two arcs by \(\alpha\) and \(\beta\). On the projection surface \(F\), \(\alpha\cup\beta\) appears as an arc which has
both endpoints on \(\partial F\) and intersects \(\pi(L)\) transversely in exactly one point (corresponding to the puncture in \(\Sigma\)). See Figure 14 for a diagram of \(\Gamma\) on \(\Sigma^{\prime}\) and the corresponding picture on \(F\). But \(\alpha\) and \(\beta\) lie in adjacent regions of \(F\setminus\pi(L)\), both of which contain boundary in \(\partial F\). This contradicts condition (iii) from the statement of Theorem 1.6.
If the intersection graph on \(\Gamma^{\prime}\) does contain a cycle, then because there are no trivial cycles on \(\Gamma^{\prime}\), the only possibility is a single cycle parallel to the boundaries of \(\Gamma^{\prime}\). Because there cannot be any forks, the cycle must intersect two saddles in order that there are two edges ending on the inner boundary. The opposite edge of each saddle ends on the outer boundary of \(\Sigma^{\prime}\).
The complement of this graph is four disks, two of which live above \(F_{+}\) and two of which live below \(F_{-}\). Let \(D_{1}^{+}\) denote the disk that lives above \(F_{+}\) and that has a part of its boundary on the inner boundary of \(\Sigma^{\prime}\). We can realize \(\partial D_{1}^{+}\) as a simple closed curve \(\mu\) on \(F_{+}\) that is crossed once by the link corresponding to the inner boundary and twice more by the link at the saddles. Since \(\mu\) bounds the disk \(D_{1}^{+}\) in \(F\times I\) and \(F\) is incompressible in \(F\times I\), \(\mu\) must bound a disk on \(F_{+}\). But we cannot have three strands of the link entering a disk region on \(F\), a contradiction.
Thus, \(\Sigma\) must be meridionally incompressible. By Lemma 2.1, without loss of generality \(\Sigma\cap F_{+}\) contains at least one intersection curve \(\alpha^{\prime}\). First, suppose that \(\alpha^{\prime}\) is trivial. If \(\alpha^{\prime}\) is closed, then by Lemma 2.8, it is trivial on \(F_{+}\) and contradicts Lemma 2.4. Then \(\alpha^{\prime}\) must be an arc (which bounds a disk \(D\) in \(\Sigma\)). Consider the intersection graph \(\Gamma^{\prime}\) on \(\Sigma\). By Lemma 2.9, this graph has at least one vertex.
Figure 14. An intersection graph on \(\Sigma^{\prime}\) without vertices and intersection arcs \(\alpha\) and \(\beta\) as they appear on \(F\).
If \(\Gamma^{\prime}\) has a fork (where the endpoints of its edges can lie on either boundary component of \(\Sigma\)), then we are done. Suppose that the intersection graph contains a cycle. As the cycle cannot be trivial, it must wrap once around
the core curve of the annulus and it must have vertices. To avoid trivial cycles and to avoid forks, the only possibility is that the remaining two edges coming out of a vertex must go directly out to the boundary, one to each of the separate boundaries of \(\Sigma\). Thus, we obtain an intersection graph as in Figure 15, but with any even number of vertices. Note that the intersection graph decomposes \(\Sigma\) into disks, and any two disks that share an edge in the graph must appear on opposite sides of \(F\). This forces the number of vertices to be even.
We now consider such annuli and show that they contradict condition (iv) of Theorem 1.6. Suppose there is an annulus with such an intersection graph. Then we first consider how that annulus \(\Sigma\) can sit in \(F\times I\). Note that if all of the saddles that occur on \(\Sigma\) appear in distinct bubbles, then the core curve in the intersection graph yields a simple closed curve \(\beta\) on \(F\) that passes through the corresponding bubbles, bisecting each crossing. The fact the remaining two edges coming out of each saddle must go directly to a boundary component of \(F\) implies that the two complementary regions on \(F\) at such a crossing that do not intersect \(\beta\) in their interiors must contain these edges and therefore must be annular regions so that there are boundary components of \(F\) for these edges to end on. This is exactly the situation that condition (iv) eliminates. Note that in this case, the number of crossings intersected by \(\alpha\) is the number of saddles in \(\Sigma\), which is even.
Figure 15. An example of a nontrivial cycle in the intersection graph on \(\Sigma\). There must be an even number of vertices on the cycle.
Suppose now that there are some saddles from \(\Sigma\) that occur in the same bubble. Then the resultant curve \(\beta\) on \(F\) will pass through a given crossing more than once. However, at any such crossing, \(\beta\) must only pass through the same pair of opposite complementary regions, as otherwise all four complementary regions meeting at the crossing would have to be annuli to accommodate the branches on the intersection graph and this contradicts condition (iii). The curve \(\beta\) must cross itself transversely as it passes through a crossing because it is passing along the diagonal of saddles each time it passes through as in Figure 16. Note that it is still the case that the two opposite complementary regions through which \(\beta\) does not pass must still be annuli to accommodate the branches coming out of the saddles.
Suppose there are bubbles with more than two saddles. Then we can surger \(\beta\) so that it passes through the crossing once if the original passed through an odd number of times and twice if the original passed through an even number of times, as in Figure 17. Call the resulting set of curves \(\Phi\). If there is at least one component \(\phi\) of \(\Phi\) with at least one crossing that it passes through once, then surger all remaining crossings that are passed through twice by \(\phi\) so as not to pass through those crossings. Then take a simple closed curve component \(\phi^{\prime}\) that results and that passes through at least one crossing. Such a \(\phi^{\prime}\) must exist. It contradicts condition (iv).
Figure 16. When the curve \(\beta\) passes through a crossing more than once, it must intersect the same pair of opposite complementary regions and it must cross itself transversely in the process.
If there is no component of \(\Phi\) that passes through a bubble once, so each component passes through every bubble it intersects twice, then choose any component \(\mu\) that passes through bubbles. Its projection on \(F\) is the
projection of a knot. Because two of the opposite regions at a crossing of \(\mu\) must touch regions of the link projection that are annuli and the other two regions cannot touch regions of the link projection that are annuli by condition (iii), the projection of \(\mu\) must be checkerboard colorable, with the annular regions of the link projection touching a crossing all occurring in the shaded regions of the projection of \(\mu\). However, we can then surger the crossings of \(\mu\) around the crossings on the boundary of a single shaded region to obtain a loop that passes through each crossing of the link projection once, with annular regions to either side at each crossing. This loop contradicts condition (iv).
**Proposition 3.7**.: _The manifold \(M\) contains no essential annuli._
Proof.: This follows from Proposition 3.2 and Lemmas 3.5 and 3.6.
In order to prove Theorem 1.6 in both directions, we need the following lemma.
**Lemma 3.8**.: _If there exists a simple closed curve \(\alpha\) in \(F\) that intersects \(\pi(L)\) exactly in a nonempty collection of crossings, such that for each crossing, \(\alpha\) bisects the crossing and the two opposite complementary regions meeting at that crossing that do not intersect \(\alpha\) near that crossing are annuli, then there exists an essential annulus in \(M=(F\times I)\setminus N(L)\)._
In Figure 18 (a), we see a particular example of such an annulus. Green curves represent the link, red curves represent the intersection arcs and saddles, and purple curves represent the boundary curves of the annulus.
Figure 17. Surgering curve at crossings to make sure it passes through a crossing either once or twice.
Proof.: First assume that \(\alpha\) passes through an even number of crossings \(n\). We construct an annulus \(\Sigma\) as in Figure 15 that exists in the manifold. The nontrivial cycle in \(\Sigma\) becomes \(\alpha\). Each of the branch edges going out to \(\partial F\) on \(\Sigma\) exists on \(F\) since at each crossing that \(\alpha\) passes through, the opposite regions that it does not pass through are annuli. Thus, we can create the corresponding intersection graph on \(F\). We describe how to insert the disk \(D_{1}^{+}\) in \((F\times I)\setminus N(L)\), but the same description works to insert any of the
disks \(D_{i}^{\pm}\). The disk \(D_{1}^{+}\) has boundary on \(F=F\times\{1/2\}\) that runs along an arc in the intersection graph starting at \(x_{1}\) on \(\partial F\), followed by an arc on the boundary of a saddle, followed by another arc in the intersection graph along \(\alpha\), followed by an arc on the boundary of another saddle, followed by an arc in the intersection graph that ends at \(x_{2}\) on \(\partial F\) on the opposite side of \(\alpha\) from \(x_{1}\). (See Figure 18(b).) Call this longer arc \(\lambda_{0}\). The remaining portion of \(\partial D_{1}^{+}\) lies on the boundary of the handlebody. We describe it as follows. Let \(\lambda_{1}=\{x_{1}\}\times[1/2,1]\) and \(\lambda_{2}=\{x_{2}\}\times[1/2,1]\). Let \(\lambda_{3}\) be a copy of the arc \(\lambda_{0}\) from \(F\) but appearing on \(F\times\{1\}\). Then \(\partial D_{1}^{+}=\lambda_{0}\cup\lambda_{1}\cup\lambda_{2}\cup\lambda_{3}\). That this curve bounds a disk that avoids \(L\) is immediate from the construction. That the set of disks inserted in this manner together with the corresponding saddles form a properly embedded annulus is also immediate from the construction and the way the various disks share certain edges. It remains to prove that \(\Sigma\) is essential.
Suppose \(\Sigma\) is compressible. Then \(\alpha\) must bound a disk in \(F\times I\). But by assumption, since branch arcs to either side of \(\alpha\) must end on boundary components of \(F\), \(\alpha\) cannot be trivial on \(F\). Hence, it cannot be trivial in \(F\times I\).
Suppose now that \(\Sigma\) is boundary compressible. Then if we take a path through the intersection graph that corresponds to two opposite edges at a saddle that each go out to the boundary, together with a diagonal of the saddle, and call that arc \(\kappa\), then \(\kappa\) together with an arc \(\kappa^{\prime}\) on the boundary of \(F\times I\) bound a disk \(D\) in \(M\).
Figure 18. The intersection curves for an annulus that is generated when condition (iv) does not hold.
Consider the intersection graph of \(D\) with \(F\). If \(D\) only intersects \(F\) along \(\kappa\), then \(\partial D\) wraps once around \(L\) when \(\kappa\) passes under \(L\) on the saddle, forcing \(D\) to be punctured. Otherwise, there are arcs of intersection that go out to the boundary of \(D\). The result is an intersection graph on \(D\) much
like the intersection graph we had on \(D\) when we were proving there were no essential disks in \(M\). By the same argument given there using forks, we prove no such disk can exist.
Finally we must consider the case that \(\alpha\) passes through an odd number of bubbles. In this case, the same construction yields a properly embedded Mobius band. The boundary of a regular neighborhood of this Mobius band is a properly embedded annulus \(\Sigma\). It is both incompressible and boundary incompressible to the side that contains the Mobius band. A similar argument to the previous case demonstrates that it is also incompressible and boundary incompressible to the other side.
This allows us to complete the proof of Theorem 1.6, assuming Proposition 1.5.
Proof of Theorem 1.6.: By Thurston's Hyperbolization Theorem, \(M=(F\times I)\setminus N(L)\) is tg-hyperbolic if and only if there are no essential spheres, tori, disks, or annuli. Then the theorem follows in one direction from Propositions 3.2 and 3.4 and Lemmas 3.5 and 3.6, and in the other direction by the comments subsequent to the theorem statement and Lemma 3.8.
We conclude the section with a proof of Proposition 1.5.
Proof of Proposition 1.5.: First, suppose that \(\pi(L)\) is not weakly prime, and let \(\gamma\) be a circle intersecting \(\pi(L)\) transversely in exactly two points such that it bounds a disk \(D\) and \(D\) contains at least two crossings (note this uses that \(\pi(L)\) is reduced). Let \(B\) be a regular neighborhood of \(D\) which is a 3-ball. Its boundary \(\partial B\) is a 2-sphere punctured twice by \(L\). Furthermore, \(D\) contains at least two (non-reducible) crossings and \(\pi(L)\) is alternating; hence, \(B\) intersects \(L\) in a nontrivial arc and \(L\) is not prime.
It remains to show that if \(L\) is not prime, then \(\pi(L)\) is not weakly prime. Let \(\Sigma\subset F\times I\) be an essential sphere which is punctured twice by \(L\). Note that we may assume \(\Sigma\) is meridionally incompressible. Indeed, a meridional compression yields two essential spheres, each of which are punctured twice by \(L\). Then iteratively perform these compressions until we obtain a meridionally incompressible essential twice-punctured sphere.
Let \(\tilde{F}\) be the closed orientable surface of genus \(g\) obtained by capping off each circle boundary of \(F\) with a disk. Then we may regard \(L\subset F\times I\) as a link in \(\tilde{F}\times I\) with projection diagram \(\pi(L)\) onto \(\tilde{F}\times\{1/2\}\). We refer to this projection surface by \(\tilde{F}\) when there is no ambiguity. Analogously, we may define \(\tilde{F}_{+}\) (resp. \(\tilde{F}_{-}\)) to be the surfaces obtained from \(\tilde{F}\) by removing the disks in its intersection with the bubbles and replacing them with the upper (resp. lower) hemispheres of the bubbles. Also, we view \(\Sigma\) as a sphere in
\(\tilde{F}\times I\) which is twice-punctured by \(L\) via the inclusion map \(F\times I\hookrightarrow\tilde{F}\times I\). Note that \(\Sigma\) is still essential in \(\tilde{F}\times I\).
In Lemma 13 of [4], the authors show that when \(\Sigma\) is a meridionally incompressible essential sphere which is punctured twice by \(L\) and the embedding of \(\Sigma\) minimizes \((s,i)\), then there is exactly one intersection curve \(\alpha\) in \(\Sigma\cap\tilde{F}_{+}\) and it intersects \(L\) at least twice. This is true even when \(\pi(L)\) is not reduced when viewed on \(\tilde{F}_{+}\). Moreover, the authors of [4] show that \(\alpha\) must be trivial on \(\Sigma\) and \(\tilde{F}_{+}\), and it does not intersect any bubbles. Hence, \(\alpha\) is a circle bounding a disk \(D\) in \(\tilde{F}_{+}\) which intersects \(\pi(L)\) exactly twice. Since \(\Sigma\) is essential, \(D\) contains at least one crossing.
If \(\pi(L)\) is reduced when viewed on \(\tilde{F}\), then \(\pi(L)\) is not weakly prime on \(\tilde{F}\). Observe that \(\alpha\) must bound a disk in \(F_{+}\) as well; otherwise, we find a compression disk for \(F\times\{1/2\}\) in \(F\times[1/2,1]\). Hence, \(\pi(L)\) is not weakly prime on \(F\).
If \(\pi(L)\) is not reduced, suppose \(\alpha\) may be isotoped slightly in \(\tilde{F}_{+}\) so it intersects \(\pi(L)\) exactly once at a double point. Consider the regions of \(\tilde{F}\setminus\pi(L)\) which are contained in \(D\) and do not intersect \(\alpha\), and observe that since \(\pi(L)\) is reduced on \(F\), at least one of these regions is obtained by capping off a boundary component of the corresponding region in \(F\setminus\pi(L)\). But then we can find a compression disk for \(F\times\{1/2\}\) in \(F\times[1/2,1]\). Hence, the crossings contained in \(D\) are unaffected by reducing \(\pi(L)\) in \(\tilde{F}_{+}\), and by the same argument as before, \(\pi(L)\) is not weakly prime in \(F\).
## 4. Proof of Theorem 1.8
Proof of Theorem 1.8.: Note that \(F\) is not a disk, by \(\partial\)-irreducibility of \(Y\), nor an annulus, since if it were, we could push a copy off itself and we would have an essential annulus in \(Y\) that did not intersect \(F\).
Suppose \(\Sigma\) is a properly embedded essential disk or sphere in \(Y\setminus N(L)\). If \(\Sigma\) does not intersect \(F\times I\), its existence contradicts the \(\partial\)-irreducibility or irreducibility of \(Y\). If \(\Sigma\) is entirely contained in \((F\times I)\setminus N(L)\) then if it is a sphere, it must bound a ball in \((F\times I)\setminus N(L)\) by Proposition 3.2, which did not use conditions (iii) or (iv), and hence a ball in \(Y\setminus N(L)\), a contradiction to its being essential. If it is a disk \(D\), the boundary of the disk would have to be a nontrivial curve in \(\partial F\times I\), contradicting the fact \(\partial F\times I\) is incompressible in \(F\times I\) and also the \(\partial\)-irreducibility of \(Y\).
Still assuming \(\Sigma\) is an essential disk or sphere, if \(\Sigma\) intersects \(F\times I\) but is not entirely contained in it, then by incompressibility and \(\partial\)-incompressibility of \(F\), we can replace it by a properly embedded essential \(\Sigma^{\prime}\) that does not intersect \(F\times\{0,1\}\), a contradiction to the cases we have already discussed.
Suppose now that \(\Sigma\) is an essential torus or annulus in \(Y\setminus N(L)\). If it does not intersect \(F\times I\), then it must either be compressible or boundary-parallel in \(Y\). If it compresses in \(Y\), then a compressing disk \(D\) must intersect \(F\times I\). But by incompressibility of \(F\), we can find another compressing disk that does not intersect \(F\times I\), contradicting essentiality in \(Y\setminus N(L)\).
If \(\Sigma\) is boundary-parallel in \(Y\), then there is a boundary component \(H\) of \(Y\) with which \(\Sigma\) is boundary-parallel. In the case \(\Sigma\) is a torus, there is a \(T\times I\) through which \(\Sigma\) is parallel to the boundary, and \(H\) is a torus. Since \(F\) does not intersect \(\Sigma\), it must be contained in \(T\times I\) so that \(L\subset F\times I\) can prevent \(\Sigma\) from being boundary-parallel in \(Y\setminus N(L)\). Further, \(F\) must have all of its boundary on \(H\). But there are no essential surfaces in \(T\times I\) with all boundaries on \(T\times\{1\}\). So \(F\) does not intersect \(T\times I\) and \(\Sigma\) remains boundary-parallel in \(Y\setminus N(L)\), a contradiction.
In the case \(\Sigma\) is an annulus that is boundary-parallel in \(Y\), there is a solid torus through which \(\Sigma\) is parallel into a boundary component \(H\) of \(Y\). Then \(\Sigma\) must have both boundary components on \(H\). Again, there are no essential surfaces to play the role of \(F\) in the solid torus with boundary just on \(H\), so \(F\) does not intersect the solid torus and \(\Sigma\) remains boundary-parallel in \(Y\setminus N(L)\), a contradiction.
In the case that \(\Sigma\) is entirely contained in \((F\times I)\setminus N(L)\), \(\Sigma\) cannot be a torus by Proposition 3.2, which did not assume conditions (iii) and (iv).
If \(\Sigma\) is an annulus entirely contained in \((F\times I)\setminus N(L)\), then by Lemma 3.5, there exists a type (1) annulus \(\Sigma^{\prime}\) that has both boundaries in \(\partial F\times I\), and they must be nontrivial curves so that \(\Sigma\) is incompressible in \(Y\setminus N(L)\). If each boundary is on a different component of \(\partial F\times I\), then the two components would be isotopic in \(F\times I\) through the annulus, a contradiction to the fact \(F\) is not itself an annulus.
If both boundary components are in the same component of \(\partial F\times I\), then \(\Sigma\) can be isotoped so that \(\partial\Sigma\) does not intersect \(F\). Therefore, the intersection graph on \(\Sigma\) has no edges that touch \(\partial\Sigma\).
Since the intersection graph is 4-valent, this implies that there are simple closed curves in the intersection graph that are trivial, which contradicts the comment following the proof of Lemma 2.8. Hence the intersection graph is empty. Thus, \(\Sigma\) does not intersect \(F\) and it must be boundary-parallel in \(F\times I\), a contradiction to its being essential in \(Y\setminus N(L)\).
Let \(\Sigma\) be an essential torus or annulus in \(Y\setminus N(L)\) that does intersect \(F\times I\) but is not entirely contained in it. In the case \(\Sigma\) is a torus, we can assume it is meridionally incompressible, as if not, we would generate a twice-punctured sphere that is essential, and Lemma 1.7 would then imply the projection of our link \(L\) to \(F\) is not weakly prime, contradicting condition (i) of the theorem.
In the case \(\Sigma\) is an annulus, if it is not meridionally incompressible, we can compress to obtain two annuli, each essential. If either had a second meridional compression, we would similarly contradict the fact that the projection of our link \(L\) to \(F\) is not weakly prime. Thus by replacing \(\Sigma\) by one of the two resulting annuli if needed, we can assume \(\Sigma\) is both essential and meridionally incompressible.
We then consider its intersection curves with \(F\times\{0,1\}\). Assume we have chosen \(\Sigma\) to minimize the number of such intersection curves.
Any simple closed intersection curve that bounds a disk on \(\Sigma\) must also bound a disk on \(F\) by incompressibility. Using irreducibility of \(Y\) and of \((F\times I)\setminus N(L)\), we can isotope to eliminate all such simple closed curves, contradicting our choice of \(\Sigma\) as having a minimal number of intersection curves. So the only possible simple closed curves of intersection are nontrivial on \(\Sigma\) and therefore cut it into annuli.
In the case when \(\Sigma\) is an annulus, we can also have intersection arcs. If such an arc cuts a disk from \(\Sigma\), take an outermost such. It must also cut a disk from \(F\) by \(\partial\)-incompressibility of \(F\). The union of these two disks generates a properly embedded disk in \(Y\), which by \(\partial\)-irreducibility of \(Y\) must have trivial boundary on \(\partial Y\). We can then form a sphere that bounds a ball, through which we can isotope \(\Sigma\) to eliminate the intersection arc, again contradicting minimality of intersection curves on \(\Sigma\).
In the case there are simple closed curves of intersection, \(\Sigma\) will intersect \(F\times I\) in a collection of annuli, each with boundary components on \(F\times\{0,1\}\) and possibly one on \(\partial N(L)\), and also intersect \(Y\setminus(F\times I)\) in another collection of annuli, the boundaries of which are on \(F\times\{0,1\}\). Given such an annulus \(A^{\prime}\) in \((F\times I)\setminus N(L)\) with both boundaries in \(F\times\{0,1\}\), it is incompressible because \(\Sigma\) is incompressible in \(Y\setminus N(L)\). It cannot be boundary-parallel, since otherwise we could reduce the number of intersections of \(\Sigma\) with \(F\times\{0,1\}\). So \(A^{\prime}\) is essential in \((F\times I)\setminus N(L)\).
But \(\partial A^{\prime}\) does not intersect \(F\), and therefore the intersection graph on \(A^{\prime}\) has no edges that touch \(\partial A^{\prime}\). As argued in a previous case, since the intersection graph is 4-valent, this implies that there are simple closed curves in the intersection graph that are trivial, which contradicts the comment following the proof of Lemma 2.8. Hence the intersection graph is empty and \(A^{\prime}\) does not intersect \(F\). So, it must be boundary-parallel in \(F\times I\), a contradiction to the minimality of the number of intersection curves with \(F\times\{0,1\}\).
The last possibility for \(A^{\prime}\) when the intersection curves of \(\Sigma\cap(F\times\{0,1\})\) are simple closed curves is that \(A^{\prime}\) intersects \(F\times\{0,1\}\) in a single simple closed curve. Then its other boundary is on \(\partial N(K)\) where \(K\) is a component of \(L\). The boundary of a regular neighborhood of \(A^{\prime}\cup K\) is an annulus \(A^{\prime\prime}\) with both boundaries on one component of \(F\times\{0,1\}\). It must be
incompressible since \(\Sigma\) is incompressible. And it is not boundary-parallel in \((F\times I)\setminus N(L)\) to the side containing \(K\). If it was boundary-parallel to the other side, then that side must be a solid torus. The side containing \(K\) is already a solid torus. These two solid tori share an annulus on their boundary and their union is all of \(F\times I\). Thus, the boundary of \(F\times I\) must be a torus and \(F\) must be an annulus, a contradiction to our comment at the beginning of the proof that \(F\) cannot be an annulus.
Thus, \(A^{\prime\prime}\) is essential in \(F\times I\) with both boundaries on \(F\times\{0,1\}\). But we have already eliminated this possibility.
Finally, suppose that \(\Sigma\) is an annulus, and all intersection curves are arcs from one boundary of \(\Sigma\) to the other. They cut \(\Sigma\) into disks that alternately lie in \((F\times I)\setminus N(L)\) and \(Y\setminus(F\times I)\). Let \(D\) be such a disk in \((F\times I)\setminus N(L)\). Its boundary \(\partial D\) breaks up into four arcs, two of which are non-adjacent arcs in \(F\times\{0,1\}\) and two of which are in \(\partial F\times I\).
The disk \(D\) must intersect \(F\times\{1/2\}\) or we could isotope it off \(F\times I\) and reduce the number of intersections of \(\Sigma\) with \(F\times I\). Then we can isotope to obtain an intersection graph on \(D\) with at least one vertex by Lemma 2.9. Hence as argued previously, there must be a fork.
In fact, we can argue that there are at least four forks. If there is just a single vertex, there are four edges that go to the boundary and hence four forks. If there is more than one vertex, removing all edges in the intersection graph that intersect \(\partial D\), we are left with a tree or trees with two or more leaves. Each then has three edges going out to the boundary, and thus generates at least two forks. So we have at least four forks.
If both edges of a fork end on the same boundary component of \(F\), then we create a region in the complement of the intersection graph on \(F\) that has one strand of \(L\) entering from a bubble with nowhere to go, since the region is a disk or annulus, a contradiction. Hence, the two edges making up the fork must end on distinct components of the boundary of \(F\). However, in order for \(\partial D\) to pass from one component of \(\partial F\times I\) to a different component, it must travel on the boundary of \(F\times I\) up \(\partial F\times I\), across one of \(F\times\{0,1\}\) and down \(\partial F\times I\). Since there are at least four such forks, this contradicts the fact there are only two connected arcs on the boundary of \(D\) that are in \(\partial F\times I\).
Therefore, since there are no essential spheres, disks, tori or annuli in \(Y\setminus N(L)\), it must be tg-hyperbolic.
We conclude this section with a proof of Proposition 1.7.
Proof of Proposition 1.7.: First, the direction "\(\pi(L)\) is not weakly prime implies \(L\) is not prime in \(Y\)" follows as in the proof of Proposition 1.5. To prove the converse, suppose that \(L\) is not prime and let \(\Sigma\subset Y\) be an essential
sphere punctured twice by \(L\). Let \(G=F\times\{0\}\cup F\times\{1\}\). Consider the intersection curves in \(\Sigma\cap G\). If the intersection is empty, then we can cut along \(G\) so that \(\Sigma\) is an essential sphere embedded in \(F\times I\). Then by Proposition 1.5, \(\pi(L)\) is not weakly prime on \(F\).
Now suppose \(\Sigma\cap G\) is not empty, and let \(\alpha\subset\Sigma\cap G\) be an intersection curve which is innermost on \(\Sigma\). Since \(G\) is incompressible in \(Y\), \(\alpha\) bounds a disk \(D\subset G\) on \(G\). Let \(D^{\prime}\subset\Sigma\) be a disk bounded by \(\alpha\) on \(\Sigma\) which contains no other intersection curves of \(\Sigma\cap G\): then \(D\cup D^{\prime}\) is a 2-sphere in \(Y\setminus G\). If \(D\cup D^{\prime}\) is outside \(F\times I\) it must bound a ball outside \(F\times I\) and we can isotope to remove the intersection. If it is inside \(F\times I\), it bounds a ball in \(F\times I\) by Proposition 3.2, and again we can isotope to remove the intersection.
Thus we can eliminate all intersections of \(\Sigma\) with \(G\), and hence Proposition 1.5 implies \(\pi(L)\) is not weakly prime.
## 5. Applications and Further Directions
One motivation for Theorem 1.6 comes from studying hyperbolic links in handlebodies. In addition to being interesting objects in their own right as a natural generalization of classical link theory, they also show up naturally when studying hyperbolicity of knotoids and generalized knotoids as in [5] and [6]. Indeed, the map \(\phi^{D}_{\Sigma}\) constructed in these papers allows us to associate a hyperbolic volume to generalized knotoids by mapping them to the set of links in a handlebody.
Observe that a genus \(g\) handlebody can be obtained by thickening a 2-sphere with \((g+1)\) disks removed, or more generally by thickening a genus \(k\) closed orientable surface with \((g-k)\) disks removed (where \(k\geq 1\)). In the remainder of this section, when we refer to a projection surface in a handlebody, we always mean one of these surfaces, so that the handlebody is obtained by thickening it. Then Theorem 1.6 is useful for studying links in handlebodies, and hence, for studying generalized knotoids as well.
As one application of the theorem, note that if a generalized knotoid \(k\) has any poles of nonzero valency, then \(\phi^{D}_{\Sigma}\) never yields a link which is cellular alternating with respect to one of these projection surfaces. This is because the construction requires us to double the rail diagram of \(k\) across the boundary portions corresponding to the poles of nonzero valency.
However, we can restrict to the class of generalized knotoids whose poles are all valency-zero, that is, generalized knotoids whose diagram consists of a link on \(\Sigma\) together with a set of valency-zero poles. These are the _staked links_ defined in [5]. Then, as noted in Proposition 7.7 of [5], Theorem 1.6 precisely characterizes which alternating staked links (or equivalently, alternating links in handlebodies) are hyperbolic under \(\phi^{D}_{\Sigma}\).
In that paper, this is used to prove Theorem 7.8, which says that every link with a checkerboard-colorable diagram on a closed surface \(\Sigma\) has a diagram such that staking that diagram makes the resulting link hyperbolic in \(\Sigma\times I\). In particular, this means that we can define the staked volume for any such link to be the minimum volume of any hyperbolic staking of the link. Since all link diagrams in \(S^{3}\) are checkerboard-colorable, we can define staked volume for every link in \(S^{3}\). See the last section of [5] for more details.
As we mentioned in the introduction, there are examples of links in handlebodies shown to be hyperbolic by Theorem 1.6 that are not covered by the hypothesis of Theorem 1.1 from [13]. For example, consider the family of examples in Figure 19.
We would like to extend Theorem 1.6 to cellular non-alternating links. The following result gives an extension in this direction:
**Corollary 5.1**.: _Let \(F\) be a projection surface with nonempty boundary which is not a disk, and let \(L\subset F\times I\) be a link with a reduced cellular (not necessarily alternating) projection diagram \(\pi(L)\subset F\times\{1/2\}\), and let \(M=(F\times I)\setminus N(L)\). Suppose conditions (i)-(iv) of Theorem 1.6 are satisfied, as well as the following:_
1. _let_ \(c_{1},\ldots,c_{n}\) _be crossings of_ \(\pi(L)\) _such that_ \(\pi(L)\) _becomes alternating after each_ \(c_{i}\) _is changed to the opposite crossing. Each_ \(c_{i}\) _locally divides_ \(F\) _into four complementary regions such that a pair of opposite regions are homeomorphically annuli._
_Then the conclusion of Theorem 1.6 holds._
Figure 19. Here \(F\) is an annulus and \(T\) is a prime, cellular alternating tangle which is not an integer tangle. By Theorem 1.6, this is a family of tg-hyperbolic links in a solid torus. There are no closed projection surfaces which satisfy the hypotheses of Theorem 1.1 from [13]. For example, if we choose the torus parallel to \(\partial(F\times I)\), the link will not have a cellular projection.
Proof.: Consider the projection diagram \(\pi(L)\subset F\). For each crossing \(c\) in \(\{c_{i}\}_{i=1}^{n}\), let \(R_{1}\) and \(R_{2}\) denote the two complementary regions of \(F\setminus\pi(L)\) which meet \(c\) and are homeomorphically annuli. There is an arc \(\alpha\subset F\) which has an endpoint each on \(\partial R_{1}\cap\partial F\) and \(\partial R_{2}\cap\partial F\) and intersects \(\pi(L)\) exactly once through \(c\). Then \(\alpha\times I\) is a properly embedded disk \(\Sigma\) in \(F\times I\) which is punctured twice by \(L\). See Figure 20.
Now we may cut \(M\) along \(\Sigma\), yielding two copies, \(\Sigma_{1}\) and \(\Sigma_{2}\). Reglue \(\Sigma_{2}\) to \(\Sigma_{1}\) along a rotation by \(2\pi\): this has the effect of changing the crossing \(c\) to the opposite crossing, and the resulting manifold is homeomorphic to \(M\). By performing this operation for each of the \(c_{i}\), we obtain a link \(L^{\prime}\subset F\times I\) such that \((F\times I)\setminus N(L^{\prime})\) is homeomorphic to \(M\) and its projection \(\pi(L^{\prime})\) is cellular alternating on \(F\). Moreover, conditions (i), (ii), (iii) and (iv) still hold. Then the statement follows from Theorem 1.6.
The corollary expands the number of known hyperbolic staked links as defined in [5], or equivalently tg-hyperbolic links in handlebodies. Theorem 1.6 may also be combined with results from [7] to give other ways of obtaining tg-hyperbolic links in handlebodies, namely, by _composition_.
There is much work to be done in expanding the number of links in handlebodies known to be tg-hyperbolic. The methods used in this paper rely heavily on the alternating property: however, it is conceivable that these methods might be adapted for almost alternating links by taking into account the different behavior of the intersection curves at the non-alternating crossing.
Figure 20. On the left is a local picture of \(\pi(L)\subset F\) near crossing \(c\) with the \(R_{i}\) shaded. Crossing the arc \(\alpha\) by \(I\) yields the twice-punctured disk \(\Sigma\), shown on the right.
Another direction is to shrink the number of hypotheses needed for Theorem 1.8. Theorem 1.1 of [13] is very powerful in this sense: it applies to links in an arbitrary compact 3-manifold \(Y\) (satisfying some mild conditions) with a cellular alternating diagram on a _closed_ projection surface that is not necessarily incompressible in \(Y\) but rather the diagram satisfies a certain representativity condition. There should be a version of Theorem 1.8 where \(F\) need not be incompressible and \(\partial\)-incompressible, but similarly, the diagram satisfies an appropriate representativity condition.
We might also try to generalize Theorems 1.6 and 1.8 to allow for nonorientable projection surfaces \(F\) or for nonorientable \(I\)-bundles. Since the analogous results for closed surfaces in [4] hold in the nonorientable case, we suspect these generalizations hold here as well.
We are also interested in volume computations for hyperbolic alternating links in thickened surfaces with boundary. In [14], Lackenby proves a lower bound on hyperbolic volume for alternating links in \(S^{3}\) in terms of the number of twist regions, which can be read off the link diagram. Howie and Purcell generalize this in [13] to a lower bound for volumes of links in \(Y\). It would be interesting to try adapting their methods to prove a similar lower bound on volume in our case. This might be done by defining a slightly more general version of Howie and Purcell's _angled chunks_ which can account for boundary coming from the projection surface. Alternatively, we might try to find proofs of the lower bounds from the viewpoint of bubbles instead.
|
2302.00133 | Sublinear Approximation Schemes for Scheduling Precedence Graphs of
Bounded Depth | We study the classical scheduling problem on parallel machines where the
precedence graph has the bounded depth $h$.
Our goal is to minimize the maximum completion time. We focus on developing
approximation algorithms that use only sublinear space or sublinear time. We
develop the first one-pass streaming approximation schemes using sublinear
space when all jobs' processing times differ no more than a constant factor $c$
and the number of machines $m$ is at most $\tfrac {2n \epsilon}{3 h c }$. This
is so far the best approximation we can have in terms of $m$, since no
polynomial time approximation better than $\tfrac{4}{3}$ exists when $m =
\tfrac{n}{3}$ unless P=NP. The algorithms are then extended to the more general
problem where the largest $\alpha n$ jobs have no more than $c$ factor
difference. We also develop the first sublinear time
algorithms for both problems. For the more general problem, when $ m \le \tfrac
{ \alpha n \epsilon}{20 c^2 \cdot h } $, our algorithm is a randomized
$(1+\epsilon)$-approximation scheme that runs in sublinear time. This work not
only provides an algorithmic solution to the studied problem under big data
environment, but also gives a methodological framework for
designing sublinear approximation algorithms for other scheduling problems. | Bin Fu, Yumei Huo, Hairong Zhao | 2023-01-31T22:47:32Z | http://arxiv.org/abs/2302.00133v1 | # Sublinear Approximation Schemes for Scheduling Precedence Graphs of Bounded Depth
###### Abstract
We study the classical scheduling problem on parallel machines where the precedence graph has the bounded depth \(h\). Our goal is to minimize the maximum completion time. We focus on developing approximation algorithms that use only sublinear space or sublinear time. We develop the first one-pass streaming approximation schemes using sublinear space when all jobs' processing times differ no more than a constant factor \(c\) and the number of machines \(m\) is at most \(\frac{2ne}{3hc}\). This is so far the best approximation we can have in terms of \(m\), since no polynomial time approximation better than \(\frac{4}{3}\) exists when \(m=\frac{n}{3}\) unless P=NP. The algorithms are then extended to the more general problem where the largest \(\alpha n\) jobs have no more than \(c\) factor difference. We also develop the first sublinear time algorithms for both problems. For the more general problem, when \(m\leq\frac{\alpha ne}{20c^{2}\cdot h}\), our algorithm is a randomized \((1+\epsilon)\)-approximation scheme that runs in sublinear time. This work not only provides an algorithmic solution to the studied problem under big data environment, but also gives a methodological framework for designing sublinear approximation algorithms for other scheduling problems.
## 1 Introduction
Big data and cloud computing play a huge role nowadays in our digital society. Each day a large amount of data is generated and collected by a variety of programs and applications. These large sets of data, which are referred as "big data", are hard to peruse or query on a regular computer. On the other hand, cloud computing provides a platform for processing big data efficiently on the "cloud" where the "cloud" is usually a set of high-powered servers from one of many providers. The "cloud" can view and query large data sets much more quickly than a standard computer could. Big data and cloud computing together provide the solutions for the companies with big data but limited resources, a dilemma encountered by many companies in manufacturing and service industries.
Two decades ago, researchers in the area of statistics, graph theory, etc. started to investigate the sublinear approximation algorithms that uses only sublinear space or sublinear time, namely sublinear space algorithms or sublinear time algorithms. With more and more data being generated and stored away in the data center, and higher and higher dimension of computation being required and performed remotely on the "cloud" in various applications, sublinear algorithms become a new paradigm in computing to solve the problems under big data and cloud computing. Unlike the traditional data model where all the data can be stored and retrieved locally and one can hope to get the exact answers, the goal of sublinear space and sublinear time algorithms in general, is to obtain reasonably good approximate answers without storing or scanning the entire input.
Sublinear space algorithms are also called streaming algorithms, which process the input where some or all of the data is not available for random access in the local computers but rather arrives as a sequence of items and can be examined in only a few passes (typically just one). Early research on streaming algorithms dealt with simple statistics of the input data streams, such as the median [21], the number of distinct elements [11], or frequency moments [2]. Recently, many effective streaming algorithms have been designed for a range of problems in statistics, optimization, and graph algorithms (see surveys by Muthukrishnan [22] and McGregor [19]).
Sublinear time algorithms target at giving good approximations after inspecting only a very small portion of the input. Usually, this is achieved by using randomization. Sublinear time algorithms have been derived for many computational problems, for example, checking polygon
intersections [4], approximating the average degree in a graph [10, 14], estimating the cost of a minimum spanning tree [5, 8, 7], finding geometric separators [12], and property testing [13, 15]. Developing sublinear time algorithms not only speeds up the problem-solving process, but also reveals some interesting properties of computation, especially the power of randomization.
This paper aims at designing both types of sublinear approximation algorithms for the classical parallel machine scheduling problem subject to precedence constraints. We hope that our algorithms not only provide algorithmic solutions for this specific problem in the big data and cloud computing environment, but also provide a framework and insight for solving other scheduling problems in this environment that are encountered by many companies in the manufacturing and service industries.
Formally our problem is to schedule \(n\) jobs on \(m\) identical parallel machines where there are precedence constraints between jobs. The jobs are all available for processing at time \(0\) and labeled as \(1,2,\cdots,n\). Each job \(j\), \(1\leq j\leq n\), has a processing time \(p_{j}\). The jobs have precedence constraints, \(\prec\), such that \(i\prec j\) represents that job \(j\) cannot start until job \(i\) finishes. The jobs and their precedence constraints can be described by a directed acyclic graph (DAG), \(G=(V,E)\), where \(V\) is a set of vertices representing the jobs and \(E\) is a set of directed arcs representing the precedence constraints among the jobs. We assume that there are no transitive edges in \(G\). If there is a directed arc \(\langle i,j\rangle\) in \(E\), then we have the precedence constraint \(i\prec j\), and we say that job \(i\) is the immediate predecessor of job \(j\) and \(j\) is the immediate successor of job \(i\). We consider non-preemptive schedules, i.e. a job cannot be interrupted once it is started. Given a schedule \(S\), let \(C_{j}\) be the completion time of job \(j\) in \(S\), then the makespan of the schedule \(S\) is \(C_{max}=\max_{1\leq j\leq n}C_{j}\). The goal is to find the minimum makespan. Using the three field notation, the problem can be denoted as \(P\mid prec\mid C_{max}\) when the number of machines \(m\) is arbitrary, and be denoted as \(P_{m}\mid prec\mid C_{max}\) when \(m\) is fixed.
A lot of research has been done on this classical scheduling problem. For arbitrary precedence constraints, when \(m=2\) and jobs have unit processing time, i.e., \(P_{2}\mid prec,p_{j}=1\mid C_{max}\), Coffman and Graham [6] gave an optimal polynomial time algorithm in 1972. In 1978, Lenstra and Rinnooy Kan [17] showed that when jobs have unit processing time, the problem with arbitrary precedence constraints and arbitrary \(m\), \(P\mid prec,p_{j}=1\mid C_{max}\), is strongly NP-hard. When the jobs' processing times are either \(1\) or \(2\), Lenstra and Rinnooy Kan [17] and Ullman [25] independently showed that the two-machine problem \(P_{2}\mid prec,p_{j}\in\{1,2\}\mid C_{max}\) is NP-hard. However, the complexity of the problem \(P_{3}\mid prec,p_{j}=1\mid C_{max}\) remains open. Graham [16] showed that list scheduling is a \((2-\frac{1}{m})\)-approximation for the problem with arbitrary \(m\) and arbitrary job processing times, i.e., \(P\mid prec\mid C_{max}\). In 2011, Svensson [24] showed that, assuming a new, possibly stronger, version of the unique games conjecture (introduced by Bansal and Khot [3]), it is NP-hard to approximate the scheduling problem \(P\mid prec,p_{j}=1\mid C_{max}\) within any factor strictly less than 2. This result improves the inapproximability bound of 4/3 by Lenstra and Rinnooy Kan [17].
Due to the importance and the hardness of the problem, a lot of research has focused on various types of precedence constraints. One type of precedence constraint studied in the literature is precedence graphs with bounded height, where the height is the number of vertices on the longest path. In 1978, Lenstra and Rinnooy Kan [17] showed that for an arbitrary number of machines \(m\), the problem is NP-hard even if the precedence graph has bounded height and the jobs have unit processing time. In 1984, Dolev and Warmuth [9] developed an optimal algorithm for this problem when \(m\) is fixed, with running time \(O(n^{h(m-1)+1})\). In 2006, Aho and Makinen [1] considered a special case where both the height of the graph and the maximum degree are bounded, and jobs have unit processing time. They showed that for large \(n\), the optimal schedule has makespan \(\lceil n/m\rceil\) and can be found using a modified critical path rule. This result is in fact a special case of the one studied by Dolev and Warmuth [9]. For more related results, one can refer to the survey by Prot and Bellenguez-Morineau [23] on how the structure of precedence constraints may change the complexity of scheduling problems.
In this paper, we focus on the problem where the precedence graph has bounded depth \(h\). The depth of a job \(j\), denoted as \(dp_{j}\), is the number of jobs on the longest directed path ending at \(j\) in \(G\). It is easy to see that the maximum depth of the jobs in \(G\) is equal to the height of the graph \(G\). Given a precedence graph, one can easily compute the depth \(dp_{j}\) of each job \(j\). In addition, we assume that the processing times of the jobs are constrained. We first consider the case where the processing times of the jobs differ from one another by at most a factor \(c\), i.e. \(p_{max}\leq c\cdot p_{min}\), where \(p_{max}=\max_{1\leq j\leq n}\{p_{j}\}\), \(p_{min}=\min_{1\leq j\leq n}\{p_{j}\}\), and \(c\) is a constant integer. Using the three-field notation, we denote this problem as \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\). We then consider the more general case where the largest \(\alpha n\) jobs have no more than \(c\) factor difference for some constant \(\alpha\) with \(0<\alpha\leq 1\). For a given set of \(n\) jobs, let \([j]\) be the \(j\)-th smallest job.
Then \(p_{[1]}\) is the smallest processing time and \(p_{[n]}\) is the largest. We denote this more general problem as \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\). Our goal is to develop sublinear approximation algorithms, that is, approximation algorithms using only sublinear time or sublinear space, for these two versions of precedence constrained scheduling problems.
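Computing the depths is a standard dynamic program over a topological order of \(G\). The following minimal Python sketch (an illustration only, not one of the paper's algorithms; the function and variable names are ours) shows one way to do it when the whole DAG is available as a list of arcs \(\langle i,j\rangle\):

```python
from collections import defaultdict, deque

def job_depths(n, arcs):
    """Compute dp_j for jobs 1..n, where an arc (i, j) means job i precedes job j.
    dp_j is the number of jobs on the longest directed path ending at j."""
    succ = defaultdict(list)
    indeg = [0] * (n + 1)
    for i, j in arcs:
        succ[i].append(j)
        indeg[j] += 1
    dp = [1] * (n + 1)                      # every job has depth at least 1
    queue = deque(j for j in range(1, n + 1) if indeg[j] == 0)
    while queue:                            # process jobs in topological order
        i = queue.popleft()
        for j in succ[i]:
            dp[j] = max(dp[j], dp[i] + 1)
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return dp[1:]                           # entry j-1 is the depth dp_j of job j

# Example: arcs 1->3, 2->3, 3->4 give depths [1, 1, 2, 3], so h = 3.
```

This takes \(O(n+e)\) time and space, so it only applies when the whole precedence graph is stored; the streaming algorithms in Section 3 instead maintain the depths incrementally as the arcs arrive.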
### New Contributions
In this work, we develop two types of sublinear approximation algorithms for the classical parallel machine scheduling problems where the precedence graph has bounded depth \(h\) and the processing times of jobs are constrained. Specifically, our contributions are listed as follows:
1. We develop two streaming approximation schemes for the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) depending on whether \(c\), \(h\), and each job's depth are known or not. The algorithms are then extended to solve the more general problem where the largest \(\alpha n\) jobs have no more than \(c\) factor difference for some constant \(\alpha\), \(0<\alpha\leq 1\), \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\).
2. We develop the first randomized approximation schemes that use only sublinear time for both problems. In particular, for the more general problem, \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), when \(m\leq\frac{\alpha n\epsilon}{20c^{2}\cdot h}\), our algorithm is a randomized \((1+\epsilon)\)-approximation scheme that runs in time \(O(\frac{c^{4}h^{2}m^{2}}{\alpha^{3}\epsilon^{6}}\log^{2}(\frac{cn}{\epsilon})\log(\frac{h}{\epsilon}\log(\frac{cn}{\epsilon})))\).
3. Our approximation results greatly complement the in-approximability results of the studied problems. When \(m=\frac{n}{3}\), even if \(h=3\) and \(c=1\), the problems cannot be approximated within a factor of \(\frac{4}{3}\) in polynomial time unless P=NP (see Section 2 for reference). Surprisingly, our results show that when \(m\) is a little bit smaller, i.e., upper bounded by \(n\) times a factor that depends on \(\epsilon\), \(h\) and \(\alpha\), then the problems admit polynomial time approximation schemes. For example, if \(m\leq\frac{n}{15}\), \(h=3\) and \(c=1\), then there is a polynomial time 1.3-approximation for \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\).
4. We provide a methodological framework for designing sublinear approximation algorithms that can be used for solving other scheduling problems. The framework starts with generating the "sketch of input", which is a summarized description of the input jobs, then computes an
approximate value of the optimal criterion, and finally generates the "sketch of schedule", a succinct description of a schedule that achieves the approximate value.
We introduce the concept of "sketch of schedule" for the applications where not only an approximate value, but also a schedule associated with the approximate value is needed. As illustrated in the paper, we can use the "sketch of schedule" to easily generate a real schedule when the complete jobs information is read.
The paper is organized as follows. In Section 2, we give the complexity of the studied scheduling problems. In Section 3, we present the streaming algorithms for our problems. In Section 4, we design the randomized sublinear time algorithms for our problems. Finally, we draw the concluding remarks in Section 5.
## 2 Complexity
From the introduction, we know that if the jobs have unit processing time, then \(P_{m}\mid prec,dp_{j}\leq h,p_{j}=1\mid C_{max}\) is solvable in \(O(n^{h(m-1)+1})\) time which is polynomial if \(m\) is constant (see [9] for reference); however, the problem with arbitrary \(m\), \(P\mid prec,dp_{j}\leq h,p_{j}=1\mid C_{max}\), is NP-hard in the strong sense even if \(h=3\) (see [17] for reference). In this section, we first show that if we allow jobs to have different processing times, then even for fixed \(m\), the problem becomes NP-hard.
**Theorem 1**.: _The problem \(P_{m}\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) is NP-hard._
**Proof:** We will reduce the even-odd partition problem to a restricted even-odd partition problem, and then reduce the restricted even-odd partition problem to \(P_{2}\mid p_{max}\leq c\cdot p_{min}\mid C_{max}\), which implies that \(P_{m}\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) is NP-hard.
**Even-odd partition:** Given a set of \(2n\) integers \(B=\{b_{i},1\leq i\leq 2n\}\) such that \(b_{i}<b_{i+1}\) for all \(1\leq i<2n\), is there a partition of \(B\) into \(B_{1}\) and \(B_{2}\) such that \(B_{1}\), and hence \(B_{2}\), contains exactly one of \(\{b_{2i-1},b_{2i}\}\) for each \(1\leq i\leq n\), and \(\sum_{b_{i}\in B_{1}}b_{i}=\sum_{b_{i}\in B_{2}}b_{i}\)?
**Restricted even-odd partition:** Given a set of \(2n\) integers \(D=\{d_{i},1\leq i\leq 2n\}\) such that \(d_{i}<d_{i+1}\) for all \(1\leq i<2n\), and \(d_{2n}\leq cd_{1}\) for some constant \(c>1\), is there a partition of D into \(D_{1}\) and \(D_{2}\) such that \(D_{1}\) and hence \(D_{2}\) contains exactly one of \(\{d_{2i-1},d_{2i}\}\) for each \(1\leq i\leq n\), and \(\sum_{d_{i}\in D_{1}}d_{i}=\sum_{d_{i}\in D_{2}}d_{i}\)?
Given an arbitrary instance \(B=\{b_{i},1\leq i\leq 2n\}\) of the even-odd partition problem, we can reduce it to an instance of the restricted even-odd partition problem \(D=\{d_{i},1\leq i\leq 2n\}\) as follows. Without loss of generality, we can assume that \(b_{2n}>c\cdot b_{1}\) (otherwise \(B\) is already a restricted instance). Let \(Y\) be an integer such that \(Y\geq\frac{b_{2n}-cb_{1}}{c-1}\), i.e. \(b_{2n}\leq c\cdot b_{1}+(c-1)Y\). For each \(1\leq i\leq 2n\), let \(d_{i}=b_{i}+Y\). It is easy to see that \(d_{2n}=b_{2n}+Y\leq cb_{1}+c\cdot Y=c\cdot d_{1}\). Since each of \(B_{1}\) and \(B_{2}\) contains exactly \(n\) elements, shifting every element by \(Y\) changes both sums by exactly \(nY\), so there is a solution to instance \(B\) if and only if there is a solution for instance \(D\). Thus the restricted even-odd partition problem is also NP-hard. The restricted even-odd partition problem can be easily reduced to the scheduling problem \(P_{2}\mid p_{max}\leq c\cdot p_{min}\mid C_{max}\), which implies that \(P_{m}\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) is NP-hard.
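The shift \(d_{i}=b_{i}+Y\) can be checked on a small instance; the following Python snippet (with purely illustrative values, and taking the ceiling as one valid choice of \(Y\)) mirrors the construction above:

```python
import math

def restrict_instance(B, c):
    """Shift a strictly increasing even-odd partition instance B by
    Y = ceil((b_2n - c*b_1) / (c - 1)), one valid choice of Y from the proof,
    so that the largest element is at most c times the smallest."""
    Y = max(0, math.ceil((B[-1] - c * B[0]) / (c - 1)))
    return [b + Y for b in B]

# Example: B = [1, 2, 3, 100] and c = 2 give Y = 98 and D = [99, 100, 101, 198];
# indeed 198 <= 2 * 99, and a balanced even-odd partition of D corresponds to one of B,
# since every admissible partition puts exactly n elements (here 2) on each side.
```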
The next theorem shows the inapproximability of our problems. In the strong NP-hardness proof of \(P\mid prec,p_{j}=1\mid C_{max}\) in [17], the scheduling instance created from the instance of the clique problem has a precedence graph of height 3, and there is a schedule of the \(n=3m\) jobs with makespan 3 if and only if there is a solution to the clique instance. This implies that any polynomial time algorithm with approximation ratio less than 4/3 would have to produce an optimal schedule on such instances, which is impossible unless P=NP.
**Theorem 2**.: _Given any \(\epsilon>0\), unless P=NP, there is no polynomial time \((4/3-\epsilon)\)-approximation algorithm for \(P\mid prec,dp_{j}\leq h,p_{j}=1\mid C_{max}\) even if \(h=3\)._
Despite the inapproximability result from Theorem 2, in the next two sections we will develop approximation schemes that use only sublinear space or sublinear time for our studied problems when \(m\) is upper bounded by a suitable fraction of \(n\).
## 3 Streaming Algorithms using Sublinear Space
At the conceptual level, our streaming algorithms have the following two stages:
**Stage 1:**: Generate and store a sketch of the input while reading the input stream.
**Stage 2:**: Compute an approximation of the optimal value based on the sketch of the input.
Roughly speaking, the sketch of the input is a summary of the input jobs which requires only sublinear space. Instead of storing the accurate processing times of the jobs, we map each job's
processing time into the range of \([(1+\delta)^{u},(1+\delta)^{u+1})\) where \(\delta\) is a parameter. Thus we only need to store the number of jobs that are mapped in each range for each depth. We then use the rounded processing time for each job to obtain the approximation of the optimal makespan. A formal definition of the sketch for our problems is given below.
**Definition 3**.: For a given parameter \(\delta\), and an instance of the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) or \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), the **sketch of the input** with respect to \(\delta\), denoted as \(SKJ_{\delta}=\{(d,u,n_{d,u})\}\), consists of a set of tuples, \((d,u,n_{d,u})\), where \(n_{d,u}\) is the number of jobs with the depth \(d\) and the processing time in the range of \([(1+\delta)^{u},(1+\delta)^{u+1})\).
The size of the sketch, which is the number of tuples \((d,u,n_{d,u})\), may be different for different problems and different types of stream input. In some cases, for example, we disregard jobs with small processing times.
In the following subsection, we will first present our streaming algorithms for the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\). For the stream input, we consider both the case where \(c\), \(h\) and \(dp_{j}\), \(1\leq j\leq n\), are given and the case where this information is not directly given. We will then adapt our algorithms to the more general problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\).
### Streaming Approximation Schemes for \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\)
#### 3.1.1 The parameters \(c\), \(h\), and \(dp_{j}\) are known
We study the problem under the data stream model assuming \(c\), \(h\), and \(dp_{j}\) are known. The jobs are given via the stream and each job \(j\) is described by a pair \((p_{j},dp_{j})\), where \(p_{j}\) and \(dp_{j}\) are job \(j\)'s processing time and depth, respectively. Without loss of generality, we can assume \(p_{j}\in[1,c]\) in this case. The algorithm is simple: scan the jobs from the stream and generate the sketch of the input, \(SKJ_{\delta}=\{(d,u,n_{d,u})\}\), where \(n_{d,u}\) is the number of jobs with the depth \(d\) and the processing time in the range of \([(1+\delta)^{u},(1+\delta)^{u+1})\); for each \(d\), \(1\leq d\leq h\), compute the length of the time interval where all the jobs with the depth \(d\) can be feasibly scheduled, and then return the total length of these intervals. The complete algorithm is given in Streaming-Algorithm1.
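The following minimal Python sketch illustrates the two stages just described. It is an illustration rather than the exact listing of Streaming-Algorithm1: the choices \(\delta=\epsilon/3\), \(rp_{u}=(1+\delta)^{u+1}\) for \(u<k\) and \(rp_{k}=c\) mirror the corresponding steps in Streaming-Algorithm2 and Randomized-Algorithm1 below, and the final sum follows the proof of Theorem 4.

```python
import math

def streaming_alg1_sketch(job_stream, m, c, h, eps):
    """One pass over (p_j, dp_j) pairs with 1 <= p_j <= c and 1 <= dp_j <= h."""
    delta = eps / 3.0
    k = int(math.floor(math.log(c, 1 + delta))) if c > 1 else 0
    counts = [[0] * (k + 1) for _ in range(h + 1)]          # counts[d][u] = n_{d,u}
    for p, d in job_stream:                                  # Stage 1: build SKJ_delta
        u = min(int(math.floor(math.log(p, 1 + delta))), k)
        counts[d][u] += 1
    rp = [(1 + delta) ** (u + 1) for u in range(k)] + [c]    # rounded processing times rp_u
    A = 0
    for d in range(1, h + 1):                                # Stage 2: approximate makespan
        A_d = sum(counts[d][u] * rp[u] for u in range(k + 1)) / m
        A += math.floor(A_d) + c                             # interval length for depth d
    return A
```

The sketch uses \(O(hk)=O(\frac{h\log c}{\epsilon})\) counters and constant work per streamed job, matching the bounds claimed in Theorem 4 below.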
**Theorem 4**.: _For any \(\epsilon\), when \(m\leq\frac{2n\epsilon}{3\cdot h\cdot c}\), Streaming-Algorithm1 is a one-pass streaming approximation scheme for \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) that uses \(O(\frac{h\log c}{\epsilon})\) space, \(O(1)\) update
time for each job in the stream, and \(O(\frac{h\log c}{\epsilon})\) time to return the approximate makespan._
**Proof:** First we consider the complexity. The space complexity is dominated by the sketch, for which we can use a two-dimensional array of size \(h\cdot k=O(\frac{h\log c}{\epsilon})\), where \(k=\lfloor\log_{1+\delta}c\rfloor\). It is easy to see that the update time for each job is \(O(1)\). Finally, it takes \(O(h\cdot k)=O(\frac{h\log c}{\epsilon})\) time to compute and return the approximate value \(A\).
Now we consider the approximation ratio of the algorithm. Let \(I\) be the input instance, and let \(I^{\prime}\) be the instance corresponding to the sketch \(SKJ_{\delta}\), which consists of \(n_{d,u}\) jobs that have processing time \(rp_{u}\) for each \(d\), \(u\), where \(rp_{u}=(1+\delta)^{u+1}\) for \(u<k\) and \(rp_{k}=c\). Alternatively, we can also view \(I^{\prime}\) as being obtained from \(I\) by rounding up the processing time of each job. Let \(C^{*}_{max}\) and \(C^{\prime}_{max}\) be the optimal makespans for the instances \(I\) and \(I^{\prime}\), respectively. It is easy to see that \(C^{*}_{max}\leq C^{\prime}_{max}\leq(1+\delta)C^{*}_{max}\). In the following, we prove that the returned value from Streaming-Algorithm1, \(A\), satisfies the inequality \(C^{*}_{max}\leq A\leq(1+\epsilon)C^{*}_{max}\).
First, we show that \(A\) is an upper bound of \(C^{\prime}_{max}\). Consider a schedule \(S\) for the jobs of instance \(I^{\prime}\) which schedules the jobs as follows: the jobs at each depth \(d\) are scheduled using list scheduling rule (i.e. schedule the jobs one by one in the given order to the machine that is available at the earliest time), and jobs with depth \(d+1\) can start only after all jobs with depth \(d\) complete. It is easy to see that in \(S\) the jobs at a depth \(d\) are scheduled into an interval of length at most
\(\lfloor\frac{1}{m}\sum_{u=0}^{k}(n_{d,u}\cdot rp_{u})\rfloor+c=\lfloor A_{d}\rfloor+c\). Therefore, the makespan of the feasible schedule \(S\) is at most \(\sum_{d=1}^{h}(\lfloor A_{d}\rfloor+c)=A\), which implies that \(A\geq C_{max}^{\prime}\), where \(C_{max}^{\prime}\) is the optimal makespan for the instance \(I^{\prime}\).
On the other hand, it is obvious that \(\sum_{d=1}^{h}A_{d}=\sum_{d=1}^{h}(\frac{1}{m}\sum_{u=0}^{k}(n_{d,u}\cdot rp_{u }))\) is a lower bound of \(C_{max}^{\prime}\), and \(C_{max}^{*}\leq C_{max}^{\prime}\leq(1+\delta)C_{max}^{*}\). Thus,
\[A=\sum_{d=1}^{h}\left(\lfloor A_{d}\rfloor+c\right)\leq\left(\sum_{d=1}^{h}A_ {d}\right)+h\cdot c\leq C_{max}^{\prime}+h\cdot c\leq(1+\delta)C_{max}^{*}+h \cdot c.\]
Since \(C_{max}^{*}\geq\frac{n}{m}\), when \(m\leq\frac{2n\epsilon}{3\cdot h\cdot c}\), we have \(h\cdot c\leq\frac{2\epsilon}{3}\frac{n}{m}\leq\frac{2\epsilon}{3}C_{max}^{*}\). Therefore,
\[A\leq(1+\delta)C_{max}^{*}+\frac{2\epsilon}{3}C_{max}^{*}=(1+\frac{\epsilon}{ 3})C_{max}^{*}+\frac{2\epsilon}{3}C_{max}^{*}\leq(1+\epsilon)C_{max}^{*}.\]
In summary, we have \(C_{max}^{*}\leq C_{max}^{\prime}\leq A\leq(1+\epsilon)C_{max}^{*}\), and this completes the proof.
Recall the inapproximability result for the problem \(P\mid prec,dp_{j}\leq h,p_{j}=1\mid C_{max}\) from Theorem 2, which tells us that no approximation better than \(\frac{4}{3}\) is possible in polynomial time, even if the height is bounded and all jobs have unit processing time, unless P=NP. Our result from Theorem 4 surprisingly shows that if \(m\) is bounded by a suitable fraction of \(n\), then we can get a \((1+\epsilon)\)-approximation even if the processing times differ by up to a constant factor. For example, if \(m\leq\frac{n}{15}\), \(h=3\) and \(c=1\), then we can get a 1.3-approximation.
Theorem 4 also shows that Streaming-Algorithm1 only takes constant time to read and process each job in the stream input, and then constant (independent of \(n\)) time and space to return an approximation of the makespan if the parameters are known. In some cases, however, we may not know the exact value of \(c\), but we are given an estimate \(\hat{c}\) of the parameter \(c\). In these cases, we can still apply Streaming-Algorithm1 by using \(\hat{c}\). As long as \(\frac{\hat{c}}{c}\) is a constant, we still have a \((1+\epsilon)\)-approximation with the same space and time complexity.
#### 3.1.2 The parameters \(c\), \(h\), and \(dp_{j}\) are unknown
In this subsection, we consider the case that the parameters \(c\) (or the estimate \(\hat{c}\)) and \(h\) are not known; furthermore, the depths of the jobs are not given directly as in the previous subsection. Instead, the stream input consists of all the jobs \((j,p_{j})\) in arbitrary order followed by all the arcs \(\langle i,j\rangle\) of
the precedence graph in topological order.
In this case, we need to compute and update both the depth of each job and the sketch of the input dynamically as we read the input. We use a B-tree to maintain the sketch tuples of the input \((d,u,n_{d,u})\), where \((d,u)\) is the key. We define a linear order to compare two tuples: we say \((d_{1},u_{1},n_{d_{1},u_{1}})<(d_{2},u_{2},n_{d_{2},u_{2}})\) if 1) \(u_{1}<u_{2}\), or 2) \(u_{1}=u_{2}\) and \(d_{1}<d_{2}\). Additionally, we use an array \(B\) to store the jobs' information: for each job \(j\) with processing time \(p_{j}\), we maintain a pair \((dp_{j},u_{j})\), where \(dp_{j}\) represents its current depth, and \(u_{j}=\left\lfloor\log_{1+\delta}p_{j}\right\rfloor\).
When each job \((j,p_{j})\) arrives in the stream input, we update job \(j\)'s entry in the array \(B\) such that \(dp_{j}=1\) and \(u_{j}=\left\lfloor\log_{1+\delta}p_{j}\right\rfloor\), and then create or update the node \((1,u_{j},n_{1,u_{j}})\) in the tree. Simultaneously we update the smallest processing time \(p_{min}\) and the largest processing time \(p_{max}\). After all the jobs are read in, we can get the final \(p_{min}\) and \(p_{max}\) and compute \(c=\left\lceil p_{max}/p_{min}\right\rceil\).
When each arc \(\langle i,j\rangle\), which indicates job \(i\) is the direct predecessor of job \(j\), arrives in the stream input, we access job \(i\)'s and \(j\)'s entries in the array to obtain their keys \((d_{i},u_{i})\) and \((d_{j},u_{j})\), and compute \(dp_{j}=\max\left(d_{j},d_{i}+1\right)\). If \(dp_{j}>d_{j}\), we will update the node \((d_{j},u_{j},n_{d_{j},u_{j}})\) by setting \(n_{d_{j},u_{j}}=n_{d_{j},u_{j}}-1\) or delete this node if \(n_{d_{j},u_{j}}\) becomes 0; and then update the node \((dp_{j},u_{j},n_{dp_{j},u_{j}})\) by setting \(n_{dp_{j},u_{j}}=n_{dp_{j},u_{j}}+1\) or insert a new node if the node with the key \((dp_{j},u_{j})\) does not exist in the tree. The job \(j\)'s entry in the array \(B\) is also updated with \((dp_{j},u_{j})\). After all the arcs are read in, we can get the sketch of the stream input and the largest depth \(h\). The complete algorithm is given in Streaming-Algorithm2.
**Theorem 5**.: _If the parameters \(c\) and \(h\) are not known and both the jobs and the precedence graph (in topological order) are input via the stream, then for any \(\epsilon\), when \(m\leq\frac{2n\epsilon}{3\cdot h\cdot c}\), Streaming-Algorithm2 is a one-pass streaming approximation scheme for \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) that uses \(O(n)\) space, takes \(O(\log(\frac{h}{\epsilon}\log c))\) update time for processing each job and each arc in the stream, and \(O(\frac{h\log c}{\epsilon})\) time to return the approximate makespan._
**Proof:** The main difference between Streaming-Algorithm2 and Streaming-Algorithm1 is the implementation. The analysis of the approximation ratio is similar to Theorem 4. We will use the same notation as in the proof of Theorem 4. So \(C_{max}^{*}\) is the optimal makespan of the input instance, and \(C_{max}^{\prime}\) is the optimal makespan for the instance \(I^{\prime}\) corresponding to the sketch \(SKJ_{\delta}\). We can construct a schedule \(S\) for \(I^{\prime}\) whose makespan is at most \(\sum_{d=1}^{h}(\lfloor A_{d}\rfloor+p_{max})=A\), which implies
```
Parameters: \(\epsilon\), \(m\)
Stream input: the set of jobs in arbitrary order, \((j,p_{j})\), \(1\leq j\leq n\), followed by the set of arcs of the precedence graph in topological order
Output: An approximate value of the optimal makespan
3:create an empty B-tree \(T\) and an array \(B\) of size \(n\)
4:initialize \(p_{min}=\infty\), \(p_{max}=1\), \(h=1\)
5:let \(\delta=\frac{\epsilon}{3}\)
6:read the input stream and generate the sketch of the input \(SKJ_{\delta}\):
7:for each job \(j\) with \((j,p_{j})\) in the stream input do
8: let \(u=\left\lfloor\log_{1+\delta}p_{j}\right\rfloor\)
9:\(B[j]=(1,u)\)
10:if there is a node \((1,u,n_{1,u})\) in the tree \(T\)then
11: update this node by setting \(n_{1,u}=n_{1,u}+1\)
12:else
13: create and insert a node \((1,u,1)\) into \(T\)
14:endif
15:if \(p_{min}>p_{j}\), \(p_{min}=p_{j}\)
16:if \(p_{max}<p_{j}\), \(p_{max}=p_{j}\)
17:endfor
18:for each arc \(\langle i,j\rangle\), in the stream input do
19: let \((d_{i},u_{i})=B[i]\) and \((d_{j},u_{j})=B[j]\)
20:if\(d_{i}+1>d_{j}\)then
21:\(dp_{j}=d_{i}+1\)
22:\(B[j]=(dp_{j},u_{j})\)
23: update the node \((d_{j},u_{j},n_{d_{j},u_{j}})\) in \(T\) as below
24:\(n_{d_{j},u_{j}}=n_{d_{j},u_{j}}-1\)
25:if\(n_{d_{j},u_{j}}=0\), delete this node
26:if the node with the key \((dp_{j},u_{j})\) does not exist in the tree then
27: insert a new node \((dp_{j},u_{j},1)\)
28:else
29: update the node \((dp_{j},u_{j},n_{dp_{j},u_{j}})\) in \(T\) by setting \(n_{dp_{j},u_{j}}=n_{dp_{j},u_{j}}+1\)
30:endif
31:endif
32:if \(h<dp_{j}\), set \(h=dp_{j}\)
33:endfor
34: traverse all the nodes \((d,u,n_{d,u})\) in \(T\)
35: let \(SKJ_{\delta}=\{(d,u,n_{d,u})\}\)
36: compute the approximate makespan
37: let \(u_{-}=\left\lfloor\log_{1+\delta}p_{min}\right\rfloor\) and \(u_{+}=\left\lfloor\log_{1+\delta}p_{max}\right\rfloor\)
38: let \(rp_{u_{+}}=p_{max}\)
39: for each \(u_{-}\leq u<u_{+}\)
40: let \(rp_{u}=(1+\delta)^{u+1}\)
41:for each \(d\)
42: let \(A_{d}=\frac{1}{m}\sum_{u=u_{-}}^{u_{+}}(n_{d,u}\cdot rp_{u})\)
43: let \(A=\sum_{d=1}^{h}(\left\lfloor A_{d}\right\rfloor+p_{max})\)
44:return \(A\)
```
**Algorithm** Streaming-Algorithm2
that \(A\geq C^{\prime}_{max}.\)
It is obvious that \(C^{\prime}_{max}\geq\sum_{d=1}^{h}A_{d}\). Thus,
\[A=\sum_{d=1}^{h}\left(\lfloor A_{d}\rfloor+p_{max}\right)\leq\left(\sum_{d=1}^{h} A_{d}\right)+h\cdot p_{max}\leq C^{\prime}_{max}+h\cdot p_{max}\leq(1+\delta)C^{*}_{ max}+h\cdot p_{max}.\]
Since \(p_{max}\leq c\cdot p_{min}\) and \(C^{*}_{max}\geq\frac{n\cdot p_{min}}{m}\), we have \(h\cdot p_{max}\leq h\cdot c\cdot p_{min}\leq h\cdot c\cdot\frac{m}{n}C^{*}_{max}\). When \(m\leq\frac{2n\epsilon}{3\cdot h\cdot c}\), we get \(h\cdot p_{max}\leq\frac{2\epsilon}{3}C^{*}_{max}.\) Therefore,
\[A\leq(1+\delta)C^{*}_{max}+\frac{2\epsilon}{3}C^{*}_{max}=(1+\frac{\epsilon}{ 3})C^{*}_{max}+\frac{2\epsilon}{3}C^{*}_{max}=(1+\epsilon)C^{*}_{max}.\]
In summary, we have \(C^{*}_{max}\leq C^{\prime}_{max}\leq A\leq(1+\epsilon)C^{*}_{max}\).
Now we analyze the complexity of Streaming-Algorithm2. The number of nodes in the B-tree \(T\) is at most \(O(h\log_{1+\delta}\lceil\frac{p_{max}}{p_{min}}\rceil)=O(h\log_{1+\delta}c)=O(\frac{h}{\epsilon}\log c)\). So when each job or arc is read from the stream input, the corresponding update time for a search, insertion or update operation on the B-tree is always \(O(\log(h\log_{1+\delta}c))=O(\log(\frac{h}{\epsilon}\log c))\). After the input is read in, it takes an additional \(O(\frac{h}{\epsilon}\log c)\) time to traverse the B-tree and compute the approximation of the optimal value. The stream input size is \(O(n+e)\), where \(n\) is the number of jobs and \(e\) is the number of arcs of the precedence graph. Streaming-Algorithm2 uses only \(O(n)\) space to store the array \(B\) and the tree \(T\), which is sublinear in the input size since a dense precedence graph can have \(e=O(n^{1+\beta})\) arcs for some \(0<\beta\leq 1\).
### Streaming Approximation Algorithms for \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\)
In this subsection, we consider the more general case where the largest \(\alpha n\) jobs have no more than \(c\) factor difference for some constant \(0<\alpha\leq 1\). Apparently, the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) is the special case where \(\alpha=1\). Following the same procedure as our streaming algorithms above, we need to compute the sketch of the input \(SKJ_{\delta}=\{(d,u,n_{d,u})\}\). However, different from the case \(\alpha=1\), i.e., the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\), for which there are only a constant number \(O(\frac{h\log c}{\epsilon})\) of entries in the sketch of the input, for the problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), if we consider all jobs in the sketch there may be a
very large number of entries in the sketch of the input since \(p_{max}\) may be very large compared with \(p_{min}\). We will show in the following that when we generate the sketch of the input we can ignore those small jobs whose processing time is less than \(\frac{p_{max}}{n^{2}}\) and still get a good approximation of the optimal makespan using only sublinear space.
#### 3.2.1 The parameters \(c\), \(h\), and \(dp_{j}\) are known
We study the streaming algorithm for \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\) when the parameters \(c\), \(h\), and \(dp_{j}\) for all \(1\leq j\leq n\) are known. As mentioned above, the jobs with processing time less than \(\frac{p_{max}}{n^{2}}\) will not be included in the sketch \(SKJ_{\delta}\). Specifically, \(SKJ_{\delta}=\{(d,u,n_{d,u}):u_{-}\leq u\leq u_{+},\,1\leq d\leq h\}\), where \(u_{-}=\lfloor\log_{1+\delta}\frac{p_{max}}{n^{2}}\rfloor\), and \(u_{+}=\lfloor\log_{1+\delta}p_{max}\rfloor\). Without loss of generality, we assume \(p_{max}\) is not known until all input is read. So \(p_{max}\) in our algorithm represents the current maximum processing time of the jobs that we have read so far. We use a \(B\)-tree to store all the considered tuples, \((d,u,n_{d,u})\). When a job \(j\) with \((p_{j},dp_{j})\) arrives, if \(p_{j}<\frac{p_{max}}{n^{2}}\), we skip this job and continue to read the next job. Otherwise, let \(d=dp_{j}\) and \(u=\lfloor\log_{1+\delta}p_{j}\rfloor\), and we update the B-tree as follows: if \((d,u,n_{d,u})\) exists in the tree, update this node with \((d,u,n_{d,u}+1)\); otherwise, insert a new node \((d,u,1)\). To limit the number of nodes in the tree, whenever a new node is inserted, we check the node with the smallest \(u\), say \((d^{\prime},u^{\prime},n_{d^{\prime},u^{\prime}})\); if \(u^{\prime}<\lfloor\log_{1+\delta}\frac{p_{max}}{n^{2}}\rfloor\), we delete this node. The final sketch of the input \(SKJ_{\delta}\) includes only the tuples \((d,u,n_{d,u})\) from the \(B\)-tree such that \(u_{-}\leq u\leq u_{+}\). We present our algorithm formally in Streaming-Algorithm3.
**Theorem 6**.: _When \(m\leq\frac{2n\alpha\epsilon}{3(h+1)\cdot c}\), Streaming-Algorithm3 is a streaming approximation scheme for the problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\) that uses \(O(\frac{h}{\epsilon}\log n)\) space, takes \(O(\log\frac{h}{\epsilon}+\log\log n)\) update time for each job in the stream, and \(O(\frac{h}{\epsilon}\log n)\) time to return an approximate value that is at most \((1+\epsilon)\) times the optimal makespan._
**Proof:** We first analyze the approximation ratio. Let \(I\) be the given instance. Let \(I^{\prime}\) be the instance obtained from \(I\) by rounding up all the jobs with processing times greater than or equal to \(\frac{p_{max}}{n^{2}}\), i.e., for each job \(j\) in \(I\), if \(p_{j}\geq\frac{p_{max}}{n^{2}}\), we round it up to \(rp_{u}\), where \(u=\lfloor\log_{1+\delta}p_{j}\rfloor\); otherwise, we keep it the same as before. Let \(C^{*}_{max}\) and \(C^{\prime}_{max}\) be the optimal makespans for \(I\) and \(I^{\prime}\), respectively. Let \(I^{\prime\prime}\) be the instance corresponding to the sketch \(SKJ_{\delta}\). Apparently \(I^{\prime\prime}\) can be obtained from \(I^{\prime}\) by removing the small jobs whose processing time is less than \(\frac{p_{max}}{n^{2}}\). Let
```
Parameters: \(\epsilon\), \(m\), \(n\), \(\alpha\), \(c\) and \(h\)
Stream input: \((p_{j},dp_{j})\) for all jobs \(1\leq j\leq n\)
Output: An approximate value of the optimal makespan
3:let \(\delta=\frac{\epsilon}{3}\)
4:create an empty B-tree \(T\)
5:initialize \(p_{max}=1\)
6:read the input stream and generate the sketch of the input \(SKJ_{\delta}\):
7:for each job \(j\) with \((p_{j},dp_{j})\) in the stream input do
8:if\(p_{j}<\frac{p_{max}}{n^{2}}\)then
9: skip this job and continue the next job
10:else
11:if\(p_{max}<p_{j}\), \(p_{max}=p_{j}\)
12:let \(d=dp_{j}\), \(u=\left\lfloor\log_{1+\delta}p_{j}\right\rfloor\), and update B-tree as follows:
13:if node \((d,u,n_{d,u})\) exists in the tree then
14: update the node with \(n_{d,u}=n_{d,u}+1\)
15:else
16:insert a new node \((d,u,1)\)
17:let \((d^{\prime},u^{\prime},n_{d^{\prime},u^{\prime}})\) be the node with the smallest \(u\)
18:if\(u^{\prime}<\log_{1+\delta}\frac{p_{max}}{n^{2}}\), delete \((d^{\prime},u^{\prime},n_{d^{\prime},u^{\prime}})\) from the tree
19:endif
20:endif
21:endfor
22:let \(u_{-}=\left\lfloor\log_{1+\delta}\frac{p_{max}}{n^{2}}\right\rfloor\), \(u_{+}=\left\lfloor\log_{1+\delta}p_{max}\right\rfloor\)
23:traverse \(T\) and generate the sketch using only the nodes with \(u_{-}\leq u\leq u_{+}\)
24:\(SKJ_{\delta}=\{(d,u,n_{d,u}):1\leq d\leq h,u_{-}\leq u\leq u_{+}\}\)
25:compute the approximate makespan
26:let \(rp_{u_{+}}=p_{max}\)
27:for each \(u_{-}\leq u<u_{+}\)
28:let \(rp_{u}=(1+\delta)^{u+1}\)
29:for each \(d\)
30:let \(A_{d}=\frac{1}{m}\sum_{u=u_{-}}^{u_{+}}(n_{d,u}\cdot rp_{u})\)
31:let \(A=(\sum_{d=1}^{h}(\left\lfloor A_{d}\right\rfloor+p_{max}))+\left\lceil\frac{p _{max}}{n}\right\rceil\)
32:return \(A\)
```
**Algorithm** Streaming-Algorithm3
\(C^{\prime\prime}_{max}\) be the optimal makespan for \(I^{\prime\prime}\). Then we have \(C^{\prime\prime}_{max}\geq\sum_{d=1}^{h}A_{d}\). It is easy to see that \(C^{\prime\prime}_{max}\leq C^{\prime}_{max}\leq(1+\delta)C^{*}_{max}\), and \(C^{*}_{max}\leq C^{\prime}_{max}\leq C^{\prime\prime}_{max}+n\cdot\frac{p_{max }}{n^{2}}=C^{\prime\prime}_{max}+\frac{p_{max}}{n}\).
As before, we can construct a schedule \(S\) for \(I^{\prime\prime}\) using list scheduling rule to schedule the jobs depth by depth starting with \(d=1\). To get a schedule for all jobs in \(I^{\prime}\) based on \(S\), for each depth \(d\), we can simply insert into \(S\) all the small jobs of this depth onto the first machine after all big jobs of depth \(d\) finish and before the first big job of \(d+1\) starts. Let the new schedule be \(S^{\prime}\). Apparently the makespan of \(S^{\prime}\) is at least \(C^{*}_{max}\) and at most \(A=\sum_{d=1}^{h}(\lfloor A_{d}\rfloor+p_{max})+\lceil\frac{p_{max}}{n}\rceil\). Thus, we have \(A\geq C^{*}_{max}\) and
\[A=\left(\sum_{d=1}^{h}(\lfloor A_{d}\rfloor+p_{max})\right)+\lceil\frac{p_{max }}{n}\rceil\leq C^{\prime\prime}_{max}+h\cdot p_{max}+\lceil\frac{p_{max}}{n}\rceil. \tag{1}\]
Since the largest \(\alpha n\) jobs have no more than \(c\) factor difference, each of the largest \(\alpha n\) jobs has processing time at least \(\frac{p_{max}}{c}\). Thus, we have
\[C^{*}_{max}\geq\alpha\cdot n\cdot\frac{p_{max}}{c}\cdot\frac{1}{m}=\frac{ \alpha n}{c\cdot m}p_{max},\]
which implies \(p_{max}\leq\frac{c\cdot m}{\alpha\cdot n}C^{*}_{max}\). If we plug this into inequality (1), we get
\[A \leq C^{\prime\prime}_{max}+h\cdot p_{max}+\lceil\frac{p_{max}}{n}\rceil\] \[\leq C^{\prime\prime}_{max}+(h+1)\cdot p_{max}\] \[\leq C^{\prime\prime}_{max}+(h+1)\cdot\frac{c\cdot m}{\alpha\cdot n}C^ {*}_{max}\] \[\leq (1+\delta)C^{*}_{max}+\frac{(h+1)\cdot c\cdot m}{\alpha n}C^{*}_ {max}\] \[\leq (1+\delta+\frac{(h+1)\cdot c\cdot m}{\alpha n})C^{*}_{max}\] \[\leq (1+\frac{\epsilon}{3}+\frac{(h+1)\cdot c\cdot m}{\alpha n})C^{*} _{max}.\]
If \(m\leq\frac{2n\alpha\epsilon}{3(h+1)\cdot c}\), we have \(A\leq(1+\frac{\epsilon}{3}+\frac{2\epsilon}{3})C^{*}_{max}=(1+\epsilon)C^{*}_ {max}\).
Now we consider the complexity. The space complexity is dominated by the B-tree. By the way it is implemented, each time a node is inserted into the tree, if there is a node \((d,u,n_{d,u})\) such that \(u<\log_{1+\delta}\frac{p_{max}}{n^{2}}\), the smallest such node will be deleted from the tree. In this way, the number of nodes in the tree is at most \(h\cdot\log_{1+\delta}n^{2}=O(\frac{h}{\epsilon}\log n)\). For each job in the stream input, a constant number of tree operations are needed, and thus the update time for processing each job is \(O(\log(\frac{h}{\epsilon}\log n))=O(\log(\frac{h}{\epsilon})+\log\log n)\). After reading all the jobs, the computation of the approximation is bounded by the size of the sketch, which is \(O(\frac{h}{\epsilon}\log n)\).
#### 3.2.2 The parameters \(c\), \(h\), and \(dp_{j}\) are unknown
We consider the problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\) when the parameters \(c\), \(h\), and \(dp_{j}\) are not known. In this case, the stream input would include the jobs followed by the arcs. As in Streaming-Algorithm2, we use a B-tree to store the sketch information, and an array to store the jobs' information. Both the array and the tree are updated when we read the jobs and arcs from the stream input. The streaming algorithm will be similar to Streaming-Algorithm2 but with some nodes for small processing times excluded as in Streaming-Algorithm3. Using similar arguments as in the proofs of Theorems 5 and 6, we can get the following theorem.
**Theorem 7**.: _If the parameters \(c\) and \(h\) are not known and the jobs and the precedence graph (in topological order) are input via the stream, then for any \(\epsilon\), when \(m\leq\frac{2n\alpha\epsilon}{3(h+1)\cdot c}\), there is a streaming approximation scheme for the problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\) that uses \(O(n)\) space, takes \(O(\log\frac{h}{\epsilon}+\log\log n)\) update time for each job in the stream, and \(O(\frac{h}{\epsilon}\log n)\) time to return the approximate value._
### The Sketch of the Schedule
All the streaming algorithms we have presented so far return an approximate value of the optimal makespan. This may be sufficient for some scheduling and planning applications. However, in many other applications, it would be desirable to have a schedule whose makespan is the approximate value. In the traditional data model, a schedule explicitly or implicitly specifies for each job on which machine and in what time interval it is scheduled. This means we need at least \(\Omega(n)\) time complexity and space complexity to describe a schedule. For the big data model, we introduce the concept of **sketch of a schedule** which is a condensed description of a schedule using only sublinear space. In the following, we first give a formal definition for the **sketch of a schedule**, then we show that our previous algorithms can compute simultaneously not only an approximate value, but also the **sketch of a schedule**, and finally we show how the sketch can be used to
generate a real schedule that achieves the approximate value when the jobs are scanned in the second pass.
**Definition 8**.: For the problems \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) and \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), the **sketch of a schedule** describes a feasible schedule and consists of a set of time instants \(t_{d}\), \(1\leq d\leq h\), such that all the jobs of depth \(d\) can be scheduled during the interval \([t_{d-1},t_{d})\) for \(1\leq d\leq h\), where \(t_{0}=0\). Mathematically we denote the **sketch of a schedule** as \(SKS=\{t_{d}:1\leq d\leq h\}\).
For the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\), by the proof of Theorem 4 we know all the jobs of depth \(d\) can be feasibly scheduled during an interval of length \(\lfloor A_{d}\rfloor+c\), which implies that \(SKS=\{t_{d}:1\leq d\leq h\}\), where \(t_{d}=\sum_{k=1}^{d}(\lfloor A_{k}\rfloor+c)\) for all \(1\leq d\leq h\), is the sketch of a schedule that can be computed by Streaming-Algorithm1.
**Lemma 9**.: _For the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\), Streaming-Algorithm1 can compute a sketch of a schedule \(SKS=\{t_{d}:1\leq d\leq h\}\) where \(t_{d}=\sum_{k=1}^{d}(\lfloor A_{k}\rfloor+c)\) for all \(1\leq d\leq h\)._
Based on the sketch of the schedule, if we scan all the jobs a second time, we can generate a feasible schedule using the Algorithm SketchToSchedule.
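The listing of Algorithm SketchToSchedule is given as a separate figure; the following minimal Python sketch shows one way such a second pass can be realized, assuming jobs arrive as \((j,p_{j},dp_{j})\) triples and \(t\) is the list \([t_{0},t_{1},\ldots,t_{h}]\) with \(t_{0}=0\) (the names are illustrative):

```python
def sketch_to_schedule(job_stream, m, t):
    """Assign each job a machine and a start time, list-scheduling the jobs of
    depth d inside the interval [t[d-1], t[d]) given by the sketch SKS."""
    # free[d-1][i] = time at which machine i becomes available inside depth d's interval
    free = [[t[d - 1]] * m for d in range(1, len(t))]
    schedule = {}
    for j, p, d in job_stream:
        times = free[d - 1]
        i = min(range(m), key=lambda x: times[x])   # earliest-available machine
        schedule[j] = (i, times[i])                  # (machine index, start time)
        times[i] += p
    return schedule
```

Because two jobs of the same depth can never be related by precedence, list scheduling them inside their own interval is feasible, and the interval lengths chosen in Lemma 9 and Theorems 10-12 guarantee that no job overruns its interval. Replacing the linear scan for the earliest-available machine by a heap brings the per-job work down to \(O(\log m)\).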
By Lemma 9 and Algorithm SketchToSchedule, we have the following theorem.
**Theorem 10**.: _For \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\), given any \(0<\epsilon<1\), when \(m\leq\frac{2n\epsilon}{3\cdot h\cdot c}\), Streaming-Algorithm1 can compute a sketch of the schedule \(SKS\) which can be applied to Algorithm SketchToSchedule to generate a feasible schedule with the makespan at most \((1+\epsilon)\) times the optimal makespan._
**Proof:** Since the total length of the jobs at depth \(d\) after rounding is \(m\cdot A_{d}\), and the largest processing time is \(c\), it is easy to see that Algorithm SketchToSchedule generates a feasible schedule of these jobs during the interval \([t_{d-1},t_{d}]\), where \(t_{d}=t_{d-1}+\lfloor A_{d}\rfloor+c\). The final schedule of all \(n\) jobs has makespan at most \(t_{h}=\sum_{d=1}^{h}(\lfloor A_{d}\rfloor+c)\), which is at most \((1+\epsilon)C_{max}^{*}\) by the proof of Theorem 4.
Similarly, Streaming-Algorithm2 can compute a sketch of a schedule \(SKS=\{t_{d}:1\leq d\leq h\}\) where \(t_{d}=\sum_{k=1}^{d}(\lfloor A_{k}\rfloor+p_{max})\) for all \(1\leq d\leq h\), and we have the following theorem.
**Theorem 11**.: _For \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\), given any \(0<\epsilon<1\), when \(m\leq\frac{2n\epsilon}{3\cdot h\cdot c}\), Streaming-Algorithm2 can compute a sketch of a schedule \(SKS\) which can be applied to Algorithm SketchToSchedule to generate a feasible schedule with the makespan at most \((1+\epsilon)\) times the optimal makespan._
For the problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), Streaming-Algorithm3 gives an approximate value of the optimal makespan. However, the small jobs from depth \(d\) are not considered when we calculate \(A_{d}\), so the sketch of the schedule is slightly different from the previous problem. We will show that in this case, the sketch of a schedule is given by \(SKS=\{t_{d}:t_{d}=t_{d-1}+(\lfloor A_{d}\rfloor+p_{max}+\lceil\frac{p_{max}}{n}\rceil),1\leq d\leq h\}\), \(t_{0}=0\).
**Theorem 12**.: _For the problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), given any \(0<\epsilon<1\), when \(m\leq\frac{2n\alpha\epsilon}{3(h+1)\cdot c}\), Streaming-Algorithm3 can compute a sketch of the schedule \(SKS=\{t_{d}:t_{d}=t_{d-1}+(\lfloor A_{d}\rfloor+p_{max}+\lceil\frac{p_{max}}{n}\rceil),1\leq d\leq h\}\), \(t_{0}=0\), and based on \(SKS\), Algorithm SketchToSchedule can generate a feasible schedule with the makespan at most \((1+\epsilon)\) times the optimal makespan._
**Proof:** From the proof of Theorem 6, we know that an interval of length \(\lfloor A_{d}\rfloor+p_{max}\) can feasibly fit all the jobs of depth \(d\) with processing time at least \(\frac{p_{max}}{n^{2}}\). If we add an additional length of \(\lceil n\cdot\frac{p_{max}}{n^{2}}\rceil=\lceil\frac{p_{max}}{n}\rceil\) to the interval, we can guarantee that both the large jobs and the small jobs of depth \(d\) can fit. Hence, \(SKS=\{t_{d}:t_{d}=t_{d-1}+(\lfloor A_{d}\rfloor+p_{max}+\lceil\frac{p_{max}}{n}\rceil),1\leq d\leq h\}\), \(t_{0}=0\), describes a feasible schedule such that all the jobs of depth \(d\) can be scheduled during the interval \([t_{d-1},t_{d}]\). Based on the sketch \(SKS\), we can use Algorithm SketchToSchedule to generate a feasible schedule with the makespan at most
\[t_{h}=\sum_{d=1}^{h}(\lfloor A_{d}\rfloor+p_{max}+\lceil\frac{p_{max}}{n} \rceil)\leq\left(\sum_{d=1}^{h}\lfloor A_{d}\rfloor\right)+(h+1)p_{max}.\]
From the proof of Theorem 6, we know \(t_{h}\leq C_{max}^{\prime\prime}+(h+1)p_{max}\leq(1+\epsilon)C_{max}^{*}\).
In summary, if we can read the input in two passes, based on the sketch of the schedule produced by all our streaming algorithms, the Algorithm SketchToSchedule can generate a feasible schedule with the makespan at most \((1+\epsilon)\) times the optimal value.
**Theorem 13**.: _For the problems \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) and \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), when \(m\leq\frac{2n\epsilon}{3\cdot h\cdot c}\) and \(m\leq\frac{2n\alpha\epsilon}{3(h+1)\cdot c}\), respectively, there exist streaming approximation schemes that can return an approximate value and a sketch of a schedule in one pass, and output a schedule for each job in constant time in the second pass._
## 4 Randomized Sublinear Time Algorithm
In the previous section, we studied streaming algorithms, which scan all the input data, generate a sketch of the input, and use it to compute an approximate value of the optimal makespan and a sketch of a schedule that describes a feasible schedule achieving the approximate value. In this section, we study a different computing paradigm, sublinear time algorithms, which are likewise motivated by the rapid growth of data in the manufacturing and service industries. For sublinear time algorithms, our goal is to compute an approximate value of the optimal solution by considering only a fraction of the input data. As with most sublinear time algorithms, our algorithms are randomized. Like the streaming algorithms, our sublinear time algorithms also use a sketch of the input to compute the approximate value and a sketch of the schedule. The concepts of the sketch of the input and the sketch of the schedule are similar to the ones we defined for the streaming algorithms. However, since we do not read all of the input, the sketches are not exact. We call them the **estimated sketch of the input** and the **estimated sketch of the schedule**.
The **estimated sketch of the input** is an estimated summary of the \(n\) input jobs that is computed based on the sketch of \(n^{\prime}\) sample jobs. The sample size \(n^{\prime}\) is determined by the approximation ratio \(\epsilon\), and other parameters. We will show that with appropriate sample size, the estimated sketch of the input can give a good approximation of the accurate sketch of the input with high probability, and thus can give a good approximation of the optimal makespan. Formally, the **estimated sketch of the input** is defined as follows:
**Definition 14**.: For a given parameter \(\delta\), and an instance of the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) or \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), the **estimated sketch of the input** with respect to \(\delta\) is denoted as \(\widehat{SKJ}_{\delta}=\{(d,u,\hat{e}_{d,u})\}\), where \(\hat{e}_{d,u}\) is the estimated number of jobs with the depth \(d\) and the processing time in the range of \([(1+\delta)^{u},(1+\delta)^{u+1})\).
Similarly, the **estimated sketch of a schedule** is a concise description of a schedule. Based on the estimated sketch of the schedule, with high probability, we can generate a feasible schedule with the makespan of at most \((1+\epsilon)\) times the optimal makespan. Formally the **estimated sketch of a schedule** is defined as follows:
**Definition 15**.: For the problems \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) and \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), the **estimated sketch of a schedule** describes a schedule and consists of a set of time instants \(t_{d}\), \(1\leq d\leq h\), such that all the jobs of depth \(d\) are scheduled during the interval \([t_{d-1},t_{d})\) for \(1\leq d\leq h\), where \(t_{0}=0\). Mathematically we denote the **estimated sketch of a schedule** as \(\widehat{SKS}=\{t_{d}:1\leq d\leq h\}\).
At the conceptual level, our sublinear time algorithms have the following three steps:
**Step 1:** Compute the sample size \(n^{\prime}\) that is sublinear in \(n\) but is sufficient for computing an estimated sketch of the input jobs that is close to the accurate sketch for the original input data.
**Step 2:** Sample \(n^{\prime}\) jobs uniformly at random from the input, find the sketch of the sampled jobs, \(SKJ^{\prime}_{\delta}=\{(d,u,n^{\prime}_{d,u})\}\), and calculate the estimated sketch of all \(n\) jobs, \(\widehat{SKJ}_{\delta}=\{(d,u,\hat{e}_{d,u})\}\).
**Step 3:** Based on the estimated sketch of the input, compute an approximation of the optimal value and an estimate sketch of a schedule.
In the following, we first develop a sublinear time algorithm for \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) and then adapt it to solve the general problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\).
### Sublinear Time Algorithm for \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\)
As before, each job \(j\) is represented by a pair \((p_{j},dp_{j})\). Without loss of generality, we assume that \(1\leq p_{j}\leq c\) for all \(1\leq j\leq n\). Our algorithm mainly consists of the three steps described above: (1) compute the sample size \(n^{\prime}\); (2) sample \(n^{\prime}\) jobs uniformly at random, find the sketch of the sampled jobs, \(SKJ^{\prime}_{\delta}=\{(d,u,n^{\prime}_{d,u})\}\), where \(n^{\prime}_{d,u}\) is the number of sampled jobs with the depth \(d\) and the processing time in the range of \([(1+\delta)^{u},(1+\delta)^{u+1})\), and calculate the estimated sketch of the input for all \(n\) jobs, \(\widehat{SKJ_{\delta}}=\{(d,u,\hat{e}_{d,u})\}\), where \(\hat{e}_{d,u}=\frac{n}{n^{\prime}}n^{\prime}_{d,u}\) and only the tuples with \(\hat{e}_{d,u}>2\tau(n,h,c)\) are kept, \(\tau(n,h,c)\) being a threshold determined by the parameters; (3) compute an approximation of the optimal value. The algorithm is formally described in Randomized-Algorithm1.
Now we give the performance analysis for the above algorithm. The time complexity is dominated by the sampling operation. Thus we have the following lemma.
**Lemma 16**: _The running time of the algorithm is \(O(\frac{c^{2}h^{2}\log^{2}c}{\epsilon^{6}}\log(\frac{h}{\epsilon}\log c)\cdot m ^{2})\)._
**Proof:** The algorithm takes \(n^{\prime}\) random samples, and processing each sampled job takes \(O(1)\) time. So the running time of the algorithm is \(O(n^{\prime})=O(\frac{1}{\beta^{2}}\cdot\ln\frac{2}{\gamma})=O(\frac{c^{2}h^{2}k^{2}m^{2}}{\epsilon^{4}}\log(hk))=O(\frac{c^{2}h^{2}\log^{2}c}{\epsilon^{6}}\log(\frac{h}{\epsilon}\log c)\cdot m^{2})\).
From now on we focus on the accuracy analysis for our algorithm. Since in our analysis we use the bounds that Ma [18] has obtained based on the well-known Chernoff bounds (see [20]), and the union bound from probability theory, we list them in the following for reference.
**Lemma 17** (Lemma 3 in Ma [18]): _Let \(X_{1},\ldots,X_{n}\) be \(n\) independent random \(0\)-\(1\) variables and \(X=\sum_{i=1}^{n}X_{i}\)._
1. _If_ \(X_{i}\) _takes_ \(1\) _with probability at most_ \(p\) _for_ \(i=1,\ldots,n\)_, then for any_ \(\beta>0\)_,_ \(\Pr(X>pn+\beta n)<e^{-\frac{1}{3}n\beta^{2}}\)_._
2. _If_ \(X_{i}\) _takes_ \(1\) _with probability at least_ \(p\) _for_ \(i=1,\ldots,n\)_, then for any_ \(\beta>0\)_,_ \(\Pr(X<pn-\beta n)<e^{-\frac{1}{2}n\beta^{2}}\)_._
```
Parameters: \(m\), \(c\), \(h\), \(\epsilon\)
Input: jobs \((p_{j},dp_{j})\), \(1\leq j\leq n\), \(1\leq dp_{j}\leq h\)
Output: an approximation of the optimal makespan
1: compute the sample size \(n^{\prime}\):
2: let \(\delta=\frac{\epsilon}{20}\), and \(k=\left\lfloor\log_{1+\delta}c\right\rfloor\)
3: let \(p=\frac{5\delta}{2c\delta\cdot h\cdot c}\), and \(\beta=\delta p\)
4: let \(n^{\prime}=\frac{3}{\beta^{2}}\cdot\ln\frac{2}{\gamma}\), where \(\gamma=\frac{1}{10hk}\)
5: sample \(n^{\prime}\) jobs uniformly at random, and compute the sketch of the sampled jobs \(SKJ_{\delta}^{\prime}=\{(d,u,n_{d,u}^{\prime})\}\)
6: compute the estimated sketch of all jobs \(\widehat{SKJ}_{\delta}\):
7: let \(\tau(n,h,c)=n\cdot p\)
8: \(\widehat{SKJ}_{\delta}=\emptyset\)
9: for each \((d,u,n_{d,u}^{\prime})\in SKJ_{\delta}^{\prime}\) do
10: let \(\hat{e}_{d,u}=n\cdot\frac{n_{d,u}^{\prime}}{n^{\prime}}\)
11: if \(\hat{e}_{d,u}>2\tau(n,h,c)\)
12: \(\widehat{SKJ}_{\delta}=\widehat{SKJ}_{\delta}\cup\{(d,u,\hat{e}_{d,u})\}\)
13: endfor
14: compute the estimated makespan:
15: let \(rp_{k}=c\)
16: for each \(u\), \(0\leq u<k\)
17: let \(rp_{u}=(1+\delta)^{u+1}\)
18: for each \(d\), \(1\leq d\leq h\) do
19: let \(\hat{A}_{d}=\frac{1}{m}\sum_{u=0}^{k}(\hat{e}_{d,u}\cdot rp_{u})\), where \((d,u,\hat{e}_{d,u})\in\widehat{SKJ}_{\delta}\)
20: endfor
21: let \(\hat{A}=\sum_{d=1}^{h}\left(\left\lfloor\hat{A}_{d}\right\rfloor+c\right)\)
22: return \(\hat{A}\)
```
**Algorithm** Randomized-Algorithm1
**Fact 18** (Union bound).: _Let \(E_{1},E_{2},\ldots,E_{m}\) be \(m\) events that may not be independent, we have the inequality_
\[\Pr(E_{1}\cup E_{2}\ldots\cup E_{m})\leq\Pr(E_{1})+\Pr(E_{2})+\ldots+\Pr(E_{m}).\]
We will use Lemma 17 and Fact 18 to show that \(\hat{e}_{d,u}\), computed by Randomized-Algorithm1, is a good estimate of the exact number of jobs with the depth \(d\) and processing time in the range of \([(1+\delta)^{u},(1+\delta)^{u+1})\), \(n_{d,u}\). Specifically, we have that with high probability: (1) if \(n_{d,u}\) is at least \(\tau(n,h,c)\), then our estimate, \(\hat{e}_{d,u}\), is in the range of \([(1-\delta)n_{d,u},(1+\delta)n_{d,u}]\); and (2) if \(n_{d,u}<\tau(n,h,c)\), our estimated number of jobs, \(\hat{e}_{d,u}\), is no more than \(2\tau(n,h,c)\).
**Lemma 19**.: _For any \(d\), \(u\), let \(\hat{e}_{d,u}\) be the value computed by Randomized-Algorithm1, then we have:_
* _If_ \(n_{d,u}\geq\tau(n,h,c)\)_,_ \(\Pr((1-\delta)n_{d,u}\leq\hat{e}_{d,u}\leq(1+\delta)n_{d,u})\geq 1-\gamma\)_; and_
* _If_ \(n_{d,u}<\tau(n,h,c)\)_,_ \(\Pr(\hat{e}_{d,u}\leq 2\tau(n,h,c))\geq 1-\gamma\)_._
**Proof:** Let \(X_{i}\) denote the indicator random variable for the event that the \(i\)-th sample job has depth \(d\), and the processing time is in \([(1+\delta)^{u},(1+\delta)^{u+1})\). Then \(n^{\prime}_{d,u}=\sum_{i=1}^{n^{\prime}}X_{i}\). Since \(n^{\prime}\) jobs are sampled uniformly at random from \(n\) jobs, we have \(\Pr(X_{i}=1)=\frac{n_{d,u}}{n}\). For convenience, we let \(p_{0}=\frac{n_{d,u}}{n}\). By line 7 of our algorithm, \(p=\frac{\tau(n,h,c)}{n}\).
We first prove (i): if \(n_{d,u}\geq\tau(n,h,c)\), then \(\Pr((1-\delta)n_{d,u}\leq\hat{e}_{d,u}\leq(1+\delta)n_{d,u})\geq 1-\gamma\). It is sufficient to show that \(\Pr(\hat{e}_{d,u}\leq(1-\delta)n_{d,u})\leq\frac{\gamma}{2}\) and \(\Pr(\hat{e}_{d,u}\geq(1+\delta)n_{d,u})\leq\frac{\gamma}{2}\). By line 10 of the algorithm, \(\hat{e}_{d,u}=n\cdot\frac{n^{\prime}_{d,u}}{n^{\prime}}\), thus we have
\[\Pr(\hat{e}_{d,u}\leq(1-\delta)n_{d,u}) = \Pr(n\cdot\frac{n^{\prime}_{d,u}}{n^{\prime}}\leq(1-\delta)n_{d, u})\] \[= \Pr(n^{\prime}_{d,u}\leq(1-\delta)\frac{n_{d,u}}{n}\cdot n^{ \prime})\] \[= \Pr(n^{\prime}_{d,u}\leq(1-\delta)p_{0}n^{\prime})\] \[= \Pr(n^{\prime}_{d,u}\leq(p_{0}-\delta p_{0})n^{\prime})\]
If \(n_{d,u}\geq\tau(n,h,c)\), then \(Pr(X_{i}=1)=p_{0}=\frac{n_{d,u}}{n}\geq\frac{\tau(n,h,c)}{n}=p\). Using this fact, and applying Lemma 17 for the variable \(n^{\prime}_{d,u}\), \(n^{\prime}_{d,u}=\sum_{i=1}^{n^{\prime}}X_{i}\), we get
\[\Pr(n^{\prime}_{d,u}\leq(p_{0}-\delta p_{0})n^{\prime})\leq e^{-\frac{1}{2}n^ {\prime}(\delta p_{0})^{2}}\leq e^{-\frac{1}{2}n^{\prime}\beta^{2}}\leq\frac{ \gamma}{2},\]
which means that \(\Pr(\hat{e}_{d,u}\leq(1-\delta)n_{d,u})\leq\frac{\gamma}{2}\). Similarly, we have
\[\Pr(\hat{e}_{d,u}\geq(1+\delta)n_{d,u}) = \Pr(n^{\prime}_{d,u}\geq(p_{0}+\delta p_{0})n^{\prime})\] \[\leq e^{-\frac{1}{3}n^{\prime}(\delta p_{0})^{2}}\] \[\leq e^{-\frac{1}{3}n^{\prime}(\delta p)^{2}}\] \[\leq e^{-\frac{1}{3}n^{\prime}\beta^{2}}\] \[\leq \frac{\gamma}{2}.\]
Next, we show (ii): if \(n_{d,u}<\tau(n,h,c)\), \(\Pr(\hat{e}_{d,u}\leq 2\tau(n,h,c))\geq 1-\gamma\). As for (i), we prove that \(\Pr(\hat{e}_{d,u}>2\tau(n,h,c))\leq\gamma\). By line 7 of the algorithm, \(\tau(n,h,c)=n\cdot p\), and \(\hat{e}_{d,u}=n\cdot\frac{n^{\prime}_{d,u}}{n^{\prime}}\).
\[\Pr(\hat{e}_{d,u}>2\tau(n,h,c))=\Pr(\hat{e}_{d,u}>2np)=\Pr(n^{\prime}_{d,u}>2 n^{\prime}p)\leq\Pr(n^{\prime}_{d,u}>(p+\delta p)n^{\prime}).\]
If \(n_{d,u}<\tau(n,h,c)\), then \(Pr(X_{i}=1)=\frac{n_{d,u}}{n}\leq\frac{\tau(n,h,c)}{n}=p\). Using this fact, and applying Lemma 17 for the variable \(n^{\prime}_{d,u}\), \(n^{\prime}_{d,u}=\sum_{i=1}^{n^{\prime}}X_{i}\), we get
\[\Pr(n^{\prime}_{d,u}>(p+\delta p)n^{\prime})\leq e^{-\beta^{2}\frac{n^{\prime }}{3}}\leq\frac{\gamma}{2},\]
which implies that \(\Pr(\hat{e}_{d,u}>2\tau(n,h,c))\leq\frac{\gamma}{2}<\gamma\). This completes the proof.
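The guarantee of Lemma 19 is easy to check empirically. The following small simulation is an illustration only, with arbitrary example parameters: it samples \(n^{\prime}\) of \(n\) jobs without replacement and measures how often the scaled count \(n\cdot n^{\prime}_{d,u}/n^{\prime}\) lands in \([(1-\delta)n_{d,u},(1+\delta)n_{d,u}]\) when \(n_{d,u}\geq\tau(n,h,c)\).

```python
import random

def check_concentration(n=200_000, n_du=2_000, delta=0.05, n_prime=50_000, trials=200):
    """Empirical check of Lemma 19(i): how often the estimate falls in (1 +/- delta) * n_du."""
    population = [1] * n_du + [0] * (n - n_du)   # 1 marks a job of the chosen (d, u)-group
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, n_prime)
        e_hat = n * sum(sample) / n_prime
        hits += (1 - delta) * n_du <= e_hat <= (1 + delta) * n_du
    return hits / trials   # typically very close to 1.0
```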
Lemma 19 tells us that the estimated sketch of input \(\widehat{SKJ}_{\delta}\) approximates the exact sketch of input \(SKJ_{\delta}\) very well. Based on this, we will show that the estimated makespan, \(\hat{A}\), computed from the estimated sketch, is a good approximation of the optimal makespan. For the ease of our analysis and proof later, we summarize all the symbols we use in the following:
* \(I\): the input instance for the algorithm
* \(SKJ_{\delta}=\{(d,u,n_{d,u})\}\): the exact sketch of all jobs in \(I\) where \(n_{d,u}\) is the number of jobs in \(I\) with the depth \(d\) and the processing time in the range of \([(1+\delta)^{u},(1+\delta)^{u+1})\) for all \(1\leq d\leq h\) and \(1\leq u\leq k\)
* \(I_{round}\): the instance corresponding to the sketch \(SKJ_{\delta}\) with the rounded processing times for all the jobs, that is, for each \((d,u,n_{d,u})\in SKJ_{\delta}\), there are \(n_{d,u}\) jobs at depth \(d\) whose
processing times are \(rp_{u}\)
* \(I_{big}\): the instance obtained from the instance \(I_{round}\) by removing the jobs corresponding to \((d,u,n_{d,u})\) where \(n_{d,u}<3\tau(n,h,c)\) for all \(1\leq d\leq h\) and \(1\leq u\leq k\)
* \(\widehat{SKJ_{\delta}}=\{(d,u,\hat{e}_{d,u})\}\): the estimated sketch for the jobs in \(I\), which is computed by Randomized-Algorithm1, and where \(\hat{e}_{d,u}\) is the estimated value for \(n_{d,u}\). Note that only the tuples with \(\hat{e}_{d,u}>2\tau(n,h,c)\) are included in \(\widehat{SKJ_{\delta}}\).
* \(\hat{I}\): the instance corresponding to the estimated sketch \(\widehat{SKJ_{\delta}}=\{(d,u,\hat{e}_{d,u})\}\), that is, for each \((d,u,\hat{e}_{d,u})\in\widehat{SKJ_{\delta}}\), there are \(\hat{e}_{d,u}\) jobs at depth \(d\) whose processing times are \(rp_{u}\)
* \((d,u)\)-group of an instance: the group of all the jobs in the instance with depth \(d\) and processing time \(rp_{u}\)
We first compare the optimal makespan of instance \(\hat{I}\) and that of instance \(I_{big}\).
**Lemma 20**.: _Let \(C^{*}_{max}(I_{big})\) and \(C^{*}_{max}(\hat{I})\) be the optimal makespan for instances \(I_{big}\) and \(\hat{I}\) respectively, with probability of at least \(\frac{9}{10}\), we have_
\[(1-\delta)(C^{*}_{max}(I_{big})-h\cdot c)<C^{*}_{max}(\hat{I})\leq(1+\delta)C ^{*}_{max}(I_{big})+\frac{15\delta n}{m}+h\cdot c. \tag{2}\]
**Proof:** From our definition of \(I_{big}\) and \(\hat{I}\), we know that for any \((d,u)\)-group included in \(I_{big}\) we must have \(n_{d,u}\geq 3\tau(n,h,c)\), and for any \((d,u)\)-group included in \(\hat{I}\) we must have \(\hat{e}_{d,u}>2\tau(n,h,c)\). We first show that with high probability the instance \(\hat{I}\) contains all jobs from instance \(I_{big}\). Consider an arbitrary \((d,u)\)-group from \(I_{big}\); we must have \(n_{d,u}\geq 3\tau(n,h,c)\), and since \(\delta=\frac{\epsilon}{20}<\frac{1}{20}\), we have \((1-\delta)n_{d,u}\geq 2\tau(n,h,c)\). By Lemma 19, with probability at least \(1-\gamma\), we have \(2\tau(n,h,c)<(1-\delta)n_{d,u}\leq\hat{e}_{d,u}\leq(1+\delta)n_{d,u}\). That means, with probability at least \(1-\gamma\), any given \((d,u)\)-group in \(I_{big}\) is also included in \(\hat{I}\). In other words, the probability that a \((d,u)\)-group is in \(I_{big}\) but not in \(\hat{I}\) is less than \(\gamma\). Since there are at most \(h\cdot k\) \((d,u)\)-groups, by Fact 18, the probability that some \((d,u)\)-group is in \(I_{big}\) but not in \(\hat{I}\) is at most \(\gamma\cdot h\cdot k=\frac{1}{10}\). Therefore, considering all \((d,u)\)-groups in \(I_{big}\), we have that with probability at least \(\frac{9}{10}\), all \((d,u)\)-groups that are included in \(I_{big}\) are also included in \(\hat{I}\).
A lower bound of \(C^{*}_{max}(\hat{I})\) can be obtained by considering only those \((d,u)\)-groups that are in \(I_{big}\). To schedule the jobs in these groups from \(\hat{I}\), one needs an interval of length at least \(\sum_{d}\frac{1}{m}\sum_{u}(\hat{e}_{d,u}\cdot rp_{u})\geq\sum_{d}\frac{1}{m}\sum_{u}((1-\delta)n_{d,u}\cdot rp_{u})\). So we have
\[C^{*}_{max}(\hat{I})\geq\sum_{d=1}^{h}\left(\frac{1}{m}\sum_{u=0}^{k}\left((1- \delta)n_{d,u}\cdot rp_{u}\right)\right). \tag{3}\]
For all the jobs from \(I_{big}\), we have:
\[C^{*}_{max}(I_{big})\geq\sum_{d=1}^{h}\left(\frac{1}{m}\sum_{u=0}^{k}(n_{d,u} \cdot rp_{u})\right), \tag{4}\]
and
\[C^{*}_{max}(I_{big})\leq\sum_{d=1}^{h}\left(\frac{1}{m}\sum_{u=0}^{k}(n_{d,u} \cdot rp_{u})+c\right)=\sum_{d=1}^{h}\left(\frac{1}{m}\sum_{u=0}^{k}(n_{d,u} \cdot rp_{u})\right)+h\cdot c. \tag{5}\]
By inequalities (3) and (5) we have
\[C^{*}_{max}(\hat{I})\geq(1-\delta)(C^{*}_{max}(I_{big})-h\cdot c). \tag{6}\]
Next, we consider the upper bound of \(C^{*}_{max}(\hat{I})\). We split the jobs in \(\hat{I}\) into two parts: those \((d,u)\)-groups that are in both \(I_{big}\) and \(\hat{I}\), and those \((d,u)\)-groups that are in \(\hat{I}\) but not in \(I_{big}\). For the jobs in \(\hat{I}\) from the former, we need an interval of length at most \(\sum_{d}(\frac{1}{m}(\sum_{u}(\hat{e}_{d,u}\cdot rp_{u}))+c)\leq\sum_{d}(\frac{1}{m}(\sum_{u}((1+\delta)n_{d,u}\cdot rp_{u}))+c)\) to schedule them; for the jobs from the latter \((d,u)\)-groups, we note that each such group must correspond to a group in instance \(I\) where \(n_{d,u}<3\tau(n,h,c)\), and there are at most \(h\cdot k\) such groups. By Lemma 19, with probability at least \(1-\gamma\), we have \(\hat{e}_{d,u}\leq 6\tau(n,h,c)\) jobs in \(\hat{I}\) for each \((d,u)\)-group of the latter type. Thus we can schedule these jobs in an interval of length at most \(6\tau(n,h,c)\cdot h\cdot k\cdot c\). Combining both types of groups and by inequality (4), we have
\[C^{*}_{max}(\hat{I}) \leq \sum_{d=1}^{h}\left(\frac{1}{m}\left(\sum_{u=0}^{k}\left((1+ \delta)n_{d,u}\cdot rp_{u}\right)\right)+c\right)+6\tau(n,h,c)\cdot h\cdot k\cdot c\] \[\leq (1+\delta)C^{*}_{max}(I_{big})+h\cdot c+6\tau(n,h,c)\cdot h\cdot k \cdot c.\]
By line 7 of the algorithm, \(\tau(n,h,c)=n\cdot p=\frac{5\delta n}{2c\cdot h\cdot k\cdot m}\), we have
\[C^{*}_{max}(\hat{I})\leq(1+\delta)C^{*}_{max}(I_{big})+\frac{15\delta n}{m}+h \cdot c. \tag{7}\]
Therefore, from (6) and (7), we get
\[(1-\delta)(C^{*}_{max}(I_{big})-h\cdot c)\leq C^{*}_{max}(\hat{I})\leq(1+ \delta)C^{*}_{max}(I_{big})+\frac{15\delta n}{m}+h\cdot c.\]
The next lemma compares the optimal makespan of instance \(I_{round}\) and that of instance \(I\) and \(I_{big}\).
**Lemma 21**.: _Let \(C^{*}_{max}(I)\) and \(C^{*}_{max}(I_{round})\) be the optimal makespan for instances \(I\) and \(I_{round}\) respectively, we have the following inequalities:_
\[C^{*}_{max}(I)\leq C^{*}_{max}(I_{round})\leq(1+\delta)C^{*}_{max}(I). \tag{8}\]
\[C^{*}_{max}(I_{round})-\frac{8\delta n}{m}\leq C^{*}_{max}(I_{big})\leq C^{*}_ {max}(I_{round}). \tag{9}\]
**Proof:** By our notation, \(I\) is the original instance of \(n\) jobs where a job \(j\) has processing time \(p_{j}\) and depth \(dp_{j}\), and \(I_{round}\) is the instance obtained from \(I\) by rounding up the jobs' processing times such that if \((1+\delta)^{u}\leq p_{j}<(1+\delta)^{u+1}\), then the rounded processing time is \(rp_{u}=(1+\delta)^{u+1}\leq(1+\delta)p_{j}\). It is easy to see that we have
\[C^{*}_{max}(I)\leq C^{*}_{max}(I_{round})<(1+\delta)C^{*}_{max}(I).\]
The instance \(I_{big}\) can be obtained from the instance \(I_{round}\) by removing those \((d,u)\)-group jobs where \(n_{d,u}<3\tau(n,h,c)\). The total number of jobs removed is at most \(3\tau(n,h,c)\cdot h\cdot k\), and each of these jobs has processing time at most \(c\). Therefore, we have \(C^{*}_{max}(I_{round})-3\tau(n,h,c)\cdot h\cdot k\cdot c\leq C^{*}_{max}(I_{big})\leq C^{*}_{max}(I_{round}).\) Since \(\tau(n,h,c)=\frac{5\delta n}{2c\cdot h\cdot k\cdot m}\), we get
\[C^{*}_{max}(I_{round})-\frac{8\delta n}{m}\leq C^{*}_{max}(I_{big})\leq C^{*}_ {max}(I_{round}).\]
Combining all the lemmas we proved in this section, we can prove that the Randomized-Algorithm1 is an approximation scheme.
**Theorem 22**.: _If \(m\leq\frac{n\epsilon}{20h\cdot c}\), Randomized-Algorithm1 is a randomized \((1+\epsilon)\)-approximation scheme for \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) that runs in \(O(\frac{c^{2}h^{2}\log^{2}c}{\epsilon^{6}}\log(\frac{h}{\epsilon}\log c)\cdot m ^{2})\) time._
**Proof:** The running time follows from Lemma 16. We focus on the approximation ratio. By our notation, \(\hat{I}\) is the instance corresponding to the estimated sketch \(\widehat{SKJ_{\delta}}=\{(d,u,\hat{e}_{d,u})\}\) where \(\hat{e}_{d,u}\) is the estimated value for \(n_{d,u}\). Only the tuples with \(\hat{e}_{d,u}>2\tau(n,h,c)\) are included in \(\widehat{SKJ_{\delta}}\). By Randomized-Algorithm1, \(\hat{A}_{d}=\frac{1}{m}\sum_{u=0}^{k}(\hat{e}_{d,u}\cdot rp_{u})\) and \(\hat{A}=\sum_{d=1}^{h}(\lfloor\hat{A}_{d}\rfloor+c)\). Following the same proof as in Theorem 4, we can get
\[C_{max}^{*}(\hat{I})\leq\hat{A}\leq C_{max}^{*}(\hat{I})+h\cdot c.\]
By inequality (2), we get, with probability at least \(\frac{9}{10}\)
\[\hat{A}\leq C_{max}^{*}(\hat{I})+h\cdot c\leq(1+\delta)C_{max}^{*}(I_{big})+ \frac{15\delta n}{m}+2h\cdot c.\]
If \(m\leq\frac{n\epsilon}{20h\cdot c}\), with \(\delta=\frac{\epsilon}{20}\), we get \(h\cdot c\leq\frac{\delta n}{m}\). Thus, we get,
\[\hat{A} \leq (1+\delta)C_{max}^{*}(I_{big})+\frac{15\delta n}{m}+2h\cdot c\] \[\leq (1+\delta)C_{max}^{*}(I_{big})+\frac{17\delta n}{m}\] by \(h\cdot c\leq\frac{\delta n}{m}\) \[\leq (1+\delta)C_{max}^{*}(I_{round})+\frac{17\delta n}{m}\] by (9) \[\leq (1+\delta)^{2}C_{max}^{*}(I)+\frac{17\delta n}{m}\] by (8) \[\leq (1+20\delta)C_{max}^{*}(I)\] by \(C_{max}^{*}(I)\geq\frac{n}{m}\) \[\leq (1+\epsilon)C_{max}^{*}(I),\] by \(\delta=\frac{\epsilon}{20}\), and
\[\hat{A} \geq C_{max}^{*}(\hat{I})\] \[\geq (1-\delta)(C_{max}^{*}(I_{big})-h\cdot c)\] by (2) \[\geq (1-\delta)(C_{max}^{*}(I_{big})-\frac{\delta n}{m})\] by \(h\cdot c\leq\frac{\delta n}{m}\) \[\geq (1-\delta)(C_{max}^{*}(I_{round})-\frac{9\delta n}{m})\] by (9) \[\geq (1-\delta)(C_{max}^{*}(I)-\frac{9\delta n}{m})\] by (8) \[\geq (1-\delta)(1-9\delta)C_{max}^{*}(I)\] by \(C_{max}^{*}(I)\geq\frac{n}{m}\) \[\geq (1-10\delta)C_{max}^{*}(I)\] \[\geq (1-\epsilon)C_{max}^{*}(I)\] by \(\delta=\frac{\epsilon}{20}\).
Based on the above theorem, when \(m=o(n^{1/2})\), Randomized-Algorithm1 is a sublinear time approximation scheme.
**Corollary 23**.: _When \(m=o(n^{1/2})\), Randomized-Algorithm1 is a randomized \((1+\epsilon)\)-approximation scheme for \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) that runs in sublinear time._
## Sublinear Time Algorithm for \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\)
In this section, we will generalize Randomized-Algorithm1 to solve the general problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\). The idea is basically similar, except that some pre-processing is needed because we do not know the processing time range of the top \(\alpha n\) jobs. Specifically, our sublinear algorithm first samples some jobs to determine an upper bound on the largest job, then samples enough jobs to generate the estimated sketch of the input, and finally computes an approximation of the optimal value based on the estimated sketch of the input. The details are given in Randomized-Algorithm2.
Like the Randomized-Algorithm1, the time complexity of Randomized-Algorithm2 is dominated by the sample size \(n^{\prime}\). However, \(n^{\prime}\) in this algorithm depends on \(n\). Still, we will show in the lemma below that the running time of the algorithm is sublinear when \(m=o(n^{1/2})\).
```
1:Parameters \(m\), \(c\), \(h\), \(\epsilon\), \(\alpha\)
2:Jobs: \((p_{j},dp_{j})\), \(1\leq j\leq n\), \(1\leq dp_{j}\leq h\)
3:An approximation of the optimal makespan
4:determine the upper bound of the largest job:
5:let \(\delta=\frac{\epsilon}{20}\), \(k=\left\lfloor\log_{1+\delta}\frac{cn}{\delta}\right\rfloor\), and \(\gamma=\frac{1}{10hk}\)
6:let \(n_{0}=1\) if \(\alpha=1\), and \(n_{0}=\left\lceil\frac{\ln\gamma}{\ln(1-\alpha)}\right\rceil\) if \(\alpha<1\)
7:sample \(n_{0}\) jobs uniformly at random
8:let \(w_{0}\) be the largest processing time among all the \(n_{0}\) sampled jobs
9:determine the sample size \(n^{\prime}\):
10:let \(p=\frac{5\alpha\delta}{2c^{2}\cdot h\cdot k\cdot m}\), and \(\beta=\delta p\)
11:let \(n^{\prime}=\frac{3}{\alpha\beta^{2}}\cdot\ln\frac{2}{\gamma}\)
12:sample \(n^{\prime}\) jobs uniformly at random
13:remove those jobs whose processing time is at most \(\frac{\delta w_{0}}{n}\) from the sampled jobs
14:compute the sketch of the remaining sample jobs \(SKJ^{\prime}_{\delta}=\{(d,u,n^{\prime}_{d,u})\}\)
15:compute the estimated sketch of all jobs \(\widehat{SKJ}_{\delta}\)
16:let \(\tau(n,h,c)=n\cdot p\)
17:\(\widehat{SKJ}_{\delta}=\emptyset\)
18:for each \((d,u,n^{\prime}_{d,u})\in SKJ^{\prime}_{\delta}\)do
19:let \(\hat{e}_{d,u}=n\cdot\frac{n^{\prime}_{d,u}}{n^{\prime}}\)
20:if \(\hat{e}_{d,u}>2\tau(n,h,c)\)
21:\(\widehat{SKJ}_{\delta}=\widehat{SKJ}_{\delta}\cup\{(d,u,\hat{e}_{d,u})\}\)
22:endfor
23:compute the estimated makespan
24:let \(u_{-}=\left\lfloor\log_{1+\delta}\frac{\delta w_{0}}{n}\right\rfloor\), \(u_{+}=\left\lfloor\log_{1+\delta}cw_{0}\right\rfloor\)
25:let \(rp_{u_{+}}=cw_{0}\)
26:for each \(u_{-}\leq u<u_{+}\)
27:let \(rp_{u}=(1+\delta)^{u+1}\)
28:for each \(d\), \(1\leq d\leq h\)
29:let \(\hat{A}_{d}=\frac{1}{m}\sum_{u=u_{-}}^{u_{+}}(\hat{e}_{d,u}\cdot rp_{u})\), where \((d,u,\hat{e}_{d,u})\in\widehat{SKJ}_{\delta}\)
30:\(\hat{A}=\sum_{d=1}^{h}\left(\left\lfloor\hat{A}_{d}\right\rfloor+cw_{0}\right)\)
31:return \(\hat{A}\)
```
**Algorithm** Randomized-Algorithm2
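For illustration, a minimal Python sketch of the pre-processing steps of Randomized-Algorithm2 (lines 4-11 of the listing) is given below. It is not the evaluated implementation; `jobs` is assumed to be a list of \((p_{j},dp_{j})\) pairs.

```python
import math
import random

def preprocessing(jobs, m, c, h, eps, alpha):
    """Sketch of lines 4-11 of Randomized-Algorithm2: estimate w0, then fix the sample size n'."""
    n = len(jobs)
    delta = eps / 20.0
    k = int(math.floor(math.log(c * n / delta, 1.0 + delta)))
    gamma = 1.0 / (10.0 * h * k)

    # with probability >= 1 - gamma, p_[n] <= c * w0 (Lemma 25)
    n0 = 1 if alpha == 1 else math.ceil(math.log(gamma) / math.log(1.0 - alpha))
    w0 = max(pj for pj, _ in random.sample(jobs, min(n0, n)))

    # sample size used for the sketch estimation
    p = 5.0 * alpha * delta / (2.0 * c ** 2 * h * k * m)
    beta = delta * p
    n_prime = math.ceil(3.0 / (alpha * beta ** 2) * math.log(2.0 / gamma))
    return w0, n_prime, delta, k, gamma, p
```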
**Lemma 24**: _Randomized-Algorithm2 runs in time \(O(\frac{c^{4}h^{2}}{\alpha^{3}\epsilon^{6}}\cdot m^{2}\log^{2}(\frac{cn}{\epsilon})\log(\frac{h}{\epsilon}\log(\frac{cn}{\epsilon})))\)._
**Proof:** The running time is dominated by the sampling of \(n_{0}+n^{\prime}=O(n^{\prime})\) jobs. Thus its running time is
\[O(n^{\prime})=O(\tfrac{1}{\alpha\beta^{2}}\cdot\ln\tfrac{2}{\gamma})=O(\tfrac{1}{\alpha\delta^{2}}(\tfrac{c^{2}hkm}{\alpha\delta})^{2}\log hk)=O(\tfrac{c^{4}h^{2}}{\alpha^{3}\epsilon^{6}}\cdot m^{2}\log^{2}(\tfrac{cn}{\epsilon})\log(\tfrac{h}{\epsilon}\log(\tfrac{cn}{\epsilon}))).\]
The next lemma shows that by sampling \(n_{0}\) jobs, we can get a good estimate of the largest processing time \(p_{[n]}\).
**Lemma 25**: _With probability at least \(1-\gamma\), \(p_{[n]}\leq cw_{0}\)._
**Proof:** Since we sample the jobs uniformly, the probability that each sampled job comes from the top \(\alpha n\) largest jobs is \(\alpha\). The probability that no job from the top \(\alpha n\) largest jobs is sampled is at most \((1-\alpha)^{n_{0}}\leq\gamma\), which implies that with probability at least \(1-\gamma\), some job from the top \(\alpha n\) largest jobs is sampled, which means \(w_{0}\geq p_{[(1-\alpha)n]}\) and \(p_{[n]}\leq cw_{0}\).
The next lemma is similar to Lemma 19, which states that \(\hat{e}_{d,u}\) is a good estimate of the number of corresponding jobs in the input instance, \(n_{d,u}\). The only difference is that here we focus on the jobs whose processing time is at least \(\frac{\delta w_{0}}{n}\). The proof is the same and we omit it here.
**Lemma 26**: _For any \(d\), \(u\), let \(\hat{e}_{d,u}\) be the value computed by Randomized-Algorithm2, then we have:_
* _If_ \(n_{d,u}\geq\tau(n,h,c)\)_,_ \(\Pr((1-\delta)n_{d,u}\leq\hat{e}_{d,u}\leq(1+\delta)n_{d,u})\geq 1-\gamma\)_, and_
* _If_ \(n_{d,u}<\tau(n,h,c)\)_,_ \(\Pr(\hat{e}_{d,u}\leq 2\tau(n,h,c))\geq 1-\gamma\)_._
Like Theorem 22, we can prove that Randomized-Algorithm2 is an approximation scheme.
**Theorem 27**: _For \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), when \(m\leq\frac{n\cdot\alpha\cdot\epsilon}{20c^{2}\cdot h}\), Randomized-Algorithm2 is a randomized \((1+\epsilon)\)-approximation scheme that runs in time \(O(\frac{c^{4}h^{2}}{\alpha^{3}\epsilon^{6}}\cdot m^{2}\log^{2}(\frac{cn}{\epsilon})\log(\frac{h}{\epsilon}\log(\frac{cn}{\epsilon})))\)._
**Proof:** The running time follows from Lemma 24. We consider the approximation ratio only. The proof is similar to that of Theorem 22. For the input instance \(I\), let \(I_{round}\) be the instance
obtained from \(I\) by rounding up the processing times for the jobs with \(p_{j}\geq\frac{\delta w_{0}}{n}\). Let \(C^{*}_{max}(I)\) and \(C^{*}_{max}(I_{round})\) respectively be the optimal makespan for \(I\) and \(I_{round}\). Then we still have the same inequalities between \(C^{*}_{max}(I_{round})\) and \(C^{*}_{max}(I)\):
\[C^{*}_{max}(I)\leq C^{*}_{max}(I_{round})\leq(1+\delta)C^{*}_{max}(I) \tag{10}\]
Let \(I_{big}\) be the instance obtained from the instance \(I_{round}\) by removing not only the \((d,u)\)-groups with \(n_{d,u}<3\tau(n,h,c)\) but also the groups of the jobs whose processing time is less than \(\frac{\delta w_{0}}{n}\). The total processing time of the jobs with processing time less than \(\frac{\delta w_{0}}{n}\) is at most \(n\cdot\frac{\delta w_{0}}{n}\leq\delta w_{0}\leq\delta C^{*}_{max}(I_{round})\). The other jobs removed belong to the groups with \(n_{d,u}<3\tau(n,h,c)\), and each of these jobs has processing time at least \(\frac{\delta w_{0}}{n}\) and at most \(cw_{0}\). There are at most \(h\cdot k\) such groups where \(k=\log_{1+\delta}\frac{cn}{\delta}\) as defined in the algorithm. Thus the total processing time of these jobs is at most
\[3\tau(n,h,c)\cdot h\cdot k\cdot cw_{0}=3n\cdot\frac{5\alpha\delta}{2c^{2}\cdot h \cdot k\cdot m}\cdot h\cdot k\cdot cw_{0}\leq\frac{8\delta\cdot\alpha\cdot nw _{0}}{cm}.\]
Thus we have
\[C^{*}_{max}(I_{round})-\delta C^{*}_{max}(I_{round})-\frac{8\delta\cdot\alpha \cdot nw_{0}}{cm}\leq C^{*}_{max}(I_{big})\leq C^{*}_{max}(I_{round}) \tag{11}\]
As before, let \(\hat{I}\) be the instance corresponding to the sketch \(\widehat{SKJ}_{\delta}\), which contains \(\hat{e}_{d,u}\) jobs at depth \(d\) with processing time \(rp_{u}\) for each tuple with \(\hat{e}_{d,u}>2\tau(n,h,c)\). Then the optimal makespan of \(\hat{I}\), \(C^{*}_{max}(\hat{I})\), is at least \(\sum_{d=1}^{h}\hat{A}_{d}\), where \(\hat{A}_{d}=\frac{1}{m}\sum_{u=u_{-}}^{u_{+}}(\hat{e}_{d,u}\cdot rp_{u})\). Between \(I_{big}\) and \(\hat{I}\), we can use a similar argument to that for (6) and (7) to show that with probability at least \(\frac{9}{10}\), we have
\[(1-\delta)(C^{*}_{max}(I_{big})-h\cdot cw_{0})<C^{*}_{max}(\hat{I}) \tag{12}\]
and
\[C^{*}_{max}(\hat{I})\leq(1+\delta)C^{*}_{max}(I_{big})+\frac{15\delta\alpha nw _{0}}{cm}+h\cdot cw_{0} \tag{13}\]
The returned value \(\hat{A}=\sum_{d=1}^{h}\left(\lfloor\hat{A}_{d}\rfloor+cw_{0}\right)\) is at least \(C^{*}_{max}(\hat{I})\) and
\[\hat{A}=\sum_{d=1}^{h}\left(\lfloor\hat{A}_{d}\rfloor+cw_{0}\right)\leq C^{*}_{ max}(\hat{I})+h\cdot cw_{0}\leq(1+\delta)C^{*}_{max}(I_{big})+\frac{15\delta \alpha nw_{0}}{cm}+2h\cdot cw_{0}.\]
Assuming \(m\leq\frac{n\cdot\alpha\cdot\epsilon}{20c^{2}\cdot h}=\frac{n\cdot\alpha\cdot \delta}{c^{2}\cdot h}\), then \(h\cdot cw_{0}\leq\frac{\delta\alpha n\cdot w_{0}}{cm}\), and combining with the above inequalities, we get
\[\hat{A} \leq (1+\delta)C^{*}_{max}(I_{big})+\frac{17\delta\alpha nw_{0}}{cm}\] \[\leq (1+\delta)C^{*}_{max}(I_{round})+\frac{17\delta\alpha nw_{0}}{cm}\] by (11) \[\leq (1+\delta)C^{*}_{max}(I_{round})+17\delta C^{*}_{max}(I)\] by \[C^{*}_{max}(I)\geq\frac{\alpha n\cdot w_{0}}{cm}\] \[\leq (1+\delta)^{2}C^{*}_{max}(I)+17\delta C^{*}_{max}(I)\] by (10) \[\leq (1+20\delta)C^{*}_{max}(I)\] \[\leq (1+\epsilon)C^{*}_{max}(I)\] by \[\delta=\frac{\epsilon}{20}\]
and
\[\hat{A} > C^{*}_{max}(\hat{I})\] \[> (1-\delta)(C^{*}_{max}(I_{big})-h\cdot cw_{0})\] by (12) \[> (1-\delta)((1-\delta)C^{*}_{max}(I_{round})-\frac{8\delta\alpha nw_{0}}{cm}-h\cdot cw_{0})\] by (11) \[> (1-\delta)((1-\delta)C^{*}_{max}(I_{round})-\frac{9\delta\alpha nw_{0}}{cm})\] by \(h\cdot cw_{0}\leq\frac{\delta\alpha n\cdot w_{0}}{cm}\) \[> (1-\delta)\left((1-\delta)C^{*}_{max}(I)-\frac{9\delta\alpha nw_{0}}{cm}\right)\] by (10) \[> (1-\delta)\left((1-\delta)C^{*}_{max}(I)-9\delta C^{*}_{max}(I)\right)\] by \(C^{*}_{max}(I)\geq\frac{\alpha nw_{0}}{cm}\) \[\geq (1-\delta)(1-10\delta)C^{*}_{max}(I)\] \[\geq (1-\epsilon)C^{*}_{max}(I)\] by \(\delta=\frac{\epsilon}{20}\).
From Theorem 27, we can easily get the following corollaries.
**Corollary 28**.: _When \(m=o(n^{1/2})\), Randomized-Algorithm2 is a randomized \((1+\epsilon)\)-approximation scheme for \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\) that runs in sublinear time._
**Corollary 29**.: _For any \(\alpha=n^{-\phi}\) where \(\phi\in(0,1/3)\), if \(m\leq\frac{n\cdot\alpha\cdot\epsilon}{20c^{2}\cdot h}\), there is a randomized \((1+\epsilon)\)-approximation scheme for \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\) and the algorithm runs in sublinear time \(O(\frac{c^{4}h^{2}}{\alpha^{3}\epsilon^{6}}\cdot m^{2}\log^{2}(\frac{cn}{\epsilon})\log(\frac{h}{\epsilon}\log(\frac{cn}{\epsilon})))\)._
Clearly, the algorithm will also work if there is no precedence constraint, i.e. all jobs have the same depth \(1\). This becomes the traditional load balancing problem.
**Corollary 30**.: _For any \(\alpha=n^{-\phi}\) where \(\phi\in(0,1/3)\), if \(m\leq\frac{n\cdot\alpha\cdot\epsilon}{20c^{2}\cdot h}\), there is a randomized \((1+\epsilon)\)-approximation scheme for \(P\mid p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\) and the algorithm runs in sublinear time \(O(\frac{c^{4}}{\alpha^{3}\epsilon^{6}}\cdot m^{2}\log^{2}(\frac{cn}{\epsilon})\log(\frac{1}{\epsilon}\log(\frac{cn}{\epsilon})))\)._
### The Estimated Sketch of the Schedule
In this subsection, we will show that, as with the streaming algorithms in Section 3, the two sublinear time algorithms in this section can compute an **estimated sketch of a schedule** \(\widehat{SKS}=\{t_{d}:1\leq d\leq h\}\), which describes a schedule where all the jobs of depth \(d\) are scheduled during the interval \([t_{d-1},t_{d})\) for \(1\leq d\leq h\), with \(t_{0}=0\). We will also show that, based on \(\widehat{SKS}\), the Algorithm SketchToSchedule in Section 3.3 can, with high probability, generate a feasible schedule with makespan at most \((1+2\epsilon)\) times the optimal makespan.
For the problem \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\), we let the estimated sketch of a schedule be \(\widehat{SKS}=\{t_{d}:1\leq d\leq h\}\) where \(t_{d}=\sum_{i=1}^{d}(\lfloor\frac{\hat{A}_{i}}{1-\delta}\rfloor+c+3\lfloor\tau(n,h,c)\rfloor\cdot k\cdot c)\) for all \(1\leq d\leq h\). We have the following theorem for the estimated sketch of a schedule:
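As an illustration (not the paper's code), the boundaries \(t_{d}\) of this sketch can be computed directly from the per-depth estimates \(\hat{A}_{d}\) returned by Randomized-Algorithm1:

```python
import math

def schedule_sketch(A_hat, c, tau, k, delta):
    """Compute t_d = sum_{i<=d} (floor(A_hat[i]/(1-delta)) + c + 3*floor(tau)*k*c).

    A_hat is the list [A_hat_1, ..., A_hat_h]; jobs of depth d are scheduled in [t_{d-1}, t_d).
    """
    t, boundaries = 0, []
    for A_d in A_hat:
        t += math.floor(A_d / (1.0 - delta)) + c + 3 * math.floor(tau) * k * c
        boundaries.append(t)
    return boundaries   # [t_1, ..., t_h]
```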
**Theorem 31**.: _For \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\), given any \(0<\epsilon<1\), when \(m\leq\frac{n\epsilon}{20h\cdot c}\), Randomized-Algorithm1 can compute an estimated sketch of the schedule \(\widehat{SKS}\), and based on the sketch, with probability at least \(\frac{9}{10}\), Algorithm SketchToSchedule can generate a feasible schedule with the makespan at most \((1+2\epsilon)\) times the optimal makespan._
**Proof:** It is easy to see that Randomized-Algorithm1 can compute the estimate sketch of the schedule \(\widehat{SKS}=\{t_{d}:1\leq d\leq h\}\). Now we will show that with high probability all the jobs with depth \(d\) can be feasibly scheduled during the interval \([t_{d-1},t_{d})\) for \(1\leq d\leq h\) where \(t_{0}=0\). It
suffices to prove that with high probability all the jobs from the input instance \(I\) with depth \(d\) can be scheduled within an interval of length \(\lfloor\frac{\hat{A}_{d}}{1-\delta}\rfloor+c+\lfloor 3\tau(n,h,c)\rfloor\cdot k\cdot c\). Since the instance \(I_{round}\) is obtained from \(I\) by rounding up the processing times, all we need to prove is that the jobs from the instance \(I_{round}\) with depth \(d\) can be scheduled within an interval of length \(\lfloor\frac{\hat{A}_{d}}{1-\delta}\rfloor+c+\lfloor 3\tau(n,h,c)\rfloor\cdot k\cdot c\).
As the proof of Lemma 20, we split the jobs in \(I_{round}\) into two parts: those \((d,u)\)-groups that are in \(I_{big}\), and those \((d,u)\)-groups that are not in \(I_{big}\). For the jobs from the former, with probability at least \(\frac{9}{10}\), all \((d,u)\)-group in \(I_{big}\) are also included in \(\hat{I}\) and for each \((d,u)\)-group of this type, we have \(n_{d,u}\geq 3\tau(n,h,c)\) and \(n_{d,u}\leq\frac{\hat{e}_{d,u}}{1-\delta}\). Thus all these jobs at depth \(d\) can be feasibly scheduled during the interval of length
\[\left\lfloor\frac{1}{m}\sum_{u=0}^{k}(n_{d,u}\cdot rp_{u})\right\rfloor+c\leq \left\lfloor\frac{1}{m}\sum_{u=0}^{k}\frac{(\hat{e}_{d,u}\cdot rp_{u})}{(1- \delta)}\right\rfloor+c=\left\lfloor\frac{\hat{A}_{d}}{1-\delta}\right\rfloor+c.\]
For the jobs from the latter \((d,u)\)-groups, we have \(n_{d,u}<3\tau(n,h,c)\), and thus they can be feasibly scheduled on a single machine during an interval of length at most \(\lfloor 3\tau(n,h,c)\rfloor\cdot k\cdot c\). Therefore, combining both types of jobs, we have that with probability at least \(\frac{9}{10}\) all the jobs with depth \(d\) from \(I_{round}\) can be scheduled within a time interval of length \(\lfloor\frac{\hat{A}_{d}}{1-\delta}\rfloor+c+\lfloor 3\tau(n,h,c)\rfloor\cdot c\cdot k\). Since the jobs in \(I_{round}\) are rounded up from those in \(I\), the jobs at depth \(d\) from \(I\) can also be scheduled within the same interval length.
Finally, it is easy to see that the Algorithm SketchToSchedule generates a feasible schedule of all the jobs with depth \(d\) from \(I\) during the interval \([t_{d-1},t_{d})\). The makespan of the final schedule is at most
\[t_{h}=\sum_{d=1}^{h}\left(\left\lfloor\frac{\hat{A}_{d}}{1-\delta}\right\rfloor +c+\lfloor 3\tau(n,h,c)\rfloor\cdot k\cdot c\right)\leq\sum_{d=1}^{h}\left( \frac{\hat{A}_{d}}{1-\delta}\right)+h\cdot c+\lfloor 3\tau(n,h,c)\rfloor \cdot h\cdot k\cdot c\]
Note \(3\tau(n,h,c)\cdot h\cdot k\cdot c\leq\frac{8\delta n}{m}\), and \(h\cdot c\leq\frac{\delta n}{m}\) when \(m\leq\frac{n\epsilon}{20h\cdot c}\). Thus,
\[t_{h} \leq \sum_{d=1}^{h}\left(\frac{\hat{A}_{d}}{1-\delta}\right)+\frac{\delta n}{m}+\frac{8\delta n}{m}\] \[\leq \frac{1}{1-\delta}\sum_{d=1}^{h}\left(\hat{A}_{d}\right)+\frac{9\delta n}{m}\] \[\leq \frac{1}{1-\delta}\sum_{d=1}^{h}\left(\hat{A}_{d}\right)+9\delta C_{max}^{*}(I)\qquad\text{by }C_{max}^{*}(I)\geq\frac{n}{m}\] \[\leq \frac{\hat{A}}{1-\delta}+9\delta C_{max}^{*}(I)\] \[\leq \frac{(1+20\delta)}{1-\delta}C_{max}^{*}(I)+9\delta C_{max}^{*}(I)\qquad\text{by Theorem 22}\] \[\leq (1+25\delta)C_{max}^{*}(I)+9\delta C_{max}^{*}(I)\qquad\text{by }\delta=\frac{\epsilon}{20}<\frac{1}{20}\] \[\leq (1+2\epsilon)C_{max}^{*}(I).\]
For \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), we let the estimated sketch of a schedule be \(\widehat{SKS}=\{t_{d}:1\leq d\leq h\}\) where \(t_{d}=\sum_{i=1}^{d}(\lfloor\frac{\hat{A}_{i}}{1-\delta}\rfloor+cw_{0}+\lfloor 3\tau(n,h,c)\rfloor\cdot k\cdot cw_{0}+\lfloor\delta w_{0}\rfloor)\) for all \(1\leq d\leq h\). For this sketch of the schedule, we can draw a similar conclusion.
**Theorem 32**.: _For the problem \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\), given any \(0<\epsilon<1\), and \(m\leq\frac{n\cdot\alpha\epsilon}{20c^{2}\cdot h}\), Randomized-Algorithm2 can generate an estimated sketch of the schedule \(\widehat{SKS}\), and with probability at least \(\frac{9}{10}\), Algorithm SketchToSchedule can produce based on \(\widehat{SKS}\) a feasible schedule with the makespan at most \((1+2\epsilon)\) times the optimal makespan._
**Proof:** It is easy to see that Randomized-Algorithm2 can generate the estimated sketch of a schedule \(\widehat{SKS}\). Now we will show that with high probability all the jobs with depth \(d\) can be feasibly scheduled during the interval \([t_{d-1},t_{d})\) for \(1\leq d\leq h\) where \(t_{0}=0\). As the proof of Theorem 31, it suffices to prove that with high probability the jobs from the instance \(I_{round}\) with depth \(d\) can be scheduled within an interval of length \(\lfloor\frac{\hat{A}_{d}}{1-\delta}\rfloor+cw_{0}+\lfloor 3\tau(n,h,c)\rfloor \cdot k\cdot cw_{0}+\lfloor\delta w_{0}\rfloor\).
For this problem, the jobs in \(I_{round}\) can be split into three types: (1) \((d,u)\)-groups that are in \(I_{big}\), (2) \((d,u)\)-groups corresponding to \((d,u,n_{d,u})\) where \(n_{d,u}<3\tau(n,h,c)\) for all \(1\leq d\leq h\) and \(1\leq u\leq k\) and the processing times of all jobs are greater than \(\frac{\delta w_{0}}{n}\), and (3) jobs whose processing
times are no more than \(\frac{\delta w_{0}}{n}\). We will bound the interval length needed to schedule jobs from each type. For type (1) jobs, as the proof in Theorem 27, with probability at least \(\frac{9}{10}\), all \((d,u)\)-group in \(I_{big}\) are also included in \(\hat{I}\) and for each \((d,u)\)-group of this type, we have \(n_{d,u}\geq 3\tau(n,h,c)\) and \(n_{d,u}\leq\frac{\hat{e}_{d,u}}{1-\delta}\). Thus all these jobs at depth \(d\) can be feasibly scheduled during the interval of length
\[\left\lfloor\tfrac{1}{m}\sum_{u=0}^{k}(n_{d,u}\cdot rp_{u})\right\rfloor+cw_{0 }\leq\left\lfloor\tfrac{1}{m}\sum_{u=0}^{k}\tfrac{(\hat{e}_{d,u}\cdot rp_{u})}{ (1-\delta)}\right\rfloor+cw_{0}=\left\lfloor\tfrac{\hat{A}_{d}}{1-\delta} \right\rfloor+cw_{0}.\]
For type (2) jobs, it is easy to see that all the jobs at depth \(d\) can be feasibly scheduled during an interval of length at most \(\lfloor 3\tau(n,h,c)\rfloor\cdot cw_{0}\cdot k\); for type (3) jobs, since the processing times are integers, the processing time of each job must be at most \(\lfloor\tfrac{\delta w_{0}}{n}\rfloor\). There are at most \(n\) such jobs at each depth, thus they can be feasibly scheduled during an interval of length \(n\lfloor\tfrac{\delta w_{0}}{n}\rfloor\leq\lfloor\delta w_{0}\rfloor\). Adding all these together, with probability at least \(\frac{9}{10}\) all jobs at depth \(d\) from \(I_{round}\) can be scheduled into an interval of length
\[\lfloor\tfrac{\hat{A}_{d}}{1-\delta}\rfloor+cw_{0}+\lfloor 3\tau(n,h,c) \rfloor\cdot k\cdot cw_{0}+\lfloor\delta w_{0}\rfloor.\]
Similar as before, we can use Algorithm SketchToSchedule to generate a feasible schedule with the makespan at most
\[t_{h} = \sum_{d=1}^{h}(\lfloor\tfrac{\hat{A}_{d}}{1-\delta}\rfloor+cw_{0 }+\lfloor 3\tau(n,h,c)\rfloor\cdot k\cdot cw_{0}+\lfloor\delta w_{0}\rfloor)\] \[\leq \left(\sum_{d=1}^{h}\tfrac{\hat{A}_{d}}{1-\delta}\right)+h\cdot( cw_{0}+3\tau(n,h,c)\cdot k\cdot cw_{0}+\delta w_{0})\] \[\leq \left(\sum_{d=1}^{h}\tfrac{\hat{A}_{d}}{1-\delta}\right)+(1+ \delta)h\cdot cw_{0}+3\tau(n,h,c)\cdot h\cdot k\cdot cw_{0}\]
By the definitions of \(\tau(n,h,c)\) and \(p\) in Randomized-Algorithm2, \(3\tau(n,h,c)\cdot h\cdot k\cdot cw_{0}\leq\frac{8\delta\alpha n\cdot w_{0}}{cm}\), and with \(m\leq\frac{n\cdot\alpha\cdot\epsilon}{20c^{2}\cdot h}\) we also have \(h\cdot cw_{0}\leq\frac{\delta\alpha n\cdot w_{0}}{cm}\). Thus we get \(t_{h}\leq\left(\sum_{d=1}^{h}\tfrac{\hat{A}_{d}}{1-\delta}\right)+(1+\delta)\frac{\delta\alpha n\cdot w_{0}}{cm}+\frac{8\delta\alpha n\cdot w_{0}}{cm}\). Since \(C^{*}_{max}(I)\geq\frac{\alpha n\cdot w_{0}}{cm}\), we have \(t_{h}\leq\left(\sum_{d=1}^{h}\tfrac{\hat{A}_{d}}{1-\delta}\right)+10\delta C^{*}_{max}(I)\leq\frac{\hat{A}}{1-\delta}+10\delta C^{*}_{max}(I)\). By Theorem 27, and \(\delta=\frac{\epsilon}{20}\), we have
\[t_{h}\leq\frac{(1+20\delta)}{1-\delta}C^{*}_{max}(I)+10\delta C^{*}_{max}(I) \leq(1+2\epsilon)C^{*}_{max}.\]
This completes the proof.
## Conclusions
In this work, we studied the parallel machine precedence constrained scheduling problems \(P\mid prec,dp_{j}\leq h,p_{max}\leq c\cdot p_{min}\mid C_{max}\) and \(P\mid prec,dp_{j}\leq h,p_{[n]}\leq c\cdot p_{[(1-\alpha)n]}\mid C_{max}\). We focused on two types of computing paradigms, sublinear space algorithms and sublinear time algorithms, which are motivated by the rapid growth of data in the manufacturing and service industries. It is worth mentioning that, in spite of the inapproximability result that no polynomial time approximation algorithm can achieve an approximation ratio better than \(\frac{4}{3}\) unless P=NP, our algorithms imply that both problems admit approximation schemes if \(m\) satisfies certain conditions. Moreover, our algorithms for precedence constrained problems also imply sublinear approximation algorithms for the popular load balancing problem where jobs are independent.
Our work not only provides algorithmic solutions to the studied problems under a big data model, but also provides a methodological framework for designing sublinear approximation algorithms that can be used to solve other scheduling problems. In particular, besides outputting the approximate value of the optimal makespan, we introduced the concept of "the sketch of a schedule" to cater to the need of generating a concrete schedule that approximates the optimal schedule. For the studied problems, it would also be interesting to design sublinear approximation algorithms for other types of precedence constraints and for other performance criteria, including total completion time and maximum tardiness.
|
2309.04027 | TIDE: Textual Identity Detection for Evaluating and Augmenting
Classification and Language Models | Machine learning models can perpetuate unintended biases from unfair and
imbalanced datasets. Evaluating and debiasing these datasets and models is
especially hard in text datasets where sensitive attributes such as race,
gender, and sexual orientation may not be available. When these models are
deployed into society, they can lead to unfair outcomes for historically
underrepresented groups. In this paper, we present a dataset coupled with an
approach to improve text fairness in classifiers and language models. We create
a new, more comprehensive identity lexicon, TIDAL, which includes 15,123
identity terms and associated sense context across three demographic
categories. We leverage TIDAL to develop an identity annotation and
augmentation tool that can be used to improve the availability of identity
context and the effectiveness of ML fairness techniques. We evaluate our
approaches using human contributors, and additionally run experiments focused
on dataset and model debiasing. Results show our assistive annotation technique
improves the reliability and velocity of human-in-the-loop processes. Our
dataset and methods uncover more disparities during evaluation, and also
produce more fair models during remediation. These approaches provide a
practical path forward for scaling classifier and generative model fairness in
real-world settings. | Emmanuel Klu, Sameer Sethi | 2023-09-07T21:44:42Z | http://arxiv.org/abs/2309.04027v2 | # TIDE: Textual Identity Detection for Evaluating and Augmenting Classification and Language Models
###### Abstract
Machine learning models can perpetuate unintended biases from unfair and imbalanced datasets. Evaluating and debiasing these datasets and models is especially hard in text datasets where sensitive attributes such as race, gender, and sexual orientation may not be available. When these models are deployed into society, they can lead to unfair outcomes for historically underrepresented groups. In this paper, we present a dataset coupled with an approach to improve text fairness in classifiers and language models. We create a new, more comprehensive identity lexicon, TIDAL, which includes 15,123 identity terms and associated sense context across three demographic categories. We leverage TIDAL to develop an identity annotation and augmentation tool that can be used to improve the availability of identity context and the effectiveness of ML fairness techniques. We evaluate our approaches using human contributors, and additionally run experiments focused on dataset and model debiasing. Results show our assistive annotation technique improves the reliability and velocity of human-in-the-loop processes. Our dataset and methods uncover more disparities during evaluation, and also produce more fair models during remediation. These approaches provide a practical path forward for scaling classifier and generative model fairness in real-world settings.
## 1 Introduction
The growing adoption of machine learning across a variety of applications has reignited concerns about unfair and unintended bias in models. Bias can be introduced throughout the development workflow, for example during problem framing, data sampling and preparation, and even through training algorithm choices Shah et al. (2020); Saleiro et al. (2018). When models contain biases, they can play an active role in perpetuating societal inequities and unfair outcomes for underrepresented groups Sweeney (2013); Abid et al. (2021).
Algorithmic fairness is a rapidly growing field of research with a wide range of definitions, techniques and toolkits available. Fairness is anchored in understanding and mitigating model performance disparities across sensitive and protected attributes. Popular toolkits such as AI Fairness 360 Bellamy et al. (2018), Fairlearn Bird et al. (2020), and the Responsible AI toolkit in TensorFlow Abadi et al. (2015), all assume these attributes are readily available in datasets. In many real-world datasets, attributes are either not available or not reliable. This is due to a myriad of issues like privacy and safety laws around protected attributes, human annotation cost and reliability, and inconsistent taxonomy and attribute coverage Andrus et al. (2021).
Attempts to address this problem involve techniques to extract attributes from text, through human or computational means. A common one is to create an adhoc list of "identity terms" Dixon et al. (2018) for token matching. However this approach is limited due to the polysemy of words (e.g. "black" as a color or race), scalability of token matching techniques, and a lack of important contextual information about the terms Blodgett et al. (2020). Connotation is one such example of missing context: a non-literal meaning of a word informed by one's beliefs and prejudices about its typical usage (e.g. "undocumented workers" and "illegal aliens" have the same lexical denotation but different connotations) Carpuat (2015); Allan (2007); Webson et al. (2020).
Our research goal is to first explore techniques that can improve availability and reliability of identity term annotations by providing context for disambiguation. A second goal is to leverage these annotations to adapt existing fairness techniques in ways that scale for use in real-world text datasets
and throughout the development workflow.
### Related Work
#### 1.1.1 Availability of identity labels.
Gupta et al.; Jung et al. propose methods to leverage proxy attributes in the absence of identity labels, however Tschantz; McLoughney et al. show proxies could be a source of bias and discrimination. When labels exist but are noisy or unreliable, Celis et al. explore techniques to achieve fairness under uncertainty. Lahoti et al. attempt to remove the need for identity labels altogether. Our work follows Andrus and Villeneuve (2022), focusing on addressing the issue earlier in the pipeline by taking a human-in-the-loop approach. We deploy assistive techniques for acquiring high quality annotations from humans faster.
#### 1.1.2 Identity lexicon.
Eckle-Kohler et al. (2012) show the need for a standardized lexicon, while Allaway and McKeown (2021) extend one with contextual dimensions including sentiment and emotional association. Our approach is most closely related to Smith et al. (2022), who create a similar identity lexicon. We focus on creating an extensible schema that enables multilingual support, and on enabling fairness use cases by capturing additional context and increasing the depth of coverage across groups.
#### 1.1.3 Identity entity recognition.
Sense disambiguation Pal and Saha (2015) has been used to address polysemy, with recent advances in knowledge-based techniques Agirre et al. (2014). On the other hand Honnibal and Montani (2017); Bird et al. (2009) use syntactic and NLP techniques to detect canonical entities like "person", which is too coarse. Our work merges both techniques to build a reusable annotation tool. We specialize in identity detection and optimize for fairness workflows, and additionally adapt for counterfactual generation.
#### 1.1.4 Effectiveness of fairness techniques.
Dixon et al. (2018) use a keyword list to source new organic data for debiasing datasets, while Wadhwa et al. (2022) generate counterfactuals using existing datasets as the seed. Our experiments aim to scale up both fairness techniques for use throughout the entire ML workflow. We also leverage identity taxonomy instead of terms to uncover previously missed bias in classifiers and generative models alike.
### Contributions
Our key contributions are summarized below:
* Textual Identity Detection and Augmentation Lexicon (TIDAL)1: to the best of our knowledge TIDAL is the largest identity lexical dataset with comprehensive coverage of groups and associated sense context, using a methodology and schema that supports multiple languages. Footnote 1: Dataset will be made available after review and acceptance
* A specialized identity annotation tool built with the lexicon and optimized for multiple fairness workflows.
* An assistive technique for human annotation that improves time, cost and reliability of acquiring identity labels.
* Updated fairness techniques that improve coverage of bias detection and result in more effective remediation of datasets and models.
### Preliminaries
#### 1.3.1 Datasets.
We use the CivilComments dataset Borkan et al. (2019) for most experiments conducted, relying on its human-annotated identity labels as ground truth. We use the C4 dataset Raffel et al. (2020) as a control.
#### 1.3.2 Data Augmentation.
We generate synthetic datasets using sentence templates from HolisticBias Smith et al. (2022) and UnintendedBias Dixon et al. (2018). We additionally generate counterfactuals Wadhwa et al. (2022) for robustness.
#### 1.3.3 Models.
For generative tasks we use BlenderBot Roller et al. (2021). For classification we train toxicity models on CivilComments, and additionally use counterfactual logit pairing (CLP) for remediation.
#### 1.3.4 Dataset and model evaluation metrics.
We use slice analysis and deficits to understand class balance in datasets and models Dixon et al. (2018). We measure model performance using F1, area-under-curve (AUC), and counterfactual flips
(Garg et al., 2019) for classifiers, and token likelihood (Smith et al., 2022) for generative models.
#### 1.3.5 Inter-annotator reliability (IAR).
Following (Lacy et al., 2015), we use simple percent agreement, Krippendorff's alpha (Krippendorff, 1970) and Gwet's AC1 (Gwet, 2014) to measure the degree of agreement on annotations between human annotators. While Krippendorff's alpha penalizes for data scarcity, Gwet's AC1 corrects for the probability that the annotators agree by chance - both cases are likely given our data distribution and task complexity.
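For reference, a minimal two-rater version of percent agreement and Gwet's AC1 is sketched below; it is an illustration consistent with the definitions in Gwet (2014), not the exact computation used in our experiments, which aggregate more than two annotators per item.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Simple percent agreement between two raters over the same items."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def gwet_ac1(r1, r2):
    """Gwet's AC1 (chance-corrected agreement) for two raters and q categories."""
    n, labels = len(r1), sorted(set(r1) | set(r2))
    q = len(labels)
    p_a = percent_agreement(r1, r2)
    counts = Counter(r1) + Counter(r2)
    pi = {lab: counts[lab] / (2 * n) for lab in labels}             # average prevalence per category
    p_e = sum(pi[lab] * (1 - pi[lab]) for lab in labels) / (q - 1)  # chance agreement
    return (p_a - p_e) / (1 - p_e)

# e.g. gwet_ac1(["RNE", "RNE", "Religion"], ["RNE", "SOGIESC", "Religion"]) is roughly 0.52
```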
#### 1.3.6 Identity terms and sense context.
Multiple descriptors are used throughout the literature to describe words, utterances or context associated with identity, such as "sensitive attributes", "sensitive features", "group labels", "protected attributes" or "identity terms" (Garg et al., 2019; Dixon et al., 2018). In our work we use "identity terms" for the lexicon that appears in text, and "sense context", for the structured contextual data associated with senses of identity terms.
## 2 Methodology
### TIDAL dataset
The TIDAL dataset consists of lexical entries and their related forms (e.g. black, gay, trans, hindus) that are associated with identity groups. Each head and related form is associated with grammatical properties (e.g. part-of-speech, grammatical gender) and context (or "sense") entries (e.g. identity groups/subgroups, connotation). Although we develop a lexicon, schema and methodology that works for multiple languages, we will focus on English in this paper. In total TIDAL has 1,419 English language head-form identity lexical entries, with over 13,709 related lexical forms and 15,270 context/sense entries.
#### 2.1.1 Schema.
Figure 1 shows the conceptual model of the TIDAL schema and Figure 2 shows a flattened tabular example of TIDAL data. We create an adapted UBY-LMF schema (Eckle-Kohler et al., 2012) which is based on the Lexical Markup Framework (LMF) standard (for Standardization, 2022) for representing NLP lexicons.
Our paper focuses on the following identity groups (IdentityGroup): race, nationality or ethnicity (RNE), sexual orientation, gender identity, gender expression and sex characteristics (SOGI-ESC) and Religion. We choose RNE as a collective category to be more inclusive since their constituent concepts of race, ancestry, nationality and ethnicity are inconsistent and sometimes redundant across cultures (Morning, 2008). We choose SOGI-ESC for similar reasons, instead of Gender Identity and Sexual Orientation, LGBT or SOGI (Trithart, 2021). Although multiple dimensions of connotation like social value, politeness or emotional association have been proposed in prior lexical work (Allaway and McKeown, 2021), our scope is limited to NEUTRAL and PEJORATIVE connotations. PEJORATIVE implies a term can be used to demean or disparage a group of people.
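To make the schema concrete, a simplified rendering as Python dataclasses is sketched below; the field and enum names are illustrative assumptions based on the description above, not the released TIDAL schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class IdentityGroup(Enum):
    RNE = "race, nationality or ethnicity"
    SOGIESC = "sexual orientation, gender identity/expression, sex characteristics"
    RELIGION = "religion"

class Connotation(Enum):
    NEUTRAL = "neutral"
    PEJORATIVE = "pejorative"

@dataclass
class Sense:
    identity_group: IdentityGroup
    subgroup: Optional[str] = None           # e.g. a specific religion or nationality
    connotation: Connotation = Connotation.NEUTRAL
    non_identity_usage: bool = False         # term also has a prevalent non-identity sense

@dataclass
class LexicalEntry:
    head: str                                # e.g. "black"
    part_of_speech: Optional[str] = None
    related_forms: List[str] = field(default_factory=list)  # variants, misspellings, person-noun combos
    senses: List[Sense] = field(default_factory=list)
```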
Table 1 shows a comparative analysis of TIDAL with known similar sources such as UnintendedBias (Dixon et al., 2018) used by Perspective API 2, and HolisticBias (Smith et al., 2022). Additional details of our data distribution can be found in Appendix A.3.
Footnote 2: [https://perspectiveapi.com/](https://perspectiveapi.com/)
#### 2.1.2 Sourcing.
We source the seed set of identity terms for our lexicon from the following public sources:
* **UNdata**(UNSD, 2003): "Population by national and/or ethnic group" and "Population by religion" tables from UNData are used to create RNE and Religion seed sets, respectively.
Figure 1: TIDAL: Conceptual model
Figure 2: TIDAL: Example, flattened tabular format.
* **CAMEO**(Gerner et al., 2002): We utilize the CAMEO coding framework, which contains approximately 1,500 religions and 650 ethnic groups.
* **GLAAD**: We leverage GLAAD glossary of LGBTQ and transgender terms (GLAAD) for SOGIESC seed sets.
* **HRC**: We use HRC glossary of words and meanings (HRC Foundation) for SOGIESC seed sets.
* **Wikipedia**: We leverage demonyms and adjectivals (Wikipedia contributors, 2023) list for RNE seed sets.
Appendix A.2 provides additional details on seed set data processing.
#### 2.1.3 Curation.
We expand the seed terms to their grammatical and morphological variants using linguistic experts and rule-based lexical expansion tools. Each resulting term is treated as a new lexical entry with reference to the head. Next we curate multiple pools of data contributors to corroborate, correct and expand our data. We leverage a human annotation platform to curate a diverse pool of linguistic experts and create tasks reflecting the following phases:
1. **Expansion**: expand seed terms to grammatical variants, common misspellings and person noun combinations.
2. **Contextualization**: research and associate all possible context for seed terms and expansions, including connotation and identity groups.
3. **Disambiguation**: research and associate context that can help distinguish identity and prevalent non-identity usage of the terms.
Contributors research public sources (such as dictionaries, encyclopedias, and other lexical sources) for unstructured context for identity terms. They also provide citations for the sources they use, their own beliefs about missing context or usage of a term not available in sources. Finally, we anonymize contributor personally-identifiable information before aggregating the assertions and ingesting the data into the lexicon database.
### Identity Annotation Tool
To scale the acquisition of identity labels, we build a configurable multi-label multi-class annotation tool that leverages our identity lexicon and lexical properties to label identity terms found in text.
#### 2.2.1 Annotator components.
We first preprocess text using spaCy Honnibal and Montani (2017) to tokenize and tag with part-of-speech labels, the dependency tree and morphological properties. We then match tokens with terms in the lexicon, using lemmas and variants. We disambiguate non-identity usage of terms with person-noun detection using i) a lexicon of person nouns from Wiktionary (Wiktionary contributors, 2021) and ii) the NLTK Bird et al. (2009) wordnet module to compare similarity with person identifiers like "person" and "people" and non-person identifiers like "object" and "thing". Additionally, spaCy linguistic features Honnibal and Montani (2017) are used for person-noun detection using named entities like "PERSON", "NORP", and "GPE". To disambiguate a potential identity term we use the dependency tree (with support for conjunctions) and part-of-speech tags to include tokens that modify person-nouns and exclude tokens that modify non-person nouns.
\begin{table}
\begin{tabular}{|l|c|c|c|}
\hline
 & HolisticBias & UnintendedBias & **TIDAL** \\ \hline
Supported Identity Groups & 14 & N/A & **3** \\ \hline
Head terms / lexical entries & 594 & 50 & **1565** \\ \hline
Variants and expansions & - & - & **14148** \\ \hline
Includes connotation context & & No & **Yes** \\ \hline
Includes identity groups/subgroups & Yes & No & **Yes** \\ \hline
Includes non-identity context & No & No & **Yes** \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison of TIDAL to other identity lexicon datasets.
Figure 3: Data flow and system components of the annotation tool, with examples.
Finally, we train a custom spaCy NER model. The output of the annotator includes identity groups, subgroups, connotation and possible non-identity usage. Figure 3 shows the annotation flow and example output. Additional design details are specified in Appendix B.1.
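The snippet below is a simplified sketch of this matching-plus-disambiguation logic, not the released tool: it assumes spaCy's `en_core_web_sm` model and NLTK's WordNet data are installed, and the similarity threshold and lexicon format are illustrative.

```python
import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")
PERSON = wn.synset("person.n.01")

def is_person_noun(token, threshold=0.3):
    """Heuristic: does this noun refer to a person? (NER labels plus WordNet path similarity)."""
    if token.ent_type_ in {"PERSON", "NORP"}:
        return True
    sims = [PERSON.path_similarity(s) or 0.0 for s in wn.synsets(token.lemma_, pos=wn.NOUN)]
    return bool(sims) and max(sims) >= threshold

def annotate(text, lexicon):
    """Return identity annotations for tokens found in `lexicon` (a dict: lemma -> sense context)."""
    annotations = []
    for token in nlp(text):
        sense = lexicon.get(token.lemma_.lower())
        if sense is None:
            continue
        # keep modifiers only when they attach to a person noun
        head = token.head
        if token.pos_ == "ADJ" and head.pos_ == "NOUN" and not is_person_noun(head):
            continue
        annotations.append((token.text, token.i, sense))
    return annotations

# e.g. annotate("She is a black belt in karate.", {"black": {"group": "RNE"}})
# returns no annotation because "belt" is not a person noun.
```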
## 3 Acquiring Identity Context at Scale
### Annotation Tool Performance
We measure the performance of our annotation techniques against human annotations available in the CivilComments dataset, and additionally validate performance consistency using the C4 dataset as a control. Our goal is to understand the effectiveness of techniques for a variety of downstream tasks, and whether performance can generalize to new datasets.
#### 3.1.1 Annotation techniques.
We implement substring matching as the baseline technique and configure multiple annotator variants using tokenizers: i) tokenize and match any occurrence in the lexicon, including all term forms and expansions; ii) tokenize and match occurrence of head terms only; iii) a variation of ii) that additionally disambiguates using a person-term lexicon; and iv) a variation of iii) that uses similarity-to-person-term disambiguation. We finally configure the custom NER model as a standalone annotator variant. Across all techniques, only annotations matching lexical entries in the dataset are considered valid. Figure 3 shows examples of annotation output.
#### 3.1.2 F1 scores.
All techniques outperform substring matching, with the custom NER model achieving the highest score of 91.92%, followed by lemma and exact matching (91.13%, 91.11%), as shown in Figure 4. Disambiguation filters result in increased false negatives that impact overall performance. RNE has the lowest performance trend among subgroups while Religion has the most similar performance across techniques. Additional performance details are provided in Appendix B.2.
### Human Annotation Impact
We assess the impact of assistive annotation in human annotation workflows used to acquire identity labels. In addition to time and cost improvements we seek to understand the quality and consistency of human annotations, including potential new biases.
#### 3.2.1 Methodology.
We sample 337 examples from the CivilComments dataset annotated in the previous experiment. This example dataset is balanced across groups and highlights the performance differences between annotator variants. We present these examples in a human computation task for contributors to first identify tokens associated with identity and then provide an appropriate IdentityGroup label (RNE, Religion or SOGIESC). From a pool of more than 1,000 human annotators, at least 5 annotators review each example. We run three variations of this human annotation task, i) the first with an example-only dataset as the baseline, and the others with assistive annotations: ii) using a token-matching annotator without disambiguation, and iii) using a token-matching annotator with disambiguation. We also request an optional satisfaction survey for each task where the human annotators are asked to rate "Ease of Job" and "Pay". We run the same set of experiments on the C4 dataset as a control. Detailed human annotation job design and guidelines can be found in Appendix B.3.
#### 3.2.2 Inter-annotator reliability (IAR).
Assistive annotations consistently improve the reliability of human annotations as seen in Figure 5. Token-matching achieves a Gwet's AC1 score of 0.7622, representing an 89.27% increase over the baseline, while additional disambiguation results in a score of 0.6257, a 55.37% increase. Our analysis finds similar improvement trends in percent agreement and Krippendorff's Alpha metrics. Additional results are available in Appendix B.4.
Figure 4: Multi-class F1 scores for the identity annotation tool on CivilComments.
#### 3.2.3 F1 scores.
Since IAR doesn't provide a per-class understanding of agreement and quality, we use micro-average F1 scores to understand performance across groups. We use the output of the baseline annotation task (example-only) as ground truth for this comparison. Token-matching achieves the highest overall score of 87.38%, while additional disambiguation performs better only for Religion, seen in Figure 6. Further analysis reveals tradeoffs between false positives and false negatives across the two annotation techniques. More details are in Appendix B.4.
#### 3.2.4 Velocity, cost and satisfaction scores.
We use the interquartile mean (IQM) of time taken for a human annotator to complete the tasks as a proxy for completion velocity. To understand cost, we count the total number of judgements required to meet the agreement threshold of 0.7. Lastly, the results from a task satisfaction survey inform task completion difficulty. Token-matching performs the best on velocity, taking 44.8% less time than the baseline. Both assistive annotation tasks have similar costs (24-27% better compared to the baseline). While we receive no data on satisfaction for token-matching, contributors find that assistive annotations with disambiguation make tasks 84.4% easier to perform and result in 43.4% better pay compared to the baseline task. Table 2 provides detailed per-task scores.
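For reference, the IQM used for the velocity metric is simply a 25% trimmed mean, which can be computed, for example, as:

```python
from scipy import stats

def interquartile_mean(times_seconds):
    """IQM of task completion times: the mean of the middle 50% of observations."""
    return stats.trim_mean(times_seconds, proportiontocut=0.25)
```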
## 4 Fairness Applications
Our experiments in this section explore opportunities to leverage our lexicon and annotation tool at various points in the ML fairness workflow, from data labeling to model training. We modify and augment existing techniques from the literature in ways that are only enabled by our work. Our goal is to improve overall effectiveness of fairness interventions and demonstrate that it can be done at scale.
### Assistive Context for Ground Truth Labeling
We explore data collection interventions by replicating the toxicity labeling human annotation task3 for the Perspective API. Figure 7 shows an example of the assistive annotations we provide during human computation to understand the impact of context on annotation quality.
Footnote 3: [https://github.com/conversational/conversational.github.io](https://github.com/conversational/conversational.github.io)
#### 4.1.1 Methodology.
We modify their human computation setup by excluding all sub-attributes except "Identity based attack", which we show only when the toxicity question is answered with "VERY TOXIC", "TOXIC"
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline & **Velocity** & **Cost** & **Ease of Job** & **Pay** \\ \hline & IQM Time (s) & Total Judgements & Scale 1:5 & Scale 1:5 \\ \hline Example-only (baseline) & 82.5 & 3623 & 2.5 & 3 \\ \hline Assistive annotations using token-matching & 45.5 & 1981 & - & - \\ \hline Assistive annotations using token-matching with disambiguation & 64 & 1905 & 4.15 & 4.3 \\ \hline \end{tabular}
\end{table}
Table 2: Velocity, cost and satisfaction results from human annotation tasks for identity labels
Figure 5: IAR (Gwet’s AC1) for human annotations: identity labeling on CivilComments.
Figure 6: Multi-class F1 scores for human annotations: identity labeling on CivilComments.
Figure 7: Example of identity context annotation in HCOMP toxicity labeling task.
or "NOT SURE". We sample 298 examples from the CivilComment dataset annotated in the previous experiment, only including examples where our annotations are an exact match with provided ground truth labels. This example dataset is balanced across groups and is representative of the performance differences between annotator variants. We run three variations of the human evaluation task, i) the first with an example-only dataset as the baseline, and the others with assistive identity context: ii) providing "IdentityGroup" annotations, and iii) providing "IdentityGroup" and "Connotation" annotations. From a pool of more than 1,300 human annotators, at least 10 annotators review each example. Detailed human annotation job design and guidelines are given in Appendix C.3.
#### 4.1.2 Inter-annotator reliability (IAR).
Assistive annotations consistently improve the reliability of human annotations, as seen in Figure 8. IdentityGroup+Connotation annotations achieve the highest AC1 score, with a 14.04% increase over the baseline, while IdentityGroup annotations achieve a 9.96% increase over the baseline. Krippendorff's Alpha scores show the lowest values due to class imbalance (85% of labels are toxic). Our agreement performance is consistent with prior work ((Ross et al., 2016) and (Wulczyn et al., 2017)), given the subjective nature of toxicity labeling. Additional results are in Appendix C.4.
### Counterfactual Logit Pairing
We replicate the experimental setting from the counterfactual logit pairing (CLP) guide4, and introduce additional counterfactual techniques enabled by our work to evaluate and mitigate classifier bias.
Footnote 4: [https://www.tensorflow.org/responsible_ai/model_remediation/counterfactual/guide/counterfactual_keras](https://www.tensorflow.org/responsible_ai/model_remediation/counterfactual/guide/counterfactual_keras)
#### 4.2.1 Counterfactual techniques.
We establish a baseline with token ablation using their keyword list. We implement two additional techniques: i) token ablation using subgroup annotations instead of keywords and ii) token replacement using least similar counterfactuals; a minimal sketch of annotation-based ablation follows below. We train CLP-remediated models for each technique and evaluate flips on the baseline test set. Additional details are in Appendix C.2.
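The sketch below illustrates the annotation-based ablation idea: every identity span found by the annotator is removed to form one counterfactual. The data format (character-offset spans) is an assumption for illustration, not the tool's actual output schema.

```python
def ablate_identity_spans(text, spans):
    """Build one counterfactual by deleting every annotated identity span.

    `spans` is assumed to be a list of (start, end) character offsets produced
    by the identity annotator for this text.
    """
    pieces, cursor = [], 0
    for start, end in sorted(spans):
        pieces.append(text[cursor:start])
        cursor = end
    pieces.append(text[cursor:])
    return " ".join("".join(pieces).split())  # collapse doubled whitespace

print(ablate_identity_spans("my korean friend is kind", [(3, 9)]))
```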
#### 4.2.2 Counterfactual flip rates.
The counterfactual flip rate diff metric measures the difference between the flip rate of a counterfactual-remediated model and that of the base model on the baseline counterfactual dataset. The results in Table 3 show that using annotations for ablation instead of a keyword list increases the coverage of terms, leading to consistently fewer counterfactual flips. We also observe that the counterfactual ablation technique performs better than replacement, since ablation creates only one counterfactual compared to the multiple counterfactuals generated by the replacement technique. Mitigating with counterfactual replacements requires generating multiple counterfactuals for better chances of success, as we observe in the next section. The CLP library also only supports generating one counterfactual, which limits the coverage of counterfactual evaluation and remediation.
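A minimal sketch of the flip rate computation is shown below; the toy predictions are placeholders, and the actual evaluation uses model predictions on the baseline counterfactual dataset.

```python
import numpy as np

def flip_rate(preds_on_originals, preds_on_counterfactuals):
    """Fraction of examples whose predicted label changes on the counterfactual."""
    a = np.asarray(preds_on_originals)
    b = np.asarray(preds_on_counterfactuals)
    return float(np.mean(a != b))

# Flip rate diff: remediated model's flip rate minus the base model's flip rate,
# both measured on the same baseline counterfactual dataset (toy numbers here).
base = flip_rate([1, 0, 1, 1], [1, 1, 1, 0])        # 0.5
remediated = flip_rate([1, 0, 1, 1], [1, 0, 1, 1])  # 0.0
print(remediated - base)                            # -0.5
```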
### Dataset Debiasing
We replicate the experimental setting from (Dixon et al., 2018) to evaluate dataset and model bias. We additionally augment their data augmentation techniques and introduce counterfactual generation to improve effectiveness of data debiasing and model remediation.
#### 4.3.1 Data debiasing techniques.
We use their keyword list as a baseline to understand toxicity rates, compute subgroup rate deficits
Figure 8: IAR for human annotations: toxicity labeling on CivilComments.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline & **Overall** & **Black** & **Homosexual** & **GenderIdentity** \\ \hline Keyword ablation (baseline) & 0.37\% & 0.27\% & -0.30\% & 0.32\% \\ \hline Annotation ablation & 0.08\% & -0.09\% & -0.74\% & 0.00\% \\ \hline Annotation replacement & 0.34\% & 0.36\% & -0.30\% & 0.26\% \\ \hline \end{tabular}
\end{table}
Table 3: Difference in counterfactual flip rates per technique on CivilComments compared to the original model.
and source non-toxic examples from Wikipedia article snippets for debiasing. We implement two additional techniques: i) sourcing using subgroup annotations instead of keywords and ii) generating five least similar counterfactual examples per label. We train a model per augmented dataset and evaluate classification performance on a templated synthetic dataset. Additional details can be found in Appendix C.1.
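The sketch below illustrates the rate-balancing bookkeeping behind this debiasing step: per-subgroup toxicity rates and the number of additional non-toxic examples needed to pull a subgroup's rate down to a target. Column names and the exact balancing rule are assumptions for illustration, not the cited paper's code.

```python
import pandas as pd

def toxicity_rates(df, subgroup_cols, label_col="toxic"):
    """Toxicity rate per subgroup, given boolean mention columns and a toxicity label."""
    return {g: float(df.loc[df[g], label_col].mean()) for g in subgroup_cols}

def nontoxic_needed(df, subgroup_cols, target_rate, label_col="toxic"):
    """Non-toxic examples to add per subgroup so its rate drops to `target_rate`."""
    needed = {}
    for g in subgroup_cols:
        n_tox = int((df[g] & df[label_col]).sum())
        n_all = int(df[g].sum())
        # Solve n_tox / (n_all + x) = target_rate for x, clipped at zero.
        needed[g] = max(0, round(n_tox / target_rate - n_all))
    return needed
```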
#### 4.3.2 Dataset toxicity rates and model AUC.
Annotation-driven data sourcing increases the coverage of terms compared to the keyword list, leading to more balanced toxicity rates across subgroups. Counterfactual augmentation increases per-label term diversity, resulting in the highest AUC scores and the most equality across subgroups in Figure 9. Toxicity rate balance from annotations translates to equality in model performance across subgroups, but with lower overall performance.
### Generative Model Bias
We replicate the experimental setting from Smith et al. (2022) to evaluate generative model bias, leveraging our lexicon to expand the coverage of bias detection.
#### 4.4.1 Dataset generation.
We create two datasets: i) a baseline dataset using the templates and lexicon from HolisticBias and ii) a new dataset using our lexicon with the same templates. We generate perplexity scores by running evaluations of the 90M-parameter BlenderBot model on both datasets.
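For illustration, the sketch below computes sentence perplexity with a generic causal language model from Hugging Face Transformers; the experiment instead evaluates the 90M-parameter BlenderBot model with its own tooling, so the model choice and API shown here are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice only; the experiment evaluates BlenderBot 90M instead.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def sentence_perplexity(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean token negative log-likelihood
    return float(torch.exp(loss))

print(sentence_perplexity("I have a friend who is a wheelchair user."))
```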
#### 4.4.2 Token likelihood bias.
Our lexicon's deeper coverage of terms reveals a broader bias in token likelihoods for RNE in Figure 10. SOGIESC and Religion have a much smaller vocabulary, as seen in Appendix A.3, and are thus less prone to coverage issues.
## 5 Conclusion
We create a new identity lexicon, TIDAL, and use it to develop an annotation tool for textual identity detection and augmentation. Through our experiments we demonstrate the effectiveness of our work in scaling and improving existing human annotation and fairness techniques.
When coupled with a comprehensive lexicon that includes term forms and expansions, token-matching emerges as the most practical annotation technique given its implementation simplicity and low computational cost. We note that a custom NER model results in computational speed gains, but requires training resources and ground truth annotations. We demonstrate improvements in human annotation reliability and cost, positioning our annotator as an assistive tool for acquiring identity labels from contributors.
To scale fairness in practice, we build on our work to advance techniques used throughout the machine learning workflow. We demonstrate how to increase reliability in human annotations of ground truth, uncover more bias in data than previously known and train more fair models using improved techniques. We find that our approaches can be leveraged across different notions of fairness, ML development stages and model types.
## 6 Limitations
Our current lexicon is limited in a number of ways due to the scope of the paper. We propose future work to increase the number of represented identity
Figure 10: Generative model perplexities on a synthetic dataset, with a max of 6000. Our lexicon shows an example of a previously missed term.
Figure 9: Model AUCs (triangles) and dataset toxicity rates (circles) per debiasing technique on a synthetic dataset. A tighter cluster pattern indicates less bias across subgroups.
groups and subgroups. The scope of terms can be expanded to include non-literal associative words (e.g. "temple" for Religion), compound phrases that imply an identity group (e.g. "same-sex marriage" for SOGIESC), and prevalent stereotypes (e.g. "kinky hair" for RNE), all the while considering intersectionality. Coverage of contextual dimensions (Appendix A.3) can be improved for balance across groups. Additional sense context can also be added to improve disambiguation, for example by integrating with other lexical-semantic datasets such as WordNet and Wiktionary Eckle-Kohler et al. (2012), as shown in Appendix A.1.
The token-based techniques presented here are limited due to the complexity of identity, contextual interpretation, and the fluidity of language. In addition to NLP, advanced knowledge-based approaches Agirre et al. (2014) need to be explored for disambiguated identity detection. Generative techniques like DataSynth5 hold a lot of promise for counterfactual generation. All of these require expanding the lexicon to include more "sense context" as mentioned above.
Footnote 5: [https://github.com/Tobiadefami/datasynth](https://github.com/Tobiadefami/datasynth)
Our results show that trade-offs are required in fairness depending on use case and type of bias, as techniques have different impacts in datasets and models Goldfarb-Tarrant et al. (2021). While our experiments use techniques independently, we propose future work to examine mixed-method approaches to improve guidelines for practical settings.
Finally, our goal is to incorporate sense context from many perspectives, however crowd-sourcing does not explicitly advance this goal. Contributor diversity, task sensitivity and a lack of benchmarks all impact representation and perceived quality. Future work on identity datasets should explore participatory data collection and governance models to empower groups to not only shape how they're represented, but also where and how their data is used.
## 7 Ethical Statement
During our research we encounter a variety of questions, including how to collect identity context data ethically, how assistive context could bias human annotations, and what the right compensation for those tasks should be.
We acknowledge that there are a lot more demographic categories and context than we choose to focus on in this paper. This means the work presented does not mitigate bias for everyone. Given our limited scope there is a high risk of misrepresentation and disenfranchisement especially of historically underrepresented groups.
We recommend caution when generalizing our findings to non-English languages or even across different cultures and groups given the subjectivity of identity assertions and toxicity labels.
### Wellness in Human Evaluation
Toxicity labeling has the side effect of exposing human annotators and researchers to toxic language, something we experience first-hand during our work. We only select contributors that accept explicit content (Appen, a) on the Appen platform.
We also leverage the Fair Pay plugin (Appen, c) to ensure that each contributor is fairly compensated based on their geographical location, with an extra 50% pay increase over the suggested baseline to account for task complexity.
|
2309.03748 | Enhancing Pipeline-Based Conversational Agents with Large Language
Models | The latest advancements in AI and deep learning have led to a breakthrough in
large language model (LLM)-based agents such as GPT-4. However, many commercial
conversational agent development tools are pipeline-based and have limitations
in holding a human-like conversation. This paper investigates the capabilities
of LLMs to enhance pipeline-based conversational agents during two phases: 1)
in the design and development phase and 2) during operations. In 1) LLMs can
aid in generating training data, extracting entities and synonyms,
localization, and persona design. In 2) LLMs can assist in contextualization,
intent classification to prevent conversational breakdown and handle
out-of-scope questions, auto-correcting utterances, rephrasing responses,
formulating disambiguation questions, summarization, and enabling closed
question-answering capabilities. We conducted informal experiments with GPT-4
in the private banking domain to demonstrate the scenarios above with a
practical example. Companies may be hesitant to replace their pipeline-based
agents with LLMs entirely due to privacy concerns and the need for deep
integration within their existing ecosystems. A hybrid approach in which LLMs'
are integrated into the pipeline-based agents allows them to save time and
costs of building and running agents by capitalizing on the capabilities of
LLMs while retaining the integration and privacy safeguards of their existing
systems. | Mina Foosherian, Hendrik Purwins, Purna Rathnayake, Touhidul Alam, Rui Teimao, Klaus-Dieter Thoben | 2023-09-07T14:43:17Z | http://arxiv.org/abs/2309.03748v1 | # Enhancing Pipeline-Based Conversational Agents with Large Language Models
###### Abstract
1 The latest advancements in AI and deep learning have led to a breakthrough in large language model (LLM)-based agents such as GPT-4. However, many commercial conversational agent development tools are pipeline-based and have limitations in holding a human-like conversation. This paper investigates the capabilities of LLMs to enhance pipeline-based conversational agents during two phases: 1) in the design and development phase and 2) during operations. In 1) LLMs can aid in generating training data, extracting entities and synonyms, localization, and persona design. In 2) LLMs can assist in contextualization, intent classification to prevent conversational breakdown and handle out-of-scope questions, auto-correcting utterances, rephrasing responses, formulating disambiguation questions, summarization, and enabling closed question-answering capabilities. We conducted informal experiments with GPT-4 in the private banking domain to demonstrate the scenarios above with a practical example. Companies may be hesitant to replace their pipeline-based agents with LLMs entirely due to privacy concerns and the need for deep integration within their existing ecosystems. A hybrid approach in which LLMs' are integrated into the pipeline-based agents allows them to save time and costs of building and running agents by capitalizing on the capabilities of LLMs while retaining the integration and privacy safeguards of their existing systems.
Footnote 1: The fifth author R.T. was at Accenture during the time of the research. This paper has been accepted at the TamingLLMs Workshop at SigDial 23 in Prague, Sept. 12th 2023.
## 1 Introduction
The field of conversational artificial intelligence (CAI) has experienced significant advances in recent years, with the emergence of both commercial and open-source CAI development platforms such as Google Dialogflow, Amazon's Alexa Skills Kit, Cognigy, and Rasa, as well as the more recent large language model (LLM)-based conversational agents (CA) like ChatGPT.
CAs can be text-based agents (chatbots), voice user interfaces (VUIs), or embodied dialog agents (EDAs) (Harms et al., 2019), and generally aim to replace or empower humans through natural language interaction.
CAs can be pipeline-based or end-to-end (Chen et al., 2017). In pipeline-based CAs, the natural language understanding (NLU) component processes the user's message sequentially to identify their goal (intent recognition) and to extract pieces of information called entities. The dialog management component tracks the dialog state and decides on the next action based on the current state. Finally, the natural language generation (NLG) component builds and returns the response. The CA's "intelligence" relies on the agent's training data and the internal logic used to create its NLU and dialog management models (Harms et al., 2019).
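As a purely schematic illustration of this pipeline (not any specific framework's API), a toy NLU, a handcrafted dialog rule, and template-based NLG could be chained as follows:

```python
def nlu(utterance):
    """Toy intent recognition and entity extraction."""
    text = utterance.lower()
    if "cancel" in text and "account" in text:
        return {"intent": "cancel_account", "entities": {}}
    return {"intent": "out_of_scope", "entities": {}}

def dialog_manager(nlu_result):
    """Handcrafted rule mapping the recognized intent to the next action."""
    return "ask_confirmation" if nlu_result["intent"] == "cancel_account" else "fallback"

TEMPLATES = {  # template-based NLG
    "ask_confirmation": "I can help you close your account. Shall I proceed?",
    "fallback": "Sorry, I didn't understand that. Could you rephrase?",
}

def respond(utterance):
    return TEMPLATES[dialog_manager(nlu(utterance))]

print(respond("I want to cancel my account"))
```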
End-to-end CAs rely on dialog models trained with large training datasets (Chen et al., 2017). These models learn hidden relations between input and output utterances, removing the need for developers to create interim representations (Dinan et al., 2021). A downside is that the need for larger datasets makes end-to-end approaches less applicable in domains such as manufacturing, where developers cannot derive training data from existing human-human conversations. End-to-end CAs also bear substantial safety issues, such as generating offensive language and responding inappropriately to offensive content or in safety-critical situations (Dinan et al., 2021). Combinations of pipeline-based and end-to-end approaches are also feasible; Rasa Open Source, for instance, already supports both (Rasa, 2023b, a).
This article uses the term LLM to refer to language models trained with an end-to-end approach on a large amount of high-quality training data. Prominent LLMs comprise GPT-4 by OpenAI, PaLM by Google, and LLaMA by Meta AI. Such models can, for instance, possess emergent abilities and be hard to steer, and humans will likely have difficulty interpreting how they work (Bowman, 2023).
In this article, we demonstrate how LLMs can expand the capabilities of pipeline-based CAs without removing the pipeline altogether. LLMs can help pipeline-based CAs by generating training data for intent classification, identifying domain-specific entities and synonyms, characterizing requirements for the agent, and supporting its personalization and localization, among other tasks. During deployment, LLMs can auto-correct user input, handle context switching and out-of-scope questions, introduce response variability, summarize conversations, and perform closed question answering (Q&A). 2
Footnote 2: Please refer to Section 5 for a disclaimer.
## 2 The State of Conversational Agents
Broadly, CAs can be categorized into two main categories based on the design methodology employed: Pipeline methods and End-to-end methods (Chen et al., 2017). Agents that are developed using conversational AI platforms (task-oriented CAs), such as Rasa, Google Dialogflow, Cognigy, and IBM Watson, fall into the first category. LLM-based CAs such as ChatGPT can be identified as CAs belonging to the second category. While explicit architectural components can be identified in the pipeline-based CAs, such clear distinctions cannot be identified in end-to-end CAs.
### Pipeline-Based Conversational Agents
#### 2.1.1 Architecture
In the case of task-oriented CAs, the components that can be explicitly identified are NLU, dialog management, and NLG. A typical architecture of such a CA is shown in Figure 1. For NLU and NLG, pipeline-based CAs traditionally use machine learning-based and template-based approaches, respectively. The dialog management component can be handcrafted, probabilistic, or hybrid. Most commercial frameworks and low-code platforms for creating task-oriented CAs, such as Google Dialogflow, Cognigy, and IBM Watson, are pipeline-based and use handcrafted rules for dialog management, which makes them more reliable but less human-like. CAs using the probabilistic approach, such as ChatterBot, which are often used for open-domain CAs, show the opposite behavior. Among the different platforms, Rasa uses a hybrid approach for the dialog management component (Harms et al., 2019).
#### 2.1.2 Limitations
Conversational breakdown is a common issue during a conversation with a pipeline-based CA, indicating that the agent did not correctly understand the user's utterance or responded inadequately to the user's request (Moore and Arar, 2019; Folstad and Taylor, 2020). Conversational breakdowns can
lead to frustration, disappointment, and dissatisfaction (Bentley et al., 2018; Cowan et al., 2017; Luger and Sellen, 2016) if left unaddressed. In pipeline-based CAs, these breakdowns occur for various reasons, such as errors during intent and slot recognition, errors during task fulfillment, errors in generating the response, and users' lack of familiarity with a chatbot's intents (Li et al., 2020). In addition to conversational breakdowns, most of the commercial CAs cannot handle complex queries, lack emotional intelligence, and have limited domain knowledge (Luo et al., 2022).
Pipeline-based CAs are also limited in terms of the effort required to configure them and in how they operate in real-time conversations. In terms of configuration, intent classes, domain entities, and synonym lists need to be created a priori, which requires a certain depth of domain knowledge to come up with suitable notions. The agent's personality and the power dynamics between agent and user must be defined and expressed by manually creating individual utterances for the bot. Localization to different language varieties requires a significant amount of rework, in particular for scarcely supported dialects.
### Large Language Models
The advancement of language models (LMs) has driven significant progress in NLP. In general, an LM aims to predict the next word of a sentence given the current context. The concept of the LM has evolved in several stages: from statistical LMs (Jelinek, 1998), which predict the next word under a Markov assumption, to distributed word representations such as Word2Vec (Mikolov et al., 2013), which extended language modeling beyond plain word sequences. Context-aware pre-trained language models (Peters et al., 2018; Devlin et al., 2018) were early examples of the modern paradigm of fine-tuning a pre-trained model on a downstream task, and they raised performance on many NLP tasks. One of these models, BERT (Devlin et al., 2018), is based on a parallelizable Transformer network (Vaswani et al., 2023) with a self-attention mechanism, which began a new era for language models. With the scaling of model architectures and training data, many LMs have emerged that are called large language models, or LLMs (e.g., the Generative Pretrained Transformer or GPT series and the Pathways Language Model or PaLM series) (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023; Anil et al., 2023). One of their prominent differences is emergent abilities (Wei et al., 2022): these models can perform a series of complex tasks given a specific prompt in a zero-shot or few-shot learning setting. Recent NLP models are based on LLMs, and one of the most prominent LLM-based conversational agents has been ChatGPT. It is based on instruction-tuned GPT models (InstructGPT), fine-tuned with reinforcement learning from human feedback on dialogue data (Ouyang et al., 2022). An early experiment (Bubeck et al., 2023) with OpenAI's latest release, the GPT-4 model, has shown the potential capabilities of LLMs in different domains, suggesting that GPT-4's performance is strikingly close to the human-intelligence level and goes far beyond next-word prediction. With the emergence of these LLMs as human-like conversational agents, the evolution of chatbots has advanced to a different level.
#### 2.2.1 LLM-based Conversational Agents
LLM-based CAs like ChatGPT are trained in a similar way to InstructGPT, but specifically optimized for dialogue. Human-generated dialogue data is collected, with annotators playing both the human and the AI role. The training follows a three-step process: first, a dataset of human-written demonstrations of the desired model behavior is collected and used to fine-tune GPT-3 with supervised learning. Next, a dataset of rankings of model outputs is collected and used to further fine-tune the supervised model with reinforcement learning from human feedback. Finally, the models are evaluated by having human labelers rate the quality of model outputs on a test set. This methodology allows the authors to train language models that are better aligned with
Figure 1: Architecture of a pipeline-based CA. Based on (Harms et al., 2019; Brabra et al., 2022)
user intent and support more natural interaction through the inclusion of human-generated data. Recent studies (Longpre et al., 2023) show the effectiveness of instruction tuning on different LLMs for improving performance in different prompt settings (zero-shot and few-shot). One recent LLM, Alpaca (Taori et al., 2023), which builds on self-instruct methods (Wang et al., 2022) applied to the Llama model (Touvron et al., 2023), shows that a 7B-parameter LLM has high potential to compete with larger GPT-like models.
#### 2.2.2 Limitations and Risks
Despite the many benefits of LLMs, they have several limitations (OpenAI, 2023). For instance, their responses are not reliable (they "hallucinate") (Bang et al., 2023; Zhao et al., 2023). Models like ChatGPT can still produce plausible-sounding but nonsensical responses when viewed against the common knowledge of a particular area (Alkaissi and Mcfarlane, 2023). LLMs also have long training times and require huge computational resources; thus, they cannot easily be kept up to date with knowledge of recent events. They do not learn from experience, as their context window is limited. In certain task-oriented domains, for example cybersecurity, the models cannot assess situations properly due to these context limitations. There are also risks regarding the output of LLM-based models, as it could contain harmful advice, buggy code, or inaccurate information. Like other deep learning models, LLM-based models (Brown et al., 2020) are difficult to interpret due to their complex architecture. Their ability to make accurate predictions on new inputs also cannot be relied upon, as evidenced by their much higher performance variance than humans on standard benchmarks.
### Integrating LLM into Pipeline-based Conversational Agents
As of May 2023, we have found that the involvement of LLMs in pipeline-based CA platforms is mainly limited to NLU and training data generation. For example, Cognigy (Cognigy, 2023), with the help of a third-party generative AI provider, allows users to generate training data, including intent utterances, lexicons, and flows with pre-configured nodes, as well as to rephrase bot outputs and complete texts. Even though Cognigy offers a conversation option using generative AI, it is only intended to be used as a preview feature. In another case, Rasa recently announced the integration of LLMs in their chatbot framework with a new component called IntentlessPolicy (Rasa, 2023). They explain a) how an LLM-based system can take advantage of multiple FAQs without setting up intents for each question, b) how user meaning can be understood across multiple turns of dialogue, and c) how out-of-scope messages can be understood from the context. They also show that this can be generalized from very little data in a few-shot learning mechanism. They further emphasize that IntentlessPolicy complements intents, rules, stories, and forms. This hybrid approach better equips the agent for engaging interaction with the user.
## 3 LLMs to Overcome Limitations of Pipeline-based CAs
Despite the various frameworks for building pipeline-based CAs, it still requires substantial time and expertise to design and develop successful CAs. Related tasks concern the design of high-quality training utterances, the definition of intents and of consistent and accurate named entities, the selection of domain-specific synonyms, and the localization of training data and responses. In addition, designers must modulate, for instance, training data, dialog management rules, and pre-defined responses to represent desired assistant traits (e.g., client orientation) or personas. We assume that the strengths of LLMs in processing natural language from different countries and domains can substantially shorten at least the time and potentially also the expertise needed to build pipeline-based CAs. Their capability to generate responses matching the style of a generated persona or mimicking an actual person's style could provide new techniques to create attractive CAs.
A second area for improvement is the robustness of a CA at run-time, i.e., when it interacts with a user. Often, pipeline-based CAs produce repetitive responses (robust but less attractive) or experience conversational breakdown because users switch contexts (not robust). In addition, the narrow domain knowledge of pipeline-based CAs provokes out-of-scope answers due to smaller training data and limited responses. All of the situations above lower the user's satisfaction and could encourage them to give up on the agent. We assume that LLMs' extensive general and domain knowledge, coupled with their capability of generating attractive and diverse natural language texts, has the potential to achieve more robust and attractive CAs.
We conclude that LLMs have the capability to enhance pipeline-based CAs during the design and development phase (_delivery accelerator_) and during a dialogue with a user (_real-time booster_). In contrast to relying on LLMs only, this hybrid approach is helpful because the pipeline-based approach grants the CA designer more control and transparency over the agent's behavior. The former is critical to counter, for instance, hallucinations, while the latter helps trace and potentially explain unexpected or unwanted behavior.
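One schematic way to realize such a hybrid is to let the pipeline handle confident, in-scope turns and delegate the rest to an LLM; the sketch below is an assumption about how this routing could look, not a description of any specific platform.

```python
CONFIDENCE_THRESHOLD = 0.7  # illustrative value

def handle_turn(utterance, pipeline_nlu, pipeline_respond, llm_respond):
    """Route a turn: controlled pipeline path first, LLM fallback otherwise."""
    intent, confidence = pipeline_nlu(utterance)
    if confidence >= CONFIDENCE_THRESHOLD and intent != "out_of_scope":
        return pipeline_respond(intent, utterance)  # transparent, template-based path
    return llm_respond(utterance)  # LLM boosts robustness on breakdowns and out-of-scope turns
```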
## 4 GPT-4 Experiments
To demonstrate the impact of LLMs on CAs, we conducted a series of experiments with GPT-4. The example scenario is a chat agent serving as a client advisor for private banking. A supporting document contains the exact prompts and replies in the conducted experiments.
**Parameters.** We used the Azure OpenAI playground with the default parameters for our experiments: Max Response: 800, Temperature: 0.7, Top P: 0.95, Frequency penalty: 0, Presence penalty: 0, Deployment: GPT-4, Past messages included: 10, Max tokens: 8192. The temperature value of 0.7 means that generated responses are not deterministic, i.e., the exact response may vary during reproduction. To keep this article short, we sometimes shorten the actual prompts and answers by inserting an ellipsis.
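For readers who want to reproduce these settings programmatically rather than in the playground, a hedged sketch using the pre-1.0 openai Python client is shown below; the endpoint, key, API version, deployment name, and prompt are placeholders, not values from this work.

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"                             # placeholder
openai.api_key = "YOUR-KEY"                                   # placeholder

response = openai.ChatCompletion.create(
    engine="gpt-4",  # Azure deployment name (placeholder)
    messages=[{"role": "user", "content": "Give me a list of 10 banking intents."}],
    temperature=0.7,
    top_p=0.95,
    max_tokens=800,  # "Max Response" in the playground
    frequency_penalty=0,
    presence_penalty=0,
)
print(response["choices"][0]["message"]["content"])
```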
### LLM as Delivery Accelerator
LLM, as a delivery accelerator, involves scenarios to assist developers and designers in building and refining the CA. This can include generating training data, creating lists of entities and synonyms, designing personas to guide the agent's responses, and localizing the agent for different languages and cultures. These tasks can be time-consuming and require significant expertise, so automating them with generative models can save developers time and resources. In Table 1, we show examples of how LLM can be used in the cases mentioned above for our scenario. The following sections review each development aspect's limitations in pipeline-based CAs and demonstrate how LLMs could address them.
**Creating intents lists.** One of the initial steps in designing a pipeline-based CA is to define and identify possible user inquiries or intents. To create a comprehensive list of intents, designers require approaches such as analyzing existing data, sessions with domain experts, and user research. However, LLMs can provide valuable assistance to the designers to gain general insight. We test GPT-4's ability to identify customer intents within a specific industry. We provided the following prompt:
For designing a chatbot, give me a list of 10 most prominent intents in a conversation about banking between a client and an agent.
The first five results provided by GPT-4 (omitting the explanations):
1. Check account balance...
2. View recent transactions...
3. Transfer funds between accounts...
4. Pay a bill or set up recurring payments...
5. Update personal information...
We observe that all of these are common consumer banking interactions that can trigger contact with banking customer service.
**Generating training utterances for intent classification.** Writing high-quality training data is a time-consuming task. GPT-4 is capable of helping CA designers with this matter. In A.1, we provide ten examples generated by GPT-4. We observe that all generated examples are valid for the intent cancel_account with various phrasings. However, further prompt tuning would be required to increase variety in length and formality. Generating training data using LLMs and incorporating those into the training pipeline would still require human feedback in the loop to avoid incorrect, biased, or inappropriate training data.
**Creating lists of named entities.** We test GPT-4's ability to identify entities relevant to a specific industry. We provided GPT-4 with the following prompt:
For designing a chatbot in the finance domain, give me a list of relevant named entities that the NLP back-end of the chatbot should be able to extract.
GPT-4 returned the following entities (with explanations omitted here):
account numbers, transaction amounts, dates, currency, transaction IDs, percentage rate, financial product names, contact details, company names, bill or invoice numbers, tax-related terms, account types, payment methods, financial goals and financial institutions.
We observe that these are useful terms, although there is an overlap between companies and financial institutions.
**Creating synonyms lists.** We test GPT-4's ability to create synonyms lists for populating entity definitions. We provided the following prompt:
For designing a chatbot in the domain of private banking, give me a synonym list for the word "insolvent".
GPT-4 returned these 13 terms:
Bankrupt, Impoverished, Pennilless, Financially ruined, Broke, Indigent, Destitute, Impecunious, In default, In debt, Insufficient funds, Unable to pay debts, Financially distressed.
While some of these results are unlikely to be used in natural language, all of them are similar in meaning to the term "insolvent". Additionally, the list is not exhaustive, lacking slang terms, e.g., "skint".
**Traits of good agents.** We test GPT-4's ability to identify traits associated with a particular role, an important element of chatbot personality design. We provided the following prompt:
Describe the traits of a good client advisor in private banking in max. 100 words.

GPT-4 responded:
A good client advisor in private banking possesses strong financial knowledge, excellent communication and interpersonal skills, and a deep understanding of client's needs and goals. They maintain high ethical standards...
We observe that those are indeed characteristics of a good agent (for details see A.2).
**Persona-based responses.** We show examples of how GPT-4 can be used to personalize responses according to a persona, the linguistic and societal background of the client. We provided the following prompt:
Write how a chatbot would respond to each of the following scenarios, when talking to a highly educated upper class British client. Use no more than 3 sentences:
GPT-4 responded:
1. I regret to inform you that the product you've inquired about is no longer available. My sincerest apologies for any inconvenience this may have caused. Should you require any assistance in finding an alternative, please do not hesitate to ask.
We observe that GPT-4 is able to answer this question _in character_, including empathy for the client, apologies and polite pointing in alternative directions.
In a second variation of this experiment, we asked GPT-4 to write chatbot statements in grammatically correct yet simple English. GPT-4 responded:
1. I'm sorry, but the product you are looking for is no longer available.
The example demonstrates how the language can be adapted by context, which in practice could come from the user data for personalization or from the owner's brand for consistency in customer communication.
**Localization.** We test GPT-4's ability to localize agents to different language and dialect varieties of a language (for details see A.3). We provided the following prompt:
Translate these three statements into German, Swiss German, Spanish, and French.
\begin{table}
\begin{tabular}{l l} \hline
**Benefit** & **Example** \\ \hline Creating intents list & check account balance, transfer funds, pay bill \\ Generating training utterances for intent classification & Provide a list of different wordings with the intent of canceling an account \\ Creating named entities & account numbers, transaction amounts, date, currency \\ Creating synonyms lists & insolvent, bankrupt, penniless, broke \\ Traits of good agents & financial knowledge, interpersonal skills, client orientation, ethical standards \\ Persona-based responses & highly educated client, simple English \\ Localization & German, Swiss German, Spanish, French \\ \hline \end{tabular}
\end{table}
Table 1: Examples of LLMs as a delivery accelerator. A pipeline-based CA for private banking is configured with the help of GPT-4.
1. I regret to inform you that the product is no longer available....
GPT-4 responded:
German: 1. Es tut mir leid, Ihnen mitteilen zu müssen, dass das Produkt nicht mehr verfügbar ist....
Swiss German: 4. Est tuer leid, Ihne mitz'teile, dass s'Produkt nümme verfügbar isch....
Spanish: 7. Lamento informarle que el producto ya no está disponible....
French: 10. Je regrette de vous informer que le produit n'est plus disponible....
We observe that the translations are of high quality in all four languages. Even for Swiss German, a German dialect, the translation is good. This is astonishing because Swiss German does not have a standardized written format for all its varieties.
### LLM as Real-time Booster
LLM, as a real-time booster, involves improving the agent's ability to understand and respond to user input in real-time conversations. This includes contextualizing the conversation to provide more relevant and personalized responses, intent classification and response generation to handle fallback and unhappy paths when the agent is unable to understand or fulfill the user's request, and disambiguating ambiguous requests by asking for more information. Additionally, generative models can assist with integrating closed Q&A and providing quick and accurate answers to common questions, among others. In Table 2, we provide examples of how LLM can improve the above-mentioned aspects in pipeline-based CAs.
**Auto-correction.** We test GPT-4's ability to correct low-literacy utterances. We provided the following examples of phrases which were then auto-corrected by GPT-4. The prompt starts with:
Please rephrase the following 5 utterances into orthographically and grammatically correct American English...
GPT-4 responded:
a) wunt to cancel this account \(\rightarrow\) I want to cancel this account
b) i want 2 get rid of my acount \(\rightarrow\) I would like to delete my account
c) goodbye to my acount \(\rightarrow\) I am closing my account
We observe that orthographic errors are corrected (a), politeness is introduced (b), and utterances are re-phrased using clear English (c).
**Context Switching.** We test GPT-4's ability to handle a user that switches between two intents (address change and money transfer) before providing enough information to complete the first task:
User: Hi, I would like to transfer money. My bank account is 334402.
Chatbot: Hello!... Please provide the following information:
1. The recipient's bank account number.
2. The amount you would like to transfer.
Once I have this information, I can proceed with the transfer.
User: BTW, I also need to change my address: It's Park Avenue 14.
Chatbot: Thank you for providing your new address. To complete the address change, please also provide the following information:
1. City
2. Postal Code...
User: I live in 10012 New York.
Chatbot: Great, thank you for providing the city and postal code for your address change..... Now, let's get back to the money transfer request. Please provide:
1. The recipient's bank account number.
2. The amount you would like to transfer....
User: I want to transfer 400 Dollars to account number 831123
Chatbot: Thank you for providing the necessary information.
After receiving sufficient information about the address change, the bot gets back to get the complete information for the money transfer.
**Out-of-scope Questions.** We added an out-of-scope question to the conversation in the previous paragraph:
User: Where is Germany?
Chatbot: Germany is a country located in Central Europe....
We observe that the bot gives a sensible answer when appending a random general knowledge question to the conversation in the previous paragraph. Staying in context, the agent will then ask the user for a final confirmation:
Chatbot: Before we proceed with the money transfer and address change, please confirm if the provided details are correct...
**Creating stylistic variability.** We test GPT-4's potential to introduce variability in English writing style. We provided the following prompt:
For a chatbot, write 10 variations each one more apologetic than the previous one, of the statement: _,I didn't understand what you said, please rephrase." Vary in vocabulary, grammar and tone...
GPT-4 responded (examples 1, 4, and 9):
1. Sorry, I didn't quite get that. Could you rephrase your statement, please?...
4. Apologies for the confusion, I'm unable to grasp what you're saying. Kindly rephrase your statement....
9. My most profound apologies for not comprehending your statement. I would be grateful if you could rephrase it for me.

We observe that the generations are of great stylistic variability and that a controlled degree of servitude is introduced into the utterances (see A.4 for more details).
**Closed Q&A.** We test GPT-4's capability to avoid hallucinations in closed Q&A by only providing exact predefined answers that are not altered (see A.5 for details). When testing the system with informally articulated questions, we got five correct answers in five trials.
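One way such closed Q&A can be realized is to let the model only select the index of the best-matching predefined question and then return the stored answer verbatim. The sketch below is an illustration under that assumption; its prompt wording and FAQ entries are hypothetical and not taken from this work's Appendix A.5.

```python
FAQ = [  # hypothetical predefined question/answer pairs
    ("How do I reset my online banking password?",
     "Use the 'Forgot password' link on the login page."),
    ("What are your opening hours?",
     "Our branches are open Monday to Friday, 9am to 5pm."),
]

def closed_qa(user_question, ask_llm):
    """Return a predefined answer verbatim; the LLM only picks the matching index."""
    menu = "\n".join(f"{i}: {q}" for i, (q, _) in enumerate(FAQ))
    prompt = (f"User question: '{user_question}'.\nPredefined questions:\n{menu}\n"
              "Reply with the number of the closest match, or -1 if none fits.")
    try:
        idx = int(ask_llm(prompt).strip())
    except ValueError:
        idx = -1
    return FAQ[idx][1] if 0 <= idx < len(FAQ) else "I don't have an answer for that."
```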
**Summarizing conversation.** We test GPT-4's capability to summarize a conversation between a chatbot and a user and state what the agent picking up the conversation needs to do (see A.6 for details). Summarizing for a CA is particularly useful when requesting confirmation from the client before concluding a conversation or handing over the conversation to a human operator. We observe that the model could deliver the response in the requested format.
## 5 Outlook
This paper proposes a hybrid approach that leverages LLMs, in particular GPT-4, to enhance pipeline-based CAs. Using this approach, maintainers of existing CAs can adopt new domains and overcome the limitations in conversations with users while ensuring seamless integration with the existing ecosystem. This approach accelerates the CA delivery process through the assistance of LLMs in generating intents, entities, synonyms, respective training data, and agent personality traits. During deployment, LLMs can boost pipeline-based CAs' performance by utilizing auto-correct, context-switching capabilities, answering out-of-scope questions, creating diverse and stylistically richer responses, and incorporating Closed Q&A and summarization. This paper presented experiments to showcase the scenarios mentioned above.
In future work, we will extend the ad-hoc subjective assessment to a more rigorous evaluation among different LLMs and provide an integrated solution to demonstrate the proposed hybrid approach. Given the existing risks regarding the reliability of LLMs (OpenAI, 2023; Bang et al., 2023; Zhao et al., 2023), our future research will focus on examining the factors that prompt business owners to consider the integration of LLMs within their pipeline-based CAs.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Benefit** & **Explanation** \\ \hline Auto-correct & Correct / rephrase an orthographically and grammatically incorrect utterance to make it more easily classifiable by the bot, e.g., "wunt to cancel this account" \(\rightarrow\) "I want to cancel this account" \\ Context switching & follows the user in switching back and forth between different intents like address change and money transfer \\ Out-of-scope questions & can be answered when regarding general knowledge \\ Creating stylistic variability & utterances can be rephrased, achieving a better writing style while maintaining the same meaning \\ Closed Q\&A & exact formulation of answer is picked from a defined set of options \\ Summarizing conversation & summarization for hand-over to a human agent \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples of LLMs as a real-time booster. A pipeline-based CA can be enhanced during deployment in various ways by GPT-4 overcoming its limitations.
## Acknowledgements
This work was supported by the European Union's Horizon 2020 research and innovation program via the project COALA "COgnitive Assisted agile manufacturing for a LAbor force supported by trustworthy Artificial Intelligence" (Grant agreement 957296). In addition, this work was supported by REasoning for Conversation and Information Technology Exchange (RECITE) project which is an OASIS Open Project dedicated to developing a standard for dialog modeling in conversational agents.
## Disclaimer
This document is intended for general informational purposes only and does not take into account the reader's specific circumstances, and may not reflect the most current developments. In particular, this research paper does not take into account the specific needs of an IT ecosystem and network, which may vary and require unique action. The reader should independently assess their specific needs in deciding to use any of the tools mentioned. Google Dialogflow, Cognigy, Rasa, etc. tools are not Accenture tools. Accenture does not make any representation that it has vetted or otherwise endorses these tools. Accenture disclaims any liability for their use, effectiveness or any disruption or loss arising from the use of these tools. Accenture disclaims, to the fullest extent permitted by applicable law, any and all liability for the accuracy and completeness of the information in this paper and for any acts or omissions made based on such information. Accenture does not provide legal, regulatory, audit, or tax advice. Readers are responsible for obtaining such advice from their own legal counsel or other licensed professionals.
|
2309.14234 | Mitigating Worst-Case Exozodiacal Dust Structure in High-contrast Images
of Earth-like Exoplanets | Detecting Earth-like exoplanets in direct images of nearby Sun-like systems
brings a unique set of challenges that must be addressed in the early phases of
designing a space-based direct imaging mission. In particular, these systems
may contain exozodiacal dust, which is expected to be the dominant source of
astrophysical noise. Previous work has shown that it may be feasible to
subtract smooth, symmetric dust from observations; however, we do not expect
exozodiacal dust to be perfectly smooth. Exozodiacal dust can be trapped into
mean motion resonances with planetary bodies, producing large-scale structures
that orbit in lock with the planet. This dust can obscure the planet,
complicate noise estimation, or be mistaken for a planetary body. Our ability
to subtract these structures from high-contrast images of Earth-like exoplanets
is not well understood. In this work, we investigate exozodi mitigation for
Earth--Sun-like systems with significant mean motion resonant disk structures.
We find that applying a simple high-pass filter allows us to remove structured
exozodi to the Poisson noise limit for systems with inclinations $< 60^\circ$
and up to 100 zodis. However, subtracting exozodiacal disk structures from
edge-on systems may be challenging, except for cases with densities $<5$ zodis.
For systems with three times the dust of the Solar System, which is the median
of the best fit to survey data in the habitable zones of nearby Sun-like stars,
this method shows promising results for mitigating exozodiacal dust in future
HWO observations, even if the dust exhibits significant mean-motion resonance
structure. | Miles H. Currie, Christopher C. Stark, Jens Kammerer, Roser Juanola-Parramon, Victoria S. Meadows | 2023-09-25T15:46:47Z | http://arxiv.org/abs/2309.14234v1 | # Mitigating Worst-Case Exozodiacal Dust Structure in High-contrast Images of Earth-like Exoplanets
###### Abstract
Detecting Earth-like exoplanets in direct images of nearby Sun-like systems brings a unique set of challenges that must be addressed in the early phases of designing a space-based direct imaging mission. In particular, these systems may contain exozodiacal dust, which is expected to be the dominant source of astrophysical noise. Previous work has shown that it may be feasible to subtract smooth, symmetric dust from observations; however, we do not expect exozodiacal dust to be perfectly smooth. Exozodiacal dust can be trapped into mean motion resonances with planetary bodies, producing large-scale structures that orbit in lock with the planet. This dust can obscure the planet, complicate noise estimation, or be mistaken for a planetary body. Our ability to subtract these structures from high-contrast images of Earth-like exoplanets is not well understood. In this work, we investigate exozodi mitigation for Earth-Sun-like systems with significant mean motion resonant disk structures. We find that applying a simple high-pass filter allows us to remove structured exozodi to the Poisson noise limit for systems with inclinations \(<60^{\circ}\) and up to 100 zodis. However, subtracting exozodiacal disk structures from edge-on systems may be challenging, except for cases with densities \(<5\) zodis. For systems with three times the dust of the Solar System, which is the median of the best fit to survey data in the habitable zones of nearby Sun-like stars, this method shows promising results for mitigating exozodiacal dust in future HWO observations, even if the dust exhibits significant mean-motion resonance structure.
Exozodiacal dust, Direct imaging, Habitable zone, Coronagraphic imaging, Extrasolar rocky planets
Footnote †: journal: AJ
## 1 Introduction
Stars are not solitary objects; they host a complex system that may include a variety of planets, comets, asteroids, and a sea of small debris generated from larger bodies known as an exozodiacal disk. As we plan for a new era of detecting and characterizing Earth-like planets via high-contrast imaging, it is imperative to define the impact of all astrophysical sources that may contribute to the noise budget of future observations. In particular, exozodiacal dust may dominate the noise budget of a given system. Our ability to fit and subtract this dust from directly imaged systems containing Earth-like exoplanets will depend on properties of both the debris disk and the observatory.
Our own Solar System (SS) provides a nearby and well-studied example of dust in the habitable zone, known as zodiacal dust. With a surface brightness of \(\sim 22\) mag/arcsec\({}^{2}\) at 1 AU in the V band (Levine et al., 2006), zodiacal dust is a non-negligible source of noise
for all astronomical observations (e.g. Dermott et al., 2002). Other stars host dust systems known as exozodiacal dust that can be much brighter than zodiacal dust, introducing an additional source of noise when observing exoplanets (Roberge et al., 2012).
The origin of exozodiacal dust (exozodi) for a typical system is not well understood. Exozodi may originate from distant objects, analogous to our Solar System's Kuiper Belt, whose dust slowly migrates inward to the habitable zone via Poynting-Robertson drag (Reidemeister et al., 2011; Kennedy and Piette, 2015). This dust may also be generated by eccentric comets evaporating near periastron (Beust et al., 1990), a separate population of warm planetesimals similar to our Asteroid Belt, or a recent catastrophic event that redistributed material to the habitable zone (Weinberger et al., 2011). One or more of these processes may generate exozodiacal dust in Sun-like stellar systems, leading to variation in the observed density (Ertel et al., 2020). Regardless of its origin, exozodiacal dust can obscure observations of Earth-like exoplanets, and will likely need to be subtracted from the images.
Exozodiacal dust can populate the warm inner regions of planetary systems, where Earth-like planets may reside. A recent survey using the Large Binocular Telescope Interferometer suggests that the median level of habitable zone exozodiacal dust for nearby Sun-like stars is approximately three zodis (Ertel et al., 2020), where one zodi is equal to the SS level of zodiacal dust in the habitable zone. Exozodi mitigation is therefore a particularly prudent consideration for precursor studies supporting a future Habitable Worlds Observatory (National Academies of Sciences, Engineering, and Medicine, 2021), which will be designed to detect and characterize Earth-like exoplanets in the habitable zone via high-contrast direct imaging.
Exozodiacal dust can impact exoplanet detection, and removing exozodi from observations of Earth-like exoplanets may be challenging, especially if the disk is spatially inhomogeneous. Our ability to remove exozodi will depend primarily on the disk's brightness, the scale of the instrument's PSF, and the method used to fit the exozodi's spatial distribution. If exozodi is smooth, it may be fairly straightforward to fit a high-order polynomial to the observed image to subtract off the bulk of the dust--this method is only effective up to a few tens of zodis, after which it is no longer possible to subtract the background down to the Poisson noise limit (Kammerer et al., 2022). However, we do not expect exozodi to be perfectly smooth. The SS zodiacal cloud has features associated with specific asteroid families, and the Earth is known to shepherd dust into a clumpy, circumsolar resonant ring structure (e.g. Dermott et al., 1985; Dermott et al., 1994; Reach et al., 1995). A similar ring structure has also been observed near Venus's orbit (Stenborg et al., 2021). Furthermore, the outer regions of debris disks observed around other stars exhibit clumps, warps, rings, and gaps (e.g. Greaves et al., 1998; Wilner et al., 2002; Kalas, 2005). Analogous structures may exist in the inner regions of disks; one possible morphology is an annulus around the star at the orbital radius of the planet, with a width of a few tenths of an AU and a gap at the location of the planet (Kuchner and Holman, 2003). These structures may be difficult to remove from observations, preventing us from detecting potentially habitable planets (Roberge et al., 2012). Although preliminary studies have suggested exozodi may significantly impact planetary detection (e.g. Defrere et al., 2012), the feasibility of removing these structures from high-contrast images has not been thoroughly investigated.
To date, most exoplanet yield studies assume that we are able to subtract exozodiacal dust down to the Poisson noise limit (e.g. Brown, 2005; Stark et al., 2014; Savransky and Garrett, 2016; Gaudi et al., 2020; The LUVOIR Team, 2019). While this appears roughly valid for smooth disks with densities \(<30\) zodis (Kammerer et al., 2022), we do not know if this is the case for disks with structures. In this work, we simulate observations of planetary systems with exozodiacal disk structures and test our ability to subtract down to the Poisson noise limit using a high-pass filtering technique. We consider systems covering a range of inclinations and zodi levels up to 100 times the SS zodiacal dust level, and test our ability to detect an exoplanet in the post-processed images. To help inform trades for the required mirror diameter for the nominally \(\sim\)6 m inscribed diameter Habitable Worlds Observatory--the top recommendation for the flagship mission of the Astro2020 Decadal Survey (National Academies of Sciences, Engineering, and Medicine, 2021)--we examine two possibilities for primary mirror size. We consider an 8 m circumscribed diameter mirror with an inscribed diameter similar to the Decadal recommendation, as well as a larger 12 m option.
In Section 2, we present our methods for generating astrophysical scenes, synthesizing coronagraph observations, subtracting the disk structure, and estimating the resulting signal-to-noise ratio of an injected Earth-like exoplanet. In Section 3, we present our results for a grid of simulations. In Section 4, we discuss our results and the lessons learned, then conclude in Section 5.
## 2 Methods
To investigate how exozodiacal disk structure affects our ability to extract planetary signal, we adopt worst-case scenario models of gravitational mean motion resonant rings created by Earth twins in exozodiacal disks, simulate images of the coronagraph response including stellar speckles, and add photon noise to the simulated observations. We then process the images by applying a high-pass filter to remove residual exozodiacal structure from PSF-subtracted images, and apply methods to detect the planetary signal. We quantify the performance of our technique by analyzing the residual noise in the post-processed image.
### Simulating debris disk images
#### 2.1.1 N-body Models
We adopted the exozodiacal disk models of Stark (2011), who simulated mean motion resonant ring structures created by planets around Sun-like stars for disks ranging from 1 to 100 zodis in density. These debris disk models were generated via n-body simulations, taking into account three-body gravitational dynamics between the star, a single planet, and a large population of dust grains, Poynting-Robertson and corpuscular drag, radiation pressure, and destructive collisions between dust grains. The models assumed a Dohnanyi size distribution at the moment of launch of the dust grains and self-consistently calculated the size distribution at all later points in time via collisional equilibrium (Stark & Kuchner, 2009). Notably, these models were specifically generated to represent a "worst case scenario" for mean motion resonant disk structures by tuning all of the physics to produce as much structure as possible. Specifically, these systems are composed of single planets on circular orbits around Sun-like stars and the parent bodies that generate the dust were placed at 2.5 to 3.0 times the semi-major axis of the planet to ensure as much dust as possible was delivered to the planet's mean motion resonant orbits via drag forces. Upon delivery, a fraction of the dust is gravitationally trapped in mean motion resonances, producing large-scale overdensities in the disk that orbit in lock with the planet. Kuchner & Holman (2003) found these single-planet circular orbit scenarios typically exhibit asymmetries in the form of a density deficit, or "gap", at the location of the planet, and density enhancements or "clumps" both leading and trailing the planet, the former typically being slightly less dense. From the library of models generated by Stark (2011), we included those with Earth-mass planets at 1 AU and models with zodi levels of 1, 5, 10, 20, 50, and 100 zodis. Figure 1 shows a sample of the debris disks used in this work, plotted after the dithering and Mie theory mitigation steps described in Sections 2.1.3 and 2.1.4, respectively.
#### 2.1.2 Generating images with dustmap
dustmap (Stark, 2011) is an IDL suite designed to simulate density histograms, optical depth maps, thermal emission images, and scattered light images given a list of 3D particle locations. Each particle is assumed to represent a large number of dust grains, and we adopt the optical constants for astronomical silicates (Draine & Lee, 1984) and use Mie theory to calculate the scattering efficiency and phase functions. In this work, we use dustmap to calculate scattered light images of these models for inclinations of 0\({}^{\circ}\) (face-on), 30\({}^{\circ}\), 60\({}^{\circ}\), and 90\({}^{\circ}\) (edge-on) with respect to the observer. We define the pixel scale to be 1.074 mas at 500 nm, which allows us to bin the model pixels by integer values (avoiding interpolation) to achieve the resolution of our coronagraph models for both the 8 and 12 m telescope configurations. For the scattered light images, we assume that the disk is illuminated by a Sun-like star with stellar properties of 1 R\({}_{\odot}\), 1 L\({}_{\odot}\), T\({}_{\rm surface}=5770\) K, and log(g) \(=4.5\).
#### 2.1.3 Reducing particle noise
Because N-body simulations are composed of a finite number of particles, the resulting dustmap outputs are not smoothly varying functions. This limitation in resolution introduces particle noise to the final simulation. To mitigate particle noise, we dither the image in both the longitudinal and radial directions, creating a series of images that vary slightly in longitude and magnification, and take the median of this series of images as our smoothed image. Dithering the disk in this fashion differs from the coronagraphic PSF convolution discussed later in Section 2.2.2 because it is a physical dither applied relative to the disk plane, which allows us to average over particle noise on sub-pixel scales. We find that ten dithers in each of the radial and longitudinal directions are required to adequately smooth the image, for a total of 100 dithers per exozodiacal disk model (see Figure 2).
In the longitudinal direction, dithering is achieved by adjusting the longitude of the system in the dustmap call, effectively rotating the disk around the axis normal to the disk midplane. When generating scattered light maps, we run dustmap for an array of longitudes centered on the true longitude of the system spanning 5\({}^{\circ}\). At the planet location, this 5\({}^{\circ}\) span translates to a width slightly less than the PSF of the telescope. The top panel of Figure 2 shows the standard deviation of a 7x7 pixel region centered on the location where the planet would be in the disk simulation as a function of the number of dithers. We find that ten dithers in the longitudinal direction is adequate to stabilize the standard deviation of the region.
In the radial direction, we dither by adjusting the distance to the system in the dustmap call. Similar to longitudinal dithering, we run dustmap for an array of distances to the system centered on 10 pc, and spanning 0.02 pc. In our case, this span is sufficient to shrink or enlarge the scale of the image by one pixel, which translates to a fraction of the size of the PSF. Again, we find that ten dithers in the radial direction is adequate to stabilize the standard deviation of the region defined in the text above (see bottom panel of Figure 2).
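As a rough illustration of this dithering scheme, the sketch below median-combines a grid of disk renderings over small longitude and distance offsets. It assumes a hypothetical `render_disk(longitude_deg, distance_pc)` wrapper around a dustmap call; that wrapper, and all variable names, are ours and not part of dustmap itself.

```python
import numpy as np

def dithered_disk_image(render_disk, lon0_deg, dist0_pc=10.0, n_dither=10,
                        lon_span_deg=5.0, dist_span_pc=0.02):
    """Median-combine disk images dithered in longitude and distance.

    `render_disk(longitude_deg, distance_pc)` is assumed to return a 2D
    scattered-light image (e.g. a thin wrapper around a dustmap call).
    """
    lons = np.linspace(lon0_deg - lon_span_deg / 2, lon0_deg + lon_span_deg / 2, n_dither)
    dists = np.linspace(dist0_pc - dist_span_pc / 2, dist0_pc + dist_span_pc / 2, n_dither)
    # 10 x 10 = 100 renderings per disk model, as described in the text.
    stack = np.stack([render_disk(lon, d) for lon in lons for d in dists])
    return np.median(stack, axis=0)
```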
#### 2.1.4 Reducing Mie theory artifacts
Mie theory assumes perfectly spherical grains. As a result, the calculated scattering coefficients and phase functions of a single grain size can feature unrealistic "ringing," a well-known limitation of Mie theory. The original N-body simulations of Stark (2011) use a relatively coarse set of grain sizes for the dust in the system, with 25 grain sizes spanning 0.69 to 480 \(\mu m\). Such a coarse grid does not sufficiently remove these ringing artifacts, which appear as visible discontinuities in the disk.
Figure 1: A sample of scattered light images of exozodiacal dust disks including worst-case-scenario resonant structure used in this work. Each row represents a disk with 1, 10, or 100 zodis and includes a colorbar representing the contrast of 0, 30, 60, and 90 degree inclined disks. The blue dot represents an Earth-like planet at 1 AU away from the star, and the star is located in the center of the image. The structure appears less pronounced for the \(>1\) zodi cases because it accounts for a smaller percentage of the total surface brightness due to enhanced collisional destruction of grains in denser disks. The disks in this figure are shown after the dithering and Mie theory artifact mitigation steps.
Figure 2: Standard deviation of a 7x7 pixel region centered on the planet location in the smoothed dustmap image as a function of the number of dithers in both the longitudinal (top panel) and radial (bottom panel) directions. For both dimensions, ten dithers is adequate to smooth over N-body particle noise in scattered light images generated using dustmap.
These discontinuities in the contrast curve (see Figure 3) would limit future studies of the impact of these exozodi models on spectral extraction, thus we opt to remove them. One option to remove these artifacts is to subresolve the input particle grain sizes and weight them according to a Dohnanyi distribution; however, this would worsen the noise properties, and given that our investigation focuses on measuring the noise contribution of exozodi, this is not a viable option. Instead, we opt to subresolve the grain sizes by interpolating over the coarse grain size list, equalizing the weight of the individual grains to maintain the original cross-sections. We subresolve the coarse grain size list into 500 equally spaced grain sizes in log space. While this is the best option for the present study, which focuses on broadband imaging, it may create a disk color that is redder than that expected from a Dohnanyi size distribution and Mie theory.
Although using the subresolved and normalized particle size list reduces the Mie theory ringing artifacts, these changes increase the run time of an individual dustmap call by a factor of six, as more Mie theory calculations are required to accommodate the additional grain sizes. To reduce runtime while maintaining the reduction of Mie theory ringing artifacts, we limit our subresolving methods to grains \(<9.4\ \mu m\) in size. The fractional difference between the contrast curves calculated with the partially-subresolved grain size list and the fully-subresolved list is \(<0.1\%\) (plotted in the bottom panel of Figure 3). The final run time for an individual dustmap call using the partially-subresolved grain size list is a factor of 1.7 slower than using the original coarse grain size list and well within tolerance to run on a personal laptop in a few days. All dustmap output models are publicly available1.
Footnote 1: [https://asd.gsfc.nasa.gov/Christopher.Stark/catalog.php](https://asd.gsfc.nasa.gov/Christopher.Stark/catalog.php)
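As a rough sketch of the partial subresolution described above, the snippet below replaces the coarse grain sizes below the 9.4 \(\mu m\) cutoff with 500 log-spaced sizes and assigns them equal weights that sum to the weight of the original coarse bins; the exact bookkeeping inside dustmap may differ, and the function and variable names here are ours.

```python
import numpy as np

def subresolve_grain_sizes(coarse_sizes_um, n_fine=500, cutoff_um=9.4):
    """Subresolve grain sizes below `cutoff_um` into `n_fine` log-spaced bins."""
    coarse = np.sort(np.asarray(coarse_sizes_um, dtype=float))
    small = coarse[coarse < cutoff_um]
    large = coarse[coarse >= cutoff_um]
    fine_small = np.logspace(np.log10(small.min()), np.log10(small.max()), n_fine)
    sizes = np.concatenate([fine_small, large])
    # Equal weights over the subresolved range, normalized so that the total
    # weight of the small grains matches that of the original coarse bins.
    weights = np.concatenate([np.full(n_fine, small.size / n_fine),
                              np.ones(large.size)])
    return sizes, weights
```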
### Synthesizing coronagraph observations
After generating images of our structured exozodiacal disk models, we inject an Earth-like planet and a Sun-like star into the system and convolve the astrophysical scene with simulated spatially dependent point-spread functions (PSFs) for high-contrast coronagraphs. We simulate PSF subtraction to further suppress the stellar speckles, considering both reference differential imaging and angular differential imaging.
#### 2.2.1 Astrophysical scenes
Our astrophysical scenes are comprised of the exozodiacal images discussed in Section 2.1, a Sun-like star, and an Earth twin. We manage the star and planet separately from the disk, allowing us to convolve the coronagraph's PSF with the disk while treating the star and planet with individualized on- and off-axis PSF models, and add all sources together later. The systems are placed 10 pc from Earth. We assume the host star has 1 R\({}_{\odot}\), 1 L\({}_{\odot}\), T\({}_{\rm surface}\) = 5770 K, with a magnitude of 4.83 in the V-band and an angular diameter of 0.465 mas at 10 pc. The planetary companion is an Earth-twin located at quadrature (maximum apparent separation). The planet properties include 1 R\({}_{\oplus}\), 1 M\({}_{\oplus}\), and an Earth-like albedo derived from models of disk-integrated flux presented in Stark (2022).
#### 2.2.2 Coronagraph and PSF models
We simulate realistic observations of a future high contrast imaging space telescope using the high-contrast coronagraph models described in Kammerer et al. (2022). We investigate two coronagraph designs, each paired with a mirror that has a different circumscribed diameter.
The first coronagraph-mirror configuration we consider is an 8 m primary mirror with a deformable mirror-assisted vortex charge 6 coronagraph originally designed for LUVOIR-B (VC6, Mawet et al. (2010)).
Figure 3: Upper panel: Contrast curves for different grain size lists. Ringing Mie theory artifacts are present when using the original coarse grain size list, and are reduced by using a subresolved or partially-subresolved grain size list. The run time of an individual dustmap call is significantly improved by using a partially subresolved grain size list. Lower panel: Fractional difference of the contrast curves produced by using the partially-subresolved grain size list and the fully-subresolved grain size list. The resulting contrast curve using the partially-subresolved grain size list exhibits negligible differences (\(<0.1\%\)) when compared to the contrast curve of the fully-subresolved grain size list.
This VC6 coronagraph design achieves a raw contrast of \(<10^{-10}\) beyond \(\sim 5~{}\lambda/D\) separation.
We also investigate a larger mirror size of 12 m, for which we adopt the apodized pupil Lyot coronagraph (APLC) designed for LUVOIR-A (Aime et al., 2002; Soummer, 2005; St. Laurent et al., 2018). The APLC design assumes an 18% bandwidth achieving a raw contrast of less than \(10^{-10}\) beyond \(\sim 6~{}\lambda/D\) separation for sufficiently small stellar angular diameters (\(\leq 0.5~{}\lambda/D\) ). See Kammerer et al. (2022) for a full description of each coronagraph design we use in this work. The instrument throughputs for both coronagraph cases are assumed to be an unrealistic 100%-- ultimately this assumption does not matter, as we do not attempt to calculate realistic absolute exposure times in this study, and instead compare the planet's measured S/N to the expected S/N.
We convolved each exozodi model with the spatially varying coronagraph PSF using the coronagraph simulation tool developed by Kammerer et al. (2022). Briefly, this tool loads a pre-generated discrete set of off-axis PSFs, interpolates them to form a 3D datacube of PSFs centered on each pixel, and then performs a fast matrix multiplication to convolve with the astrophysical scene, creating realistic simulated direct imaging observations.
The planet's PSF is modeled by simply interpolating the discrete set of off-axis PSFs over position on the detector, and evaluating at the radial offset and position angle of the planet.
For the star, we use a pre-generated discrete set of on-axis PSFs calculated over a range of stellar angular diameters. We interpolate this set of PSFs over stellar diameter, and evaluate at the angular diameter of 0.465 mas, which is the size we assume for the case of a Sun-like star at 10 pc away from Earth.
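Conceptually, the spatially varying convolution is a matrix multiplication: each scene pixel contributes its own off-axis PSF, weighted by its flux. The toy sketch below makes that explicit for a small image; a real implementation, such as the Kammerer et al. (2022) tool, interpolates the PSFs from a pre-computed set rather than storing a full four-dimensional cube, so this is only an illustration of the idea.

```python
import numpy as np

def convolve_spatially_varying(scene, psf_cube):
    """Convolve a scene with a spatially varying PSF via matrix multiplication.

    scene    : (ny, nx) array of source fluxes.
    psf_cube : (ny, nx, ny, nx) array; psf_cube[j, i] is the detector-plane
               response to a unit point source at scene pixel (j, i).
    """
    ny, nx = scene.shape
    flux = scene.reshape(-1)                          # (ny*nx,)
    psf_matrix = psf_cube.reshape(ny * nx, ny * nx)   # one flattened PSF per row
    image = flux @ psf_matrix                         # flux-weighted sum of PSFs
    return image.reshape(ny, nx)
```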
#### 2.2.3 Photon noise
After convolving the images by the PSF models described in Section 2.2.2, we scale the images to a constant detector pixel scale of 0.5 \(\lambda/D\) for an observing wavelength of \(0.5\mu m\) and add photon noise. In this work, we do not consider nor model detector noise. We adopt an exposure time sufficient to set a planetary S/N of 7 (see Section 2.4), and calculate the number of photons collected per pixel on the detector. To add photon noise to each pixel, we draw from a Poisson distribution with a mean corresponding to the number of photons collected in the noiseless pixel.
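A per-pixel Poisson draw of this kind can be written in a few lines; the sketch below assumes the image is already in units of photon counts per second per pixel (the function and variable names are ours).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_photon_noise(count_rate_image, exposure_time_s):
    """Return one photon-noise realization of a noiseless count-rate image."""
    expected_counts = count_rate_image * exposure_time_s  # mean photons per pixel
    return rng.poisson(expected_counts).astype(float)
```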
#### 2.2.4 PSF subtraction
Although the stellar light is suppressed to a raw contrast of \(\sim 10^{-10}\), the optics leave behind a field of spatially variable residuals known as speckles. To further suppress these speckles, we assume that two images are observed: a science and a reference image. The science image is always an image of the target system, while the reference image can either be another image of the science target, or an observation of a similar, but isolated, stellar target. Subtracting the science and reference images allows us to suppress the speckle field.
We consider two possible methods of PSF subtraction: reference differential imaging (RDI) and angular differential imaging (ADI). The RDI technique removes the residual stellar speckle pattern by empirically measuring the speckle field of an isolated, but otherwise similar star to the science target. This image is subtracted from the science image to remove the speckle field. For RDI, we make the ideal assumption that the reference star is identical to the science target.
The ADI technique removes the residual stellar speckle pattern by using two images of an astrophysical scene separated by a roll angle. Because the optics internal to the observatory roll with the telescope, the speckle pattern remains stationary on the detector, while the astrophysical scene is rotated according to the roll angle. By taking two exposures at different roll angles, a target can therefore serve as its own reference star, though this results in some degree of disk self-subtraction, as well as positive and negative copies of planetary companions. For ADI, we assume a \(30^{\circ}\) roll angle, which is approximately the minimum roll angle required to avoid planetary self-subtraction for a system 10 pc away with a planet at an orbital radius of 1 AU.
If our telescope were perfectly stable, the two speckle patterns would subtract to the Poisson noise limit. In reality, time-varying wavefront error (WFE) will result in two slightly different speckle patterns, such that the PSF subtraction is imperfect and we are left with a systematic noise floor. We included this effect by adopting unique WFE time series for the science and reference observations. These WFE time series are propagated through the coronagraph model as optical path difference (OPD) error maps present at the entrance pupil of the coronagraph that vary as a function of time (during 20 seconds, corresponding to 8000 OPD maps). These time series for the 8 m (Potier et al., 2022) and 12 m (Potier et al., 2022) designs were generated by Lockheed Martin via an integrated model of the telescope and spacecraft structural dynamics, and include the rigid body motion of the primary mirror segments, the dynamic interaction of flexible structures, and the disturbances from the pointing control system. The two WFE time series produce speckle fields that differ by \(<1~{}\%\) of the raw contrast.
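Schematically, both techniques reduce to an image difference: RDI subtracts a reference-star frame, and ADI subtracts two frames of the same target taken at different roll angles, with the second roll emulated by rotating the astrophysical scene before convolution. A minimal sketch (using scipy for the scene rotation; the function names are ours):

```python
import numpy as np
from scipy.ndimage import rotate

def roll_scene(scene, roll_angle_deg=30.0):
    """Rotate the astrophysical scene to emulate a telescope roll.

    The speckle field stays fixed on the detector, so only the scene rotates.
    """
    return rotate(scene, roll_angle_deg, reshape=False, order=1)

def rdi_subtract(science_img, reference_img):
    """RDI: subtract the image of an isolated but otherwise similar reference star."""
    return science_img - reference_img

def adi_subtract(roll1_img, roll2_img):
    """ADI: subtract two images of the target at different roll angles,
    leaving positive and negative copies of any companion."""
    return roll1_img - roll2_img
```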
### Exozodi mitigation and planet detection
Regardless of the PSF subtraction technique used, exozodiacal disk structure does not fully self-subtract and significant residual structure is left in the subtracted image (see left panel of Figure 4). Because this structure is spatially inhomogeneous, it cannot be easily removed with high-order polynomials. We thus convolve the image with a high-pass filter to model and subtract the disk structure from the image. To detect the planetary signal in the disk-subtracted image, we use either aperture photometry or a PSF matching technique. Finally, we measure the noise in a region immediately surrounding the planet, and compute the planetary S/N.
#### 2.3.1 Exozodi subtraction via high-pass filtering
To remove the exozodiacal disk and its structure, we convolve our synthesized observations with a 2D Gaussian high-pass filter. For each scenario, we optimize the FWHM of the filter to remove the residual disk structure while preserving the planet signal by applying filter sizes ranging from 0.5 to 50 \(\lambda/D\) to the image, and choosing the filter size that maximizes the measured planetary S/N. A filter size identical to the size of the image (in our case, 50 \(\lambda/D\) ) effectively does nothing, while a filter size of 0.5 \(\lambda/D\), identical to the pixel scale, removes all information, including point sources.
Generally, exozodiacal structure is an extended source, and a filter size comparable to the physical size of the exozodiacal structure will ideally remove residual structure in a subtracted image. The minimum filter size we consider is set by the instrumental PSF: a filter size comparable to the size of the PSF will subtract planetary signal. Figure 4 shows a pre- and post-processed image of an ADI subtracted face-on exozodiacal disk structure (\(\mathbf{p}_{\mathrm{disk}}\)) convolved with a high pass filter (\(\mathbf{f}_{\mathrm{HP}}\)) with a filter size of 5 \(\lambda/D\) for an 8 m mirror configuration.
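One common way to implement such a filter is to smooth the image with a 2D Gaussian of the chosen FWHM and subtract the smoothed (low-pass) copy, which is what the sketch below does; with the 0.5 \(\lambda/D\) pixel scale used here, a filter of F \(\lambda/D\) corresponds to 2F pixels. This is our illustrative implementation, not necessarily the exact code used to produce the figures.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass_filter(image, fwhm_pix):
    """High-pass filter an image by removing its Gaussian-smoothed component."""
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # convert FWHM to sigma
    low_pass = gaussian_filter(image, sigma)
    return image - low_pass
```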
For each combination of inclination and zodi level, we vary the size of the high-pass filter in increments of 0.5 \(\lambda/D\), and measure the noise after the filter is applied (see Section 2.3.3 for a description of the noise measurement technique). Figure 5 shows both the measured S/N of the planet and the ratio of measured to expected noise as a function of the filter size in units of \(\lambda/D\) for all zodi levels at all inclinations for both ADI and RDI PSF subtraction methods for an 8 m mirror configuration. We also include a case with a uniform disk background for comparison.
We then identify the optimal filter size for a given simulation that reduces the measured noise in the image to the expected Poisson noise limit, thereby maximizing the measured planetary S/N. For low zodi, low inclination cases, filter sizes of \(\sim\) 20 \(\lambda/D\) (40 pixels) are sufficient for removing the disk structure. High zodi, high inclination cases require smaller, more aggressive filter sizes which consequently also subtract planetary signal. Furthermore, we are unable to remove the disk structure down to the Poisson noise limit for edge-on cases with \(>\) 1 zodis using any filter size. Applying a small enough filter size to remove disk structure down to the Poisson noise limit coincides with the maximum planetary S/N we are able to measure (see Figure 5), and we choose this optimal filter size for each simulation we consider. We report the filter size used for each scenario in Table 1.
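The optimization itself is a brute-force scan over filter sizes; the sketch below reuses the `high_pass_filter` function sketched above and assumes a `measure_snr(filtered_image)` routine implementing the photometry of Section 2.3.2 (that routine name is ours, not an existing API).

```python
import numpy as np

def optimize_filter_size(image, measure_snr, pix_per_lod=2.0):
    """Return the filter FWHM (in lambda/D) that maximizes the measured S/N."""
    sizes_lod = np.arange(0.5, 50.5, 0.5)
    snrs = np.array([measure_snr(high_pass_filter(image, s * pix_per_lod))
                     for s in sizes_lod])
    best = int(np.argmax(snrs))
    return sizes_lod[best], snrs[best]
```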
#### 2.3.2 Measuring SNR via aperture photometry
We measure the signal of the planet and the noise in the region surrounding the planet by placing apertures of radius 0.7 \(\lambda/D\), which is approximately the size of the planetary PSF, and summing the signal within each aperture. Because the background of the image, including exozodi and PSF subtraction residuals, is spatially inhomogeneous (especially for inclined systems), noise estimation is sensitive to both the location and size of the region used to sample the noise. Thus, we are often limited in the number of resolution elements we can use to estimate the noise, and we adopt the small sample statistics formalism recommended by Mawet et al. (2014). We calculate S/N by employing the two-sample t-test to determine the significance of one resolution element (i.e. the signal) compared to the resolution elements in a given region of the image (i.e. the area used to estimate noise). We use the following equation to calculate planetary S/N:
\[\mathrm{S/N}=\frac{x_{00}}{\sigma_{n}\sqrt{1+\frac{1}{N_{n}}}} \tag{1}\]
Figure 4: ADI PSF subtraction for an 8 m mirror configuration of a 1 zodi face-on exozodiacal disk, \(\mathbf{p}_{\mathrm{disk}}\), before (left) and after (right) convolution with a 5 \(\lambda/D\) high-pass filter, \(\mathbf{f}_{\mathrm{HP}}(5\lambda/\mathrm{D})\).
In Equation 1, \(x_{00}\) is the intensity of the planet signal, and \(\sigma_{n}\) is the standard deviation of the resolution element intensities used for noise estimation with \(N_{n}-1\) degrees of freedom, where \(N_{n}\) is the number of resolution elements used for noise estimation. The term \(\sqrt{1+\frac{1}{N_{n}}}\) is a correction factor for small number statistics as derived in the two sample t-test formalism of Mawet et al. (2014). The noise term \(\sigma_{n}\) is defined as
\[\sigma_{n}=\sqrt{\frac{\sum(x_{ij}-\bar{\mathbf{x}})^{2}}{N_{n}-1}}, \tag{2}\]
where \(x_{ij}\) is a resolution element intensity centered on pixel (i,j) calculated within a defined region suitable for estimating the noise at the planet location, and \(\bar{\mathbf{x}}\) is the mean of the array of resolution element intensities \(\mathbf{x}\).
Assuming the location of the planet is known, we measure the planet signal, \(x_{00}\), by placing an aperture of radius 0.7 \(\lambda/D\) centered on the planet location and summing the pixel values within the aperture. In the case of ADI PSF subtraction, which contains a "positive" and a "negative" copy of the planet separated by the roll angle, we place an additional aperture on the "negative" copy of the planet in the PSF subtracted image. The sum of the absolute values of the "negative" planet signal and the "positive" planet signal is the total planetary signal in the image.
Figure 5: Measured planetary S/N (first and third rows) and measured noise relative to the expected Poisson noise (second and fourth rows) as a function of the applied high-pass filter size for an 8 m mirror configuration. The high-pass filter size is the FWHM of a two-dimensional Gaussian in units of \(\lambda/D\) convolved with ADI (upper two rows) and RDI (lower two rows) PSF subtracted images. Each panel includes all zodi levels considered in this work. The horizontal dotted line represents either the input planetary S/N or the Poisson noise limit. For most cases, an optimal high-pass filter size exists when the measured noise in the filtered image is equal to the expected Poisson noise; the planetary S/N measured in the optimally filtered image is our fiducial S/N measurement.
Because an inclined exozodiacal disk can exhibit significant forward scattering and is not azimuthally symmetric, we measure noise in a local region immediately surrounding the planet. We define a small annulus centered on the planet location with an inner radius of 1 \(\lambda/D\) and an outer radius of 3 \(\lambda/D\), and place apertures within this region. Figure 6 shows a schematic for the noise regions used in RDI and ADI PSF subtraction images for an 8 m mirror configuration. The size of the annulus region limits the number of apertures we can place; however, using the ADI PSF subtraction technique allows us to double the number of apertures we place because two copies of the planet exist. We therefore place \(\sim 15\) and \(\sim 30\) apertures in RDI and ADI subtracted images, respectively. We sum the intensities within each aperture and create an array of noise measurements, \(\mathbf{x}\). The standard deviation of the array of noise measurements is calculated using Equation 2, and this value multiplied by the correction factor in the denominator of Equation 1 is the measured noise.
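Once the aperture sums are in hand, Equations 1 and 2 translate directly into code; a minimal sketch, taking the planet aperture sum and the array of noise-aperture sums as inputs:

```python
import numpy as np

def small_sample_snr(planet_counts, noise_aperture_counts):
    """Planetary S/N with the small-sample correction of Equations 1 and 2.

    planet_counts         : summed counts in the planet aperture(s), x_00.
    noise_aperture_counts : summed counts in each noise aperture, the array x.
    """
    x = np.asarray(noise_aperture_counts, dtype=float)
    n_res = x.size                                                 # N_n
    sigma_n = np.sqrt(np.sum((x - x.mean()) ** 2) / (n_res - 1))   # Equation 2
    return planet_counts / (sigma_n * np.sqrt(1.0 + 1.0 / n_res))  # Equation 1
```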
#### 2.3.3 Measuring SNR via matched filtering
In addition to measuring S/N using aperture photometry, we also consider a more advanced PSF matching technique for measuring planetary S/N. The PSF matching technique leverages the known shape of an off-axis PSF via matched filtering to detect possible point source companions more robustly than aperture photometry (e.g. Kasdin & Braems, 2006). In this approach, we use our library of off-axis coronagraphic PSF models described in Section 2.2.2 to interpolate an offset PSF model centered on each pixel in the image. For each pixel, the PSF model is then truncated (i.e. everything outside of some radius is set to zero), and the truncated model is normalized. We explored a range of truncation radii, and found a radius of 0.7 \(\lambda/D\) was sufficient for our purposes. In this matched filtering formalism, the intensity of a given resolution element centered on location (i, j) is given by
\[x_{ij}=\frac{(\mathbf{p}*\mathbf{f}_{\rm HP})\cdot\mathbf{m}_{ij}}{\mathbf{m}_{ij}\cdot\mathbf{m}_ {ij}}, \tag{3}\]
where \(\mathbf{p}\) is the vectorized PSF-subtracted image, \(\mathbf{f}_{\rm HP}\) is the Gaussian high pass filter, \(\mathbf{m}_{ij}\) is the vectorized matched filter PSF model for pixel (i, j), and \(*\) and \(\cdot\) are the convolution and dot product operators, respectively. The intensity of the planetary signal, \(x_{00}\), is thus Equation 3 applied at the planet location. We estimate the noise using the technique described for aperture photometry; however, instead of placing apertures within the defined noise region, we apply Equation 3 to the same locations as in aperture photometry.
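Equation 3 is a normalized dot product between the filtered image and the local PSF model; evaluated at one pixel it reads roughly as follows (the names are ours):

```python
import numpy as np

def matched_filter_intensity(filtered_image, psf_model):
    """Resolution-element intensity from Equation 3 at a single location.

    filtered_image : PSF-subtracted image already convolved with the high-pass filter.
    psf_model      : off-axis PSF model centered on the pixel of interest,
                     truncated beyond 0.7 lambda/D and normalized.
    """
    p = filtered_image.ravel()
    m = psf_model.ravel()
    return np.dot(p, m) / np.dot(m, m)
```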
The above describes a generalized matched filtering formalism which can be directly applied when using the RDI PSF-subtraction technique. However, for the ADI technique, the planet signal appears as two components in the subtracted image: a "positive" and a "negative" signal separated by the roll angle.
\begin{table}
\begin{tabular}{c c|c c c c|c c c} \hline \hline & \multicolumn{5}{c}{8 m mirror} & \multicolumn{5}{c}{12 m mirror} \\ & zodis & 0\({}^{\circ}\) incl. & 30\({}^{\circ}\) & 60\({}^{\circ}\) & 90\({}^{\circ}\) & 0\({}^{\circ}\) & 30\({}^{\circ}\) & 60\({}^{\circ}\) & 90\({}^{\circ}\) \\ \hline & 1 & 30 & 46 & 28 & 8 & 20 & 45 & 13 & 5 \\ & 5 & 50 & 18 & 11 & 4 & 29 & 18 & 10 & 3 \\ & 10 & 34 & 13 & 8 & 3 & 35 & 13 & 7 & 2 \\ & 20 & 29 & 9 & 6 & 2 & 27 & 10 & 6 & 2 \\ & 50 & 17 & 6 & 3 & 2 & 11 & 7 & 4 & 2 \\ & 100 & 12 & 4 & 3 & 2 & 7 & 5 & 3 & 2 \\ \hline & 1 & 11 & 12 & 9 & 5 & 19 & 14 & 10 & 4 \\ & 5 & 10 & 11 & 8 & 3 & 11 & 11 & 8 & 2 \\ & 10 & 7 & 7 & 6 & 2 & 9 & 8 & 7 & 2 \\ & 20 & 5 & 5 & 4 & 2 & 7 & 7 & 5 & 2 \\ & 50 & 4 & 3 & 3 & 2 & 5 & 5 & 3 & 2 \\ & 100 & 3 & 3 & 2 & 2 & 4 & 4 & 2 & 2 \\ \hline \end{tabular}
\end{table}
Table 1: Chosen filter sizes for each system inclination and zodi level optimized for subtracting exozodi to the Poisson noise limit. The upper and lower quadrants of this table correspond to ADI and RDI PSF subtraction, respectively. The left and right quadrants correspond to 8 m and 12 m mirror architectures.
In this case, we modify our matched filter PSF model slightly by appropriately including a negative PSF companion in the \(\mathbf{m}_{ij}\) term of Equation 3, separated by the roll angle. We multiply the companion PSF by \(-1\), and in a similar procedure as the generalized formalism, we truncate this companion PSF to a radius of 0.7 \(\lambda/D\) and normalize the entire PSF model such that its absolute sum is unity. The "negative" PSF copy is offset from the "positive" one by accounting for the roll angle of the telescope, and this offset changes with angular separation from the star. This results in a unique matched filter PSF model for each pixel (i,j). The PSF pair is fixed by the roll angle, and therefore changes in separation with circumstellar distance. This may help to reduce the impact of stellar speckles, though we do not investigate this effect here. When this matched filter is convolved with an image, the contributions from both the "positive" and "negative" images are included in the calculation for the intensity of a given resolution element (Equation 3).
### Comparing to expected values
After post-processing the synthesized images, we compare our measurements for planetary S/N and photon noise to their expected values. We adopted an exposure time sufficient to set a planetary S/N of 7, given by
\[T_{\rm int}=(\rm S/N)^{2}\frac{CR_{plan}+CR_{back}}{CR_{plan}^{2}}, \tag{4}\]
where \(T_{\rm int}\) is the total integration time of the observation, S/N is the signal-to-noise ratio of a faint planet companion, \(\rm CR_{plan}\) is the photon count rate of the planet, and \(\rm CR_{back}\) is the photon count rate of the background, consisting of stellar speckle and exozodiacal disk contributions. The photon count rates are calculated by integrating over an aperture of radius 0.7 \(\lambda/D\) in the noiseless images. The stellar and disk contributions of \(\rm CR_{back}\) vary according to the PSF subtraction method (see Section 2.2.4), and are given by
\[\rm CR_{back}=\begin{cases}2CR_{star}+CR_{disk}&\text{for RDI}\\ 2CR_{star}+2CR_{disk}&\text{for ADI}\end{cases}, \tag{5}\]
where \(\rm CR_{star}\) and \(\rm CR_{disk}\) are the average count rates of the stellar speckles and exozodiacal dust in the region used to ultimately measure noise in the post-processed image (see Section 2.3.3). In Equation 5, \(\rm CR_{star}\) is doubled for both RDI and ADI because we assume that we spend the same amount of time integrating on the reference image as we do on the science image. Only one \(\rm CR_{disk}\) term is counted for RDI because we assume that the reference star is a perfect copy of the star in the science image and does not include a debris disk or planets.
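Equations 4 and 5 combine into a one-line exposure-time estimate; a direct transcription (count rates in photons per second, names ours):

```python
def exposure_time_s(snr_target, cr_planet, cr_star, cr_disk, mode="ADI"):
    """Integration time for a target S/N under pure photon noise (Equations 4-5)."""
    if mode == "RDI":
        cr_back = 2.0 * cr_star + cr_disk          # reference star carries no disk
    elif mode == "ADI":
        cr_back = 2.0 * cr_star + 2.0 * cr_disk    # disk appears in both roll images
    else:
        raise ValueError("mode must be 'RDI' or 'ADI'")
    return snr_target ** 2 * (cr_planet + cr_back) / cr_planet ** 2
```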
In the following section, we report planetary S/N and photon noise measurements using both aperture photometry and PSF matching, and compare these values to the input S/N and the expected background noise given by Equation 5. We report these relative comparisons for all zodi, disk inclination, primary mirror sizes, and PSF subtraction techniques considered in this work.
## 3 Results
Using the tools and models described in the previous section, we synthesize realistic high-contrast observations of Earth-like exoplanets in systems with significant exozodiacal disk structure. We remove exozodiacal disk structure from the observations by applying an optimized high-pass filter, and compare the measured residual noise in the post-processed image to the expected Poisson noise. The process of simulating observations, removing exozodi, and measuring noise and S/N is repeated 1000 times for each zodi level/inclination configuration to average over photon noise.
Figure 6: Schematic showing the regions used to measure noise for RDI (left) and ADI (right) PSF subtracted images for an 8 m mirror configuration.
We report our measurements as the median value of the iterations for each configuration. Unless otherwise specified, results are reported primarily for the 8 m mirror configuration to provide estimates for the nominal \(\sim 6\) m inscribed mirror recommendation of the National Academies of Sciences, Engineering, and Medicine (2021).
### Disk Calibration
The input disk models have inherent noise associated with the limited resolution of N-body simulations (see Section 2.1.3), and we test how this limitation affects our simulations. We attempted to smooth over particle noise by dithering the disk models in both the radial and longitudinal directions, and although our dithering process significantly reduced the particle noise, it did not eliminate this noise source entirely (see Figure 2). We therefore consider how particle noise contributes to the overall noise.
To test whether particle noise in the disk model significantly contributes to the noise budget, we re-scale the nominal disk models to construct calibration disk models that do not exhibit mean-motion resonant structure at the orbital radius of the planet. The exozodiacal debris disks in this work are generally composed of the three distinct regions shown in Figure 7: a parent ring where the dust particles originate, mean motion resonant structures at the orbital radius of the planet, and smoothly varying dust between the parent ring and structure. The disk models in Figure 7 include the inverse square illumination factor, but the true calibration models do not include scattered stellar light to help isolate the source of the particle noise. We construct the calibration disk models by re-scaling the size of the disk model such that the planet lies in the center of the structure-less "smooth region". Subsequently, we re-scale the contrast of the new calibration disk to match the contrast of the nominal disk at the planet location. Figure 7 illustrates this process for an 8 m mirror configuration.
We insert the calibration disk models into astrophysical scenes in lieu of the nominal disk models, and apply the same treatment described in Section 2, assuming an exposure time given by Equation 4. We plot the ratio of measured noise in the nominal disk model to that of the calibration disk model for each inclination as a function of zodi level in the upper panel of Figure 8. We consider noise ratios within \(\pm 5\%\) of unity to be limited by the particle noise, and indicate these cases with a small "x" in the center of their markers. Cases limited by particle noise include \(0^{\circ}\) inclination for all zodi levels, \(30^{\circ}\) inclination with 1, 5, and 10 zodis, \(60^{\circ}\) inclination with 1 zodi, and \(90^{\circ}\) inclination with 1 zodi. All other cases are limited by the physical properties of the disk, including structure and spatially-varying brightness. However, we do not expect the cases limited by particle noise to impact the validity of our analysis, as explained below.
Systematic noise from either N-body particles or disk structure can limit the maximum recoverable planetary S/N, but all cases limited by particle noise have maximum S/N values greater than our target S/N. To calculate this maximum S/N limitation for each of our scenarios, we run our analysis pipeline adopting an exposure time of \(10^{10}\) s, which approximates an observation with infinite exposure time, allowing the measured planetary S/N to saturate at the maximum possible value. We assume an 8 m mirror and ADI PSF subtraction. We report these values in the lower panel of Figure 8, and flag the scenarios dominated by particle noise. All cases limited by particle noise have maximum S/N measurements greater than our input planetary S/N of 7. We therefore conclude that the particle noise inherent to the N-body simulations will not affect the validity of our results.
Some maximum S/N measurements for high-inclination, high-zodi cases are less than our input planetary S/N of 7; these cases include \(30^{\circ}\) inclination with 100 zodis, \(60^{\circ}\) inclination with \(\geq 50\) zodis, and \(90^{\circ}\) inclination with \(\geq 5\) zodis (see Figure 8). However, these cases are not dominated by particle noise, and instead are limited by the disk structure. For high-inclination, high-zodi cases, it is impossible to recover the input S/N due to systematic noise associated with the disk structure, even with unlimited exposure time. However, this is a limitation of our methodology, and not an absolute limitation. Other techniques may be required to mitigate the disk in these examples (see Section 4.1 for further discussion).
### Planet detection and S/N measurements
After applying our high-pass filtering routine to subtract residual disk structure from PSF-subtracted images, we measure both the signal of the planet and an estimate for the noise in the post-processed image using the formulae and processes described in Sections 2.3.2 and 2.3.3. Here we report both the recovered S/N of the exoplanet, and the estimated noise at the planet location as it compares to the expected Poisson noise (CR\({}_{\rm back}\)) given by Equation 5. The planet was injected in the image at a S/N of 7, and the integration time for each case was calculated using Equation 4. Figures 9 and 10 show recovered planetary S/N measurements in the ADI and RDI PSF-subtracted post-processed image as well as the ratio of measured (N\({}_{\rm meas}\)) to expected (N\({}_{\rm expt}\)) noise in the image for a space-based direct imaging telescope with 8 m and 12 m architectures, respectively, for all cases. In these plots, N\({}_{\rm expt}\) is the expected background noise at the location of the planet, and is calculated by multiplying the count rate of the background noise given by Equation 5 by the exposure time we adopt (Equation 4). We also include results for a system with a uniform, completely smooth exozodi background for comparison. Figures 9 and 10 also present a comparison of planet detection methods, with aperture photometry (Section 2.3.2) and PSF matching (Section 2.3.3) plotted as solid and dashed lines, respectively.
The noise vs. zodi panels of Figures 9 and 10 suggest that it is possible to choose a high-pass filter size that subtracts exozodiacal dust structure down to the Poisson noise limit in nearly all cases, except for edge-on systems with \(>1\) zodi, for both ADI and RDI PSF subtraction routines. However, the S/N vs. zodi panels show that in some cases we are unable to recover the input planetary S/N in the post-processed image with the integration time specified by Equation 4. For high-zodi, high-inclination systems, the maximum measurable S/N due to systematic noise (see Figure 8) is lower than the input S/N.
We note a peak at \(\sim 10\) zodis in the S/N curve for the face-on (0\({}^{\circ}\) inclined) ADI case of Figure 9. As described in Section 2.3.1, filter size selection can impact the planetary signal, and we optimize the balance between exozodi subtraction and planetary signal preservation by choosing the high pass filter size that maximizes the planetary S/N (see Table 1). In the low-zodi regime of the face-on ADI S/N curves, the optimal filter size is larger than the scale of the mean motion resonance structure because this maximizes the measured S/N. However, in these cases the structure is not fully mitigated and the corresponding noise term remains, slightly reducing the measured S/N. This effect is less pronounced at a density of \(\sim 10\) zodis because the structure accounts for a smaller percentage of the overall surface brightness in cases with increased disk density due to enhanced collisional destruction of grains in denser disks, resulting in an overall smoother disk profile amenable to efficient removal via our high-pass filter technique.
The choice of PSF subtraction technique also affects our S/N measurements.
Figure 7: Comparison of the nominal disk model and the calibration disk model assuming observations with an 8 m mirror architecture. The planet in the nominal system is located in a region of disk structure, while the planet in the calibration disk model has a smoothly varying background. The lower panel shows azimuthally averaged disk-to-star contrast as a function of radius for the nominal and calibration disks.
In Figures 9 and 10, results for ADI and RDI PSF-subtracted images are presented in the left and right columns, respectively. In both cases, the input planetary S/N can be recovered with a uniform disk background; however, introducing disk structure into a system results in clear differences between the measured S/N values in the ADI and RDI cases. For face-on cases, planetary S/N does not degrade until 20 and 5 zodis are reached for the ADI and RDI PSF subtraction techniques, respectively. For inclinations \(>30^{\circ}\), PSF subtraction using either the ADI or RDI technique produces similar trends, although S/N measurements for ADI are up to \(\sim 30\%\) larger than for RDI. We were unable to recover the expected S/N of 7 in these high-inclination cases, even for cases with a density of 1 zodi. We also compare the aperture photometry and PSF matching techniques for planet detection in Figures 9 and 10, and find that using the PSF matching technique yields up to \(\sim 10\%\) and \(\sim 25\%\) higher S/N measurements for the 8 m and 12 m cases, respectively. This improvement in S/N translates to \(\sim 20\%\) and \(\sim 50\%\) reductions in the required exposure times to achieve the aperture photometry S/N in most cases.
Figure 8 suggests that the presence of systematic noise associated with disk structure may limit the maximum measurable S/N; however, if this maximum measurable S/N is larger than the desired S/N, it may be possible to integrate for longer on the target to achieve the desired S/N. Accounting for all systematic noise terms, we calculate the integration time necessary to achieve a detection significance of 7 for each simulation we consider, and present the results as ratios with the theoretical integration time in Table 2, including 8 m and 12 m primary mirror architectures.
We also see improvements in measured S/N by increasing the diameter of the telescope's primary mirror. Table 2 provides a direct comparison for the integration time required to detect an Earth-like exoplanet for each simulation. As mentioned in Section 2.2.2, these two mirror architectures differ in both size and coronagraph design. The larger mirror has a smaller PSF, effectively allowing it to "resolve out" the extended exozodiacal disk and structure and requiring less exposure time to achieve our desired S/N (see Table 2).
## 4 Discussion
### Systematic Noise
In this work, we have identified two systematic noise terms that contribute to the overall noise budget of the system, impeding planetary S/N measurements. The first is the particle noise inherent to the input N-body simulations. Although we attempt to smooth over this noise in Section 2.1.3, it nevertheless sets a noise floor (see Figure 8). This noise term does not impact the validity of our results, and will not be relevant for real observations. Despite this systematic noise, we are able to recover the input planetary S/N for all affected cases, except for systems with \(>50\) zodis. Thus, all S/N measurements for systems with \(<50\) zodis are largely unaffected by particle noise, and instead may be limited by systematic noise associated with the disk.
The second systematic noise term we identify is the noise due to exozodiacal disk structure at the location of the planet. Similarly, this term sets a noise floor that limits the measurable planetary S/N. Some cases may have maximum S/N limits below the desired detection significance, as in the case of high-zodi, high-inclination simulations (see Figure 8). In these cases, it is impossible to achieve S/N beyond these limits because we are unable to both subtract the exozodiacal disk structure down to the Poisson noise limit and preserve the planet signal with our analysis pipeline.
Figure 8: Upper panel: Ratio of measured noise in the nominal disk model to calibration disk model as a function of zodi level. Markers within \(\pm 5\%\) of the horizontal dotted line (with an “x” in the center) indicate cases where the total measured noise in the nominal disk model is dominated by particle noise inherent to the N-body models. Lower panel: Maximum measurable S/N for cases that include the nominal disk model. For most edge-on or high-zodi cases, the input S/N of 7 (dotted horizontal line) is not recoverable due to systematic noise associated with the physical disk properties. The results in this figure assume an 8 m primary mirror and ADI PSF subtraction.
In particular, the high-pass filter leverages the fact that disk structure is typically more extended and larger in scale than a planetary point source. In high-inclination scenarios, as the disk inclination approaches \(90^{\circ}\), the disk itself becomes brighter due to its forward scattering properties, and its spatial scale is reduced to a sharp, knife-edge feature about the same scale as a planetary PSF (see Figure 1). In this scenario, the high-pass filter must be applied with an aggressively small FWHM to fit and remove the disk, and consequently the high-pass filter also removes significant planetary signal in the process. We conclude that these high-inclination systems will likely require an alternate technique that removes the disk, but preserves the planetary signal as much as possible. One option may be to fit the edge-on disk shape with a Gaussian, Lorentzian, or other parametric function centered on the disk. Additionally, it may be possible to leverage the wavelength-dependent flux of the disk to better remove it from the system; however, we leave these options to future work. We note that Kammerer et al. (2022) had similar difficulties removing the disk contribution for edge-on systems with smooth disks using a high-order polynomial-- the high-inclination disk subtraction problem remains an open issue.
### Mirror size comparison
In this work, we test 8 m and 12 m mirror configurations, and present the resulting noise and planetary S/N measurements in Figures 9 and 10, respectively. The recovered planetary S/N is generally improved by increasing the primary mirror size from 8 m to 12 m. The pixel size of the detector scales with the inverse of the diameter of the mirror; thus, the photons of an extended source are spread over more pixels in the 12 m configuration, resulting in the exozodiacal disk being "resolved out" with increased mirror size, and improvements in the measured planetary S/N.
The most extreme improvement in recovered planetary S/N by increasing the mirror size is seen in the low-zodi high-inclination cases. In the 1 zodi, edge-on case, the recovered planetary S/N increases by \(\sim 25\%\) when increasing the mirror diameter from 8 m to 12 m. This is due to the smaller PSF of the larger mirror "resolving out" the sharp features of the edge-on disk.
### Relative Integration Time
Figure 9: Measured noise relative to the expected Poisson noise (top row) and measured planetary S/N (bottom row) as a function of the zodi level of a system after optimizing the high-pass filter size to maximize S/N for an 8 m mirror architecture. The columns are results for ADI and RDI PSF subtraction techniques. Each panel includes results using aperture photometry and PSF matching, designated by solid and dashed lines, respectively. Each panel shows results for all system inclinations considered in this work, as well as systems with uniform backgrounds for comparison. We find that applying a high-pass filter can subtract exozodiacal disk structure to the Poisson noise limit at the expense of signal for nearly all systems \(<90^{\circ}\) inclined; however edge-on systems remain challenging.
\begin{table}
\begin{tabular}{c c c|c c|c c|c c|c c} \hline \hline \multicolumn{2}{c}{Zodis} & \multicolumn{3}{c}{Face-on} & \multicolumn{3}{c}{30\({}^{\circ}\) incl.} & \multicolumn{3}{c}{60\({}^{\circ}\) incl.} & \multicolumn{3}{c}{Edge-on} \\ \multicolumn{2}{c}{} & \multicolumn{1}{c}{C23} & K22 & C23 & K22 & C23 & K22 & C23 & K22 \\ \hline \multirow{4}{*}{\(\alpha\)} & 1 & 1.2 & 1.0 & 1.3 & 1.0 & 1.7 & 1.0 & 10 & \(\cdots\) \\ & 5 & 1.2 & 1.0 & 1.5 & 1.0 & 2.6 & 1.0 & \(\cdots\) & \(\cdots\) \\ & 10 & 1.1 & 1.0 & 1.8 & 1.0 & 4.9 & 1.0 & \(\cdots\) & \(\cdots\) \\ & 20 & 1.2 & 1.0 & 2.5 & 1.0 & 76 & 1.2 & \(\cdots\) & \(\cdots\) \\ & 50 & 2.2 & 1.0 & 17 & 1.0 & \(\cdots\) & 1.6 & \(\cdots\) & \(\cdots\) \\ & 100 & 10 & 1.0 & \(\cdots\) & 1.0 & \(\cdots\) & 2.1 & \(\cdots\) & \(\cdots\) \\ \hline \multirow{4}{*}{\(\alpha\)} & 1 & 1.2 & 1.0 & 1.2 & 1.0 & 1.2 & 1.0 & 1.0 & \(\cdots\) \\ & 5 & 1.1 & 1.0 & 1.3 & 1.0 & 1.7 & 1.0 & \(\cdots\) & \(\cdots\) \\ & 10 & 1.1 & 1.0 & 1.4 & 1.0 & 2.6 & 1.0 & \(\cdots\) & \(\cdots\) \\ & 20 & 1.1 & 1.0 & 2.3 & 1.0 & 36 & 1.0 & \(\cdots\) & \(\cdots\) \\ & 50 & 1.3 & 1.0 & \(\cdots\) & 1.0 & \(\cdots\) & 1.2 & \(\cdots\) & \(\cdots\) \\ & 100 & 2.3 & 1.0 & \(\cdots\) & 1.0 & \(\cdots\) & 1.6 & \(\cdots\) & \(\cdots\) \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of relative integration times for exozodi with structure (this work, C23) and smooth exozodi (K22, Kammerer et al., 2022). Each cell represents the ratio of realistic integration time needed to account for systematics and the theoretical exposure time given by Equation 4. We assume a target S/N of 7 and ADI PSF subtraction.
Figure 10: Same as Figure 9, but for a 12 m mirror architecture. The smaller PSF of the larger mirror size helps to “resolve out” the exozodi, and thus the measured planetary S/N is improved in nearly all scenarios.
We compare the relative integration time needed for a S/N = 7 planetary detection for 8 m and 12 m configurations using the ADI PSF subtraction technique in Table 2. Equation 4 gives the total theoretical integration time assumed for our simulations, assuming only photon noise and 100% instrument throughput, and does not take detector noise into account. Thus, any exposure times derived from this work should not be interpreted as absolute. Without the presence of systematics, we would expect the measured and calibration S/N curves to follow the theoretical S/N function given by solving Equation 4 for S/N. However, the presence of systematic noise forces the measured S/N curves to deviate from the theoretical S/N curve, eventually plateauing at a S/N maximum where more integration time does not improve the measurement. For most cases with inclinations \(<90^{\circ}\), this maximum S/N lies above the S/N = 7 line, implying that more integration time is required to achieve the desired S/N value. High-zodi cases typically require the largest increase in integration time to achieve the desired S/N, by an order of magnitude or more. Therefore, knowledge of a system's zodi level would be necessary to accurately estimate the integration time in a real-world scenario.
### Comparison to Smooth Disks
For systems with smooth exozodiacal disks, Kammerer et al. (2022) found that it may be possible to subtract the disk down to the Poisson noise limit using high-order polynomials and no prior information of the system for all cases up to \(\sim 10\) and \(\sim 50\) zodi for 8 m and 12 m mirror architectures, respectively. For the present study, we find that it may be possible to use a high-pass filtering technique to subtract exozodiacal disks with mean motion resonance structure down to the photon noise limit for all cases up to 100 zodi and \(<90^{\circ}\) inclination. However, as noted in Section 3.2, achieving this level of disk subtraction for high-zodi, high-inclination cases requires an aggressive high-pass filter that consequently removes planetary signal, and additional exposure time may be necessary to achieve the desired planetary S/N.
In Table 2, we compare relative exposure times calculated in this work to values derived from the results of Kammerer et al. (2022). Assuming the planetary signal is perfectly preserved, Kammerer et al. (2022) predict that the theoretical exposure time given by Equation 4 will be sufficient to detect planets in systems that are \(0^{\circ}\) and \(30^{\circ}\) inclined with up to 100 zodis, and \(60^{\circ}\) inclined with up to 10 zodis for an observatory with an 8 m aperture. Beyond 10 zodis, up to double the theoretical exposure time may be required to detect planets in a \(60^{\circ}\) inclined system. Kammerer et al. (2022) does not present results for edge-on disks. However, systems with exozodiacal disk structure may require more exposure time, with a few times the theoretical value for most cases, and up to an order of magnitude more in the most extreme examples. Therefore, the presence of exozodiacal disk structure will likely impact exposure time for real observations.
In summary, it may be feasible to subtract exozodiacal dust for low-zodi, moderately inclined systems whether the disk exhibits completely smooth dust or worst-case-scenario mean motion resonance structure, representing two extremes in disk morphology possibilities.
### Nearby Systems
This technique shows promise for effectively removing exozodiacal disk structure for the median observed zodi level of nearby systems. The HOSTS survey reported a best-fit median habitable zone zodi level of 3 zodis with a 95% upper limit of 27 zodis (Ertel et al., 2020). If these nearby systems included an Earth-like planet in their habitable zones as well as exozodiacal structure, we may be able to detect the planetary companion for all inclinations \(\leq 60^{\circ}\) using an 8 m telescope. Additionally, it may be feasible to subtract exozodiacal structure down to the Poisson noise limit even at the 95% upper limit zodi level from these systems for inclinations \(<60^{\circ}\); however, more integration time may be required. Although the orientations of the disks in the Ertel et al. (2020) sample were usually unknown, the median inclination of stellar systems with respect to Earth is statistically \(60^{\circ}\) if all systems are randomly oriented. Therefore, we may be able to contend with exozodiacal disks with significant mean motion resonance structure in over half of all nearby targets, so long as disk densities generally follow the distribution found in Ertel et al. (2020). To access the other half of possible targets, future studies should focus on exozodi mitigation at higher inclinations.
## 5 Conclusions
We simulated high-contrast images of Earth-like exoplanets in astrophysical scenes that include significant exozodiacal disk structure, and quantified our ability to subtract mean motion resonant structure to detect these exoplanets at 500 nm. We find that using an optimized high pass filter is an effective way to fit and subtract exozodiacal disk structure while preserving the planetary signal. This method is particularly powerful for low-to-moderately inclined systems with debris disks up to approximately an order of magnitude denser than the habitable zone dust in our Solar System.
In addition to the physical properties of the disk itself, we consider observations with an 8 m and 12 m
primary mirror diameter, each with a different coronagraph design, as well as ADI and RDI PSF-subtraction techniques, and two planet detection methods including aperture photometry and PSF matching. Our 8 m architecture is analogous to the \(\sim 6\) m inscribed diameter recommended by National Academies of Sciences, Engineering, and Medicine (2021). We find that increasing the primary mirror diameter from 8 m to 12 m helps "resolve out" the extended source of exozodiacal dust, broadly decreasing the relative time to planetary detection in most cases. The ADI PSF subtraction technique has a clear advantage over RDI, allowing us to subtract exozodiacal dust for zodi levels greater than 5 zodis. Finally, we find that using the more advanced PSF matching technique over simple aperture photometry may reduce the required exposure times to detect planets in our synthesized images by up to \(\sim 20\%\) and \(\sim 50\%\) for the 8 m and 12 m cases, respectively.
The median zodi level of nearby Sun-like stars is 3 zodis, with a 95% upper limit of 27 zodis (Ertel et al., 2020). Our results suggest that for moderately inclined systems, we may be able to subtract the exozodiacal dust from direct images of these nearby systems to detect Earth-like exoplanets in the habitable zone, even in the wake of worst-case-scenario mean motion resonance structures. However, mitigating exozodi in high-inclination systems remains an open problem.
## 6 Acknowledgments
We thank our anonymous reviewer for their thoughtful and thorough review that improved the clarity and strength of the paper. We acknowledge funding support from the University of Washington's Astrobiology Program, and the Virtual Planetary Laboratory Team, a member of the NASA Nexus for Exoplanet System Science, funded via NASA Astrobiology Program Grant No. 80NSSC18K0829. The simulations in this work were facilitated though the use of advanced computational, storage, and networking infrastructure provided by the Hyak supercomputer system at the University of Washington.
|
2309.10052 | Chapter 12: The moment problem on compact semi-algebraic sets (revised version) | The following is an improved version of Chapter 12 of my book [Sm17]. Among others, we present a new unified approach to the Archimedean Positivstellensätze for quadratic modules and semirings in Section 12.4 and we add a number of new results on Positivstellensätze for semirings and the corresponding moment problems. All references to formulas and to the bibliography of the book are retained. This version is essentially based on results from the recent paper [SmS23]. We will also use a result from the book [Sm20]. | Konrad Schmüdgen | 2023-09-18T18:07:27Z | http://arxiv.org/abs/2309.10052v1 | # Chapter 12: The moment problem on compact semi-algebraic sets (revised version)
###### Abstract.
The following is an improved version of Chapter 12 of my book [10]. Among others, we present a new unified approach to the Archimedean Positivstellensatze for quadratic modules and semirings in Section 12.4 and we add a number of results on Positivstellensatze for semirings and the corresponding moment problems. All references to formulas and to the bibliography of the book are retained.
This version is essentially based on results from the recent paper [10]. We will also use a result from the book [10].
Key words and phrases: Moment problem, Positivstellensatz, real algebraic geometry.

2020 Mathematics Subject Classification: 46A60 (Primary); 14P10 (Secondary).

Acknowledgment: The author would like to thank Matthias Schotz for the fruitful cooperation.

In this chapter we begin the study of the multidimensional moment problem. The passage to dimensions \(d\geq 2\) brings new difficulties and unexpected phenomena. In Section 3.2 we derived solvability criteria of the moment problem on intervals in terms of positivity conditions. It seems to be natural to look for similar characterizations in higher dimensions as well. This leads us immediately into the realm of real algebraic geometry and to descriptions of positive polynomials on semi-algebraic sets. In this chapter we treat this approach for basic closed _compact_ semi-algebraic subsets of \(\mathbb{R}^{d}\). It turns out that for such sets there is a close interaction between the moment problem and real algebraic geometry. Generally speaking, combined with Haviland's theorem any denominator-free Positivstellensatz yields an existence result for the moment problem. We develop this connection in detail and give complete proofs of the corresponding Positivstellensatze.
Basic notions and facts from real algebraic geometry that are needed for our treatment of the moment problem are collected in Section 12.1. Section 12.2 contains general facts on localizing functionals and supports of representing measures.
In Section 12.3, we prove our main existence result for the moment problem on compact semi-algebraic sets (Theorem 12.29) and the corresponding Positivstellensatz for preorderings (Theorem 12.28).
In Section 12.4 we derive a fundamental result, the Archimedean Positivstellensatz for quadratic modules and semirings (Theorem 12.43). In Section 12.5, we restate this theorem for the polynomial algebra \(\mathbb{R}[x_{1},\ldots,x_{d}]\) and give applications to the moment problem (Theorems 12.48, 12.50, and 12.51). Section 12.7 contains a Positivstellensatz and its application to the moment problem (Theorem 12.59) for semi-algebraic sets which are contained in compact polyhedra. In Section 12.8, we derive a number of classical results and examples on the moment problem for concrete compact sets. The results in Sections 12.3, 12.4, 12.5, 12.7, and 12.8 are formulated in the language of real algebra, that is, in terms of preorderings, quadratic modules, or semirings.
Apart from real algebraic geometry the theory of self-adjoint Hilbert space operators is our main tool for the multidimensional moment problem. In Section 12.6 we develop this method by studying the GNS construction and the multidimensional spectral theorem. This approach yields a short and elegant approach to the Positivstellensatz and to the moment problem for Archimedean quadratic modules.
Throughout this chapter, \(\mathsf{A}\) denotes a **commutative real algebra with unit element** denoted by \(1\). For notational simplicity we write \(\lambda\) for \(\lambda\cdot 1\), where \(\lambda\in\mathbb{R}\). Recall that \(\sum\mathsf{A}^{2}\) is the set of finite sums \(\sum_{i}a_{i}^{2}\) of squares of elements \(a_{i}\in\mathsf{A}\).
### Semi-algebraic sets and Positivstellensatze
The following definition contains three basic notions which are needed in the sequel.
**Definition 12.1**.: A _quadratic module_ of \(\mathsf{A}\) is a subset \(Q\) of \(\mathsf{A}\) such that
\[Q+Q\subseteq Q,\ 1\in Q,\ a^{2}Q\subseteq Q\ \text{for all}\ a\in\mathsf{A}. \tag{12.1}\]
A quadratic module \(T\) is called a _preordering_ if \(\,T\cdot T\subseteq T\).
A _semiring_ is a subset \(S\) of \(\mathsf{A}\) satisfying
\[S+S\subseteq S,\ S\cdot S\subseteq S,\ \lambda\in S\ \text{for all}\ \lambda\in\mathbb{R},\lambda\geq 0. \tag{12.2}\]
In the literature "semirings" are also called "preprimes". The name "quadratic module" stems from the last condition in (12.1) which means that \(Q\) is invariant under multiplication by squares. Setting \(a=\sqrt{\lambda}\), this implies that \(\lambda\cdot Q\subseteq Q\) for \(\lambda\geq 0\). While semirings and preorderings are closed under multiplication, quadratic modules are not necessarily. Semirings do not contain all squares in general. Clearly, a quadratic module is a preordering if and only if it is a semiring. In this book, we work mainly with quadratic modules and preorderings.
_Example 12.2_.: The subset \(\,S=\{\sum_{j=0}^{n}a_{j}x^{j}:\,a_{j}\geq 0,n\in\mathbb{N}\}\,\) of \(\,\mathbb{R}[x]\,\) is a semiring, but not a quadratic module. Clearly, \(Q=\sum\mathbb{R}_{d}[\underline{x}]^{2}+x_{1}\sum\mathbb{R}_{d}[\underline{x}] ^{2}+x_{2}\sum\mathbb{R}_{d}[\underline{x}]^{2}\) is a quadratic module of \(\mathbb{R}_{d}[\underline{x}],d\geq 2\), but \(Q\) is neither a semiring nor a preordering. \(\circ\)
Obviously, \(\sum\mathsf{A}^{2}\) is the smallest quadratic module of \(\mathsf{A}\). Since \(\mathsf{A}\) is commutative, \(\sum\mathsf{A}^{2}\) is invariant under multiplication, so it is also the smallest preordering of \(\mathsf{A}\).
Our guiding example for \(\,\mathsf{A}\,\) is the polynomial algebra \(\,\mathbb{R}_{d}[\underline{x}]:=\mathbb{R}[x_{1},\ldots,x_{d}]\).
Let \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) be a finite subset of \(\mathbb{R}_{d}[\underline{x}]\). The set
\[\mathcal{K}(\mathsf{f})\equiv\mathcal{K}(f_{1},\ldots,f_{k})=\{x\in\mathbb{R} ^{d}:f_{1}(x)\geq 0,\ldots,f_{k}(x)\geq 0\} \tag{12.3}\]
is called the _basic closed semi-algebraic set associated with \(\mathsf{f}\)_. It is easily seen that
\[Q(\mathsf{f})\equiv Q(f_{1},\ldots,f_{k})=\big{\{}\,\sigma_{0}+f_{1}\sigma_{1 }+\cdots+f_{k}\sigma_{k}:\,\sigma_{0},\ldots,\sigma_{k}\in\sum\mathbb{R}_{d}[ \underline{x}]^{2}\big{\}} \tag{12.4}\]
is the _quadratic module generated by the set \(\mathsf{f},\)_
\[S(\mathsf{f})\equiv S(f_{1},\ldots,f_{k})=\bigg{\{}\,\sum_{n_{1},\ldots,n_{k}=0}^{r}\alpha_{n_{1},\ldots,n_{k}}f_{1}^{n_{1}}\cdots f_{k}^{n_{k}}:\alpha_{n_{1},\ldots,n_{k}}\geq 0,\ r\in\mathbb{N}_{0}\bigg{\}} \tag{12.5}\]
is the _semiring generated by_\(\mathsf{f}\), and
\[T(\mathsf{f})\equiv T(f_{1},\dots,f_{k})=\bigg{\{}\sum_{e=(e_{1},\dots,e_{k})\in \{0,1\}^{k}}f_{1}^{e_{1}}\cdots f_{k}^{e_{k}}\sigma_{e}:\,\sigma_{e}\in\sum \mathbb{R}_{d}[\underline{x}]^{2}\,\bigg{\}} \tag{12.6}\]
is the _preordering generated by the set \(\mathsf{f}\)_.
These sets \(\mathcal{K}(\mathsf{f})\), \(Q(\mathsf{f})\), \(S(\mathsf{f})\), \(T(\mathsf{f})\) play a crucial role in this chapter and the next.
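To make the combinatorics behind (12.4)-(12.6) concrete, the following small Python sketch (an added illustration, not taken from the text) enumerates the \(2^{k}\) mixed products \(f_{1}^{e_{1}}\cdots f_{k}^{e_{k}}\), \(e_{j}\in\{0,1\}\), which carry the sums-of-squares weights \(\sigma_{e}\) in the definition of \(T(\mathsf{f})\); the generators \(\mathsf{f}=\{x,\,y,\,1-x-y\}\) are a hypothetical example, with \(\mathcal{K}(\mathsf{f})\) a triangle in \(\mathbb{R}^{2}\).

```python
# Illustration only: the 2^k mixed products f_1^{e_1}...f_k^{e_k} entering (12.6).
# The generators below are an assumed example; K(f) is a triangle in R^2.
from itertools import product
import sympy as sp

x, y = sp.symbols("x y")
f = [x, y, 1 - x - y]

mixed = []
for e in product((0, 1), repeat=len(f)):
    term = sp.Integer(1)
    for fi, ei in zip(f, e):
        term *= fi**ei
    mixed.append((e, sp.expand(term)))

# Q(f) from (12.4) uses only the products with at most one factor,
# while T(f) from (12.6) uses all 2^k of them, each multiplied by a sum of squares.
for e, term in mixed:
    print(e, term)
```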
**Definition 12.3**.: A _cone_ is a subset \(C\) of \(\mathsf{A}\) such that
\[C+C\subseteq C\text{ and }\lambda\cdot C\subseteq C\text{ for }\lambda\geq 0.\]
A _unital cone_ of \(\mathsf{A}\) is a cone \(C\) which contains the unit element of \(\mathsf{A}\).
An _\(S\)-module_ for a semiring \(S\) is a unital cone \(C\) such that
\[ac\in C\text{ for }a\in S\text{ and }c\in C. \tag{12.7}\]
Obviously, semirings, quadratic modules, and preorderings are unital cones.
Setting \(c=1\) in (12.7) yields \(a\in C\) for \(a\in S\). Thus, \(S\subseteq C\) for any \(S\)-module \(C\).
Each cone \(C\) of \(\mathsf{A}\) yields an ordering \(\preceq\) on \(\mathsf{A}\) by defining
\[a\preceq b\quad\text{if and only if}\quad b-a\in C.\]
_Example 12.4_.: Let \(S\) be a semiring of \(\mathsf{A}\) and \(g_{0}:=1,g_{1},\dots,g_{r}\in\mathsf{A}\), where \(r\in\mathbb{N}\). Then
\[C:=g_{0}S+g_{1}S+\cdots+g_{r}S\]
is the _\(S\)-module of \(\mathsf{A}\) generated by \(g_{1},\dots,g_{r}\)_._
By the above definitions, all polynomials from \(T(\mathsf{f})\) are nonnegative on \(\mathcal{K}(\mathsf{f})\), but in general \(T(\mathsf{f})\) does not exhaust the nonnegative polynomials on \(\mathcal{K}(\mathsf{f})\).
The following _Positivstellensatz of Krivine-Stengle_ is a fundamental result of real algebraic geometry. It describes nonnegative resp. positive polynomials on \(\mathcal{K}(\mathsf{f})\) in terms of _quotients_ of elements of the preordering \(T(\mathsf{f})\).
**Theorem 12.5**.: _Let \(\mathcal{K}(\mathsf{f})\) and \(T(\mathsf{f})\) be as above and let \(g\in\mathbb{R}_{d}[\underline{x}]\). Then we have:_
1. _[label=()]_
2. (Positivstellensatz)_ \(g(x)>0\) _for all_ \(x\in\mathcal{K}(\mathsf{f})\) _if and only if there exist polynomials_ \(p,q\in T(\mathsf{f})\) _such that_ \(pg=1+q\)_._
3. (Nichtnegativstellensatz)_ \(g(x)\geq 0\) _for all_ \(x\in\mathcal{K}(\mathsf{f})\) _if and only if there exist_ \(p,q\in T(\mathsf{f})\) _and_ \(m\in\mathbb{N}\) _such that_ \(pg=g^{2m}+q\)_._
4. (Nullstellensatz)_ \(g(x)=0\) _for_ \(x\in\mathcal{K}(\mathsf{f})\) _if and only if_ \(-g^{2n}\in T(\mathsf{f})\) _for some_ \(n\in\mathbb{N}\)_._
5. \(\mathcal{K}(\mathsf{f})\) _is empty if and only if_ \(-1\) _belongs to_ \(T(\mathsf{f})\)_._
Proof.: See [PD] or [Ms1]. The original papers are [Kv1] and [Ste1].
All _"if"_ assertions are easily checked and it is not difficult to show that all four statements are equivalent, see e.g. [Ms1]. Standard proofs of Theorem 12.5 as given in [PD] or [Ms1] are based on the Tarski-Seidenberg transfer principle. Assertion (i) of Theorem 12.5 will play an essential role in the proof of Proposition 12.26 below.
Now we turn to algebraic sets. For a subset \(S\) of \(\mathbb{R}_{d}[\underline{x}]\), the real zero set of \(S\) is
\[\mathcal{Z}(S)=\{x\in\mathbb{R}^{d}:f(x)=0\quad\text{for all }f\in S\}. \tag{12.8}\]
A subset \(V\) of \(\mathbb{R}^{d}\) of the form \(\mathcal{Z}(S)\) is called a _real algebraic set_.
Hilbert's basis theorem [CLO, p. 75] implies that each real algebraic set is of the form \(\mathcal{Z}(S)\) for some _finite_ set \(S=\{h_{1},\ldots,h_{m}\}\). In particular, each real algebraic set is a basic closed semi-algebraic set, because \(\mathcal{K}(h_{1},\ldots,h_{m},-h_{1},\ldots,-h_{m})=\mathcal{Z}(S)\).
Let \(S\) be a subset of \(\mathbb{R}_{d}[\underline{x}]\) and \(V:=\mathcal{Z}(S)\) the corresponding real algebraic set. We denote by \(\mathcal{I}\) the ideal of \(\mathbb{R}_{d}[\underline{x}]\) generated by \(S\) and by \(\hat{\mathcal{I}}\) the ideal of \(f\in\mathbb{R}_{d}[\underline{x}]\) which vanish on \(V\). Clearly, \(\mathcal{Z}(S)=\mathcal{Z}(\mathcal{I})\) and \(\mathcal{I}\subseteq\hat{\mathcal{I}}\). In general, \(\mathcal{I}\neq\hat{\mathcal{I}}\). (For instance, if \(d=2\) and \(S=\{x_{1}^{2}+x_{2}^{2}\}\), then \(V=\{0\}\) and \(x_{1}^{2}\in\hat{\mathcal{I}}\), but \(x_{1}^{2}\notin\mathcal{I}\).)
It can be shown [BCRo, Theorem 4.1.4] that \(\mathcal{I}=\hat{\mathcal{I}}\) if and only if \(\sum p_{j}^{2}\in\mathcal{I}\) for finitely many \(p_{j}\in\mathbb{R}_{d}[\underline{x}]\) implies that \(p_{j}\in\mathcal{I}\) for all \(j\). An ideal that obeys this property is called _real_. In particular, \(\hat{\mathcal{I}}\) is real. The ideal \(\mathcal{I}\) generated by a single irreducible polynomial \(h\in\mathbb{R}_{d}[\underline{x}]\) is real if and only if \(h\) changes its sign on \(\mathbb{R}^{d}\), that is, there are \(x_{0},x_{1}\in\mathbb{R}^{d}\) such that \(h(x_{0})h(x_{1})<0\), see [BCRo, Theorem 4.5.1].
The quotient algebra
\[\mathbb{R}[V]:=\mathbb{R}_{d}[\underline{x}]/\hat{\mathcal{I}} \tag{12.9}\]
is called the algebra of _regular functions_ on \(V\). Since \(\hat{\mathcal{I}}\) is real, it follows that
\[\sum\mathbb{R}[V]^{2}\cap\big{(}-\sum\mathbb{R}[V]^{2}\big{)}=\{0\}. \tag{12.10}\]
_Example 12.6_.: Let us assume that the set \(\mathsf{f}\) is of the form
\[\mathsf{f}=\{g_{1},\cdots,g_{l},h_{1},-h_{1},\ldots,h_{m},-h_{m}\}.\]
If \(\mathsf{g}:=\{g_{1},\ldots,g_{l}\}\) and \(\mathcal{I}\) denotes the ideal of \(\mathbb{R}_{d}[\underline{x}]\) generated by \(h_{1},\ldots,h_{m}\), then
\[\mathcal{K}(\mathsf{f})=\mathcal{K}(\mathsf{g})\cap\mathcal{Z}(\mathcal{I}), \ Q(\mathsf{f})=Q(\mathsf{g})+\mathcal{I},\ \text{and}\ T(\mathsf{f})=T(\mathsf{g})+\mathcal{I}. \tag{12.11}\]
We prove (12.11). The first equality of (12.11) and the inclusions \(Q(\mathsf{f})\subseteq Q(\mathsf{g})+\mathcal{I}\) and \(T(\mathsf{f})\subseteq T(\mathsf{g})+\mathcal{I}\) are clear from the corresponding definitions. The identity
\[ph_{j}=\frac{1}{4}[(p+1)^{2}h_{j}+(p-1)^{2}(-h_{j})]\in Q(\mathsf{f}),\ p\in \mathbb{R}_{d}[\underline{x}],\]
implies that \(\mathcal{I}\subseteq Q(\mathsf{f})\subseteq T(\mathsf{f})\). Hence \(Q(\mathsf{g})+\mathcal{I}\subseteq Q(\mathsf{f})\) and \(T(\mathsf{g})+\mathcal{I}\subseteq T(\mathsf{f})\). \(\circ\)
Another important concept is introduced in the following definition.
**Definition 12.7**.: Let \(C\) be a unital cone in \(\mathsf{A}\). Define
\[\mathsf{A}_{b}(C):=\{a\in\mathsf{A}:\text{there exists a $\lambda>0$ such that $\lambda-a\in C$ and $\lambda+a\in C$}\}.\]
We shall say that \(C\) is _Archimedean_ if \(\mathsf{A}_{b}(C)=\mathsf{A}\), or equivalently, for every \(a\in\mathsf{A}\) there exists a \(\lambda>0\) such that \(\lambda-a\in C\).
**Lemma 12.8**.: _Let \(Q\) be a quadratic module of \(\mathsf{A}\) and let \(a\in\mathsf{A}\). Then \(a\in\mathsf{A}_{b}(Q)\) if and only if \(\lambda^{2}-a^{2}\in Q\) for some \(\lambda>0\)._
Proof.: If \(\lambda\pm a\in Q\) for \(\lambda>0\), then
\[\lambda^{2}-a^{2}=\frac{1}{2\lambda}\big{[}(\lambda+a)^{2}(\lambda-a)+( \lambda-a)^{2}(\lambda+a)\big{]}\in Q.\]
Conversely, if \(\lambda^{2}-a^{2}\in Q\) and \(\lambda>0\), then
\[\lambda\pm a=\frac{1}{2\lambda}\big{[}(\lambda^{2}-a^{2})+(\lambda\pm a)^{2} \big{]}\in Q.\qed\]
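As a quick sanity check (an added illustration, not from the text), both algebraic identities in this proof can be verified symbolically:

```python
# Symbolic verification (illustration only) of the two identities in Lemma 12.8.
import sympy as sp

lam, a = sp.symbols("lambda a", positive=True)

# lambda^2 - a^2 = (1/(2*lambda)) * [ (lambda+a)^2 (lambda-a) + (lambda-a)^2 (lambda+a) ]
rhs1 = ((lam + a)**2 * (lam - a) + (lam - a)**2 * (lam + a)) / (2 * lam)
assert sp.simplify(lam**2 - a**2 - rhs1) == 0

# lambda +/- a = (1/(2*lambda)) * [ (lambda^2 - a^2) + (lambda +/- a)^2 ]
for s in (+1, -1):
    rhs2 = ((lam**2 - a**2) + (lam + s * a)**2) / (2 * lam)
    assert sp.simplify(lam + s * a - rhs2) == 0

print("identities verified")
```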
**Lemma 12.9**.: _Suppose that \(Q\) is a quadratic module or a semiring of \(\mathsf{A}\)._
1. \(\mathsf{A}_{b}(Q)\) _is a unital subalgebra of_ \(\mathsf{A}\)_._
2. _If the algebra_ \(\mathsf{A}\) _is generated by elements_ \(a_{1},\ldots,a_{n}\)_, then_ \(Q\) _is Archimedean if and only if for each_ \(a_{i}\) _there exists a_ \(\lambda_{i}>0\) _such that_ \(\lambda_{i}\pm a_{i}\in Q\)_._
Proof.: (i): Clearly, sums and scalar multiples of elements of \(\mathsf{A}_{b}(Q)\) are again in \(\mathsf{A}_{b}(Q)\). It suffices to verify that this holds for the product of elements \(a,b\in\mathsf{A}_{b}(Q)\).
First we suppose that \(Q\) is a quadratic module. By Lemma 12.8, there are \(\lambda_{1}>0\) and \(\lambda_{2}>0\) such that \(\lambda_{1}^{2}-a^{2}\) and \(\lambda_{2}^{2}-b^{2}\) are in \(Q\). Then
\[(\lambda_{1}\lambda_{2})^{2}-(ab)^{2}=\lambda_{2}^{2}(\lambda_{1}^{2}-a^{2})+a ^{2}(\lambda_{2}^{2}-b^{2})\in Q,\]
so that \(ab\in\mathsf{A}_{b}(Q)\) again by Lemma 12.8.
Now let \(Q\) be a semiring. If \(\lambda_{1}\pm a\in Q\) and \(\lambda_{2}\pm b\in Q\) with \(\lambda_{1},\lambda_{2}>0\), then
\[\lambda_{1}\lambda_{2}\mp ab=\frac{1}{2}\big{(}(\lambda_{1}+a)(\lambda_{2}\mp b)+(\lambda_{1}-a)(\lambda_{2}\pm b)\big{)}\in Q,\]
so that \(ab\in\mathsf{A}_{b}(Q)\).
(ii) follows at once from (i).
By Lemma 12.9(ii), it suffices to check the Archimedean condition \(\lambda\pm a\in Q\) for algebra generators. Often this simplifies proving that \(Q\) is Archimedean.
**Corollary 12.10**.: _For a quadratic module \(Q\) of \(\,\mathbb{R}_{d}[\underline{x}]\) the following are equivalent:_
1. \(Q\) _is Archimedean._
2. _There exists a number_ \(\lambda>0\) _such that_ \(\lambda-\sum_{k=1}^{d}x_{k}^{2}\in Q\)_._
3. _For any_ \(k=1,\ldots,d\) _there exists a_ \(\lambda_{k}>0\) _such that_ \(\lambda_{k}-x_{k}^{2}\in Q\)_._
Proof.: (i)\(\rightarrow\)(ii) is clear by definition. If \(\lambda-\sum_{j=1}^{d}x_{j}^{2}\in Q\), then
\[\lambda-x_{k}^{2}=\lambda-\sum\nolimits_{j}x_{j}^{2}\ +\ \sum\nolimits_{j\neq k}x_{j}^{2}\in Q.\]
This proves (ii)\(\rightarrow\)(iii). Finally, if (iii) holds, then \(x_{k}\in\mathsf{A}_{b}(Q)\) by Lemma 12.8 and hence \(\mathsf{A}_{b}(Q)=\mathsf{A}\) by Lemma 12.9(ii). Thus, (iii)\(\rightarrow\)(i).
Note that \(S=\mathbb{R}_{+}\cdot 1\) is a semiring, so semirings could be rather "small".
**Definition 12.11**.: A semiring \(S\) is called _generating_ if \(\,A=S-S\).
An Archimedean semiring is always generating, since \(a=\lambda-(\lambda-a)\) for \(a\in A\) and \(\lambda\in\mathbb{R}\).
**Corollary 12.12**.: _If the quadratic module \(Q(\mathsf{f})\) of \(\mathbb{R}_{d}[\underline{x}]\) is Archimedean, then the set \(\mathcal{K}(\mathsf{f})\) is compact._
Proof.: By the respective definitions, polynomials of \(Q(\mathsf{f})\) are nonnegative on \(\mathcal{K}(\mathsf{f})\). Since \(Q(\mathsf{f})\) is Archimedean, \(\lambda-\sum_{k=1}^{d}x_{k}^{2}\in Q(\mathsf{f})\) for some \(\lambda>0\) by Corollary 12.10, so \(\mathcal{K}(\mathsf{f})\) is contained in the ball centered at the origin with radius \(\sqrt{\lambda}\).
The converse of Corollary 12.12 does not hold, as the following example shows. (However, it does hold for the preordering \(T(\mathsf{f})\) as shown by Proposition 12.26 below.)
_Example 12.13_.: Let \(f_{1}=2x_{1}-1\), \(f_{2}=2x_{2}-1\), \(f_{3}=1-x_{1}x_{2}\). Then the set \(\mathcal{K}(\mathsf{f})\) is compact, but \(Q(\mathsf{f})\) is not Archimedean (see [PD, p. 146] for a proof). \(\circ\)
The following separation result will be used in Sections 12.4 and 12.6.
**Proposition 12.14**.: _Let \(C\) be an Archimedean unital cone of \(\mathsf{A}\). If \(a_{0}\in\mathsf{A}\) and \(a_{0}\notin C\), there exists a \(C\)-positive linear functional \(\varphi\) on \(\mathsf{A}\) such that \(\varphi(1)=1\) and \(\varphi(a_{0})\leq 0\). The functional \(\varphi\) may be chosen as an extremal functional of the dual cone_
\[C^{\wedge}:=\{L\in A^{*}:L(c)\geq 0\text{ for }c\in C\,\}. \tag{12.12}\]
Proof.: Let \(a\in\mathsf{A}\) and choose \(\lambda>0\) such that \(\,\lambda\pm a\in C\). If \(\,0<\delta\leq\lambda^{-1},\,\) then \(\,\delta^{-1}\pm a\in C\) and hence \(1\pm\delta a\in C\). Thus 1 is an internal point of \(C\) and an order unit for \(C\). Therefore a separation theorem for convex sets (see e.g. Proposition C.5 in [10]) applies, so there exists an extremal functional \(\varphi\) of \(C^{\wedge}\) such that \(\varphi(1)=1\) and \(\varphi(a_{0})\leq 0\). (Without the extremality of \(\varphi\) this result follows also from Eidelheit's separation Theorem A.27.)
_Example 12.15_.: Let \(\mathsf{A}=\mathbb{R}_{d}[\underline{x}]\) and let \(K\) be a closed subset of \(\mathbb{R}^{d}\). If \(C\) is the preordering \(\mathrm{Pos}(K)\) of nonnegative polynomials on \(K\), then \(\mathsf{A}_{b}(C)\) is just the set of bounded polynomials on \(K\). Hence \(C\) is Archimedean if and only if \(K\) is compact. \(\circ\)
Recall from Definition 1.13 that \(\hat{\mathsf{A}}\) denotes the set of characters of the real algebra \(\mathsf{A}\), that is, the set of unital algebra homomorphism \(\chi:\mathsf{A}\to\mathbb{R}\).
For a subset \(C\) of \(\mathsf{A}\) we define
\[\mathcal{K}(C):=\{\chi\in\hat{\mathsf{A}}:\chi(c)\geq 0\text{ for all }c\in C\}. \tag{12.13}\]
_Example 12.16_.: \(\mathsf{A}=\mathbb{R}_{d}[\underline{x}]\)
Then \(\hat{A}\) is the set of evaluations \(\chi_{t}(p)=p(t),p\in\mathsf{A}\), at points of \(\mathbb{R}^{d}\). As usual, we identify \(\chi_{t}\) and \(t\), so that \(\hat{A}\cong\mathbb{R}^{d}\). Then, if \(C\) is the quadratic module \(Q(\mathsf{f})\) defined by (12.4) or \(\,C\) is the semiring \(S(\mathsf{f})\) defined by (12.5) or \(\,C\) is the preordering \(T(\mathsf{f})\) defined by (12.6), the set \(\mathcal{K}(C)\) is just the semi-algebraic set \(\mathcal{K}(\mathsf{f})\) given by (12.3). \(\circ\)
Let \(C\) be a quadratic module or a semiring. The set \(C^{\mathrm{sat}}=\mathrm{Pos}(\mathcal{K}(C))\) of all \(f\in\mathsf{A}\) which are nonnegative on the set \(\mathcal{K}(C)\) is obviously a preordering of \(\mathsf{A}\) that contains \(C\). Then \(C\) is called _saturated_ if \(\,C=C^{\mathrm{sat}}\), that is, if \(C\) is equal to its _saturation_ \(C^{\mathrm{sat}}\).
Real algebraic geometry is treated in the books [BCRo], [PD], [Ms1]; a recent survey on positivity and sums of squares is given in [Sr3].
### Localizing functionals and supports of representing measures
Haviland's Theorem 1.12 shows that there is a close link between positive polynomials and the moment problem. However, in order to apply this result reasonable descriptions of positive, or at least of strictly positive, polynomials are needed.
Recall that the moment problem for a functional \(L\) on the interval \([a,b]\) is solvable if and only if \(L(p^{2}+(x-a)(b-x)q^{2})\geq 0\) for all \(p,q\in\mathbb{R}[x]\). This condition means that two infinite Hankel matrices are positive semidefinite and this holds if and only if all principal minors of these matrices are nonnegative. In the multidimensional case we are trying to find similar solvability criteria. For this it is natural to consider sets that are defined by finitely many polynomial inequalities \(f_{1}(x)\geq 0,\ldots,f_{k}(x)\geq 0\). These are precisely the basic closed semi-algebraic sets \(\mathcal{K}(\mathsf{f})\), so we have entered the setup of real algebraic geometry.
Let us fix a semi-algebraic set \(\mathcal{K}(\mathfrak{f})\). Let \(L\) be a \(\mathcal{K}(\mathfrak{f})\)-moment functional, that is, \(L\) is of the form \(\,L(p)=L^{\mu}(p)\equiv\int p\,d\mu\) for \(p\in\mathbb{R}_{d}[\underline{x}],\,\) where \(\mu\) is a Radon measure supported on \(\mathcal{K}(\mathfrak{f})\). If \(g\in\mathbb{R}_{d}[x]\) is nonnegative on \(\mathcal{K}(\mathfrak{f})\), then obviously
\[L(gp^{2})\geq 0\quad\text{for all}\quad p\in\mathbb{R}_{d}[\underline{x}], \tag{12.14}\]
so (12.14) is a _necessary_ condition for \(L\) being a \(\mathcal{K}(\mathfrak{f})\)-moment functional.
The overall strategy in this chapter and the next is to solve the \(\mathcal{K}(\mathfrak{f})\)-moment problem by _finitely many sufficient_ conditions of the form (12.14). That is, our aim is to "find" nonnegative polynomials \(g_{1},\dots,g_{m}\,\) on \(\mathcal{K}(\mathfrak{f})\) such that the following holds:
_Each linear functional \(L\) on \(\mathbb{R}_{d}[\underline{x}]\) which satisfies condition (12.14) for \(g=g_{1},\dots,g_{m}\) and \(g=1\) is a \(\mathcal{K}(\mathfrak{f})\)-moment functional._ (The polynomial \(g=1\) is needed in order to ensure that \(L\) itself is a positive functional.)
In general it is not sufficient to take only the polynomials \(f_{j}\) themselves as \(g_{j}\). For our main results (Theorems 12.29 and 13.10), the positivity of the functional on the preordering \(T(\mathfrak{f})\) is assumed. This means that condition (12.14) is required for _all_ mixed products \(g=f_{1}^{e_{1}}\cdots f_{k}^{e_{k}}\), where \(e_{j}\in\{0,1\}\) for \(j=1,\dots,k\).
**Definition 12.17**.: Let \(L\) be a linear functional on \(\mathbb{R}_{d}[\underline{x}]\) and let \(g\in\mathbb{R}_{d}[\underline{x}]\). The linear functional \(L_{g}\) on \(\mathbb{R}_{d}[\underline{x}]\) defined by \(L_{g}(p)=L(gp),\,p\in\mathbb{R}_{d}[\underline{x}]\), is called the _localization_ of \(L\) at \(g\) or simply the _localized functional_.
Condition (12.14) means the localized functional \(L_{g}\) is a positive linear functional on \(\mathbb{R}_{d}[\underline{x}].\) Further, if \(L\) comes from a measure \(\mu\) supported on \(\mathcal{K}(\mathfrak{f})\) and \(g\) is nonnegative on \(\mathcal{K}(\mathfrak{f})\), then
\[L_{g}(p)=L(gp)=\int_{\mathcal{K}(\mathfrak{f})}p(x)\,g(x)d\mu(x),\,\,p\in \mathbb{R}_{d}[\underline{x}],\]
that is, \(L_{g}\) is given by the measure \(\nu\) on \(\mathcal{K}(\mathfrak{f})\) defined by \(d\nu=g(x)d\mu.\)
Localized functionals will play an important role throughout our treatment. They are used to localize the support of the measure (see Propositions 12.22 and 12.23 and Theorem 14.25) or to derive determinacy criteria (see Theorem 14.12).
Now we introduce two other objects associated with the functional \(L\) and the polynomial \(g\). Let \(s=(s_{\alpha})_{\alpha\in\mathbb{N}_{0}^{d}}\) be the \(d\)-sequence given by \(s_{\alpha}=L(x^{\alpha})\) and write \(g=\sum_{\gamma}g_{\gamma}x^{\gamma}\). Then we define a \(d\)-sequence \(g(E)s=((g(E)s)_{\alpha})_{\alpha\in\mathbb{N}_{0}^{d}}\) by
\[(g(E)s)_{\alpha}:=\sum_{\gamma}\,\,g_{\gamma}s_{\alpha+\gamma},\,\,\alpha\in \mathbb{N}_{0}^{d},\]
and an infinite matrix \(\,H(gs)=(H(gs)_{\alpha,\beta})_{\alpha,\beta\in\mathbb{N}_{0}^{d}}\,\) over \(\,\mathbb{N}_{0}^{d}\times\mathbb{N}_{0}^{d}\) with entries
\[H(gs)_{\alpha,\beta}:=\sum_{\gamma}\,\,g_{\gamma}s_{\alpha+\beta+\gamma},\, \,\alpha,\beta\in\mathbb{N}_{0}^{d}. \tag{12.15}\]
Using these definitions for \(p(x)=\sum_{\alpha}a_{\alpha}x^{\alpha}\in\mathbb{R}_{d}[\underline{x}]\) we compute
\[L_{s}(gp^{2})=\sum_{\alpha,\beta,\gamma}a_{\alpha}a_{\beta}g_{\gamma}s_{ \alpha+\beta+\gamma}=\sum_{\alpha,\beta}a_{\alpha}a_{\beta}(g(E)s)_{\alpha+ \beta}=\sum_{\alpha,\beta}\,a_{\alpha}a_{\beta}H(gs)_{\alpha,\beta}. \tag{12.16}\]
This shows that \(g(E)s\) is the \(\,d\)-sequence for the functional \(L_{g}\) and \(H(gs)\) is a Hankel matrix for the sequence \(g(E)s\). The matrix \(H(gs)\) is called the _localized Hankel matrix_ of \(s\) at \(g\).
**Proposition 12.18**.: _Let \(\,Q(\mathfrak{g})\,\) be the quadratic module generated by the finite subset \(\,\mathfrak{g}=\{g_{1},\ldots,g_{m}\}\) of \(\,\mathbb{R}_{d}[\underline{x}]\). Let \(L\) be a linear functional on \(\mathbb{R}_{d}[\underline{x}]\) and \(\,s=(s_{\alpha})_{\alpha\in\mathbb{N}_{0}^{d}}\) the \(d\)-sequence defined by \(\,s_{\alpha}=L(x^{\alpha}).\) Then the following are equivalent:_
1. \(L\) _is a_ \(Q(\mathfrak{g})\)_-positive linear functional on_ \(\,\mathbb{R}_{d}[\underline{x}]\)_._
2. \(L,L_{g_{1}},\ldots,L_{g_{m}}\) _are positive linear functionals on_ \(\,\mathbb{R}_{d}[\underline{x}]\)_._
3. \(s,g_{1}(E)s,\ldots,g_{m}(E)s\) _are positive semidefinite_ \(d\)_-sequences._
4. \(H(s),H(g_{1}s),\ldots,H(g_{m}s)\) _are positive semidefinite matrices._
Proof.: The equivalence of (i) and (ii) is immediate from the definition (12.4) of the quadratic module \(Q(\mathfrak{g})\) and Definition 12.17 of the localized functionals \(L_{g_{j}}\).
By Proposition 2.7, a linear functional is positive if and only if the corresponding sequence is positive semidefinite, or equivalently, the Hankel matrix is positive semidefinite. By (12.16) this gives the equivalence of (ii), (iii), and (iv).
The solvability conditions in the existence theorems for the moment problem in this chapter and the next are given in the form (i) for some finitely generated quadratic module or preordering. This means that condition (12.14) is satisfied for finitely many polynomials \(g\). Proposition 12.18 says there are various _equivalent_ formulations of these solvability criteria: They can be expressed in the language of real algebraic geometry (in terms of quadratic modules, semirings or preorderings), of \(*\)-algebras (as positive functionals on \(\mathbb{R}_{d}[\underline{x}]\)), of matrices (by the positive semidefiniteness of Hankel matrices) or of sequences (by the positive semidefiniteness of sequences).
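As a concrete numerical illustration of condition (iv) (an added sketch, not part of the text), take \(d=1\), the moment sequence \(s_{n}=\int_{0}^{1}x^{n}\,dx=1/(n+1)\) of Lebesgue measure on \([0,1]\), and the single generator \(g(x)=x(1-x)\), so that \(\mathcal{K}(g)=[0,1]\). Truncations of the Hankel matrices \(H(s)\) and \(H(gs)\) from (12.15) are then positive semidefinite, as the proposition predicts.

```python
# Numerical illustration (not from the text) of Proposition 12.18(iv) for d = 1:
# moments s_n = 1/(n+1) of Lebesgue measure on [0,1] and g(x) = x - x^2.
import numpy as np

N = 6                                                    # truncation order (assumption)
s = np.array([1.0 / (n + 1) for n in range(2 * N + 3)])  # s_0, s_1, ...

H_s = np.array([[s[i + j] for j in range(N)] for i in range(N)])
# localized Hankel matrix (12.15): H(gs)_{i,j} = s_{i+j+1} - s_{i+j+2}
H_gs = np.array([[s[i + j + 1] - s[i + j + 2] for j in range(N)] for i in range(N)])

for name, H in (("H(s)", H_s), ("H(gs)", H_gs)):
    print(name, "min eigenvalue:", np.linalg.eigvalsh(H).min())  # both nonnegative
```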
The next proposition contains a useful criterion for localizing supports of representing measures. We denote by \(\mathcal{M}_{+}(\mathbb{R}^{d})\) the set of Radon measures \(\mu\) on \(\mathbb{R}^{d}\) for which all moments are finite, or equivalently, \(\int|p(x)|\,d\mu<\infty\,\) for all \(p\in\mathbb{R}_{d}[\underline{x}]\).
**Proposition 12.19**.: _Let \(\mu\in\mathcal{M}_{+}(\mathbb{R}^{d})\) and let \(s\) be the moment sequence of \(\mu\). Further, let \(g_{j}\in\mathbb{R}_{d}[\underline{x}]\) and \(c_{j}\geq 0\) be given for \(j=1,\ldots,k.\) Set_
\[\mathcal{K}=\{x\in\mathbb{R}^{d}:|g_{j}(x)|\leq c_{j}\text{ for }j=1,\ldots,k\}. \tag{12.17}\]
_Then we have \(\,\mathrm{supp}\,\,\mu\subseteq\mathcal{K}\) if and only if there exist constants \(M_{j}>0\) such that_
\[L_{s}(g_{j}^{2n})\leq M_{j}c_{j}^{2n}\text{ for }n\in\mathbb{N},\ j=1,\ldots,k. \tag{12.18}\]
Proof.: The only if part is obvious. We prove the if direction and slightly modify the argument used in the proof of Proposition 4.1.
Let \(t_{0}\in\mathbb{R}^{d}\backslash\mathcal{K}\). Then there is an index \(\,j=1,\ldots,k\) such that \(|g_{j}(t_{0})|>c_{j}\). Hence there exist a number \(\lambda>c_{j}\) and a ball \(U\) around \(t_{0}\) such that \(|g_{j}(t)|\geq\lambda\) for \(t\in U\). For \(n\in\mathbb{N}\) we then derive
\[\lambda^{2n}\mu(U)\leq\int_{U}g_{j}(t)^{2n}\,d\mu(t)\leq\int_{\mathbb{R}^{d}} g_{j}(t)^{2n}\,d\mu(t)=L_{s}(g_{j}^{2n})\leq M_{j}c_{j}^{2n}.\]
Since \(\lambda>c_{j}\), this is only possible for all \(n\in\mathbb{N}\) if \(\,\mu(U)=0\). Therefore, \(t_{0}\notin\mathrm{supp}\,\,\mu\). This proves that \(\mathrm{supp}\,\,\mu\subseteq\mathcal{K}\).
We state the special case \(g_{j}(x)=x_{j}\) of Proposition 12.19 separately as
**Corollary 12.20**.: _Suppose \(c_{1}>0,\ldots,c_{d}>0\). A measure \(\mu\in\mathcal{M}_{+}(\mathbb{R}^{d})\) with moment sequence \(s\) is supported on the \(d\)-dimensional interval \([-c_{1},c_{1}]\times\cdots\times[-c_{d},c_{d}]\) if and only if there are positive constants \(M_{j}\) such that_
\[L_{s}(x_{j}^{2n})\equiv s_{(0,\ldots,0,2n,0,\ldots,0)}\leq M_{j}c_{j}^{2n}\text{ for }n\in\mathbb{N},\ j=1,\ldots,d,\]
where the entry \(2n\) stands in the \(j\)-th place.
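For instance (an added illustration, not from the text), the even moments \(s_{2n}=\int_{-1}^{1}t^{2n}\,dt=2/(2n+1)\) of Lebesgue measure on \([-1,1]\) satisfy this bound with \(c_{1}=1\) and \(M_{1}=2\), consistent with the support being contained in \([-1,1]\):

```python
# Illustration of the growth condition in Corollary 12.20 (d = 1):
# moments of Lebesgue measure on [-1, 1], with M = 2 and c = 1.
M, c = 2.0, 1.0
for n in range(1, 9):
    s_2n = 2.0 / (2 * n + 1)            # = integral of t^(2n) over [-1, 1]
    print(n, s_2n, s_2n <= M * c ** (2 * n))
```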
The following two propositions are basic results about the moment problem on _compact_ sets. Both follow from Weierstrass' theorem on approximation of continuous functions by polynomials.
**Proposition 12.21**.: _If \(\mu\in\mathcal{M}_{+}(\mathbb{R}^{d})\) is supported on a compact set, then \(\mu\) is determinate. In particular, if \(K\) is a compact subset of \(\mathbb{R}^{d}\), then each \(K\)-moment sequence, so each measure \(\mu\in\mathcal{M}(\mathbb{R}^{d})\) supported on \(K\), is determinate._
Proof.: Let \(\nu\in\mathcal{M}_{+}(\mathbb{R}^{d})\) be a measure having the same moments and so the same moment functional \(L\) as \(\mu\). Fix \(h\in C_{c}(\mathbb{R}^{d},\mathbb{R})\). We choose a compact \(d\)-dimensional interval \(K\) containing the supports of \(\mu\) and \(h\). From Corollary 12.20 it follows that \(\operatorname{supp}\nu\subseteq K\). By Weierstrass' theorem, there is a sequence \((p_{n})_{n\in\mathbb{N}}\) of polynomials \(p_{n}\in\mathbb{R}_{d}[\underline{x}]\) converging to \(h\) uniformly on \(K\). Passing to the limits in the equality
\[\int_{K}p_{n}\,d\mu=L(p_{n})=\int_{K}p_{n}\,d\nu\]
we get \(\int h\,d\mu=\int h\,d\nu\). Since this holds for all \(h\in C_{c}(\mathbb{R}^{d},\mathbb{R})\), we have \(\mu=\nu\).
**Proposition 12.22**.: _Suppose that \(\mu\in\mathcal{M}_{+}(\mathbb{R}^{d})\) is supported on a compact set. Let \(\mathsf{f}=\{f_{1},\dots,f_{k}\}\) be a finite subset of \(\mathbb{R}_{d}[\underline{x}]\) and assume that the moment functional defined by \(L^{\mu}(p)=\int p\,d\mu\), \(p\in\mathbb{R}_{d}[\underline{x}]\), is \(Q(\mathsf{f})\)-positive. Then \(\operatorname{supp}\,\mu\subseteq\mathcal{K}(\mathsf{f})\)._
Proof.: Suppose that \(t_{0}\in\mathbb{R}^{d}\backslash\mathcal{K}(\mathsf{f})\). Then there exist a number \(j\in\{1,\dots,k\}\), a ball \(U\) with radius \(\rho>0\) around \(t_{0}\), and a number \(\delta>0\) such that \(f_{j}\leq-\delta\) on \(2U\). We define a continuous function \(h\) on \(\mathbb{R}^{d}\) by \(h(t)=\sqrt{2\rho-\lvert\lvert t-t_{0}\rvert\rvert}\,\) for \(\,\lvert\lvert t-t_{0}\rvert\rvert\leq 2\rho\) and \(h(t)=0\) otherwise and take a compact \(d\)-dimensional interval \(K\) containing \(2U\) and \(\operatorname{supp}\,\mu\). By Weierstrass' theorem, there is a sequence of polynomials \(p_{n}\in\mathbb{R}_{d}[\underline{x}]\) converging to \(h\) uniformly on \(K\). Then \(\,f_{j}p_{n}^{2}\to f_{j}h^{2}\,\) uniformly on \(K\) and hence
\[\lim_{n}\,L^{\mu}(f_{j}p_{n}^{2})=\int_{K}(\lim_{n}\,f_{j}p_{n}^{ 2})\,d\mu=\int_{K}\,f_{j}h^{2}\,d\mu=\int_{2U}f_{j}(t)(2\rho-\lvert\lvert t-t _{0}\rvert\rvert)\,d\mu(t)\\ \leq\int_{2U}-\delta(2\rho-\lvert\lvert t-t_{0}\rvert\rvert)\,d \mu\leq-\int_{U}\delta\rho\,d\mu(t)=-\delta\rho\mu(U). \tag{12.19}\]
Since \(L^{\mu}\) is \(Q(\mathsf{f})\)-positive, we have \(\,L^{\mu}(f_{j}p_{n}^{2})\geq 0\). Therefore, \(\mu(U)=0\) by (12.19), so that \(t_{0}\notin\operatorname{supp}\,\mu\). This proves that \(\,\operatorname{supp}\,\mu\subseteq\mathcal{K}(\mathsf{f})\).
The assertions of Propositions 12.21 and 12.22 are no longer valid if the compactness assumptions are omitted. But the counterpart of Proposition 12.22 for zero sets of ideals holds without any compactness assumption.
**Proposition 12.23**.: _Let \(\mu\in\mathcal{M}_{+}(\mathbb{R}^{d})\) and let \(\mathcal{I}\) be an ideal of \(\mathbb{R}_{d}[\underline{x}]\). If the moment functional \(L^{\mu}\) of \(\,\mu\) is \(\mathcal{I}\)-positive, then \(L^{\mu}\) annihilates \(\mathcal{I}\) and \(\,\operatorname{supp}\mu\subseteq\mathcal{Z}(\mathcal{I})\). (As usual, \(\mathcal{Z}(\mathcal{I})=\{x\in\mathbb{R}^{d}:p(x)=0\text{ for }p\in\mathcal{I}\}\) is the zero set of \(\mathcal{I}\).)_
Proof.: If \(p\in\mathcal{I}\), then \(-p\in\mathcal{I}\) and hence \(L^{\mu}(\pm p)\geq 0\) by the \(\mathcal{I}\)-positivity of \(L^{\mu}\), so that \(L^{\mu}(p)=0\). That is, \(L^{\mu}\) annihilates \(\mathcal{I}\).
Let \(p\in\mathcal{I}\). Since \(p^{2}\in\mathcal{I}\), we have \(L^{\mu}(p^{2})=\int p^{2}\,d\mu=0\). Therefore, from Proposition 12 it follows that \(\operatorname{supp}\mu\subseteq\mathcal{Z}(p^{2})=\mathcal{Z}(p)\). Thus, \(\operatorname{supp}\mu\subseteq\mathcal{Z}(\mathcal{I})\).
For a linear functional \(L\) on \(\mathbb{R}_{d}[\underline{x}]\) we define
\[\mathcal{N}_{+}(L):=\{f\in\operatorname{Pos}(\mathbb{R}^{d}):L(f)=0\,\}.\]
**Proposition 12.24**.: _Let \(L\) be a moment functional on \(\mathbb{R}_{d}[\underline{x}]\), that is, \(L=L^{\mu}\) for some \(\mu\in\mathcal{M}_{+}(\mathbb{R}^{d})\). Then the ideal \(\mathcal{I}_{+}(L)\) of \(\mathbb{R}_{d}[\underline{x}]\) generated by \(\mathcal{N}_{+}(L)\) is annihilated by \(L\) and the support of each representing measure of \(L\) is contained in \(\mathcal{Z}(\mathcal{I}_{+}(L))\)._
Proof.: Let \(\nu\) be an arbitrary representing measure of \(L\). If \(f\in\mathcal{N}_{+}(L)\), then we have \(L(f)=\int f(x)\,d\nu=0\). Since \(f\in\operatorname{Pos}(\mathbb{R}^{d})\), Proposition 12 applies and yields \(\operatorname{supp}\nu\subseteq\mathcal{Z}(f)\). Hence \(\operatorname{supp}\nu\subseteq\mathcal{Z}(\mathcal{N}_{+}(L))=\mathcal{Z}(\mathcal{I}_{+}(L)).\) In particular, the inclusion \(\operatorname{supp}\nu\subseteq\mathcal{Z}(\mathcal{I}_{+}(L))\) implies that \(L=L^{\nu}\) annihilates \(\mathcal{I}_{+}(L)\).
### The moment problem on compact semi-algebraic sets and the strict Positivstellensatz
The solutions of one-dimensional moment problems have been derived from descriptions of nonnegative polynomials as weighted sums of squares. The counterparts of the latter in the multidimensional case are the so-called "Positivstellensatze" of real algebraic geometry. In general these results require denominators (see Theorem 12.5), so they do not yield reasonable criteria for solving moment problems. However, for _strictly positive_ polynomials on _compact_ semi-algebraic sets \(\mathcal{K}(\mathsf{f})\) there are _denominator free_ Positivstellensatze (Theorems 12.28 and 12.50) which provide solutions of moment problems. Even more, it turns out that there is a close interplay between this type of Positivstellensatze and moment problems on compact semi-algebraic sets, that is, existence results for the moment problem can be derived from Positivstellensatze and vice versa.
We state the main technical steps of the proofs separately as Propositions 12.25-12.27. Proposition 12.27 is also used in a crucial manner in the proof of Theorem 13.10 below.
Suppose that \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) is a finite subset of \(\mathbb{R}_{d}[\underline{x}]\). Let \(B(\mathcal{K}(\mathsf{f}))\) denote the algebra of all polynomials of \(\mathbb{R}_{d}[\underline{x}]\) which are bounded on the set \(\mathcal{K}(\mathsf{f})\).
**Proposition 12.25**.: _Let \(g\in B(\mathcal{K}(\mathsf{f}))\) and \(\lambda>0\). If \(\,\lambda^{2}>g(x)^{2}\,\) for all \(x\in\mathcal{K}(\mathsf{f})\), then there exists a \(p\in T(\mathsf{f})\) such that_
\[g^{2n}\preceq\lambda^{2n+2}p\text{ for }n\in\mathbb{N}. \tag{12.20}\]
Proof.: By the Krivine-Stengle Positivstellensatz (Theorem 12.5(i)), applied to the positive polynomial \(\lambda^{2}-g^{2}\) on \(\mathcal{K}(\mathsf{f})\), there exist polynomials \(p,q\in T(\mathsf{f})\) such that
\[p(\lambda^{2}-g^{2})=1+q. \tag{12.21}\]
Since \(q\in T(\mathsf{f})\) and \(T(\mathsf{f})\) is a quadratic module, \(g^{2n}(1+q)\in T(\mathsf{f})\) for \(n\in\mathbb{N}_{0}\). Therefore, using (12.21) we conclude that
\[g^{2n+2}p=g^{2n}\lambda^{2}p-g^{2n}(1+q)\preceq g^{2n}\lambda^{2}p.\]
By induction it follows that
\[g^{2n}p\preceq\lambda^{2n}p. \tag{12.22}\]
Since \(g^{2n}(q+pg^{2})\in T(\mathsf{f})\), using first (12.21) and then (12.22) we derive
\[g^{2n}\preceq g^{2n}+g^{2n}(q+pg^{2})=g^{2n}(1+q+pg^{2})=g^{2n}\lambda^{2}p \preceq\lambda^{2n+2}p\,.\qed\]
**Proposition 12.26**.: _If the set \(\mathcal{K}(\mathsf{f})\) is compact, then the associated preordering \(T(\mathsf{f})\) is Archimedean._
Proof.: Put \(g(x):=(1+x_{1}^{2})\cdots(1+x_{d}^{2})\). Since \(g\) is bounded on the compact set \(\mathcal{K}(\mathsf{f})\), we have \(\lambda^{2}>g(x)^{2}\) on \(\mathcal{K}(\mathsf{f})\) for some \(\lambda>0\). Therefore, by Proposition 12.25 there exists a \(p\in T(\mathsf{f})\) such that (12.20) holds.
Further, for any multiindex \(\alpha\in\mathbb{N}_{0}^{d}\), \(|\alpha|\leq k\), \(k\in\mathbb{N}\), we obtain
\[\pm 2x^{\alpha}\preceq x^{2\alpha}+1\preceq\sum_{|\beta|\leq k}x^{2\beta}\preceq g^{k}. \tag{12.23}\]
Hence there exist numbers \(c>0\) and \(k\in\mathbb{N}\) such that \(p\preceq 2cg^{k}\). Combining the latter with \(g^{2n}\preceq\lambda^{2n+2}p\) by (12.20), we get \(g^{2k}\preceq\lambda^{2k+2}2cg^{k}\) and so
\[(g^{k}-\lambda^{2k+2}c)^{2}\preceq(\lambda^{2k+2}c)^{2}\cdot 1.\]
Hence, by Lemma 12.8, \(g^{k}-\lambda^{2k+2}c\in\mathsf{A}_{b}(T(\mathsf{f}))\) and so \(g^{k}\in\mathsf{A}_{b}(T(\mathsf{f}))\), where \(\mathsf{A}:=\mathbb{R}_{d}[\underline{x}]\). Since \(\pm x_{j}\preceq g^{k}\) by (12.23) and \(g^{k}\in\mathsf{A}_{b}(T(\mathsf{f}))\), we obtain \(x_{j}\in\mathsf{A}_{b}(T(\mathsf{f}))\) for \(j=1,\cdots,d\). Now from Lemma 12.9(ii) it follows that \(\mathsf{A}_{b}(T(\mathsf{f}))=\mathsf{A}\). This means that \(T(\mathsf{f})\) is Archimedean.
**Proposition 12.27**.: _Suppose that \(L\) is a \(\,T(\mathsf{f})\)-positive linear functional on \(\mathbb{R}_{d}[\underline{x}]\)._
1. _If_ \(\,g\in B(\mathcal{K}(\mathsf{f}))\) _and_ \(\|g\|_{\infty}\) _denotes the supremum of_ \(|g|\) _on_ \(\mathcal{K}(\mathsf{f}),\) _then_ \[|L(g)|\leq L(1)\ \|g\|_{\infty}.\] (12.24)
2. _If_ \(\,g\in B(\mathcal{K}(\mathsf{f}))\) _and_ \(g(x)\geq 0\) _for_ \(x\in\mathcal{K}(\mathsf{f})\)_, then_ \(L(g)\geq 0\)_._
Proof.: (i): Fix \(\varepsilon>0\) and put \(\lambda:=\parallel g\parallel_{\infty}+\varepsilon\). We define a real sequence \(s=(s_{n})_{n\in\mathbb{N}_{0}}\) by \(s_{n}:=L(g^{n})\). Then \(L_{s}(q(y))=L(q(g))\) for \(q\in\mathbb{R}[y]\). For any \(p\in\mathbb{R}[y]\), we have \(p(g)^{2}\in\sum\mathbb{R}_{d}[\underline{x}]^{2}\subseteq T(\mathsf{f})\) and hence \(L_{s}(p(y)^{2})=L(p(g)^{2})\geq 0\), since \(L\) is \(T(\mathsf{f})\)-positive. Thus, by Hamburger's theorem 3.8, there exists a Radon measure \(\nu\) on \(\mathbb{R}\) such that \(s_{n}=\int_{\mathbb{R}}t^{n}d\nu(t)\), \(n\in\mathbb{N}_{0}\).
For \(\gamma>\lambda\) let \(\chi_{\gamma}\) denote the characteristic function of the set \((-\infty,-\gamma]\cup[\gamma,+\infty)\). Since \(\lambda^{2}-g(x)^{2}>0\) on \(\mathcal{K}(\mathsf{f})\), we have \(g^{2n}\preceq\lambda^{2n+2}p\) by equation (12.20) in Proposition 12.25. Using the \(T(\mathsf{f})\)-positivity of \(L\) we derive
\[\gamma^{2n}\int_{\mathbb{R}}\chi_{\gamma}(t)\ d\nu(t)\leq\int_{\mathbb{R}}t^{ 2n}d\nu(t)=s_{2n}=L(g^{2n})\leq\lambda^{2n+2}L(p) \tag{12.25}\]
for all \(n\in\mathbb{N}\). Since \(\gamma>\lambda\), (12.25) implies that \(\int_{\mathbb{R}}\chi_{\gamma}(t)\ d\nu(t)=0\). Therefore, \(\operatorname{supp}\,\nu\subseteq[-\lambda,\lambda]\). (The preceding argument has been already used in the proof of Proposition 12.19 to obtain a similar conclusion.) Therefore, applying the Cauchy-Schwarz inequality for \(L\) we derive
\[|L(g)|^{2} \leq L(1)L(g^{2})=L(1)s_{2}=L(1)\int_{-\lambda}^{\lambda}\ t^{2} \ d\nu(t)\] \[\leq L(1)\nu(\mathbb{R})\lambda^{2}=L(1)^{2}\lambda^{2}=L(1)^{2}( \parallel g\parallel_{\infty}+\varepsilon)^{2}.\]
Letting \(\varepsilon\to+0\), we get \(\,|L(g)|\leq L(1)\parallel g\parallel_{\infty}\).
(ii): Since \(g\geq 0\) on \(\mathcal{K}(\mathsf{f})\), we clearly have \(\,\|\,1\cdot\|g\|_{\infty}-2\,g\|_{\infty}=\|g\|_{\infty}.\) Using this equality and (12.24) we conclude that
\[L(1)\|g\|_{\infty}-2\,L(g)=L(1\cdot\|g\|_{\infty}-2\,g)\leq L(1)\|1\cdot\|g\|_{ \infty}-2\,g\|_{\infty}=L(1)\|g\|_{\infty},\]
which in turn implies that \(\,L(g)\geq 0\).
The following theorem is the _strict Positivstellensatz_ for compact basic closed semi-algebraic sets \(\mathcal{K}(\mathsf{f})\).
**Theorem 12.28**.: _Let \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) be a finite subset of \(\mathbb{R}_{d}[\underline{x}]\) and let \(\,h\in\mathbb{R}[x]\). If the set \(\mathcal{K}(\mathsf{f})\) is compact and \(h(x)>0\) for all \(x\in\mathcal{K}(\mathsf{f})\), then \(h\in T(\mathsf{f})\)._
Proof.: Assume to the contrary that \(h\) is not in \(T(\mathsf{f})\). By Proposition 12.26, \(T(\mathsf{f})\) is Archimedean. Therefore, by Proposition 12.14, there exists a \(T(\mathsf{f})\)-positive linear functional \(L\) on \(\mathsf{A}\) such that \(L(1)=1\) and \(L(h)\leq 0\). Since \(h>0\) on the compact set \(\mathcal{K}(\mathsf{f})\), there is a positive number \(\delta\) such that \(h(x)-\delta>0\) for all \(x\in\mathcal{K}(\mathsf{f})\). We extend the continuous function \(\sqrt{h(x)-\delta}\) on \(\mathcal{K}(\mathsf{f})\) to a continuous function on some compact \(d\)-dimensional interval containing \(\mathcal{K}(\mathsf{f})\). Again by the classical Weierstrass theorem, \(\sqrt{h(x)-\delta}\) is the uniform limit on \(\mathcal{K}(\mathsf{f})\) of a sequence \((p_{n})\) of polynomials \(p_{n}\in\mathbb{R}_{d}[\underline{x}]\). Then \(\,p_{n}^{2}-h+\delta\to 0\) uniformly on \(\mathcal{K}(\mathsf{f})\), that is, \(\lim_{n}\parallel p_{n}^{2}-h+\delta\parallel_{\infty}=0\). Recall that \(B(\mathcal{K}(\mathsf{f}))=\mathbb{R}_{d}[\underline{x}]\), since \(\mathcal{K}(\mathsf{f})\) is compact. Hence \(\,\lim_{n}L(p_{n}^{2}-h+\delta)=0\) by the inequality (12.24) in Proposition 12.27(i). But, since \(L(p_{n}^{2})\geq 0\), \(L(h)\leq 0\), and \(L(1)=1\), we have \(\,L(p_{n}^{2}-h+\delta)\geq\delta>0\) which is the desired contradiction. This completes the proof of the theorem.
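As a simple concrete illustration of the theorem (an added example, not taken from the text), let \(d=1\) and \(\mathsf{f}=\{1-x^{2}\}\), so that \(\mathcal{K}(\mathsf{f})=[-1,1]\), and take \(h(x)=x+2\), which is strictly positive on \([-1,1]\). An explicit representation in the preordering \(T(\mathsf{f})\) is

\[x+2=\frac{1}{2}\big{(}(x+1)^{2}+2\big{)}+\frac{1}{2}\,(1-x^{2}),\]

that is, \(h=\sigma_{0}+\sigma_{1}(1-x^{2})\) with \(\sigma_{0}=\frac{1}{2}(x+1)^{2}+1\) and \(\sigma_{1}=\frac{1}{2}\), both sums of squares.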
The next result gives a solution of the \(\mathcal{K}(\mathsf{f})\)-moment problem for compact basic closed semi-algebraic sets.
**Theorem 12.29**.: _Let \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) be a finite subset of \(\mathbb{R}_{d}[\underline{x}]\). If the set \(\mathcal{K}(\mathsf{f})\) is compact, then each \(T(\mathsf{f})\)-positive linear functional \(L\) on \(\mathbb{R}_{d}[\underline{x}]\) is a \(\mathcal{K}(\mathsf{f})\)-moment functional._
Proof.: Since \(\mathcal{K}(\mathsf{f})\) is compact, \(B(\mathcal{K}(\mathsf{f}))=\mathbb{R}_{d}[\underline{x}]\). Therefore, it suffices to combine Proposition 12.27(ii) with Haviland's Theorem 1.12.
_Remark 12.30_.: Theorem 12.29 was obtained from Proposition 12.27(ii) and Haviland's Theorem 1.12. Alternatively, it can be derived from Proposition 12.27(i) combined with Riesz' representation theorem. Let us sketch this proof. By (12.24), the functional \(L\) on \(\mathbb{R}_{d}[\underline{x}]\) is \(\|\cdot\|_{\infty}\)-continuous. Extending \(L\) to \(C(\mathcal{K}(\mathsf{f}))\) by the Hahn-Banach theorem and applying Riesz' representation theorem for continuous linear functionals, \(L\) is given by a signed Radon measure on \(\mathcal{K}(\mathsf{f})\). Setting \(g=1\) in (12.24), it follows that \(L\), hence the extended functional, has the norm \(L(1)\). It is not difficult to show that this implies that the representing measure is positive. \(\,\circ\)
The shortest path to Theorems 12.28 and 12.29 is probably to use Proposition 12.27 as we have done. However, in order to emphasize the interaction between both theorems and so in fact between the moment problem and real algebraic geometry we now derive each of these theorems from the other.
_Proof of Theorem 12.29 (assuming Theorem 12.28):_
Let \(h\in\mathbb{R}_{d}[\underline{x}]\). If \(h(x)>0\) on \(\mathcal{K}(\mathsf{f})\), then \(h\in T(\mathsf{f})\) by Theorem 12.28 and so \(L(h)\geq 0\) by the assumption. Therefore \(L\) is a \(\mathcal{K}(\mathsf{f})\)-moment functional by the implication (ii)\(\rightarrow\)(iv) of Haviland's Theorem 1.12.
_Proof of Theorem 12.28 (assuming Theorem 12.29 and Proposition 12.26):_
Suppose \(h\in\mathbb{R}_{d}[\underline{x}]\) and \(h(x)>0\) on \(\mathcal{K}(\mathsf{f})\). Assume to the contrary that \(h\notin T(\mathsf{f})\). Since the preordering \(T(\mathsf{f})\) is Archimedean by Proposition 12.26, Proposition 12.14 applies, so there is a \(T(\mathsf{f})\)-positive linear functional \(L\) on \(\mathbb{R}_{d}[\underline{x}]\) such that \(L(1)=1\) and \(L(h)\leq 0\). By Theorem 12.29, \(L\) is a \(\mathcal{K}(\mathsf{f})\)-moment functional, that is, there is a measure \(\mu\in M_{+}(\mathcal{K}(\mathsf{f}))\) such that \(L(p)=\int_{\mathcal{K}(\mathsf{f})}p\,d\mu\) for \(p\in\mathbb{R}_{d}[\underline{x}]\). But \(L(1)=\mu(\mathcal{K}(\mathsf{f}))=1\) and \(h>0\) on \(\mathcal{K}(\mathsf{f})\) imply that \(L(h)>0\). This is a contradiction, since \(L(h)\leq 0\).
The preordering \(T({\sf f})\) was defined as the sum of sets \(f_{1}^{e_{1}}\cdots f_{k}^{e_{k}}\cdot\sum\mathbb{R}_{d}[\underline{x}]^{2}\). It is natural to ask whether or not all such sets with mixed products \(f_{1}^{e_{1}}\cdots f_{k}^{e_{k}}\) are really needed. To formulate the corresponding result we put \(l_{k}:=2^{k-1}\) and let \(g_{1},\ldots,g_{l_{k}}\) denote the first \(l_{k}\) polynomials of the following row of mixed products:
\[f_{1},\ldots,f_{k},f_{1}f_{2},f_{1}f_{3},\ldots,f_{1}f_{k},\ldots,f_{k-1}f_{k},f_{1}f_{2}f_{3},\ldots,f_{k-2}f_{k-1}f_{k},\ldots,f_{1}f_{2}\cdots f_{k}.\]
Let \(Q({\sf g})\) denote the quadratic module generated by \(g_{1},\ldots,g_{l_{k}}\), that is,
\[Q({\sf g}):=\sum\mathbb{R}_{d}[\underline{x}]^{2}+g_{1}\sum\mathbb{R}_{d}[ \underline{x}]^{2}+\cdots+g_{l_{k}}\sum\mathbb{R}_{d}[\underline{x}]^{2}.\]
The following result of T. Jacobi and A. Prestel [JP] sharpens Theorem 12.28.
**Theorem 12.31**.: _If the set \(\mathcal{K}({\sf f})\) is compact and \(h\in\mathbb{R}_{d}[\underline{x}]\) satisfies \(h(x)>0\) for all \(x\in\mathcal{K}({\sf f})\), then \(h\in Q({\sf g})\)._
We do not prove Theorem 12.31; for a proof of this result we refer to [JP]. If we take Theorem 12.31 for granted and combine it with Haviland's theorem 1.12 we obtain the following corollary.
**Corollary 12.32**.: _If the set \(\mathcal{K}({\sf f})\) is compact and \(L\) is a \(Q({\sf g})\)-positive linear functional on \(\mathbb{R}_{d}[\underline{x}]\), then \(L\) is a \(\mathcal{K}({\sf f})\)-moment functional._
We briefly discuss Theorem 12.31. If \(k=1\), then \(Q({\sf g})=T({\sf f})\). However, for \(k=2\),
\[Q({\sf g})=\sum\mathbb{R}_{d}[\underline{x}]^{2}+f_{1}\sum\mathbb{R}_{d}[\underline{x}]^{2}+f_{2}\sum\mathbb{R}_{d}[\underline{x}]^{2},\]
so \(Q({\sf g})\) differs from the preordering \(T({\sf f})\) by the summand \(f_{1}f_{2}\sum\mathbb{R}_{d}[\underline{x}]^{2}\). If \(k=3\), then
\[Q({\sf g})=\sum\mathbb{R}_{d}[\underline{x}]^{2}+f_{1}\sum\mathbb{R}_{d}[\underline{x}]^{2}+f_{2}\sum\mathbb{R}_{d}[\underline{x}]^{2}+f_{3}\sum\mathbb{R}_{d}[\underline{x}]^{2}+f_{1}f_{2}\sum\mathbb{R}_{d}[\underline{x}]^{2}\,,\]
that is, the sets \(g\sum\mathbb{R}_{d}[\underline{x}]^{2}\) with \(g=f_{1}f_{3},f_{2}f_{3},f_{1}f_{2}f_{3}\) do not enter into the definition of \(Q({\sf g})\). For \(k=4\), no products of three or four generators appear in the definition of \(Q({\sf g})\). For large \(k\), only a small portion of the mixed products occurs in \(Q({\sf g})\), and Theorem 12.31 is an essential strengthening of Theorem 12.28.
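The counting behind this discussion can be made explicit with a short script (an added sketch; the ordering of the mixed products by the number of factors and then lexicographically is an assumption matching the row displayed above):

```python
# Sketch (illustration only): the first l_k = 2^(k-1) mixed products of Theorem 12.31.
from itertools import combinations

def jacobi_prestel_generators(k):
    # all nonempty products f_{i1}...f_{ir}, ordered by the number of factors
    # and then lexicographically; keep the first 2^(k-1) of them
    products = []
    for r in range(1, k + 1):
        products.extend(combinations(range(1, k + 1), r))
    return products[: 2 ** (k - 1)]

for k in (2, 3, 4):
    gens = ["".join(f"f{i}" for i in idx) for idx in jacobi_prestel_generators(k)]
    print(k, gens)
# k = 2: ['f1', 'f2']
# k = 3: ['f1', 'f2', 'f3', 'f1f2']
# k = 4: only the single factors and four of the pairwise products appear
```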
The next corollary characterizes in terms of moment functionals when a Radon measure on a compact semi-algebraic set has a _bounded_ density with respect to another Radon measure. A version for closed sets is stated in Exercise 14.11 below.
**Corollary 12.33**.: _Suppose that the semi-algebraic set \(\mathcal{K}({\sf f})\) is compact. Let \(\mu\) and \(\nu\) be finite Radon measures on \(\mathcal{K}({\sf f})\) and let \(L^{\mu}\) and \(L^{\nu}\) be the corresponding moment functionals on \(\mathbb{R}_{d}[\underline{x}]\). There exists a function \(\varphi\in L^{\infty}(\mathcal{K}({\sf f}),\mu)\), \(\varphi(x)\geq 0\)\(\mu\)-a.e. on \(\mathcal{K}({\sf f})\), such that \(\,d\nu=\varphi d\mu\,\) if and only if there is a constant \(c>0\) such that_
\[L^{\nu}(g)\leq cL^{\mu}(g)\quad\text{for }g\in T({\sf f}). \tag{12.26}\]
Proof.: Choosing \(c\geq\|\varphi\|_{L^{\infty}(\mathcal{K}({\sf f}),\mu)}\), the necessity of (12.26) is easily verified.
To prove the converse we assume that (12.26) holds. Then, by (12.26), \(\,L:=cL^{\mu}-L^{\nu}\) is a \(T({\sf f})\)-positive linear functional on \(\mathbb{R}_{d}[\underline{x}]\) and hence a \(\mathcal{K}({\sf f})\)-moment functional by Theorem 12.29. Let \(\tau\) be a representing measure of \(L\), that is, \(L=L^{\tau}\). Then we have \(L^{\tau}+L^{\nu}=cL^{\mu}\). Hence both \(\tau+\nu\) and \(c\mu\) are representing measures of the \(\mathcal{K}({\sf f})\)-moment functional \(cL^{\mu}\). Since \(\mathcal{K}({\sf f})\) is compact, \(c\mu\) is determinate by Proposition 12.21, so that \(\tau+\nu=c\mu\). In particular, this implies that \(\nu\) is absolutely continuous with respect to \(\mu\). Therefore, by the Radon-Nikodym theorem A.3,
\(d\nu=\varphi d\mu\) for some function \(\varphi\in L^{1}(\mathcal{K}(\mathsf{f}),\mu)\), \(\varphi(x)\geq 0\)\(\mu\)-a.e. on \(\mathcal{K}(\mathsf{f})\). Since \(\tau+\nu=c\mu\), for each Borel subset \(M\) of \(\mathcal{K}(\mathsf{f})\) we have
\[\tau(M)=c\mu(M)-\nu(M)=\int_{M}(c-\varphi(x))d\mu\geq 0.\]
Therefore, \(c-\varphi(x)\geq 0\)\(\mu\)-a.e., so that \(\varphi\in L^{\infty}(\mathcal{K}(\mathsf{f}),\mu)\) and \(\|\varphi\|_{L^{\infty}(\mathcal{K}(\mathsf{f}),\mu)}\leq c\).
We close this section by restating Theorems 12.28 and 12.29 in the special case of compact real algebraic sets.
**Corollary 12.34**.: _Suppose that \(\mathcal{I}\) is an ideal of \(\mathbb{R}_{d}[\underline{x}]\) such that the real algebraic set \(V:=\mathcal{Z}(\mathcal{I})=\{x\in\mathbb{R}^{d}:f(x)=0\text{ for }f\in \mathcal{I}\}\) is compact._
1. _If_ \(h\in\mathbb{R}_{d}[\underline{x}]\) _satisfies_ \(h(x)>0\) _for all_ \(x\in V\)_, then_ \(h\in\sum\mathbb{R}_{d}[\underline{x}]^{2}+\mathcal{I}\)_._
2. _If_ \(p\in\mathbb{R}_{d}[\underline{x}]/\mathcal{I}\) _and_ \(p(x)>0\) _for all_ \(x\in V\)_, then_ \(p\in\sum(\mathbb{R}_{d}[\underline{x}]/\mathcal{I})^{2}\)_._
3. _If_ \(q\in\mathbb{R}[V]\equiv\mathbb{R}_{d}[\underline{x}]/\hat{\mathcal{I}}\) _and_ \(q(x)>0\) _for all_ \(x\in V\)_, then_ \(q\in\sum\mathbb{R}[V]^{2}\)_._
4. _Each positive linear functional on_ \(\mathbb{R}_{d}[\underline{x}]\) _which annihilates_ \(\mathcal{I}\) _is a_ \(V\)_-moment functional._
Proof.: Put \(f_{1}=1,f_{2}=h_{1},f_{3}=-h_{1},\ldots,f_{2m}=h_{m},f_{2m+1}=-h_{m}\), where \(h_{1},\ldots,h_{m}\) is a set of generators of \(\mathcal{I}\). Then, by (12.11), the preordering \(T(\mathsf{f})\) is \(\sum\mathbb{R}_{d}[\underline{x}]^{2}+\mathcal{I}\) and the semi-algebraic set \(\mathcal{K}(\mathsf{f})\) is \(V=\mathcal{Z}(\mathcal{I})\). Therefore, Theorem 12.28 yields (i). Since \(\mathcal{I}\subseteq\hat{\mathcal{I}}\), (i) implies (ii) and (iii).
Clearly, a linear functional on \(\mathbb{R}_{d}[\underline{x}]\) is \(T(\mathsf{f})\)-positive if it is positive and annihilates \(\mathcal{I}\). Thus (iv) follows at once from Theorem 12.29.
_Example 12.35_.: (_Moment problem on unit spheres_)
Let \(S^{d-1}=\{x\in\mathbb{R}^{d}:x_{1}^{2}+\cdots+x_{d}^{2}=1\}\) be the unit sphere of \(\mathbb{R}^{d}\). Then \(S^{d-1}\) is the real algebraic set \(\mathcal{Z}(\mathcal{I})\) for the ideal \(\mathcal{I}\) generated by \(h_{1}(x)=x_{1}^{2}+\cdots+x_{d}^{2}-1\).
Suppose that \(L\) is a linear functional on \(\mathbb{R}_{d}[\underline{x}]\) such that
\[L(p^{2})\geq 0\quad\text{and }L((x_{1}^{2}+\cdots+x_{d}^{2}-1)p)=0\quad \text{for }p\in\mathbb{R}_{d}[\underline{x}].\]
Then it follows from Corollary 12.34(iv) that \(L\) is an \(S^{d-1}\)-moment functional.
Further, if \(q\in\mathbb{R}[S^{d-1}]\) is strictly positive on \(S^{d-1}\), that is, \(q(x)>0\) for \(x\in S^{d-1}\), then \(q\in\sum\mathbb{R}[S^{d-1}]^{2}\) by Corollary 12.34(iii).
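A small numerical check of these two conditions (an added sketch, not part of the text) for \(d=2\): we approximate the moment functional of the normalized arc-length measure on \(S^{1}\), verify that a truncated moment matrix is positive semidefinite, and verify that the functional annihilates the ideal generated by \(x_{1}^{2}+x_{2}^{2}-1\).

```python
# Illustration (not from the text) of Example 12.35 for d = 2: moments of the
# normalized arc-length measure on the unit circle S^1.
import numpy as np

thetas = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

def L(a, b):
    # L(x^a y^b) ≈ (1/2π) ∫ cos^a(t) sin^b(t) dt; the equally spaced average is
    # essentially exact for low-degree trigonometric monomials
    return float(np.mean(np.cos(thetas) ** a * np.sin(thetas) ** b))

# monomials x^a y^b with a + b <= 2; the matrix (L(m_i * m_j)) must be positive
# semidefinite; the relation x^2 + y^2 - 1 = 0 on S^1 contributes a kernel vector,
# so the smallest eigenvalue is ≈ 0 (up to rounding).
monos = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
H = np.array([[L(a1 + a2, b1 + b2) for (a2, b2) in monos] for (a1, b1) in monos])
print("min eigenvalue of moment matrix:", np.linalg.eigvalsh(H).min())

# L annihilates the ideal generated by x^2 + y^2 - 1:
for (a, b) in monos:
    print((a, b), abs(L(a + 2, b) + L(a, b + 2) - L(a, b)) < 1e-10)
```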
### The Archimedean Positivstellensatz for quadratic modules and semirings
The main aim of this section is to derive a representation theorem for Archimedean semirings and Archimedean quadratic modules (Theorem 12.43) and its application to the moment problem (Corollary 12.47). By means of the so-called dagger cones we show that to prove this general result it suffices to do so in the special cases of Archimedean semirings _or_ of Archimedean quadratic modules. In this section we develop an approach based on semirings. At the end of Section 12.6 we give a proof using quadratic modules and Hilbert space operators.
Recall that \(\mathsf{A}\) is a _commutative real unital algebra_. The _weak topology_ on the dual \(\mathsf{A}^{*}\) is the locally convex topology generated by the family of seminorms \(f\to|f(a)|\), where \(a\in\mathsf{A}\). Then, for each \(a\in\mathsf{A}\), the function \(f\to f(a)\) is continuous on \(\mathsf{A}^{*}\) in the weak topology.
**Lemma 12.36**.: _Suppose that \(C\) is an Archimedean unital cone of \(\mathsf{A}\). Then the set \(\mathcal{K}(C)=\{\chi\in\hat{A}:\chi(a)\geq 0,a\in C\}\) is compact in the weak topology of \(A^{*}\)._
Proof.: Since \(C\) is Archimedean, for any \(a\in A\) there exists a number \(\lambda_{a}>0\) such that \(\lambda_{a}-a\in C\) and \(\lambda_{a}+a\in C\). Hence for \(\chi\in\mathcal{K}(C)\) we have \(\chi(\lambda_{a}-a)\geq 0\) and \(\chi(\lambda_{a}+a)\geq 0\), so that \(\chi(a)\in[-\lambda_{a},\lambda_{a}]\). Thus there is an injection \(\Phi\) of \(\mathcal{K}(C)\) into the topological product space
\[P:=\prod\nolimits_{a\in A}\,[-\lambda_{a},\lambda_{a}]\]
given by \(\Phi(\chi)=(\chi(a))_{a\in A}\). From the definitions of the corresponding topologies it follows that \(\Phi\) is a homeomorphism of \(\mathcal{K}(C)\), equipped with the weak topology, on the subspace \(\Phi(\mathcal{K}(C))\) of \(P\), equipped with the product topology.
We show that the image \(\Phi(\mathcal{K}(C))\) is closed in \(P\). Indeed, suppose \((\Phi(\chi_{i}))_{i\in I}\) is a net from \(\Phi(\mathcal{K}(C))\) which converges to \(\varphi=(\varphi_{a})_{a\in A}\in P\). Then, by the definition of the weak topology, \(\lim_{i}\Phi(\chi_{i})(a)=\lim_{i}\chi_{i}(a)=\varphi_{a}\) for all \(a\in A\). Since for each \(i\) the map \(a\mapsto\chi_{i}(a)\) is a character that is nonnegative on \(C\), so is \(a\mapsto\varphi_{a}\). Hence there exists \(\chi\in\mathcal{K}(C)\) such that \(\varphi_{a}=\chi(a)\) for \(a\in A\). Thus, \(\varphi=\Phi(\chi)\in\Phi(\mathcal{K}(C))\).
The product \(P\) is a compact topological space by Tychonoff's theorem. Hence its closed subset \(\Phi(\mathcal{K}(C))\) is also compact and so is \(\mathcal{K}(C)\), because \(\Phi\) is a homeomorphism of \(\mathcal{K}(C)\) and \(\Phi(\mathcal{K}(C))\).
In our approach to the Archimedean Positivstellensatz we use the following notion.
**Definition 12.37**.: For a unital convex cone \(C\) in \(\mathsf{A}\) we define
\[C^{\dagger}=\{a\in\mathsf{A}:\ a+\epsilon\in C\ \ \text{for all}\ \ \epsilon\in(0,+\infty)\}. \tag{12.27}\]
Clearly, \(C^{\dagger}\) is again a unital convex cone in \(\mathsf{A}\). Since \(1\in C\), we have \(C\subseteq C^{\dagger}\).
**Lemma 12.38**.: _For each unital convex cone \(C\) in \(\mathsf{A}\), we have \(\mathcal{K}(C)=\mathcal{K}(C^{\dagger})\) and \((C^{\dagger})^{\dagger}=C^{\dagger}\)._
Proof.: It is obvious that \(\mathcal{K}(C^{\dagger})\subseteq\mathcal{K}(C)\), because \(C\subseteq C^{\dagger}\). Conversely, let \(\chi\in\mathcal{K}(C)\). If \(a\in C^{\dagger}\), then \(a+\epsilon\in C\) and hence \(\chi(a+\epsilon)\geq 0\) for all \(\varepsilon>0\). Letting \(\varepsilon\searrow 0\), we get \(\chi(a)\geq 0\). Thus \(\chi\in\mathcal{K}(C^{\dagger})\).
Clearly, \(C^{\dagger}\subseteq(C^{\dagger})^{\dagger}\). To verify the converse, let \(a\in(C^{\dagger})^{\dagger}\). Then \(a+\varepsilon_{1}\in C^{\dagger}\) and \(a+\varepsilon_{1}+\varepsilon_{2}\in C\) for \(\varepsilon_{1}>0\), \(\varepsilon_{2}>0\), so \(a+\varepsilon\in C\) for all \(\varepsilon>0\). Hence \(a\in C^{\dagger}\).
_Example 12.39_.: Let \(\mathsf{A}\) be a real algebra of bounded real-valued functions on a set \(X\) which contains the constant functions. Then
\[C:=\{f\in\mathsf{A}:f(x)>0\ \text{for all}\ x\in X\}\]
is an Archimedean preordering of \(\mathsf{A}\) and
\[C^{\dagger}=\{f\in\mathsf{A}:f(x)\geq 0\ \text{for all}\ x\in X\}. \tag{12.28}\]
We verify formula (12.28). If \(f(x)\geq 0\) on \(X\), then \(f(x)+\varepsilon>0\) on \(X\), hence \(f+\varepsilon\in C\) for all \(\varepsilon>0\), so that \(f\in C^{\dagger}\). Conversely, if \(f\in C^{\dagger}\), then \(f+\varepsilon\in C\), hence \(f(x)+\varepsilon>0\) on \(X\) for all \(\varepsilon>0\); letting \(\varepsilon\searrow 0\), we get \(f(x)\geq 0\) on \(X\). This proves (12.28).
**Proposition 12.40**.: _If \(Q\) is an Archimedean quadratic module of \(\mathsf{A}\), then \(Q^{\dagger}\) is an Archimedean preordering of \(\mathsf{A}\)._
Proof.: Clearly, \(Q^{\dagger}\) is a unital convex cone of \(\mathsf{A}\) that contains all squares. We only have to show that \(Q^{\dagger}\) is closed under multiplication.
Let \(p,q\in Q\) and \(\epsilon\in(0,+\infty)\) be given. We prove that \(pq+\epsilon\in Q\). Because \(Q\) is Archimedean, there exists a \(\lambda>0\) such that \(\lambda-p\in Q\). We recursively define a sequence \((r_{k})_{k\in\mathbb{N}_{0}}\) of elements of \(\mathsf{A}\) by \(\,r_{0}:=p/\lambda\) and \(\,r_{k+1}:=2r_{k}-r_{k}^{2}\), \(k\in\mathbb{N}_{0}\). Then we have \(pq-\lambda qr_{0}=0\) and
\[pq-2^{-(k+1)}\lambda qr_{k+1}=(pq-2^{-k}\lambda qr_{k})+2^{-(k+1)}\lambda qr_{ k}^{2}.\]
Therefore, since \(q\in Q\) and \(Q\) is a quadratic module, it follows by induction that
\[(pq-2^{-k}\lambda qr_{k})\in Q\quad\text{for }k\in\mathbb{N}_{0}. \tag{12.29}\]
Adding \(2^{-(k+1)}\lambda(q+r_{k})^{2}\in Q\) to (12.29) we obtain \(\,pq+2^{-(k+1)}\lambda(q^{2}+r_{k}^{2})\in Q\,\) for \(k\in\mathbb{N}_{0}\). For sufficiently large \(k\in\mathbb{N}_{0}\) we have \(\,\epsilon-2^{-(k+1)}\lambda(q^{2}+r_{k}^{2})\in Q\) because \(Q\) is Archimedean. Adding \(\,pq+2^{-(k+1)}\lambda(q^{2}+r_{k}^{2})\in Q\) to this element yields \((pq+\epsilon)\in Q\).
Now let \(r,s\in Q^{\dagger}\) and \(\epsilon\in(0,+\infty)\). As \(Q\) is Archimedean, there exists \(\lambda>0\) such that \(\lambda-(r+s)\in Q\). Set \(\delta:=\sqrt{\lambda^{2}+\epsilon}-\lambda\). Since \(r,s\in Q^{\dagger}\), we have \(r+\delta,s+\delta\in Q\) and \(((r+\delta)(s+\delta)+\delta\lambda)\in Q\), as shown in the preceding paragraph. Therefore, since \(\delta^{2}+2\lambda\delta=\epsilon\), we obtain
\[rs+\epsilon=\big{(}(r+\delta)(s+\delta)+\delta\lambda\big{)}+\delta\big{(} \lambda-(r+s)\big{)}\in Q.\]
Hence \(rs\in Q^{\dagger}\).
**Proposition 12.41**.: _Suppose that \(S\) is an Archimedean semiring of \(\mathsf{A}\) and \(C\) is an \(S\)-module. Then \(C^{\dagger}\) is an Archimedean preordering of \(\mathsf{A}\) and an \(S^{\dagger}\)-module. In particular, \(S^{\dagger}\) is an Archimedean preordering._
Proof.: Let \(a\in S^{\dagger}\) and \(c\in C^{\dagger}\). Then, by definition, \(a+\delta\in S\) and \(c+\delta\in C\) for all \(\delta>0.\) Since \(S\) is Archimedean, there exists a number \(\lambda>0\) such that \(\lambda-a\in S\subseteq C\) and \(\lambda-c\in S\subseteq C\). Given \(\epsilon\in(0,+\infty)\), we set \(\delta:=-\lambda+\sqrt{\lambda^{2}+\epsilon}\). Then \(\delta>0\) and \(\delta^{2}+2\delta\lambda=\epsilon\), so we obtain
\[ac+\epsilon=(a+\delta)(c+\delta)+\delta(\lambda-a)+\delta(\lambda-c)\in C.\]
Therefore, \(ac\in C^{\dagger}\). In particular, in the special case \(C=S\) this shows that \(S^{\dagger}\) is also a semiring. In the general case, it proves that \(C^{\dagger}\) is an \(S^{\dagger}\)-module.
Let \(a\in\mathsf{A}\). The crucial step is to prove that \(a^{2}\in S^{\dagger}\). For let \(\varepsilon>0\). Since the polynomial \(x^{2}+\varepsilon\) is positive for all \(x\in[-1,1]\), by Bernstein's theorem (Proposition 3.4) there exist numbers \(m\in\mathbb{N}\) and \(a_{kl}\geq 0\) for \(k,l=0,\ldots,m\) such that
\[x^{2}+\varepsilon=\sum_{k,l=0}^{m}a_{kl}(1-x)^{k}(1+x)^{l} \tag{12.30}\]
Since the semiring \(S\) is Archimedean, there exists a \(\lambda>0\) such that \((\lambda+a)\in S\) and \((\lambda-a)\in S\). Then \((1+a/\lambda)\in S\) and \((1-a/\lambda)\in S\) and hence \((1+a/\lambda)^{n}\in S\) and \((1-a/\lambda)^{n}\in S\) for all \(n\in\mathbb{N}_{0}\), because \(S\) is a semiring. As usual, we set \((1\pm a/\lambda)^{0}=1\). Therefore, using (12.30) and the fact that \(S\) is closed under multiplication, we find
\[(a/\lambda)^{2}+\varepsilon=\sum_{k,l=0}^{m}a_{kl}(1-a/\lambda)^{k}(1+a/\lambda)^{l}\in S.\]
Hence \((a^{2}+\lambda^{2}\varepsilon)\in S\). Since \(\lambda\) depends only on \(a\) and \(\varepsilon>0\) was arbitrary, this implies that \(a^{2}\in S^{\dagger}\).
Thus, \(S^{\dagger}\) is a semiring which contains all squares, that is, \(S^{\dagger}\) is a preordering.
Since \(S\subseteq C\) and hence \(S^{\dagger}\subseteq C^{\dagger}\), \(C^{\dagger}\) also contains all squares; since moreover \(a^{2}C^{\dagger}\subseteq S^{\dagger}C^{\dagger}\subseteq C^{\dagger}\) for all \(a\in\mathsf{A}\), \(C^{\dagger}\) is a quadratic module. Further, from \(S\subseteq S^{\dagger}\) and \(S\subseteq C\subseteq C^{\dagger}\) it follows that \(C^{\dagger}\) and \(S^{\dagger}\) are Archimedean because \(S\) is Archimedean by assumption.
Since \(C^{\dagger}\) is an Archimedean quadratic module as we have proved, \((C^{\dagger})^{\dagger}\) is an Archimedean preordering by Proposition 12.40. By Lemma 12.38, \((C^{\dagger})^{\dagger}=C^{\dagger}\).
_Remark 12.42_.: For \(\varepsilon=\frac{1}{k-1}\), where \(k\in\mathbb{N}\), \(k\geq 2\), there is the following explicit form of the identity (12.30):
\[x^{2}+\frac{1}{k-1}=\frac{1}{2^{k}k(k-1)}\sum_{\ell=0}^{k}\binom{k}{\ell}(k-2 \ell)^{2}(1+x)^{k-\ell}(1-x)^{\ell}.\]
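For instance, for \(k=2\) only the terms with \(\ell=0\) and \(\ell=2\) contribute, and the right-hand side becomes
\[\frac{1}{2^{2}\cdot 2\cdot 1}\big(4(1+x)^{2}+4(1-x)^{2}\big)=\frac{1}{8}(8+8x^{2})=x^{2}+1,\]
which is indeed \(x^{2}+\frac{1}{k-1}\) for \(k=2\).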
The following important result is the _Archimedean Positivstellensatz for quadratic modules and semirings_.
**Theorem 12.43**.: _Suppose that \(C\) is an \(S\)-module of an Archimedean semiring \(S\) or \(C\) is an Archimedean quadratic module of the commutative unital real algebra \(\mathsf{A}\). For any \(a\in\mathsf{A}\), the following are equivalent:_
\((i)_{C}\)__\(\chi(a)>0\) _for all_ \(\chi\in\mathcal{K}(C)\)_._
\((ii)_{C}\) _There exists_ \(\epsilon\in(0,+\infty)\) _such that_ \(a\in\epsilon+C\)_._
The following simple fact is crucial for our proofs of Theorem 12.43 given below.
**Lemma 12.44**.: _In the notation of Theorem 12.43, each of the conditions \((i)_{C}\) and \((ii)_{C}\) holds for \(C\) if and only if it does for \(C^{\dagger}\)._
Proof.: Since \(\mathcal{K}(C)=\mathcal{K}(C^{\dagger})\) by Lemma 12.38, this is obvious for \((i)_{C}\). For \((ii)_{C}\), since \(C\subseteq C^{\dagger}\), it suffices to verify that \((ii)_{C^{\dagger}}\) implies \((ii)_{C}\). Indeed, if \(a=2\epsilon+c^{\dagger}\) with \(\epsilon>0\) and \(c^{\dagger}\in C^{\dagger}\), then by the definition of \(C^{\dagger}\) we have \(c:=c^{\dagger}+\epsilon\in C\), so that \(a=\epsilon+c\in\epsilon+C\). Thus, \((ii)_{C}\) is equivalent to \((ii)_{C^{\dagger}}\).
Before proving the theorem, we discuss this result with a couple of remarks.
_Remark 12.45_.: 1.) First we emphasize that in strong contrast to Theorem 12.28 the above Theorem 12.43 does not require that \(\mathsf{A}\) or \(C\) or \(S\) is finitely generated.
2.) Using the fact that the preordering \(T(\mathsf{f})\) is Archimedean (by Proposition 12.26) it is clear that Theorem 12.28 follows directly from Theorem 12.43. In Section 12.3 we have given an "elementary" proof of Theorem 12.28 which is based on Proposition 12.27(i) and does not depend on Theorem 12.43.
3.) The proof of implication \((ii)_{C}\to(i)_{C}\) is very easy: Indeed, if \(a=\epsilon+c\) with \(c\in C\), then \(\chi(a)=\epsilon\chi(1)+\chi(c)=\epsilon+\chi(c)\geq\epsilon>0\) for all \(\chi\in\mathcal{K}(C)\).
4.) Since \(1\in C\), \((ii)_{C}\) implies that \(a\in C\). The stronger statement \(a\in\epsilon+C\) is given in order to get an equivalence of conditions \((i)_{C}\) and \((ii)_{C}\).
The main assertion of Theorem 12.43 states that _the positivity (!) of the values \(\chi(a)\) for all \(C\)-positive characters on \(\mathsf{A}\) implies that \(a\) belongs to \(C\)_.
5.) Recall that \(C^{\dagger}\) is an Archimedean preordering by Propositions 12.40 and 12.41. Therefore, by Lemma 12.44, to prove Theorem 12.43 it suffices to do so in the case when \(C\) is an Archimedean preordering of \(\mathsf{A}\). In particular, it is enough to show Theorem 12.43 for Archimedean semirings or for Archimedean quadratic modules. In this section we prove Theorem 12.43 for Archimedean semirings, while in Section 12.6 we give an approach for Archimedean quadratic modules.
_Proof of Theorem 12.43 for Archimedean semirings:_
The trivial implication \((ii)_{C}\to(i)_{C}\) was already noted in the preceding remark 3.).
We suppose that \(C\) is an Archimedean semiring of \(\mathsf{A}\) and prove the main implication \((i)_{C}\to(ii)_{C}\). For let \(c\in\mathsf{A}\) be such that \(c\notin C\). Then, by Proposition 12.14, there exists an extremal (!) functional \(\varphi\) of \(C^{\wedge}\) such that \(\varphi(1)=1\) and \(\varphi(c)\leq 0\). We prove that \(\varphi\in\hat{\mathsf{A}}\), that is,
\[\varphi(ab)=\varphi(a)\varphi(b)\quad\text{for }a,b\in\mathsf{A}. \tag{12.31}\]
Let \(a\in\mathsf{A}\). Since \(C\) is Archimedean, there exists \(\lambda>0\) such that \(\lambda+a\in C\), so that \(a=(\lambda+a)-\lambda\in C-C\). Thus, \(\mathsf{A}=C-C\). Hence it suffices to verify (12.31) for \(a\in C\) and similarly for \(b\in C.\) Then \(\varphi(a)\geq 0\), since \(\varphi\) is \(C\)-positive.
Case 1: \(\varphi(a)=0\).
Let \(b\in C\) and choose \(\lambda>0\) such that \(\lambda-b\in C\). Then \((\lambda-b)a\in C\) and \(ab\in C\) (because \(C\) is a semiring!), so that \(\varphi((\lambda-b)a)=\lambda\varphi(a)-\varphi(ab)=-\varphi(ab)\geq 0\) and \(\varphi(ab)\geq 0\). Hence \(\varphi(ab)=0\), so that (12.31) holds.
Case 2: \(\varphi(a)>0\).
We choose \(\lambda>0\) such that \((\lambda{-}a)\in C\) and \(\varphi(\lambda{-}a)>0\). Because \(C\) is a semiring, the functionals \(\varphi_{1}(\cdot):=\varphi(a)^{-1}\varphi(a\cdot)\) and \(\varphi_{2}(\cdot):=\varphi(\lambda-a)^{-1}\varphi((\lambda-a)\cdot)\) belong to the dual cone \(C^{\wedge}\). They satisfy
\[\varphi=\lambda^{-1}\varphi(a)\,\varphi_{1}+\lambda^{-1}\varphi(\lambda-a)\, \varphi_{2},\]
so \(\varphi\) is a convex combination of two functionals from \(C^{\wedge}\). Since \(\varphi\) is extremal, it follows that \(\varphi_{1}=\varphi\) which gives (12.31).
Summarizing both cases, we have shown that \(\varphi\in\hat{\mathsf{A}}\). Recall that \(\varphi(c)\leq 0\).
Now it is easy to prove that \((i)_{C}\) implies \((ii)_{C}\). Let \(a\in\mathsf{A}\) be as in \((i)_{C}\). Then, since the function \(\varphi\mapsto\varphi(a)\) is continuous on the compact set \(\mathcal{K}(C)\) in the weak topology (by Lemma 12.36), there exists \(\epsilon>0\) such that \(c:=a-\epsilon\) also satisfies \(\varphi(c)>0\) for all \(\varphi\in\mathcal{K}(C)\). Therefore, by the preceding proof, \(c\notin C\) cannot hold, so that \(c\in C\). Hence \(a=\epsilon+c\in\epsilon+C\).
**Corollary 12.46**.: _Under the assumptions of Theorem 12.43, we have_
\[C^{\dagger}=\{a\in\mathsf{A}:\chi(a)\geq 0\ \text{ for all }\ \chi\in\mathcal{K}(C)\,\}.\]
Proof.: If \(\chi(a)\geq 0\) for \(\chi\in\mathcal{K}(C)\), then for \(\epsilon>0\) we have \(\chi(a+\epsilon)=\chi(a)+\epsilon>0\). Therefore, \(a+\epsilon\in C\) by Theorem 12.43, so that \(a\in C^{\dagger}\).
Conversely, if \(a\in C^{\dagger}\) and \(\chi\in\mathcal{K}(C)\), then \(a+\epsilon\in C\). Hence \(\chi(a)+\epsilon=\chi(a+\epsilon)\geq 0\) for all \(\epsilon>0\). Letting \(\epsilon\searrow 0\) yields \(\chi(a)\geq 0\).
The following is the main application of Theorem 12.43 to the moment problem.
**Corollary 12.47**.: _Retain the assumptions of Theorem 12.43. Suppose that \(L\) is a linear functional on \(\mathsf{A}\) such that \(L(c)\geq 0\) for all \(c\in C\). Then there exists a Radon measure \(\mu\) on the compact topological space \(\mathcal{K}(C)\) such that_
\[L(a)=\int_{\mathcal{K}(C)}\chi(a)\ d\mu(\chi)\ \text{for }a\in\mathsf{A}. \tag{12.32}\]
Proof.: Let \(a\in\mathsf{A}\) be such that \(\chi(a)\geq 0\) for \(\chi\in\mathcal{K}(C)\). Then, for each \(\epsilon>0\), \(a+\epsilon\) satisfies \((i)_{C}\), so \(a+\epsilon\in C\) by Theorem 12.43. Hence \(L(a+\epsilon)=L(a)+\epsilon L(1)\geq 0\). Letting \(\epsilon\searrow 0\), we get \(L(a)\geq 0\). Now the assertion follows from Proposition 1.9.
### 12.5. The Archimedean representation theorem for polynomial algebras
In this section we first restate Theorem 12.43 and Corollary 12.47 in the special case when \(\mathsf{A}\) is the polynomial algebra \(\mathbb{R}_{d}[\underline{x}]\).
We begin with the case of Archimedean quadratic modules. Assertion (i) of the following theorem is also called the _Archimedean Positivstellensatz_.
**Theorem 12.48**.: _Let \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) be a finite subset of \(\mathbb{R}_{d}[\underline{x}]\). Suppose that the quadratic module \(Q(\mathsf{f})\) defined by (12.4) is Archimedean._
1. _If_ \(h\in\mathbb{R}_{d}[\underline{x}]\) _satisfies_ \(h(x)>0\) _for all_ \(x\in\mathcal{K}(\mathsf{f})\)_, then_ \(h\in Q(\mathsf{f})\)_._
2. _Any_ \(Q(\mathsf{f})\)_-positive linear functional_ \(L\) _on_ \(\,\mathbb{R}_{d}[\underline{x}]\) _is a_ \(\mathcal{K}(\mathsf{f})\)_-moment functional, that is, there exists a measure_ \(\mu\in M_{+}(\mathbb{R}^{d})\) _supported on the compact set_ \(\mathcal{K}(\mathsf{f})\) _such that_ \(\,L(f)=\int f(x)\,d\mu(x)\,\) _for_ \(\,f\in\mathbb{R}_{d}[\underline{x}]\)_._
Proof.: Set \(\mathsf{A}=\mathbb{R}_{d}[\underline{x}]\) and \(C=Q(\mathsf{f})\). As noted in Example 12.16, characters \(\chi\) of \(\mathsf{A}\) correspond to points \(\chi_{t}\cong t\) of \(\mathbb{R}^{d}\) and we have \(\mathcal{K}(Q)=\mathcal{K}(\mathsf{f})\) under this identification. Hence the assertions of (i) and (ii) follow at once from Theorem 12.43 and Corollary 12.47, respectively.
Next we turn to modules for semirings.
_Example 12.49_.: Let \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) and \(\mathsf{g}=\{g_{0}=1,g_{1},\ldots,g_{r}\}\) be finite subsets of \(\mathbb{R}_{d}[\underline{x}]\), where \(k\in\mathbb{N},r\in\mathbb{N}_{0}\). Then
\[C(\mathsf{f},\mathsf{g}):=g_{0}S(\mathsf{f})+g_{1}S(\mathsf{f})+\cdots+g_{r}S (\mathsf{f}) \tag{12.33}\]
is an \(\,S(\mathsf{f})\)-module for the semiring \(\,S(\mathsf{f})\). Clearly, \(\mathcal{K}(C(\mathsf{f},\mathsf{g}))=\mathcal{K}(\mathsf{f})\cap\mathcal{K}( \mathsf{g})\).
Note that in the special case \(r=0\,\) the \(S(\mathsf{f})\)-module \(C(\mathsf{f},\mathsf{g})\) is just the semiring \(S(\mathsf{f})\) itself and \(\mathcal{K}(C(\mathsf{f},\mathsf{g}))=\mathcal{K}(\mathsf{f})\).
**Theorem 12.50**.: _Let \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) and \(\mathsf{g}=\{g_{0}=1,g_{1},\ldots,g_{r}\}\) be subsets of \(\mathbb{R}_{d}[\underline{x}]\), where \(k\in\mathbb{N},r\in\mathbb{N}_{0}\). Suppose that the semiring \(S(\mathsf{f})\) defined by (12.5) is Archimedean. Let \(C(\mathsf{f},\mathsf{g})\) denote the \(S(\mathsf{f})\)-module defined by (12.33)._
1. _If_ \(h\in\mathbb{R}_{d}[\underline{x}]\) _satisfies_ \(h(x)>0\) _for all_ \(x\in\mathcal{K}(\mathsf{f})\cap\mathcal{K}(\mathsf{g})\)_, then_ \(h\in C(\mathsf{f},\mathsf{g})\)_._
2. _Suppose_ \(L\) _is a linear functional on_ \(\,\mathbb{R}_{d}[\underline{x}]\) _such that_ \(L(f)\geq 0\) _for all_ \(f\in C(\mathsf{f},\mathsf{g})\)_. Then_ \(L\) _is a_ \(\mathcal{K}(\mathsf{f})\cap\mathcal{K}(\mathsf{g})\)_-moment functional, that is, there is a measure_ \(\mu\in M_{+}(\mathbb{R}^{d})\) _supported on the compact semi-algebraic set_ \(\mathcal{K}(\mathsf{f})\cap\mathcal{K}(\mathsf{g})\) _such that_ \(\,L(f)=\int f(x)\,d\mu(x)\,\) _for all_ \(\,f\in\mathbb{R}_{d}[\underline{x}]\)_._
Proof.: Combine Theorem 12.43 and Corollary 12.47 with Example 12.49.
If \(r=0\), then the \(S(\mathsf{f})\)-module \(C(\mathsf{f},\mathsf{g})\) coincides with the semiring \(S(\mathsf{f})\) and we have \(\mathcal{K}(C(\mathsf{f},\mathsf{g}))=\mathcal{K}(\mathsf{f}).\) Then Theorem 12.50(i) is the Archimedean Positivstellensatz for semirings in the special case of the polynomial algebra \(\mathbb{R}_{d}[\underline{x}]\).
The next theorem is an application of Theorem 12.50. It sharpens Theorem 12.28 by representing positive polynomials on a compact semi-algebraic set by a certain subset of the corresponding preordering.
**Theorem 12.51**.: _Suppose \(\mathsf{f}=\{f_{1},\ldots,f_{r}\}\), \(r\in\mathbb{N}\), is a subset of \(\mathbb{R}_{d}[\underline{x}]\) such that the semialgebraic set \(\mathcal{K}(\mathsf{f})\) is compact. Then there exist polynomials \(p_{1},\ldots,p_{s}\in\mathbb{R}_{d}[\underline{x}]\), \(s\in\mathbb{N},\) such that the semiring \(S\) of \(\mathbb{R}_{d}[\underline{x}]\) generated by \(f_{1},\ldots,f_{r},p_{1}^{2},\ldots,p_{s}^{2}\) is Archimedean._
_If \(h\in\mathbb{R}_{d}[\underline{x}]\) satisfies \(h(x)>0\) for all \(x\in\mathcal{K}(\mathsf{f})\), then \(h\) is a finite sum of polynomials_
\[\alpha\,f_{1}^{e_{1}}\cdots f_{r}^{e_{r}}\;f_{1}^{2n_{1}}\cdots f_{r}^{2n_{r}} \;p_{1}^{2k_{1}}\cdots p_{s}^{2k_{s}}, \tag{12.34}\]
_where \(\alpha\geq 0\), \(e_{1},\ldots,e_{r}\in\{0,1\}\), \(n_{1},\ldots,n_{r},k_{1},\ldots,k_{s}\in\mathbb{N}_{0}\)._
_Further, each linear functional on \(\mathbb{R}_{d}[\underline{x}]\) that is nonnegative on all polynomials (12.34) (with \(\alpha=1\)) is a \(\mathcal{K}(\mathsf{f})\)-moment functional._
Proof.: Since the set \(\mathcal{K}(\mathsf{f})\) is compact, there are numbers \(\alpha_{j}>0,\beta_{j}>0\) such that
\[\alpha_{j}+x_{j}>0\text{ and }\beta_{j}-x_{j}>0\text{ for }x\in\mathcal{K}(f_{1}, \ldots,f_{r}),\;j=1,\ldots,d. \tag{12.35}\]
Therefore, by Theorem 12.28, the polynomials \(\alpha_{j}+x_{j}\) and \(\beta_{j}-x_{j}\) are in the preordering \(T(f_{1},\ldots,f_{r})\). By the definition (12.6) of \(T(f_{1},\ldots,f_{r})\), this means that each polynomial \(\alpha_{j}+x_{j}\), \(\beta_{j}-x_{j}\) is a finite sum of polynomials of the form \(f_{1}^{e_{1}}\cdots f_{r}^{e_{r}}p^{2}\) with \(p\in\mathbb{R}_{d}[\underline{x}]\) and \(e_{1},\ldots,e_{r}\in\{0,1\}\). Let \(S\) denote the semiring generated by \(f_{1},\ldots,f_{r}\) and all squares \(p^{2}\) occurring in these representations of the polynomials
\(\alpha_{j}+x_{j},\beta_{j}-x_{j}\), where \(j=1,\ldots,d\). Then, by construction, \(x_{1},\ldots,x_{d}\) belong to \(\mathbb{R}_{d}[\underline{x}]_{b}(S)\), so \(S\) is Archimedean by Lemma 12.9. Since \(f_{1},\ldots,f_{r}\in S\), \(\mathcal{K}(S)\) is the set of point evaluations at \(\mathcal{K}(f_{1},\ldots,f_{r})\).
By its construction, the semiring \(S\) defined above is generated by polynomials \(f_{1},\ldots,f_{r}\), \(p_{1}^{2},\ldots,p_{s}^{2}\). The Archimedean Positivstellensatz for semirings (Theorem 12.43 or Theorem 12.50) yields \(h\in S\). This means that \(h\) is a finite sum of terms (12.34). By Haviland's theorem (Theorem 1.12) this implies the last assertion.
In the above proof the polynomials \(x_{1},\ldots,x_{d}\) can be replaced by any finite set of algebra generators of \(\mathbb{R}_{d}[\underline{x}].\) Note that (12.35) means that the set \(\mathcal{K}(\mathsf{f})\) is contained in the \(d\)-dimensional rectangle \([-\alpha_{1},\beta_{1}]\times\cdots\times[-\alpha_{d},\beta_{d}]\).
We illustrate the preceding result with an example.
_Example 12.52_.: Let \(S\) denote the semiring of \(\mathbb{R}_{d}[\underline{x}]\) generated by the polynomials
\[f(x):=1-x_{1}^{2}-\cdots-x_{d}^{2},\;g_{j,\pm}(x):=(1\pm x_{j})^{2},\;j=1, \ldots,d. \tag{12.36}\]
Obviously, \(\mathcal{K}(S)\) is the closed unit ball
\[\mathcal{K}(f)=\{x\in\mathbb{R}^{d}:\ x_{1}^{2}+\cdots+x_{d}^{2}\leq 1\}.\]
Then, since
\[d+1\pm 2x_{k}=(1-x_{1}^{2}-\cdots-x_{d}^{2})+(1\pm x_{k})^{2}+\frac{1}{2}\sum_{j=1,j\neq k}^{d}\left((1+x_{j})^{2}+(1-x_{j})^{2}\right)\in S,\]
for \(k=1,\ldots,d\), Lemma 12.9 implies that \(S\) is Archimedean. Therefore, by Theorem 12.43 (or Theorem 12.50), each polynomial \(h\in\mathbb{R}_{d}[\underline{x}]\) that is positive in all points of the closed unit ball \(\mathcal{K}(f)\) belongs to \(S\). This means that \(h\) is of the form
\[h(x)=\sum_{n,k_{i},\ell_{i}=0}^{m}\alpha_{n,k_{1},\ell_{1},\ldots,k_{d},\ell_{d}}f^{2n}(1-x_{1})^{2k_{1}}(1+x_{1})^{2\ell_{1}}\cdots(1-x_{d})^{2 k_{d}}(1+x_{d})^{2\ell_{d}}\] \[+f\sum_{n,k_{i},\ell_{i}=0}^{m}\beta_{n,k_{1},\ell_{1},\ldots,k_{ d},\ell_{d}}f^{2n}(1-x_{1})^{2k_{1}}(1+x_{1})^{2\ell_{1}}\cdots(1-x_{d})^{2k_{d}}(1+x _{d})^{2\ell_{d}},\]
where \(\,m\in\mathbb{N}_{0}\) and \(\alpha_{n,k_{1},\ell_{1},\ldots,k_{d},\ell_{d}}\geq 0\), \(\,\beta_{n,k_{1},\ell_{1},\ldots,k_{d},\ell_{d}}\geq 0\). This formula is a distinguished weighted sum of squares representation of the positive polynomial \(h\).
The Archimedean Positivstellensatz for quadratic modules (Theorem 12.48) gives in this case the weaker assertion \(\,h(x)=\sigma_{1}+f\sigma_{2}\), with \(\sigma_{1},\sigma_{2}\in\sum\mathbb{R}_{d}[\underline{x}]^{2}\).
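As a quick symbolic sanity check of the identity used above to verify the Archimedean condition (here for \(d=2\) and \(k=1\); this snippet is merely illustrative and not part of the text's argument):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = 1 - x1**2 - x2**2  # the generator f of the semiring S from (12.36)

# Check that d+1 ± 2*x_1 = f + (1 ± x_1)^2 + ((1+x_2)^2 + (1-x_2)^2)/2 for d = 2.
for sign in (1, -1):
    lhs = 3 + sign * 2 * x1
    rhs = f + (1 + sign * x1)**2 + sp.Rational(1, 2) * ((1 + x2)**2 + (1 - x2)**2)
    assert sp.expand(lhs - rhs) == 0

print("identity of Example 12.52 verified for d = 2, k = 1")
```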
### 12.6. The operator-theoretic approach to the moment problem
The spectral theory of self-adjoint operators in Hilbert space is well suited to the moment problem and provides powerful techniques for the study of this problem. The technical tool that relates the multidimensional moment problem to Hilbert space operator theory is the _Gelfand-Naimark-Segal construction_, briefly the _GNS-construction_. We develop this construction first for a general \(*\)-algebra (see [10, Section 8.6] or [10, Section 4.4]) and then we specialize to the polynomial algebra.
Suppose that \(\mathsf{A}\) is a unital (real or complex) \(*\)-algebra. Let \(\mathbb{K}=\mathbb{R}\) or \(\mathbb{K}=\mathbb{C}\).
**Definition 12.53**.: Let \((\mathcal{D},\langle\cdot,\cdot\rangle)\) be a unitary space. A \(*\)_-representation_ of \(\mathsf{A}\) on \((\mathcal{D},\langle\cdot,\cdot\rangle)\) is an algebra homomorphism \(\pi\) of \(\mathsf{A}\) into the algebra \(L(\mathcal{D})\) of linear operators mapping \(\mathcal{D}\) into itself such that \(\pi(1)\varphi=\varphi\) for \(\varphi\in\mathcal{D}\) and
\[\langle\pi(a)\varphi,\psi\rangle=\langle\varphi,\pi(a^{*})\psi\rangle\quad \text{for}\quad a\in\mathsf{A},\ \varphi,\psi\in\mathcal{D}. \tag{12.37}\]
The unitary space \(\mathcal{D}\) is called the _domain_ of \(\,\pi\) and denoted by \(\mathcal{D}(\pi)\). A vector \(\varphi\in\mathcal{D}\) is called _algebraically cyclic_, briefly _a-cyclic_, for \(\pi\) if \(\,\mathcal{D}=\pi(\mathsf{A})\varphi\).
Suppose that \(L\) is a positive linear functional on \(\mathsf{A}\), that is, \(L\) is a linear functional such that \(L(a^{*}a)\geq 0\) for \(a\in\mathsf{A}\). Then, by Lemma 2.3, the Cauchy-Schwarz inequality holds:
\[|L(a^{*}b)|^{2}\leq L(a^{*}a)L(b^{*}b)\quad\text{for}\quad a,b\in\mathsf{A}. \tag{12.38}\]
**Lemma 12.54**.: \(\mathcal{N}_{L}:=\{a\in\mathsf{A}:L(a^{*}a)=0\}\) _is a left ideal of the algebra \(\mathsf{A}\)._
Proof.: Let \(a,b\in\mathcal{N}_{L}\) and \(x\in\mathsf{A}\). Using (12.38) we obtain
\[|L((xa)^{*}xa)|^{2}=|L((x^{*}xa)^{*}a)|^{2}\leq L((x^{*}xa)^{*}x^{*}xa)L(a^{*}a )=0,\]
so that \(xa\in\mathcal{N}_{L}\). Applying again (12.38) we get \(L(a^{*}b)=L(b^{*}a)=0\). Hence
\[L((a+b)^{*}(a+b))=L(a^{*}a)+L(b^{*}b)+L(a^{*}b)+L(b^{*}a)=0,\]
so that \(a+b\in\mathcal{N}_{L}\). Obviously, \(\lambda a\in\mathcal{N}_{L}\) for \(\lambda\in\mathbb{K}\).
Hence there exist a well-defined scalar product \(\langle\cdot,\cdot\rangle_{L}\) on the quotient vector space \(\mathcal{D}_{L}\)=\(\mathsf{A}/\mathcal{N}_{L}\) and a well-defined algebra homomorphism \(\pi_{L}:\mathsf{A}\)\(\rightarrow\)\(L(\mathcal{D}_{L})\) given by
\[\langle a+\mathcal{N}_{L},b+\mathcal{N}_{L}\rangle_{L}=L(b^{*}a)\text{ and }\pi_{L}(a)(b+\mathcal{N}_{L})=ab+\mathcal{N}_{L},\ a,b\in\mathsf{A}. \tag{12.39}\]
Let \(\mathcal{H}_{L}\) denote the Hilbert space completion of the pre-Hilbert space \(\mathcal{D}_{L}\). If no confusion can arise we write \(\langle\cdot,\cdot\rangle\) for \(\langle\cdot,\cdot\rangle_{L}\) and \(a\) for \(a+\mathcal{N}_{L}\). Then we have \(\pi_{L}(a)b=ab\), in particular \(\pi_{L}(1)a=a\), and
\[\langle\pi_{L}(a)b,c\rangle=L(c^{*}ab)=L((a^{*}c)^{*}b)=\langle b,\pi_{L}(a^{* })c\rangle\quad\text{for}\quad a,b,c\in\mathsf{A}. \tag{12.40}\]
Clearly, \(\mathcal{D}_{L}=\pi_{L}(\mathsf{A})1\). Thus, we have shown that \(\pi_{L}\)_is a \(*\)-representation of \(\mathsf{A}\) on the domain \(\mathcal{D}(\pi_{L})=\mathcal{D}_{L}\) and \(1\) is an_ a-_cyclic vector for \(\,\pi_{L}\)_. Further, we have
\[L(a)=\langle\pi_{L}(a)1,1\rangle\quad\text{for}\quad a\in\mathsf{A}. \tag{12.41}\]
**Definition 12.55**.: \(\pi_{L}\) is called the _GNS-representation_ of \(\mathsf{A}\) associated with \(L\).
We show that the GNS-representation is unique up to unitary equivalence. Let \(\pi\) be another \(*\)-representation of \(\mathsf{A}\) with a-cyclic vector \(\varphi\in\mathcal{D}(\pi)\) on a dense domain \(\mathcal{D}(\pi)\) of a Hilbert space \(\mathcal{G}\) such that \(L(a)=\langle\pi(a)\varphi,\varphi\rangle\) for all \(a\in\mathsf{A}\). For \(a\in\mathsf{A}\),
\[\|\pi(a)\varphi\|^{2}=\langle\pi(a)\varphi,\pi(a)\varphi\rangle=\langle\pi(a^ {*}a)\varphi,\varphi\rangle=L(a^{*}a)\]
and similarly \(\|\pi_{L}(a)1\|^{2}=L(a^{*}a)\). Hence there is an isometric linear map \(U\) given by \(\,U(\pi(a)\varphi)=\pi_{L}(a)1,a\in\mathsf{A}\), \(\,\)of \(\mathcal{D}(\pi)=\pi(\mathsf{A})\varphi\,\) onto \(\,\mathcal{D}(\pi_{L})=\pi_{L}(\mathsf{A})1\). Since the domains \(\mathcal{D}(\pi)\) and \(\mathcal{D}(\pi_{L})\) are dense in \(\mathcal{G}\) and \(\mathcal{H}_{L}\), respectively, \(U\) extends by continuity to a unitary operator of \(\mathcal{G}\) onto \(\mathcal{H}_{L}\). For \(a,b\in\mathsf{A}\) we derive
\[U\pi(a)U^{-1}(\pi_{L}(b)1)=U\pi(a)\pi(b)\varphi=U\pi(ab)\varphi=\pi_{L}(ab)1= \pi_{L}(a)(\pi_{L}(b)1),\]
that is, \(\,U\pi(a)U^{-1}\psi=\pi_{L}(a)\psi\,\) for \(\psi\in\mathcal{D}(\pi_{L})\) and \(a\in\mathsf{A}\). By definition, this means that the \(*\)-representations \(\pi\) and \(\pi_{L}\) are unitarily equivalent.
Now we specialize the preceding to the \(*\)-algebra \(\mathbb{C}_{d}[\underline{x}]\equiv\mathbb{C}[x_{1},\ldots,x_{d}]\) with involution determined by \((x_{j})^{*}:=x_{j}\) for \(j=1,\ldots,d\).
Suppose that \(L\) is a positive linear functional on \(\mathbb{C}_{d}[\underline{x}]\). Since \((x_{j})^{*}=x_{j}\), it follows from (12.40) that \(X_{j}:=\pi_{L}(x_{j})\) is a symmetric operator on the domain \(\mathcal{D}_{L}\). The operators \(X_{j}\) and \(X_{k}\) commute (because \(x_{j}\) and \(x_{k}\) commute in \(\mathbb{C}_{d}[\underline{x}]\)) and \(X_{j}\) leaves the domain \(\mathcal{D}_{L}\) invariant (because \(x_{j}\mathbb{C}_{d}[\underline{x}]\subseteq\mathbb{C}_{d}[\underline{x}]\)). That is, \((X_{1},\ldots,X_{d})\) is a \(d\)-tuple of _pairwise commuting symmetric operators acting on the dense invariant domain_\(\,\mathcal{D}_{L}=\pi_{L}(\mathbb{C}_{d}[\underline{x}])1\,\) of the Hilbert space \(\mathcal{H}_{L}\). Note that this \(d\)-tuple \((X_{1},\ldots,X_{d})\) essentially depends on the given positive linear functional \(L\).
The next theorem is the crucial result of the operator approach to the multidimensional moment problem and it is the counterpart of Theorem 6.1. It relates solutions of the moment problem to spectral measures of strongly commuting \(d\)-tuples \((A_{1},\ldots,A_{d})\,\) of self-adjoint operators which extend our given \(d\)-tuple \((X_{1},\ldots,X_{d})\).
**Theorem 12.56**.: _A positive linear functional \(L\) on the \(*\)-algebra \(\mathbb{C}_{d}[\underline{x}]\) is a moment functional if and only if there exists a \(d\)-tuple \((A_{1},\ldots,A_{d})\) of strongly commuting self-adjoint operators \(A_{1},\ldots,A_{d}\) acting on a Hilbert space \(\mathcal{K}\) such that \(\,\mathcal{H}_{L}\) is a subspace of \(\,\mathcal{K}\) and \(X_{1}\subseteq A_{1},\ldots,X_{d}\subseteq A_{d}\). If this is fulfilled and \(\,E_{(A_{1},\ldots,A_{d})}\,\) denotes the spectral measure of the \(d\)-tuple \(\,(A_{1},\ldots,A_{d})\), then \(\,\mu(\cdot)=\langle E_{(A_{1},\ldots,A_{d})}(\cdot)1,1\rangle_{\mathcal{K}}\) is a solution of the moment problem for \(L\)._
_Each solution of the moment problem for \(L\) is of this form._
First we explain the notions occurring in this theorem (see [10, Chapter 5] for the corresponding results and more details).
A \(d\)-tuple \((A_{1},\ldots,A_{d})\) of self-adjoint operators \(A_{1},\ldots,A_{d}\) acting on a Hilbert space \(\mathcal{K}\) is called _strongly commuting_ if for all \(k,l=1,\ldots,d,k\neq l\), the resolvents \((A_{k}-\mathrm{i}I)^{-1}\) and \((A_{l}-\mathrm{i}I)^{-1}\) commute, or equivalently, the spectral measures \(E_{A_{k}}\) and \(E_{A_{l}}\) commute (that is, \(E_{A_{k}}(M)E_{A_{l}}(N)=E_{A_{l}}(N)E_{A_{k}}(M)\) for all Borel subsets \(M,N\) of \(\mathbb{R}\)). (If the self-adjoint operators are bounded, strong commutativity and "usual" commutativity are equivalent.) The spectral theorem states that, for such a \(d\)-tuple, there exists a unique spectral measure \(E_{(A_{1},\ldots,A_{d})}\) on the Borel \(\sigma\)-algebra of \(\mathbb{R}^{d}\) such that
\[A_{j}=\int_{\mathbb{R}^{d}}\lambda_{j}\ dE_{(A_{1},\ldots,A_{d})}(\lambda_{1}, \ldots,\lambda_{d}),\ j=1,\ldots,d.\]
The spectral measure \(E_{(A_{1},\dots,A_{d})}\) is the product of the spectral measures \(E_{A_{1}},\ldots,E_{A_{d}}\). Therefore, if \(M_{1},\dots,M_{d}\) are Borel subsets of \(\mathbb{R}\), then
\[E_{(A_{1},\dots,A_{d})}(M_{1}\times\cdots\times M_{d})=E_{A_{1}}(M_{1})\cdots E _{A_{d}}(M_{d}). \tag{12.42}\]
Proof of Theorem 12.56.: First assume that \(L\) is a moment functional and let \(\mu\) be a representing measure of \(L\). It is well-known and easily checked by the preceding remarks that the multiplication operators \(A_{k}\), \(k=1,\dots,d\), by the coordinate functions \(x_{k}\) form a \(d\)-tuple of strongly commuting self-adjoint operators on the Hilbert space \(\mathcal{K}:=L^{2}(\mathbb{R}^{d},\mu)\) such that \(\mathcal{H}_{L}\subseteq\mathcal{K}\) and \(X_{k}\subseteq A_{k}\) for \(k=1,\dots,d\). The spectral measure \(E:=E_{(A_{1},\dots,A_{d})}\) of this \(d\)-tuple acts by \(\,E(M)f=\chi_{M}\cdot f\), \(f\in L^{2}(\mathbb{R}^{d},\mu)\), where \(\chi_{M}\) is the characteristic function of the Borel set \(M\subseteq\mathbb{R}^{d}\). This implies that \(\langle E(M)1,1\rangle_{\mathcal{K}}=\mu(M)\). Thus, \(\mu(\cdot)=\langle E(\cdot)1,1\rangle_{\mathcal{K}}\).
Conversely, suppose that \((A_{1},\dots,A_{d})\) is such a \(d\)-tuple. By the multidimensional spectral theorem [10, Theorem 5.23] this \(d\)-tuple has a joint spectral measure \(E_{(A_{1},\dots,A_{d})}\). Put \(\mu(\cdot):=\langle E_{(A_{1},\dots,A_{d})}(\cdot)1,1\rangle_{\mathcal{K}}\). Let \(p\in\mathbb{C}_{d}[\underline{x}]\). Since \(X_{k}\subseteq A_{k}\), we have
\[p(X_{1},\dots,X_{d})\subseteq p(A_{1},\dots,A_{d}).\]
Therefore, since the polynomial \(1\) belongs to the domain of \(p(X_{1},\dots,X_{d})\), it is also in the domain of \(p(A_{1},\dots,A_{d})\). Then
\[\int_{\mathbb{R}^{d}} p(\lambda)\ d\mu(\lambda)=\int_{\mathbb{R}^{d}}p( \lambda)\ d\langle E_{(A_{1},\dots,A_{d})}(\lambda)1,1\rangle_{\mathcal{K}}= \langle p(A_{1},\dots,A_{d})1,1\rangle_{\mathcal{K}}\] \[=\langle p(X_{1},\dots,X_{d})1,1\rangle=\langle\pi_{L}(p(x_{1}, \dots,x_{d}))1,1\rangle=L(p(x_{1},\dots,x_{d})),\]
where the second equality follows from the functional calculus and the last from (12.41). This shows that \(\mu\) is a solution of the moment problem for \(L\).
**Proposition 12.57**.: _Suppose \(Q\) is an Archimedean quadratic module of a commutative real unital algebra \(\mathsf{A}\). Let \(L_{0}\) be a \(Q\)-positive \(\mathbb{R}\)-linear functional on \(\mathsf{A}\) and let \(\pi_{L}\) be the GNS representation of its extension \(L\) to a \(\mathbb{C}\)-linear functional on the complexification \(\mathsf{A}_{\mathbb{C}}=\mathsf{A}+\mathsf{i}\mathsf{A}\). Then all operators \(\pi_{L}(a)\), \(a\in\mathsf{A}_{\mathbb{C}}\), are bounded._
Proof.: Since \(\sum(\mathsf{A}_{\mathbb{C}})^{2}=\sum\mathsf{A}^{2}\) by Lemma 2.17(ii) and \(\sum\mathsf{A}^{2}\subseteq Q\), \(L\) is a positive linear functional on \(\mathsf{A}_{\mathbb{C}}\), so the GNS representation \(\pi_{L}\) is well-defined.
It suffices to prove that \(\pi_{L}(a)\) is bounded for \(a\in\mathsf{A}\). Since \(Q\) is Archimedean, \(\lambda-a^{2}\in Q\) for some \(\lambda>0\). Let \(x\in\mathsf{A}_{\mathbb{C}}\). By Lemma 2.17(ii), \(x^{*}x(\lambda-a^{2})\in Q\) and hence \(\,L(x^{*}xa^{2})=L_{0}(x^{*}xa^{2})\leq\lambda L_{0}(x^{*}x)=\lambda L(x^{*}x)\), since \(L_{0}\) is \(Q\)-positive. Then
\[\|\pi_{L}(a)\pi_{L}(x)1\|^{2} =\langle\pi_{L}(a)\pi_{L}(x)1,\pi_{L}(a)\pi_{L}(x)1\rangle=\langle \pi_{L}((ax)^{*}ax)1,1\rangle\] \[=L((ax)^{*}ax)=L(x^{*}xa^{2})\leq\lambda L(x^{*}x)=\lambda\|\pi_{L }(x)1\|^{2},\]
where we used (12.37) and (12.41). That is, \(\pi_{L}(a)\) is bounded on \(\mathcal{D}(\pi_{L})\).
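The boundedness estimate of this proof can be observed numerically in a toy case. In the sketch below (an illustration only; the test measure, the degree bound, and the variable names are ad hoc choices) we take \(\mathsf{A}=\mathbb{R}[x]\), the Archimedean quadratic module \(Q\) generated by \(1-x^{2}\), and for \(L_{0}\) integration against the normalized Lebesgue measure on \([-1,1]\); the inequality \(L_{0}(p^{2}a^{2})\leq\lambda L_{0}(p^{2})\) from the proof, with \(a\) the coordinate polynomial \(x\), \(\lambda=1\), and \(p\) an arbitrary real polynomial, then says that the difference of two shifted Hankel matrices of the moment sequence is positive semidefinite.

```python
import numpy as np

# Moments s_n = (1/2) * integral_{-1}^{1} x^n dx of the normalized Lebesgue measure.
def s(n):
    return 0.0 if n % 2 else 1.0 / (n + 1)

deg = 6                       # test polynomials p of degree <= deg
N = deg + 1
H0 = np.array([[s(i + j)     for j in range(N)] for i in range(N)])  # L(p^2)     = c^T H0 c
H2 = np.array([[s(i + j + 2) for j in range(N)] for i in range(N)])  # L(x^2 p^2) = c^T H2 c

# Since 1 - x^2 lies in Q, the estimate of Proposition 12.57 with lambda = 1 predicts
# L(x^2 p^2) <= L(p^2), i.e. H0 - H2 is positive semidefinite.
print("smallest eigenvalue of H0 - H2:", np.linalg.eigvalsh(H0 - H2).min())

rng = np.random.default_rng(0)
for _ in range(5):
    c = rng.standard_normal(N)            # coefficients of a random test polynomial p
    print("L(x^2 p^2) / L(p^2) =", round((c @ H2 @ c) / (c @ H0 @ c), 4))  # <= 1
```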
We now illustrate the power of the operator approach to moment problems by giving short proofs of Theorems 12.43 and 12.48.
From Remark 12.45, 5.), we recall that in order to prove Theorem 12.43 in the general case it suffices to do this in the special case when \(C\) is an Archimedean semiring _or_ when \(C\) is an Archimedean quadratic module. In Section 12.4 we have given an approach based on semirings. Here we prove it for quadratic modules.
_Proof of Theorem 12.43 for Archimedean quadratic modules:_
Suppose that \(C\) is an Archimedean quadratic module of \(\mathsf{A}\). As in the proof for semirings, the implication \((ii)_{C}\to(i)_{C}\) is trivial and it suffices to prove that \((i)_{C}\) implies \(a\in C\) (to obtain \((ii)_{C}\), apply this to \(a-\varepsilon\) for sufficiently small \(\varepsilon>0\)).
Assume to the contrary that \(a\) satisfies \((i)_{C}\), but \(a\notin C\). Since \(C\) is Archimedean, by Proposition 12.14 there is a \(C\)-positive \(\mathbb{R}\)-linear functional \(L_{0}\) on \(\mathsf{A}\) such that \(L_{0}(1)=1\) and \(L_{0}(a)\leq 0\). Let \(\pi_{L}\) be the GNS representation of its extension to a \(\mathbb{C}\)-linear (positive) functional \(L\) on the unital commutative complex \(*\)-algebra \(\mathsf{A}_{\mathbb{C}}\).
Let \(c\in C\). If \(x\in\mathsf{A}_{\mathbb{C}}\), then \(x^{*}xc\in C\) by Lemma 2.17(ii), so \(L_{0}(x^{*}xc)\geq 0\), and
\[\langle\pi_{L}(c)\pi_{L}(x)1,\pi_{L}(x)1\rangle=L(x^{*}xc)=L_{0}(x^{*}xc)\geq 0 \tag{12.43}\]
by (12.41). This shows that the operator \(\pi_{L}(c)\) is nonnegative.
For \(b\in\mathsf{A}_{\mathbb{C}}\), the operator \(\pi_{L}(b)\) is bounded by Proposition 12.57. Let \(\overline{\pi_{L}(b)}\) denote its continuous extension to the Hilbert space \(\,\mathcal{H}_{L}\). These operators form a unital commutative \(*\)-algebra of bounded operators. Its completion \(\mathcal{B}\) is a unital commutative \(C^{*}\)-algebra.
Let \(\chi\) be a character of \(\mathcal{B}\). Then \(\,\tilde{\chi}(\cdot):=\chi(\,\overline{\pi_{L}(\cdot)}\,)\) is a character of \(\mathsf{A}\). If \(c\in C\), then \(\,\pi_{L}(c)\geq 0\,\) by (12.43) and so \(\,\overline{\pi_{L}(c)}\geq 0\). Hence \(\tilde{\chi}\) is \(C\)-positive, that is, \(\tilde{\chi}\in\mathcal{K}(C)\). Therefore, \(\tilde{\chi}(a)=\chi(\overline{\pi_{L}(a)}\,)>0\) by \((i)_{C}\). Thus, if we realize \(\mathcal{B}\) as a \(C^{*}\)-algebra of continuous functions on a compact Hausdorff space, the function corresponding to \(\,\overline{\pi_{L}(a)}\,\) is positive, so it has a positive minimum \(\delta\). Then \(\overline{\pi_{L}(a)}\,\geq\delta\cdot I\,\) and hence
\[0<\delta=\delta L(1)=\langle\delta 1,1\rangle\leq\langle\pi_{L}(a)1,1\rangle=L(a)=L_{0}(a)\leq 0,\]
which is the desired contradiction.
_Proof of Theorem 12.48(ii):_
We extend \(L\) to a \(\mathbb{C}\)-linear functional, denoted again by \(L\), on \(\mathbb{C}_{d}[\underline{x}]\) and consider the GNS representation \(\pi_{L}\). By Proposition 12.57, the symmetric operators \(\pi_{L}(x_{1}),\ldots,\pi_{L}(x_{d})\) are bounded. Hence their continuous extensions to the whole Hilbert space \(\mathcal{H}_{L}\) are pairwise commuting bounded self-adjoint operators \(A_{1},\ldots,A_{d}\). Therefore, by Theorem 12.56, if \(E\) denotes the spectral measure of this \(d\)-tuple \((A_{1},\ldots,A_{d})\), then \(\mu(\cdot)=\langle E(\cdot)1,1\rangle_{\mathcal{H}_{L}}\,\) is a solution of the moment problem for \(L\).
Since the operators \(A_{j}\) are bounded, the spectral measure \(E\), hence \(\mu\), has compact support. (In fact, \(\,\mathrm{supp}\,\,E\subseteq[-\|A_{1}\|,\|A_{1}\|]\times\cdots\times[-\|A_{d}\|,\|A_{d}\|]\).) Hence, since \(L\) is \(Q(\mathsf{f})\)-positive by assumption, Proposition 12.22 implies that \(\mathrm{supp}\,\mu\subseteq\mathcal{K}(\mathsf{f})\). This shows that \(L\) is a \(\mathcal{K}(\mathsf{f})\)-moment functional.
The preceding proof of Theorem 12.48(ii) based on the spectral theorem is probably the most elegant approach to the moment problem for Archimedean quadratic modules. Next we derive Theorem 12.48(i) from Theorem 12.48(ii).
_Proof of Theorem 12.48(i):_
We argue in the same manner as in the second proof of Theorem 12.28 in Section 12.3. Assume to the contrary that \(h\notin Q(\mathsf{f})\). Since \(Q(\mathsf{f})\) is Archimedean, Proposition 12.14 and Theorem 12.48(ii) apply to \(Q(\mathsf{f})\). By these results, there is a \(Q(\mathsf{f})\)-positive linear functional \(L\) on \(\,\mathbb{R}_{d}[\underline{x}]\) satisfying \(L(1)=1\) and \(L(h)\leq 0\), and this functional is a \(\mathcal{K}(\mathsf{f})\)-moment functional. Then there is a measure \(\mu\in M_{+}(\mathbb{R}^{d})\) supported on \(\mathcal{K}(\mathsf{f})\) such that \(L(p)=\int p\,d\mu\) for \(p\in\mathbb{R}_{d}[\underline{x}]\). (Note that \(\mathcal{K}(\mathsf{f})\) is compact by Corollary 12.12.) Again \(h(x)>0\) on \(\mathcal{K}(\mathsf{f})\), \(L(1)=1\), and \(L(h)\leq 0\) lead to a contradiction.
### 12.7. The moment problem for semi-algebraic sets contained in compact polyhedra
Let \(k\in\mathbb{N}\). Suppose that \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) is a set of linear polynomials of \(\mathbb{R}_{d}[\underline{x}]\). By a linear polynomial we mean a polynomial of degree at most one. The semi-algebraic set \(\mathcal{K}(\mathsf{f})\) defined by the linear polynomials \(f_{1},\ldots,f_{k}\) is called a _polyhedron_.
Recall that \(S(\mathsf{f})\) is the semiring of \(\mathbb{R}_{d}[\underline{x}]\) generated by \(f_{1},\ldots,f_{k}\), that is, \(S(\mathsf{f})\) consists of all finite sums of terms \(\alpha\,f_{1}^{n_{1}}\cdots f_{k}^{n_{k}}\), where \(\alpha\geq 0\) and \(n_{1},\ldots,n_{k}\in\mathbb{N}_{0}\).
Further, let \(\mathsf{g}=\{g_{0}=1,g_{1},\ldots,g_{r}\}\), where \(r\in\mathbb{N}_{0}\), be a finite subset of \(\mathbb{R}_{d}[\underline{x}]\). Recall that \(C(\mathsf{f},\mathsf{g}):=g_{0}\,S(\mathsf{f})+g_{1}S(\mathsf{f})+\cdots+g_{r }S(\mathsf{f})\) denotes the \(S(\mathsf{f})\)-module considered in Example 12.49, see (12.33).
The following lemma goes back to H. Minkowski. In the optimization literature it is called _Farkas' lemma_. We will use it in the proof of Theorem 12.59 below.
**Lemma 12.58**.: _Let \(h,f_{1},\ldots,f_{k}\) be linear polynomials of \(\mathbb{R}_{d}[\underline{x}]\) such that the set \(\mathcal{K}(\mathsf{f})\) is not empty. If \(h(x)\geq 0\) on \(\mathcal{K}(\mathsf{f})\), there exist numbers \(\lambda_{0}\geq 0,\ldots,\lambda_{k}\geq 0\) such that \(h=\lambda_{0}+\lambda_{1}f_{1}+\cdots+\lambda_{k}f_{k}\)._
Proof.: Let \(E\) be the vector space spanned by the polynomials \(1,x_{1},\ldots,x_{d}\) and \(C\) the cone in \(E\) generated by \(1,f_{1},\ldots,f_{k}\). It is easily shown that \(C\) is closed in \(E\).
We have to prove that \(h\in C\). Assume to the contrary that \(h\notin C\). Then, by the separation of convex sets (Theorem A.26(ii)), there exists a \(C\)-positive linear functional \(L\) on \(E\) such that \(L(h)<0\). In particular, \(L(1)\geq 0\), because \(1\in C\).
Without loss of generality we can assume that \(L(1)>0\). Indeed, if \(L(1)=0\), we take a point \(x_{0}\) of the non-empty (!) set \(\mathcal{K}(\mathsf{f})\) and replace \(L\) by \(L^{\prime}=L+\varepsilon l_{x_{0}}\), where \(l_{x_{0}}\) denotes the point evaluation at \(x_{0}\) on \(E\). Then \(L^{\prime}\) is \(C\)-positive as well, \(L^{\prime}(1)=\varepsilon>0\), and \(L^{\prime}(h)<0\) for small \(\varepsilon>0\).
Define a point \(\,x:=L(1)^{-1}(L(x_{1}),\ldots,L(x_{d}))\in\mathbb{R}^{d}\). Then \(L(1)^{-1}L\) is the evaluation \(l_{x}\) at the point \(x\) for the polynomials \(x_{1},\ldots,x_{d}\) and for \(1\), hence on the whole vector space \(E\). Therefore, \(f_{j}(x)=l_{x}(f_{j})=L(1)^{-1}L(f_{j})\geq 0\) for all \(j\), so that \(x\in\mathcal{K}(\mathsf{f})\), and \(h(x)=l_{x}(h)=L(1)^{-1}L(h)<0\). This contradicts the assumption.
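Lemma 12.58 is constructive: the coefficients \(\lambda_{j}\) can be found by linear programming. As a small illustration (the polyhedron, the polynomial \(h\), and the use of scipy are ad hoc choices, not part of the text), take \(d=2\), \(f_{1}=x_{1}\), \(f_{2}=x_{2}\), \(f_{3}=1-x_{1}-x_{2}\) and \(h=2-x_{1}\); matching coefficients in \(h=\lambda_{0}+\lambda_{1}f_{1}+\lambda_{2}f_{2}+\lambda_{3}f_{3}\) gives a feasibility problem with the constraint \(\lambda_{j}\geq 0\).

```python
import numpy as np
from scipy.optimize import linprog

# Coefficient vectors (constant, x1, x2) of 1, f1 = x1, f2 = x2, f3 = 1 - x1 - x2.
generators = np.array([[1.0,  0.0,  0.0],    # 1
                       [0.0,  1.0,  0.0],    # f1
                       [0.0,  0.0,  1.0],    # f2
                       [1.0, -1.0, -1.0]])   # f3
h = np.array([2.0, -1.0, 0.0])               # h = 2 - x1, nonnegative on the simplex K(f)

# Feasibility LP: find lambda >= 0 with generators^T lambda = h (objective identically 0).
res = linprog(c=np.zeros(4), A_eq=generators.T, b_eq=h, bounds=[(0, None)] * 4)
print("feasible:", res.success, " lambda =", np.round(res.x, 6))
# One admissible certificate is lambda = (1, 0, 1, 1), i.e. 2 - x1 = 1 + x2 + (1 - x1 - x2).
```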
**Theorem 12.59**.: _Let \(k\in\mathbb{N}\), \(r\in\mathbb{N}_{0}\). Let \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) and \(\mathsf{g}=\{g_{0}=1,g_{1},\ldots,g_{r}\}\) be subsets of \(\mathbb{R}_{d}[\underline{x}]\) such that the polynomials \(f_{1},\ldots,f_{k}\) are linear. Suppose that the polyhedron \(\,\mathcal{K}(\,\mathsf{f}\,)\) is compact and nonempty._
1. _If_ \(h\in\mathbb{R}_{d}[\underline{x}]\) _satisfies_ \(h(x)>0\) _for all_ \(x\in\mathcal{K}(\mathsf{f})\cap\mathcal{K}(\mathsf{g})\)_, then_ \(h\in C(\mathsf{f},\mathsf{g})\)_, that is,_ \(h\) _is a finite sum of polynomials_ \[\alpha g_{j}\ f_{1}^{n_{1}}\cdots f_{k}^{n_{k}},\ \text{where}\ \alpha\geq 0,\ j=0,\ldots,r;\ n_{1},\ldots,n_{k}\in\mathbb{N}_{0}.\] (12.44)
2. _A linear functional_ \(L\) _on_ \(\mathbb{R}_{d}[\underline{x}]\) _is a_ \(\mathcal{K}(\mathsf{f})\cap\mathcal{K}(\mathsf{g})\)_-moment functional if and only if_ \[L(g_{j}\,f_{1}^{n_{1}}\cdots f_{k}^{n_{k}})\geq 0\quad\text{ for all }j=0,\ldots,r;n_{1},\ldots,n_{k}\in\mathbb{N}_{0}.\] (12.45)
Proof.: First we show that the semiring \(\,S(\mathsf{f})\) is Archimedean. Let \(j\in\{1,\ldots,d\}\). Since the set \(\mathcal{K}(\,\mathsf{f}\,)\) is compact, there exists a \(\lambda>0\) such that \(\lambda\pm x_{j}>0\) on \(\mathcal{K}(\,\mathsf{f}\,)\). Hence, since \(\mathcal{K}(\,\mathsf{f}\,)\) is nonempty, Lemma 12.58 implies that \((\lambda\pm x_{j})\in S(\mathsf{f})\). Hence \(S(\mathsf{f})\) is Archimedean by Lemma 12.9(ii).
The only if part in (ii) is obvious. Since \(S(\mathsf{f})\) is Archimedean, Theorem 12.50 applies to the \(S(\mathsf{f})\)-module \(C(\mathsf{f},\mathsf{g})\) and gives the other assertions. Note that the requirements (12.45) suffice, since every element of \(C(\mathsf{f},\mathsf{g})\) is a finite sum of terms (12.44).
We state the special case \(r=0\) of a polyhedron \(\mathcal{K}(\mathsf{f})\) separately as a corollary. Assertion (i) is called _Handelman's theorem_.
**Corollary 12.60**.: _Let \(k\in\mathbb{N}\). Suppose that \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) is a set of linear polynomials of \(\mathbb{R}_{d}[\underline{x}]\) such that the polyhedron \(\,\mathcal{K}(\mathsf{f}\,)\,\) is compact and nonempty._
1. _If_ \(h\in\mathbb{R}_{d}[\underline{x}]\) _satisfies_ \(h(x)>0\) _for all_ \(x\in\mathcal{K}(\mathsf{f})\)_, then_ \(h\in S(\mathsf{f})\)_._
2. _A linear functional_ \(L\) _on_ \(\mathbb{R}_{d}[\underline{x}]\) _is a_ \(\mathcal{K}(\mathsf{f})\)_-moment functional if and only if_ \[L(f_{1}^{n_{1}}\cdots f_{k}^{n_{k}})\geq 0\quad\text{for all }n_{1},\ldots,n_{k}\in \mathbb{N}_{0}.\] (12.46)
Proof.: Set \(r=0,g_{0}=1\) in Theorem 12.59 and note that \(\mathcal{K}(C(\mathsf{f},\mathsf{g}))=\mathcal{K}(\mathsf{f})\).
### 12.8. Examples and applications
Throughout this section, \(\mathsf{f}=\{f_{1},\ldots,f_{k}\}\) is a finite subset of \(\mathbb{R}_{d}[\underline{x}]\) and \(L\) denotes a linear functional on \(\mathbb{R}_{d}[\underline{x}]\).
If \(L\) is a \(\mathcal{K}(\mathsf{f})\)-moment functional, it is obviously \(T(\mathsf{f})\)-positive, \(Q(\mathsf{f})\)-positive, and \(S(\mathsf{f})\)-positive. Theorems 12.29, 12.48(ii), and 12.59(ii) deal with the converse implication and are the main solvability criteria for the moment problem in this chapter.
First we discuss Theorems 12.29 and 12.48(ii). Theorem 12.29 applies to _each_ compact semi-algebraic set \(\mathcal{K}(\mathsf{f})\) and implies that \(L\) is a \(\mathcal{K}(\mathsf{f})\)-moment functional if and only if it is \(T(\mathsf{f})\)-positive. For Theorem 12.48(ii) the compactness of the set \(\mathcal{K}(\mathsf{f})\) is not sufficient; it requires that the quadratic module \(Q(\mathsf{f})\) is Archimedean. In this case, \(L\) is a \(\mathcal{K}(\mathsf{f})\)-moment functional if and only if it is \(Q(\mathsf{f})\)-positive.
_Example 12.61_.: Let us begin with a single polynomial \(f\in\mathbb{R}_{d}[\underline{x}]\) for which the set \(\mathcal{K}(f)=\{x\in\mathbb{R}^{d}:f(x)\geq 0\}\) is compact. (A simple example is the \(d\)-ellipsoid given by \(f(x)=1-a_{1}x_{1}^{2}-\cdots-a_{d}x_{d}^{2}\), where \(a_{1}>0,\ldots,a_{d}>0\).) Clearly, \(T(f)=Q(f)\). Then, \(L\) _is a \(\mathcal{K}(f)\)-moment functional if and only if it is \(T(f)\)-positive, or equivalently, if \(L\) and \(L_{f}\) are positive functionals on \(\mathbb{R}_{d}[\underline{x}]\)._
Now we add further polynomials \(f_{2},\ldots,f_{k}\) and set \(\mathsf{f}=\{f,f_{2},\ldots,f_{k}\}\). (For instance, one may take coordinate functions as \(f_{j}=x_{l}\).) Since \(T(f)\) is Archimedean (by Proposition 12.26, because \(\mathcal{K}(f)\) is compact), so is the quadratic module \(Q(\mathsf{f})\). Therefore, \(L\) _is a \(\mathcal{K}(\mathsf{f})\)-moment functional if and only if it is \(Q(\mathsf{f})\)-positive, or equivalently, if \(L,L_{f},L_{f_{2}},\ldots,L_{f_{k}}\) are positive functionals on \(\mathbb{R}_{d}[\underline{x}]\)._
_Example 12.62_.: _(\(d\)-dimensional compact interval \([a_{1},b_{1}]\times\cdots\times[a_{d},b_{d}]\))_
Let \(a_{j},b_{j}\in\mathbb{R}\), \(a_{j}<b_{j}\), and set \(f_{2j-1}:=b_{j}-x_{j}\), \(f_{2j}:=x_{j}-a_{j}\), for \(j=1,\ldots,d\). Then the semi-algebraic set \(\mathcal{K}(\mathsf{f})\) for \(\mathsf{f}:=\{f_{1},\ldots,f_{2d}\}\) is the \(d\)-dimensional interval \([a_{1},b_{1}]\times\cdots\times[a_{d},b_{d}]\).
Put \(\lambda_{j}=|a_{j}|+|b_{j}|.\) Then \(\lambda_{j}-x_{j}=f_{2j-1}+\lambda_{j}-b_{j}\) and \(\lambda_{j}+x_{j}=f_{2j}+\lambda_{j}+a_{j}\) are in \(Q(\mathsf{f})\), so each \(x_{j}\) is a bounded element with respect to the quadratic module \(Q(\mathsf{f})\). Hence \(Q(\mathsf{f})\) is Archimedean by Lemma 12.9(ii).
Thus, \(L\) _is a \(\mathcal{K}(\mathsf{f})\)-moment functional if and only if it is \(Q(\mathsf{f})\)-positive, or equivalently, if \(\,L_{f_{1}},L_{f_{2}},\ldots,L_{f_{2d}}\) are positive functionals, that is,_
\[L((b_{j}{-}x_{j})p^{2})\geq 0\text{ and }L((x_{j}{-}a_{j})p^{2})\geq 0 \text{ for }j=1,\ldots,d,\,p\in\mathbb{R}_{d}[\underline{x}]. \tag{12.47}\]
Clearly, (12.47) implies that \(\,L\,\) itself is positive, since \(L=(b_{1}{-}a_{1})^{-1}(L_{f_{1}}{+}L_{f_{2}})\).
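In practice, condition (12.47) is checked through the localized moment matrices \(\big(L(f\,x^{\alpha+\beta})\big)_{\alpha,\beta}\), whose positive semidefiniteness is equivalent to \(L(fp^{2})\geq 0\) for polynomials \(p\) supported on the chosen monomials. A small numerical sketch (illustration only; the measure, the truncation degree, and the helper names are ad hoc choices) for the uniform measure on \([0,1]^{2}\), that is, \(a_{j}=0\) and \(b_{j}=1\):

```python
import numpy as np
from itertools import product

# Moments L(x1^a x2^b) of the uniform (Lebesgue) measure on [0,1]^2.
def moment(a, b):
    return 1.0 / ((a + 1) * (b + 1))

deg = 3
monos = [m for m in product(range(deg + 1), repeat=2) if sum(m) <= deg]

def localized_matrix(j, sign, const):
    """Matrix with entries L((const + sign*x_j) * x^(alpha+beta)) over the chosen monomials."""
    M = np.empty((len(monos), len(monos)))
    for r, (a, b) in enumerate(monos):
        for cidx, (c, d) in enumerate(monos):
            e = [a + c, b + d]
            val = const * moment(*e)
            e[j] += 1
            M[r, cidx] = val + sign * moment(*e)
    return M

# (12.47) with a_j = 0, b_j = 1:  L((1 - x_j) p^2) >= 0  and  L(x_j p^2) >= 0.
for j in range(2):
    for label, sign, const in [("1 - x", -1, 1.0), ("x", 1, 0.0)]:
        eigs = np.linalg.eigvalsh(localized_matrix(j, sign, const))
        print(f"f = {label}_{j + 1}: smallest eigenvalue = {eigs.min():.3e}")
```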
_Example 12.63_.: (\(1\)-dimensional interval \([a,b]\))
Let \(a<b\), \(a,b\in\mathbb{R}\) and let \(l,n\in\mathbb{N}\) be odd. We set \(f(x):=(b-x)^{l}(x-a)^{n}\). Then \(\mathcal{K}(f)=[a,b]\) and \(\,T(f)=\sum\mathbb{R}[x]^{2}+f\sum\mathbb{R}[x]^{2}\). Hence, by Theorem 12.29, _a linear functional \(L\) on \(\mathbb{R}[x]\) is an \([a,b]\)-moment functional if and only if \(L\) and \(L_{f}\) are positive functionals on \(\mathbb{R}[x]\)_.
This result extends Hausdorff's Theorem 3.13. It should be noted that this solvability criterion holds for arbitrary (!) odd numbers \(l\) and \(n\), while the equality \(\mathrm{Pos}([a,b])=T(f)\) is only true if \(l=n=1\), see Exercise 3.4 b. in Chapter 3.
(12.48) vanishes after this substitution. Hence, because \(f\) is homogeneous, (12.48) yields
\[\big{(}\sum\nolimits_{i}x_{i}\big{)}^{-m}f(x)=g\big{(}x_{1}\big{(}\sum \nolimits_{i}x_{i}\big{)}^{-1},\ldots,x_{d}\big{(}\sum\nolimits_{i}x_{i}\big{)} ^{-1}\big{)}, \tag{12.49}\]
where \(m=\deg(f)\). Since \(g\in S_{0}\), \(g(x)\) has only nonnegative coefficients. Therefore, after multiplying (12.49) by \((\sum\nolimits_{i}x_{i})^{n+m}\) with \(n\) sufficiently large to clear the denominators, we obtain the assertion.
Finally, we mention two examples of polyhedra based on Corollary 12.60(ii).
_Example 12.67_.: \([-1,1]^{d}\)
Let \(k=2d\) and \(f_{1}=1-x_{1},f_{2}=1+x_{1},\ldots,f_{2d-1}=1-x_{d},f_{2d}=1+x_{d}\). Then \(\mathcal{K}(\mathsf{f})=[-1,1]^{d}\). Therefore, by Corollary 12.60(ii), _a linear functional \(L\) on \(\mathbb{R}_{d}[\underline{x}]\) is a \([-1,1]^{d}\)-moment functional if and only if
\[L((1-x_{1})^{n_{1}}(1+x_{1})^{n_{2}}\cdots(1-x_{d})^{n_{2d-1}}(1+x_{d})^{n_{2 d}})\geq 0\quad\text{for $n_{1},\ldots,n_{2d}\in\mathbb{N}_{0}$}.\ \ \circ\]
_Example 12.68_.: \((\)_Multidimensional Hausdorff moment problem on \([0,1]^{d}\)_\()\)
Set \(f_{1}=x_{1},f_{2}=1-x_{1},\ldots,f_{2d-1}=x_{d},f_{2d}=1-x_{d},k=2d\). Then \(\mathcal{K}(\mathfrak{f})=[0,1]^{d}\). Let \(s=(s_{\mathfrak{n}})_{\mathfrak{n}\in\mathbb{N}_{0}^{d}}\) be a multisequence. We define the shift \(E_{j}\) of the \(j\)-th index by
\[(E_{j}s)_{\mathfrak{m}}=s_{(m_{1},\ldots,m_{j-1},m_{j}+1,m_{j+1},\ldots,m_{d}) },\ \mathfrak{m}\in\mathbb{N}_{0}^{d}.\]
**Proposition 12.69**.: _The following five statements are equivalent:_
1. \(s\) _is a Hausdorff moment sequence on_ \([0,1]^{d}\)_._
2. \(L_{s}\) _is a_ \([0,1]^{d}\)_-moment functional on_ \(\mathbb{R}_{d}[\underline{x}]\)_._
3. \(L_{s}(x_{1}^{m_{1}}(1-x_{1})^{n_{1}}\cdots x_{d}^{m_{d}}(1-x_{d})^{n_{d}})\geq 0\) _for all_ \(\mathfrak{n},\mathfrak{m}\in\mathbb{N}_{0}^{d}\)_._
4. \(((I-E_{1})^{n_{1}}\ldots(I-E_{d})^{n_{d}}s)_{\mathfrak{m}}\geq 0\) _for all_ \(\mathfrak{n},\mathfrak{m}\in\mathbb{N}_{0}^{d}\)_._
5. \[\sum_{\mathfrak{j}\in\mathbb{N}_{0}^{d},\mathfrak{j}\leq\mathfrak{n}}\ (-1)^{| \mathfrak{j}|}\binom{n_{1}}{j_{1}}\cdots\binom{n_{d}}{j_{d}}s_{\mathfrak{m}+ \mathfrak{j}}\geq 0\]
_for all_ \(\mathfrak{n},\mathfrak{m}\in\mathbb{N}_{0}^{d}\)_. Here_ \(|\mathfrak{j}|:=j_{1}+\cdots+j_{d}\) _and_ \(\mathfrak{j}\leq\mathfrak{n}\) _means that_ \(j_{i}\leq n_{i}\) _for_ \(i=1,\ldots,d\)_._
Proof.: (i)\(\leftrightarrow\)(ii) holds by definition. Corollary 12.60(ii) yields (ii)\(\leftrightarrow\)(iii). Let \(\mathfrak{n},\mathfrak{m}\in\mathbb{N}_{0}^{d}\). We repeat the computation from the proof of Theorem 3.15 and derive
\[L_{s}(x_{1}^{m_{1}}(1-x_{1})^{n_{1}} \cdots x_{d}^{m_{d}}(1-x_{d})^{n_{d}})=((I-E_{1})^{n_{1}}\ldots(I -E_{d})^{n_{d}}s)_{\mathfrak{m}}\] \[=\sum_{\mathfrak{j}\in\mathbb{N}_{0}^{d},\mathfrak{j}\leq\mathfrak{ n}}\ (-1)^{|\mathfrak{j}|}\binom{n_{1}}{j_{1}}\cdots\binom{n_{d}}{j_{d}}s_{ \mathfrak{m}+\mathfrak{j}}.\]
This identity implies the equivalence of conditions (iii)-(v).
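Condition (v) is the one most directly amenable to computation. The following sketch (an illustration only; \(d=2\), the truncation range, and the helper names are ad hoc choices) verifies it for the moment sequence \(s_{\mathfrak{n}}=\prod_{i}(n_{i}+1)^{-1}\) of Lebesgue measure on \([0,1]^{2}\):

```python
from itertools import product
from math import comb

# Moment sequence of Lebesgue measure on [0,1]^2:  s_{(n1,n2)} = 1/((n1+1)(n2+1)).
def s(n1, n2):
    return 1.0 / ((n1 + 1) * (n2 + 1))

# Left-hand side of condition (v) of Proposition 12.69 for d = 2.
def iterated_difference(n, m):
    (n1, n2), (m1, m2) = n, m
    return sum((-1) ** (j1 + j2) * comb(n1, j1) * comb(n2, j2) * s(m1 + j1, m2 + j2)
               for j1 in range(n1 + 1) for j2 in range(n2 + 1))

values = [iterated_difference(n, m)
          for n in product(range(5), repeat=2) for m in product(range(5), repeat=2)]
print("smallest iterated difference:", min(values))   # nonnegative, as (v) predicts
```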
### 12.9. Exercises
1. Suppose that \(Q\) is a quadratic module of a commutative real algebra \(\mathsf{A}\). Show that \(Q\cap(-Q)\) is an ideal of \(\mathsf{A}\). This ideal is called the _support ideal_ of \(Q\).
2. Let \(K\) be a closed subset of \(\mathbb{R}^{d}\). Show that \(\operatorname{Pos}(K)\) is saturated.
3. Formulate solvability criteria in terms of localized functionals and in terms of \(d\)-sequences for the following sets. a. Unit ball of \(\mathbb{R}^{d}\). b. \(\{x\in\mathbb{R}^{d}:x_{1}^{2}+\cdots+x_{d}^{2}\leq r^{2},\ x_{1}\geq 0,\ldots,x_{d}\geq 0\}\). c. \(\{(x_{1},x_{2},x_{3},x_{4})\in\mathbb{R}^{4}:x_{1}^{2}+x_{2}^{2}\leq 1,x_{3}^{2}+x_{4}^{2}\leq 1\}\). d. \(\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}:x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\leq 1,x_{1}+x_{2}+x_{3}\leq 1\}\). e. \(\{x\in\mathbb{R}^{2d}:x_{1}^{2}+x_{2}^{2}=1,\ldots,x_{2d-1}^{2}+x_{2d}^{2}=1\}\).
4. Decide whether or not the following quadratic modules \(Q(\mathsf{f})\) are Archimedean. a. \(f_{1}=x_{1},f_{2}=x_{2},f_{3}=1-x_{1}x_{2},f_{4}=4-x_{1}x_{2}\). b. \(f_{1}=x_{1},f_{2}=x_{2},f_{3}=1-x_{1}-x_{2}\). c. \(f_{1}=x_{1},f_{2}=x_{2},f_{3}=1-x_{1}x_{2}\).
5. Let \(f_{1},\ldots,f_{k},g_{1},\ldots,g_{l}\in\mathbb{R}_{d}[\underline{x}]\). Set \(\mathsf{g}=(f_{1},\ldots,f_{k},g_{1},\ldots,g_{l})\), \(\mathsf{f}=(f_{1},\ldots,f_{k})\). Suppose that \(Q(\mathfrak{f})\) is Archimedean. Show that each \(Q(\mathsf{g})\)-positive linear functional \(L\) is a determinate \(\mathcal{K}(\mathsf{g})\)-moment functional.
6. Formulate solvability criteria for the moment problem of the following semi-algebraic sets \(\mathcal{K}(\mathsf{f})\). a. \(f_{1}=x_{1}^{2}+\cdots+x_{d}^{2},f_{2}=x_{1},\ldots,f_{k}=x_{k-1}\), where \(2\leq k\leq d+1\). b. \(f_{1}=x_{1},f_{2}=2-x_{1},f_{3}=x_{2},f_{4}=2-x_{2},f_{5}=x_{1}^{2}-x_{2}\), where \(d=2\). c. \(f_{1}=x_{1}^{2}+x_{2}^{2},f_{2}=ax_{1}+bx_{2},f_{3}=x_{2}\), where \(d=2,a,b\in\mathbb{R}\).
7. Let \(d=2\), \(f_{1}=1-x_{1},f_{2}=1+x_{1},f_{3}=1-x_{2},f_{4}=1+x_{2},f_{5}=1-x_{1}^{2}-x_{2 }^{2}\) and \(\mathsf{f}=(f_{1},f_{2},f_{3},f_{4},f_{5})\). Describe the set \(\mathcal{K}(\,\mathsf{f}\,)\) and use Theorem 12.59(ii) to characterize \(\mathcal{K}(\,\mathsf{f}\,)\)-moment functionals.
8. Find a \(d\)-dimensional version of Exercise 7, where \(d\geq 3\).
9. (_Tensor product of preorderings_) Let \(n,k\in\mathbb{N}\). Suppose that \(\mathsf{f}_{1}\) and \(\mathsf{f}_{2}\) are finite subsets of \(\mathbb{R}_{n}[\underline{x}]\equiv\mathbb{R}[x_{1},\ldots,x_{n}]\) and \(\mathbb{R}_{k}[\underline{x}^{\prime}]\equiv\mathbb{R}[x_{n+1},\ldots,x_{n+k}]\), respectively, such that the semi-algebraic sets \(\mathcal{K}(\mathsf{f}_{1})\) of \(\mathbb{R}^{n}\) and \(\mathcal{K}(\mathsf{f}_{2})\) of \(\mathbb{R}^{k}\) are compact. Define a subset \(T\) of \(\mathbb{R}[x_{1},\ldots,x_{n+k}]\) by \[T:=\Big{\{}p(x,x^{\prime})=\sum_{j=1}^{r}p_{j}(x)q_{j}(x^{\prime}):\ p_{1},\ldots,p_{r}\in T(\mathsf{f}_{1}),\,q_{1},\ldots,q_{r}\in T(\mathsf{f}_{2}),\,r\in\mathbb{N}\Big{\}}.\] a. Show that \(T\) is an Archimedean semiring of \(\mathbb{R}[x_{1},\ldots,x_{n+k}]\). b. Give an example of \(\mathsf{f}_{1}\) and \(\mathsf{f}_{2}\) for which \(T\) is not a preordering. c. Let \(p\in\mathbb{R}[x_{1},\ldots,x_{n+k}]\). Suppose \(p(x,x^{\prime})>0\) for all \(x\in\mathcal{K}(\mathsf{f}_{1})\), \(x^{\prime}\in\mathcal{K}(\mathsf{f}_{2})\). Prove that \(p\in T\). Hint: The preorderings \(T(\mathsf{f}_{1})\) and \(T(\mathsf{f}_{2})\) are Archimedean (Proposition 12.26). Hence \(f\otimes 1\) and \(1\otimes g\) satisfy the Archimedean condition for \(f\in T(\mathsf{f}_{1})\) and \(g\in T(\mathsf{f}_{2})\). The semiring \(T\) is generated by these elements, so \(T\) is Archimedean. For b.) try \(p=(x_{1}-x_{n+1})^{2}\). For c.), apply the Archimedean Positivstellensatz.
10. (_Supporting polynomials of compact convex sets of \(\mathbb{R}^{d}\)_) Let \(K\) be a non-empty compact convex subset of \(\mathbb{R}^{d}\). By a _supporting polynomial_ of \(K\) at some point \(t_{0}\in K\) we mean a polynomial \(h\in\mathbb{R}_{d}[\underline{x}]\) of degree one such that \(h(t_{0})=0\) and \(h(t)\geq 0\) for all \(t\in K\). (In this case, \(t_{0}\) is a boundary point of \(K\).) Suppose that \(H\) is a set of supporting polynomials at points of \(K\) such that \[K=\{t\in\mathbb{R}^{d}:h(t)\geq 0\ \text{ for all }h\in H\}.\]
a. Prove that the semiring \(S(H)\) of \(\mathbb{R}_{d}[\underline{x}]\) generated by \(H\) is Archimedean. b. Let \(f\in\mathbb{R}_{d}[\underline{x}]\) be such that \(f(t)>0\) for all \(t\in K\). Prove that \(f\in S(H)\).
11. Elaborate Exercise 10 for the unit disc \(K=\{(x,y)\in\mathbb{R}^{2}:x^{2}+y^{2}\leq 1\}\) and \(H:=\{h_{\theta}:=1+x\,\cos(\theta)+y\,\sin(\theta):\theta\in[0,2\pi)\}\) or for appropriate subsets of \(K\).
12. (_Reznick's theorem_ [Re2]) Let \(f\in\mathbb{R}_{d}[\underline{x}]\) be a homogeneous polynomial such that \(f(x)>0\) for \(x\in\mathbb{R}^{d}\), \(x\neq 0\). Prove that there exists an \(n\in\mathbb{N}\) such that \((x_{1}^{2}+\cdots+x_{d}^{2})^{n}f(x)\in\sum\mathbb{R}_{d}[\underline{x}]^{2}\). Hint: Mimic the proof of Proposition 12.66: Let \(T\) denote the preordering \(\sum\mathbb{R}_{d}[\underline{x}]^{2}+\mathcal{I}\), where \(\mathcal{I}\) is the ideal generated by the polynomial \(1-(x_{1}^{2}+\cdots+x_{d}^{2})\). Show that \(T\)-positive characters correspond to points of the unit sphere, substitute \(x_{j}(\sum_{i}x_{i}^{2})^{-1}\) for \(x_{j}\), apply Theorem 12.59(i) to \(T\), and clear denominators.
### Notes
The interplay between real algebraic geometry and the moment problem for compact semi-algebraic sets and the corresponding Theorems 12.28 and 12.29 were discovered by the author in [Sm6]. A small gap in the proof of [Sm6, Corollary 3] (observed by A. Prestel) was immediately repaired by the reasoning of the above proof of Proposition 12.26 (taken from [Sm8, Proposition 18]).
The fact that the preordering is Archimedean in the compact case was first noted by T. Wormann [Wo]. An algorithmic proof of Theorem 12.28 was developed by M. Schweighofer [Sw1], [Sw2].
The operator-theoretic proof of Theorem 12.50(ii) given above is long known among operator theorists; it was used in [Sm6]. The operator-theoretic approach to the multidimensional moment theory was investigated in detail by F. Vasilescu [Vs1], [Vs2].
The Archimedean Positivstellensatz (Theorem 12.43) has a long history. It was proved in various versions by M.H. Stone [Stn], R.V. Kadison [Kd], J.-L. Krivine [Kv1], E. Becker and N. Schwartz [BS], M. Putinar [Pu2], and T. Jacobi [Jc]. The general version for quadratic modules is due to Jacobi [Jc], while the version for semirings was proved much earlier by Krivine [Kr1]. A more general version and a detailed discussion can be found in [Ms1, Section 5.4]. The unified approach to Theorem 12.43 in Section 12.4 using the dagger cones is based on results obtained in the paper [SmS23]. Theorem 12.51 and Example 12.52 are also taken from [SmS23].
M. Putinar [Pu2] has proved that a finitely generated quadratic module \(Q\) in \(\mathbb{R}_{d}[\underline{x}]\) is Archimedean if (and only if) there exists a polynomial \(f\in Q\) such that the set \(\{x\in\mathbb{R}^{d}:f(x)\geq 0\}\) is compact.
Corollary 12.33 and its non-compact version in Exercise 14.11 below are from [Ls3]. The moment problem with bounded densities is usually called the _Markov moment problem_ or \(L\)-moment problem. In dimension one it goes back to A.A. Markov [Mv1], [Mv2], see [AK], [Kr2]. An interesting more recent work is [DF]. The multidimensional case was studied in [Pu1], [Pu3], [Pu5], [Ls3], [Ls4].
For compact polyhedra with nonempty interiors Corollary 12.60(i) was proved by D. Handelman [Hn]. A special case was treated earlier by J.-L. Krivine [Kv2]. A related version can be found in [Cs, Theorem 4]. The general Theorem 12.59 is taken from [SmS23]; it is a slight generalization of [PD, Theorem 5.4.6].
Polya's theorem was proved in [P]. Polya's original proof is elementary; the elegant proof given in the text is from [Wo]. Proposition 12.69 is a classical result obtained in [HS]. It should be noted that Reznick's theorem [Re2] can be derived as an immediate consequence of Theorem 12.28, see [Sr3, 2.1.8].
Reconstructing the shape of subsets of \(\mathbb{R}^{d}\) from their moments with respect to the Lebesgue measure is another interesting topic, see e.g. [GHPP] and [GLPR].
|
2309.16423 | A deep dive into the Type II Globular Cluster NGC 1851 | About one-fifth of the Galactic globular clusters (GCs), dubbed Type II GCs,
host distinct stellar populations with different heavy elements abundances. NGC
1851 is one of the most studied Type II GCs, surrounded by several
controversies regarding the spatial distribution of its populations and the
presence of star-to-star [Fe/H], C+N+O, and age differences. This paper
provides a detailed characterization of its stellar populations through Hubble
Space Telescope (HST), ground-based, and Gaia photometry. We identified two
distinct populations with different abundances of s-process elements along the
red-giant branch (RGB) and the sub-giant branch (SGB) and detected two
sub-populations among both s-poor (canonical) and s-rich (anomalous) stars. To
constrain the chemical composition of these stellar populations, we compared
observed and simulated colors of stars with different abundances of He, C, N,
and O. It results that the anomalous population has a higher CNO overall
abundance compared to the canonical population and that both host stars with
different light-element abundances. No significant differences in radial
segregation between canonical and anomalous stars are detected, while we find
that among their sub-populations, the two most chemical extremes are more
centrally concentrated. Anomalous and canonical stars show different 2D spatial
distributions outside ~3 arcmin, with the latter developing an elliptical shape
and a stellar overdensity in the northeast direction. We confirm the presence
of a stellar halo up to ~80 arcmin with Gaia photometry, tagging 14 and five of
its stars as canonical and anomalous, respectively, finding a lack of the
latter in the south/southeast field. | E. Dondoglio, A. P. Milone, A. F. Marino, F. D'Antona, G. Cordoni, M. V. Legnardi, E. P. Lagioia, S. Jang, T. Ziliotto, M. Carlos, F. Dell'Agli, A. Karakas, A. Mohandasan, Z. Osborn, M. Tailo, P. Ventura | 2023-09-28T13:19:28Z | http://arxiv.org/abs/2309.16423v1 | # A deep dive into the Type II Globular Cluster NGC 1851
###### Abstract
About one-fifth of the Galactic globular clusters (GCs), dubbed Type II GCs, host distinct stellar populations with different heavy elements abundances. NGC 1851 is one of the most studied Type II GCs, surrounded by several controversies regarding the spatial distribution of its populations and the presence of star-to-star [Fe/H], C+N+O, and age differences. This paper provides a detailed characterization of its stellar populations through _Hubble Space Telescope (HST)_, ground-based, and Gaia photometry. We identified two distinct populations with different abundances of s-process elements along the red-giant branch (RGB) and the sub-giant branch (SGB) and detected two sub-populations among both s-poor (canonical) and s-rich (anomalous) stars. To constrain the chemical composition of these stellar populations, we compared observed and simulated colors of stars with different abundances of He, C, N, and O. It results that the anomalous population has a higher CNO overall abundance compared to the canonical population and that both host stars with different light-element abundances. No significant differences in radial segregation between canonical and anomalous stars are detected, while we find that among their sub-populations, the two most chemical extremes are more centrally concentrated. Anomalous and canonical stars show different 2D spatial distributions outside \(\sim\)3 arcmin, with the latter developing an elliptical shape and a stellar overdensity in the northeast direction. We confirm the presence of a stellar halo up to \(\sim\)80 arcmin with Gaia photometry, tagging 14 and five of its stars as canonical and anomalous, respectively, finding a lack of the latter in the south/southeast field.
keywords: techniques: photometry - stars: Population II - stars: abundances
## 1 Introduction
Globular Clusters (GCs) host distinct groups of stars with different chemical compositions, as well established by the past few decades of research in stellar astrophysics. The multiple populations phenomenon, i.e., the evidence of star-to-star abundance variations of light elements (e.g., He, C, N, O, Al, Na), is widespread among Galactic GCs and has been detected among star clusters in the nearby galaxies, such as the Magellanic Clouds, Fornax, and M 31. Despite the intense effort put in throughout the years, its origin is still not clear (see Bastian & Lardo, 2018; Gratton et al., 2019; Milone & Marino, 2022, for reviews).
An additional challenge in the field is the presence of a sub-set (\(\sim\)18%) of GCs which, beyond the typical light-element variations, also show the following three observational features: (i) a split sub-giant branch (SGB) in color-magnitude diagrams (CMDs) constructed with optical filters, (ii) a secondary red-giant branch (RGB) sequence, which is associated with the faint SGB, and (iii) abundance variations in C+N+O, metallicity, and/or \(s\)-process elements (see Milone et al., 2017). These were defined as Type II GCs by Milone and collaborators, in opposition to the typical Milky Way Type I clusters. Similarly to Type I GCs, Type II GCs host stellar populations with different abundances in light elements, but they also exhibit additional sequences of stars in the CMD that produce the three aforementioned features. For this reason, we will hereafter refer to the stars that yield the multiple-population patterns observed in all GCs as 'canonical', and to the stars present in Type II GCs only as 'anomalous'.
Several questions arise at this point. What is the origin of anomalous stars? Why do they appear in some GCs and not in others? Did
these GCs originate through different mechanisms with respect to the typical Type I clusters? Which is the sequence of events in the star formation history of these objects that led to such a complex chemical pattern?
In this context, NGC 1851 is one of the most intriguing and controversial Type II GCs, with numerous studies in both photometry and spectroscopy aimed at shedding light on the mechanisms that produced the cluster we nowadays observe. Photometry was instrumental in the first discovery of anomalous features in NGC 1851, with the detection of a split SGB in optical filters (Milone et al., 2008, 2009; Zoccali et al., 2009). The faint and bright SGBs evolve into red and blue RGBs, respectively, clearly visible in CMDs constructed with the \(U-I\) color (Han et al., 2009; Milone et al., 2017; Jang et al., 2022).
Spectroscopy shows that the faint SGB and the red RGB are populated by stars with enhanced abundances of s-process elements, with respect to the bright SGB and the blue RGB (e.g. Yong et al., 2008; Villanova et al., 2010; Carretta et al., 2011; Gratton et al., 2012; Marino et al., 2014; McKenzie et al., 2022; Tautvaisiene et al., 2022) and that both s-rich and s-poor stars exhibit internal variations in some light elements, including C, N, O, and Na (e.g. Yong et al., 2009, 2015; Lardo et al., 2012; Carretta et al., 2010; Campbell et al., 2012; Carretta et al., 2014; Milone et al., 2017; Simpson et al., 2017; Jang et al., 2022).
The physical reasons that are responsible for the split SGB are widely debated. Works based on the comparison between photometry and stellar models reveal that the faint SGB is composed of stars that are either older by \(\sim\)1 Gyr than the bright SGB or have nearly the same age as bright SGB stars but are enhanced in their overall C+N+O content by a factor of \(\sim\)3 (Cassisi et al., 2008; Ventura et al., 2009; D'Antona et al., 2009). However, spectroscopic investigations provided controversial results. Yong et al. (2009, 2015) and Simpson et al. (2017) detected large differences in the overall C+N+O content of s-rich and s-poor stars, whereas other authors concluded that all stars in NGC 1851 share the same C+N+O content (e.g. Villanova et al., 2010; Tautvaisiene et al., 2022). The correct chemical characterization of multiple populations in NGC 1851 and their relative ages is further challenged by the possibility that \(s\)-rich stars are enhanced in iron by \(\sim\)0.05-0.10 dex with respect to \(s\)-poor stars (see Gratton et al., 2012; Lardo et al., 2012; Tautvaisiene et al., 2022, for discussion on the presence or lack of metallicity difference between \(s\)-rich and \(s\)-poor stars in NGC 1851).
Controversial conclusions come also from the radial distribution of stellar populations in NGC 1851. As an example, Zoccali et al. (2009) concluded that the faint-SGB stars are centrally concentrated and tend to disappear moving away from the GC center. Conversely, Milone et al. (2009) find a nearly constant ratio between the number of stars in the two SGBs (see also Cummings et al., 2014).
Overall, several unsolved issues affect our current understanding of the processes originating the complex observational features of NGC 1851, above all the presence of an anomalous stellar population. Therefore, an accurate definition of the chemical, spatial, and kinematic properties of these stars is mandatory to explain how anomalous stars were born.
In this work, we analyze photometry from different space- and ground-based telescopes to provide new tools for the photometrical tagging of the different populations that inhabit NGC 1851, with a particular focus on their spatial behavior. Section 2 describes the dataset used in our work. Section 3 illustrates the method adopted to disentangle the multiple populations of NGC 1851 among RGB stars, while their chemical composition is inferred in Section 4. Section 5 is dedicated to multiple populations along the SGB and the MS. Section 6 presents the calculation of the fractions of the multiple stellar populations spotted in this cluster and explores their radial distribution, while the 2D spatial distribution of canonical and anomalous stars is investigated in Section 7. Finally, Section 8 provides a summary and conclusions.
## 2 Dataset
In this work, we exploit three photometric datasets. First, we build a catalog of stars in the innermost \(\sim\)2.7\(\times\)2.7 arcmin\({}^{2}\) by exploiting _Hubble Space Telescope (HST)_ observations taken with the Ultraviolet and Visual Channel of the Wide Field Camera 3 (WFC3/UVIS) filters F275W, F336W, and F438W (GO-13297), and the Wide Field Channel of the Advanced Camera for Surveys (ACS/WFC) filters F606W and F814W (GO-10775). We perform effective point-spread function (PSF) photometry (see Anderson & King, 2000) to obtain accurate stellar positions and magnitudes through the KS2 software, developed by Jay Anderson (see Sabbi et al., 2016; Bellini et al., 2017; Milone et al., 2023, for details), which is an extended version of the program kitchen_sync (Anderson et al., 2008). To the obtained catalog, we apply the quality diagnostics described in Nardiello et al. (2018, see their Sections 2.5 and 3 for details) to select stellar sources with high-quality astrometry and photometry. No correction for differential reddening has been performed since this cluster is characterized by very small reddening variations (e.g., Jang et al., 2022; Legnardi et al., 2023), which produce negligible effects on the photometric quality of the catalog. We instead correct this catalog for spatial variations of the photometric zero-point, following the recipe presented in Milone et al. (2012, see their Section 3.2).
To investigate the cluster regions outside the _HST_ Field of view (FoV), we use the ground-based catalog by Stetson et al. (2019). This catalog includes stellar magnitudes in the \(U\), \(B\), \(V\), \(R\), and \(I\) bands and reaches distances from the cluster's center up to \(\sim\)20 arcmin. It was built by performing PSF photometry on images from multiple ground-based facilities taken at different epochs. To this catalog, we apply a cleaning procedure to isolate the cluster's stars through the diagnostics defined by Stetson and collaborators (see their Section 4.1), and a correction for zero-points variations by extending the procedure used on the _HST_ catalog to this dataset (see also Jang et al., 2022).
Figure 1 presents examples of the resulting _HST_ and ground-based CMDs. Specifically, we show the \(m_{\rm F814W}\) vs. \(m_{\rm F336W}\)-\(m_{\rm F814W}\) CMD (left panel) from _HST_ photometry, and the \(I\) vs. \(U\)-\(I\) (right panel) CMD from ground-based photometry. In both diagrams, two sequences are clearly distinguishable from the SGB up to the RGB tip.
Finally, we exploit Gaia Data Release 3 (DR3, Gaia Collaboration et al., 2021) observations to explore the stars in the halo of NGC 1851 (i.e., at distances much larger than the tidal radius), reaching a radial distance of about 80 arcmin from the center. This feature will be discussed in Section 7.
### Artificial star test
We perform artificial-star (AS) tests to estimate the photometric errors in the _HST_ dataset and to account for the effects of the large crowding in the innermost regions. To do that, we applied the procedure described in Anderson et al. (2008), which consists in adding ASs (i.e., sources with known position and magnitude) into the images and then applying to them the reduction procedure used on real stars.
Each test performed in this work is based on a catalog of 100,000 ASs, whose positions and magnitudes are defined following the crowding distribution and the CMD sequences described by the observed stars, respectively. To pass the test, stars must have position and magnitude differences smaller than 0.5 pixel and 0.75 mag, respectively, between this input catalog and the output produced after the data reduction. These stars are then used to estimate the photometric errors and the amount of contamination that a given stellar population introduces in the area of photometric diagrams belonging to another stellar population, and hence the uncertainties associated with the population ratios inferred in Section 6.
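A minimal sketch of this bookkeeping is given below; the catalogs are assumed to be matched row by row, and the field names are illustrative rather than the actual KS2 output format. Only the 0.5 pixel and 0.75 mag recovery thresholds come from the text.

```python
import numpy as np

def as_recovery(input_cat, output_cat, dpos_max=0.5, dmag_max=0.75):
    """Flag recovered artificial stars and estimate photometric errors.

    input_cat, output_cat : dict-like catalogs with numpy-array fields
    'x', 'y', 'mag', matched row by row (same AS on the same row).
    """
    dpos = np.hypot(output_cat['x'] - input_cat['x'],
                    output_cat['y'] - input_cat['y'])
    dmag = np.abs(output_cat['mag'] - input_cat['mag'])
    recovered = (dpos < dpos_max) & (dmag < dmag_max)

    # photometric error per 0.5 mag bin: 68.27th percentile of |m_out - m_in|
    bins = np.arange(input_cat['mag'].min(), input_cat['mag'].max() + 0.5, 0.5)
    err = np.full(bins.size - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(bins[:-1], bins[1:])):
        sel = recovered & (input_cat['mag'] >= lo) & (input_cat['mag'] < hi)
        if sel.any():
            err[i] = np.percentile(dmag[sel], 68.27)
    return recovered, bins, err
```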
## 3 A Zoo of populations along the red giant branch
In this Section, we explore _HST_ and ground-based photometry of RGB stars to identify the multiple populations of NGC 1851. To do this, we adopt the Chromosome Map (ChM), which is a pseudo two-color diagram that maximizes the separation between chemically-different populations (see Milone et al., 2015, 2017, for details). We introduce two new ChMs that maximize at the same time the separations of the canonical and anomalous stellar populations and of the populations with different light-element abundances.
The \(m_{\rm F336W}\)-\(m_{\rm F814W}\) and the analogous \(U\)-\(I\) color are effective tools to separate the blue and red RGBs that host the canonical and anomalous stars, respectively (see also Han et al., 2009; Milone et al., 2017). In the _HST_ dataset, we combine this information with the \(C_{\rm F275W,F336W,F438W}=m_{\rm F275W}-2m_{\rm F336W}+m_{\rm F438W}\) pseudo-color, which is sensitive to stellar populations with different carbon, nitrogen, and oxygen content (e.g. Milone and Marino, 2022, and references therein).
The procedure to derive this ChM is illustrated in Figure 2 for RGB stars with \(14.3<m_{\rm F814W}<17.7\) (black dots), where the separation between different sequences is well visible in both filter combinations. We follow the recipe by Milone et al. (2017, see their Sections 3.1 and 3.2) to derive the red and blue boundaries of both RGBs. Moreover, we calculate the RGB widths, defined as the difference between the red and blue boundaries at a magnitude level two F814W magnitudes above the MS Turn-Off (dotted aqua line). Finally, by applying their Equations (1) and (2), we derive the ChM coordinates \(\Delta_{\rm F336W,F814W}\) and \(\Delta_{\rm CF275W,F336W,F438W}\), plotted in the top-right panel. We used the AS photometry to simulate a single stellar population in the ChM plane. The simulated points are arbitrarily shifted near the bottom-right corner of the ChM and represented in pink, while the purple ellipse includes 68.27% of them.
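For concreteness, the verticalization step behind these ChM coordinates could be sketched as follows. The fiducial boundaries are assumed to be available as tabulated (magnitude, color) points; this is a simplified stand-in for Equations (1) and (2) of Milone et al. (2017), not their exact implementation, and all variable names are illustrative.

```python
import numpy as np

def verticalize(mag, col, fid_blue, fid_red, width):
    """Map a color (or pseudo-color) onto a ChM coordinate.

    fid_blue, fid_red : (N, 2) arrays of (magnitude, color) points tracing
    the blue and red boundaries (magnitudes increasing, for np.interp);
    width : RGB width measured two F814W magnitudes above the MS Turn-Off.
    """
    blue = np.interp(mag, fid_blue[:, 0], fid_blue[:, 1])
    red = np.interp(mag, fid_red[:, 0], fid_red[:, 1])
    return width * (col - red) / (red - blue)

# Illustrative usage (names are assumptions); the pseudo-color axis of the
# ChM adopts the opposite sign convention:
# dx = verticalize(mF814W, mF336W - mF814W, blue_col, red_col, W_336_814)
# dy = -verticalize(mF814W, C_275_336_438, blue_c, red_c, W_C)
```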
Similarly, we derived the ChM from ground-based photometry, by using the \(U\)-\(I\) color, which is analogous to \(m_{\rm F336W}\)-\(m_{\rm F814W}\), and \(C_{\rm U,B,I}=U\)-\(2B\)+\(I\) pseudo-color, which is an efficient tool to separate stellar populations with different light-element abundances (e.g. Jang et al., 2022). The \(I\) vs. \(U\)-\(I\) and the \(I\) vs. \(C_{\rm U,B,I}\) diagrams, and the resulting ChM are shown in the bottom panels of Figure 2.
As illustrated in Figure 2, the canonical and anomalous stars define two distinct sequences in both ChMs with \(\Delta_{\rm F336W,F814W}\) (or \(\Delta_{\rm U,I}\)) smaller and larger than \(\sim-0.1\), respectively. Both canonical and anomalous stars show \(\Delta_{\rm CF275W,F336W,F438W}\) and \(\Delta_{\rm CU,B,I}\) distributions wider than expected from observational errors alone. This fact demonstrates that both RGBs present variations in their light-element abundances. Specifically, we detect the first- and second-population stars typically present in GCs (hereafter 1G and 2G) along the canonical RGB, forming two separate blobs in both ChMs, and two anomalous populations distinguishable in the _HST_-based ChM
Figure 1: _Left panel:_\(m_{\rm F814W}\) vs. \(m_{\rm F336W}\)-\(m_{\rm F814W}\) CMD obtained from _HST_ photometry. _Right panel:_\(I\) vs. \(U\)-\(I\) CMD obtained from the ground-based observations.
(hereafter AI and AII). Intriguingly, 2G stars span wider intervals of \(\Delta_{\rm CF275W,F336W,F438W}\) and \(\Delta_{\rm CU,B,I}\) than the other populations, thus indicating that their stars are not chemically homogeneous.
The ChM regions occupied by the bulk of 1G, 2G, AI, and AII stars are enclosed in the ellipses displayed in Figure 3. These ellipses, defined as in Dondoglio et al. (2022), are used to estimate the fraction of stars in each population. In a nutshell, we first select by hand the bonafide members of each population and measure their median ChM coordinates to define the center of the ellipse. Secondly, to find the major axis direction, we consider the direction of a line that crosses the center and minimizes the orthogonal dispersion of the bonafide members. Finally, we fix the lengths of the semi-major and -minor axes as 2.5 times the dispersion of stars along the directions parallel and orthogonal to the major axis direction, respectively. We show encircled in green and azure 1G and 2G stars, respectively, and in yellow and purple the two groups of anomalous stars, AI and AII. Due to the small number of stars, the AI stars do not form a clearly distinguishable blob in the ground-based ChM. Although we could not classify them with confidence as a distinct population by using this diagram alone, we define by eye an ellipse that encloses the probable AI stars identified from ground-based photometry.
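A compact way to realize this ellipse construction is sketched below; it is a minimal re-implementation of the recipe just described (median centre, dispersion-minimizing axis, 2.5-sigma semi-axes), with the angular grid resolution as an arbitrary choice.

```python
import numpy as np

def population_ellipse(dx, dy, scale=2.5, n_angles=360):
    """Ellipse enclosing the bulk of a population in the ChM plane:
    centre = median coordinates of the bona fide members, major-axis
    direction = the one minimizing the orthogonal dispersion, semi-axes =
    `scale` times the dispersions along and across that direction."""
    x, y = dx - np.median(dx), dy - np.median(dy)
    best = None
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        along = x * np.cos(theta) + y * np.sin(theta)
        ortho = -x * np.sin(theta) + y * np.cos(theta)
        if best is None or ortho.std() < best[0]:
            best = (ortho.std(), along.std(), theta)
    s_ortho, s_along, theta = best
    return dict(x0=np.median(dx), y0=np.median(dy),
                a=scale * s_along, b=scale * s_ortho, theta=theta)
```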
The fraction of stars in each population is calculated as in previous work from our group. As an example, to estimate the fraction of 1G stars, we first derived the number of stars within the green ellipse, which provides a very crude estimate of the number of 1G stars. To estimate the total number of 1G stars we first subtracted from this value the numbers of 2G, AI, and AII stars that, due to observational errors, lie within the green ellipse. Then, we added the number of 1G stars
Figure 2: _Top-left and -middle panels:_ \(m_{\rm F814W}\) vs. \(m_{\rm F336W}-m_{\rm F814W}\) CMD and \(m_{\rm F814W}\) vs. \(C_{\rm F275W,F336W,F438W}\) pseudo-CMD of stars in the _HST_ FoV. _Top-right panel:_ \(\Delta_{\rm CF275W,F336W,F438W}\) vs. \(\Delta_{\rm F336W,F814W}\) ChM of RGB stars. _Bottom-left and -middle panels:_ \(I\) vs. \(U-I\) CMD and \(I\) vs. \(C_{\rm U,B,I}\) pseudo-CMD of stars in the ground-based field. _Bottom-right panel:_ \(\Delta_{\rm CU,B,I}\) vs. \(\Delta_{\rm U,I}\) ChM of RGB stars. The brown dot-dashed horizontal lines separate the stars included (black points) and excluded (grey points) from each ChM determination. The dotted aqua lines indicate the magnitude level at which the ChM widths were normalized (see the text for details). Pink points illustrate the distribution in both ChMs of a simulated single stellar population, while the purple ellipses include 68.27% of the simulated stars.
outside the green ellipse. Similarly, we derived the fraction of 2G, AI, and AII stars. The fraction of stars of each population within the four ellipses is inferred by means of ASs (see Milone et al., 2012; Zennaro et al., 2019; Dondoglio et al., 2022, for details).
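The contamination correction can be thought of as the inversion of a small response matrix built from the ASs: each entry gives the probability that a star of a given population is measured inside a given ellipse. The sketch below assumes such a matrix is available and is only a schematic version of the bookkeeping described above, not the actual procedure of the cited papers.

```python
import numpy as np

def corrected_fractions(counts, response):
    """Deconvolve the observed counts inside each selection ellipse.

    counts   : observed number of stars inside each of the k ellipses.
    response : (k, k) matrix; response[i, j] = probability, estimated from
               the AS tests, that a star of population j is measured inside
               ellipse i.
    Returns the estimated true population fractions.
    """
    n_true = np.linalg.solve(np.asarray(response, float),
                             np.asarray(counts, float))   # N_obs = R @ N_true
    n_true = np.clip(n_true, 0.0, None)
    return n_true / n_true.sum()

# e.g., four populations (1G, 2G, AI, AII) with mild cross-contamination
# (all numbers are placeholders):
# f = corrected_fractions([520, 880, 40, 390], np.eye(4) * 0.94 + 0.02)
```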
The results are summarized in Table 1. We find that \(\sim\)70% and \(\sim\)30% of stars belong to the canonical and anomalous population, respectively. 1G stars comprise more than one-third of the total number of canonical stars, whereas AI stars include less than 10% of the anomalous stars.
## 4 The chemical composition of the multiple populations in NGC 1851
To derive the average chemical compositions of the four stellar populations that we identified along the RGB, we combine information from photometry and spectroscopy. The left panels of Figure 4 show the ChMs introduced in Figure 2, where we encircle stars in common with the spectroscopic dataset by Carretta et al. (2011) from GIRAFFE spectra of 124 RGB stars, color-coded according to their belonging to the ellipses defined in Figure 3. The upper-middle and -right panels display the sodium-oxygen anticorrelation among the canonical and anomalous stars for which both our photometric tagging and abundance measurements from Carretta and collaborators are available. Filled points with black contours indicate the average abundances1. The anomalous stars span a smaller range of [Na/Fe] and [O/Fe] with respect to canonical ones. As expected, we find that 2G stars are enhanced in sodium and depleted in oxygen, with respect to the 1G. Similarly, AII stars are more sodium-rich and oxygen-poor than AI stars. Intriguingly, AI stars exhibit higher sodium content than 1G stars in close analogy with what is observed in other Type II GCs (Marino et al., 2009, 2011, 2011).
Footnote 1: Other spectroscopic datasets with similar information (e.g., Mészáros et al., 2020; Tautvaisiene et al., 2022) were not considered in this comparison, because the number of stars with ChM tagging in the considered magnitude intervals is much lower than in the Carretta and collaborators’ catalog, preventing meaningful estimates of the average abundances of the four populations.
The lower-middle and -right panels of Figure 4 show that the anomalous stars are enhanced in [Ba/Fe] with respect to canonical stars, while their average [Fe/H] are consistent within uncertainties. There is no evidence for internal variations among canonical stars, while AI have larger average [Ba/Fe] and [Fe/H] than AII stars (even though our sample includes only three AI stars). Table 1 reports the average [O/Fe], [Na/Fe], [Ba/Fe], and [Fe/H] of canonical and anomalous stars and their subpopulations.
Spectroscopic results corroborate the conclusions inferred from photometry alone. The finding of distinct groups of 1G-2G and AII within the canonical and anomalous RGB of NGC 1851 are in agreement with the presence of Na-O anti-correlation in both RGBs (e.g., Carretta et al., 2011; Tautvaisiene et al., 2022). We also note that the three AI stars with available spectroscopy, including two AI stars selected from ground-based photometry, are more Na-poor than AII stars. This fact supports our choice of the elliptical regions used to select AI stars in the _HST_ and ground-based ChMs.
To further investigate the chemical composition of the stellar populations in NGC 1851, we combine information from multi-band photometry and synthetic spectra. To do that, we apply to our dataset a method widely used in previous papers from our group (e.g. Milone et al., 2012; Lagioia et al., 2019), which allows us to constrain the relative abundances of helium, carbon, nitrogen, and oxygen of two stellar populations. Specifically, we compared the chemical composition of 2G and 1G stars, AII and AI stars, and anomalous and canonical stars. In a nutshell, we first derive fiducial lines along the RGB for each population in the \(m_{\rm F814W}\) vs. \(m_{X}-m_{\rm F814W}\) (for _HST_ observations) and in the \(I\) vs. \(X-I\) (for ground-based data) CMDs, with X=F275W, F336W, F438W, F606W, and F814W, and X=U, B, V, R, and I, respectively. Then, we identify three equally-spaced reference magnitudes (\(m_{\rm ref}\)) fainter than the RGB bump. For each value of \(m_{\rm ref}\) we measure the color differences (\(\Delta(m_{X}-m_{\rm F814W})\) and \(\Delta(X-I)\)) between the two population fiducial lines. We portray in Figure 5 the relative colors at \(m_{\rm ref}=15.5\) mag.
Qualitatively, the color differences between 2G and 1G stars for the different filters (upper panels) follow a similar pattern to what is generally observed in most GCs (e.g. Milone et al., 2018). The 2G stars are typically bluer than the 1G, and their color separation reaches its maximum when using a wide color baseline such as \(m_{\rm{F275W}}-m_{\rm{FB14W}}\). The F336W/U band provides a remarkable exception because the 2G stars exhibit redder \(m_{\rm{FB36W}}-m_{\rm{FB14W}}\) (and \(U-I\)) colors than the 1G.
To infer the relative abundances of 2G and 1G stars, we first derive
Figure 3: Elliptical regions that encapsulate each spotted population in the _HST_ and ground-based (left and right panels, respectively) ChMs. Green and azure ellipses define the 1G and 2G regions of canonical stars, while the yellow and purple ones are the AI and AII regions of anomalous stars.
the values of the effective temperature (\(T_{\rm eff}\)) and gravity (\(g\)) corresponding to each value of \(m_{\rm ref}\) by using the best-fitting isochrones from Ventura et al. (2009) and D'Antona et al. (2009). We compute a reference spectrum, with pristine helium content of \(Y\)=0.246, [O/Fe]=0.4 dex, solar carbon abundance, and [N/Fe]=0.5 dex. Moreover, we derive a grid of comparison spectra with helium mass fractions ranging from Y=0.246 to 0.280 in steps of 0.001, [O/Fe] from 0.0 to 0.6 in steps of 0.1 dex, while [C/Fe] and [N/Fe] span the intervals between \(-\)0.5 and 0.2 dex and between 0.5 and 2.0 dex, respectively, in steps of 0.1 dex. When we used the He-enhanced chemical composition, we adopted the corresponding values for the effective temperature and gravity derived by the isochrones. The spectra are computed by using the ATLAS12 and SYNTHE computer programs (e.g. Castelli 2005; Kurucz 2005; Sbordone et al. 2007). We used isochrones with constant C+N+O abundance. We find that 2G stars are enhanced in nitrogen by 0.80\(\pm\)0.10 dex, and depleted in carbon and oxygen by 0.25\(\pm\)0.10 and 0.20\(\pm\)0.10 dex, respectively, when compared to 1G stars. Moreover, they have a slightly larger helium mass fraction (\(\Delta Y\)=0.008\(\pm\)0.006) than the 1G. The errors are estimated as the dispersion of the abundance determinations corresponding to the three magnitude levels, divided by the square root of two. We repeated the same analysis by using isochrones from the Dartmouth database (Dotter et al. 2008) and obtained similar conclusions. Specifically, we inferred differences in [N/Fe], [C/Fe], and [O/Fe] between 2G and 1G stars of 0.85\(\pm\)0.10, \(-\)0.25\(\pm\)0.10, and \(-\)0.30\(\pm\)0.10 dex, respectively. Moreover, we find a difference in helium mass fraction of \(\Delta Y\)=0.007\(\pm\)0.005.
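The matching between observed and synthetic color differences can be phrased as a simple grid search. The sketch below assumes that the grid of synthetic color offsets (one vector of filter-by-filter \(\Delta\)colors per abundance combination) has already been computed with the spectral synthesis described above, and merely selects the best-matching combination; it is not the actual fitting code used here.

```python
import numpy as np

def best_abundance_match(obs_dcolors, grid_params, grid_dcolors):
    """Pick the abundance-offset combination (e.g. dY, d[C/Fe], d[N/Fe],
    d[O/Fe]) whose synthetic color offsets best match the observed ones.

    obs_dcolors  : observed Delta(m_X - m_F814W), one entry per filter X.
    grid_params  : (n_models, n_abundances) array of abundance offsets.
    grid_dcolors : (n_models, n_filters) synthetic color offsets.
    """
    resid = grid_dcolors - np.asarray(obs_dcolors)[None, :]
    chi2 = np.sum(resid ** 2, axis=1)
    best = int(np.argmin(chi2))
    return grid_params[best], chi2[best]
```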
The relative colors of canonical and anomalous stars (middle panels) in different filters differ significantly from those of 1G and 2G stars. The anomalous RGB exhibits redder \(m_{\rm X}-m_{\rm F814W}\) (\(X-I\)) colors than the canonical RGB for X=F275W, F336W, and F438W (X=U and B) with similar F814W (I) magnitudes. Conversely, the color difference disappears in the F606W\(-\)F814W, \(V-I\), and \(R-I\) colors.
By assuming the same overall C+N+O content and helium content for the canonical and the anomalous stars, we find that the latter would be enhanced in C and O by \(\sim\)0.9 and \(\sim\)0.8 dex, respectively, and in N by \(\sim\)0.9 dex compared to the canonical population. Noticeably, these results on carbon and oxygen would be in disagreement with the conclusions of papers based on high-resolution spectroscopy. As an example, both Yong et al. (2015) and Tautvaisiene et al. (2022) find nearly the same values of [C/Fe] for the canonical and anomalous population, with the latter being slightly depleted in oxygen by \(\sim\)0.2 dex with respect to the canonical population. Moreover, according to the results that we inferred from multi-band photometry, the
Figure 4: _Left panels:_ HST (upper) and ground-based (bottom) ChMs where the stars in common with the spectroscopic dataset from Carretta et al. (2011) are highlighted with open bullets, color-coded following the prescriptions of Figure 3. _Middle panels:_ Reproduction of the [Na/Fe] vs. [O/Fe] and [Ba/Fe] vs. [Fe/H] relations for the two canonical populations (upper and lower panels, respectively). Dark-grey points represent all the stars in the Carretta and collaborators dataset. Filled dots with black contours mark the average abundance of stars in each population and black bars indicate their errors. Gray bars highlight the average uncertainties of the spectroscopic measurements. _Right panels:_ same as the middle panels but for the two anomalous populations.
anomalous stars would be significantly enhanced in their overall C+N+O abundance. This fact would indicate that the atmospheric parameters that we used to compute the spectra of anomalous stars, which are derived from isochrones with constant C+N+O content, are not correct.
To further investigate the chemical composition of anomalous and canonical stars, we estimate the relative abundances of anomalous and canonical stars by using the method above and the atmospheric parameters inferred from the isochrones by D'Antona et al. (2009) and Ventura et al. (2009). These isochrones have different C+N+O content, pristine helium abundance Y=0.25, and reproduce the double SGB of NGC 1851. The most remarkable difference with the isochrones with constant C+N+O abundance is that, for a fixed F814W magnitude, the RGB of CNO-enhanced stars is colder by \(\sim 30\)K than the canonical RGB.
By assuming that the canonical and anomalous stars share the same helium content, we reproduce their relative colors by assuming that the anomalous stars are enhanced in nitrogen by 0.90\(\pm\)0.15 dex and share nearly the same carbon and oxygen abundances (\(\Delta\)[C/Fe]=\(0.10\pm 0.15\) dex, \(\Delta\)[O/Fe]=\(-0.05\pm 0.15\) dex). We thus confirm the results by Yong et al. (2015) based on high-resolution spectroscopy.
Finally, we infer the relative abundances of AI and AII stars (lower panels), by using the same approach used for the 1G and 2G stars. Based on the isochrones from the Roma database (Ventura et al., 2009; D'Antona et al., 2009), we find that AII stars have slightly higher content of helium and nitrogen (\(\Delta\)Y=0.005\(\pm\)0.013 and \(\Delta\)[N/Fe]=\(0.30\pm\)0.20 dex), and lower abundances of carbon and oxygen (\(\Delta\)[C/Fe]=\(-0.25\pm\)0.15 and \(\Delta\)[O/Fe]=\(-0.20\pm\)0.10 dex). We obtain similar conclusions by using the isochrones from Dotter et al. (2008) (\(\Delta\)Y=0.006\(\pm\)0.011, \(\Delta\)[C/Fe]=\(-0.30\pm\)0.15, \(\Delta\)[N/Fe]=\(0.40\pm\)0.20, and \(\Delta\)[O/Fe]=\(-0.15\pm\)0.15 dex).
### Comparison with Yong et al. (2015)
The relative differences in the F438W/B bands can explain the \(C_{\rm B,V,I}=B-2V+I\) pseudo-color distribution of RGB stars in the ground-based catalog displayed in panel a) of Figure 6. This combination, introduced by Marino et al. (2015) and Marino et al. (2019), proved to be particularly effective in disentangling canonical and anomalous stars in every studied Type II GC, with the latter having a wider pseudo-color distribution than the former. To better highlight this feature, we build the \(\Delta_{\rm CB,V,I}\) vs. \(\Delta_{\rm U,I}\) ChM by considering the RGB stars with 11.8\(<I<\)17.2 (black stars in panel a)). The result is portrayed in panel b), where canonical and anomalous stars (at \(\Delta_{\rm U,I}\) smaller and larger than \(\sim-0.1\), respectively) are characterized by different extensions along the y-axis, with the latter spread over a larger \(\Delta_{\rm CB,V,I}\) range, as also shown by the kernel density distributions of the two populations represented in panel c1). Panel c2) represents instead the kernel density distribution of the four populations, in which we notice that while the two canonical sub-populations are distributed almost equally, AI and AII stars are clustered around different \(\Delta_{\rm CB,V,I}\), \(\sim\)0.02 and \(\sim\)0.04, respectively. The observed behavior of NGC 1851's stars in this color combination is consistent with the results illustrated in Figure 5: canonical stars have an almost null internal spread in F438W/B, while anomalous stars are enhanced in these magnitudes and show a significant spread between their two sub-populations.
We explore the link between this pseudo-color and the C, N, and O abundances inferred by Yong et al. (2015), who measured these quantities for a sample of 15 giants and concluded that anomalous stars are enriched in total C+N+O. Panels d1)-d4), from left to right, illustrate \(\Delta_{\rm CB,V,I}\) vs. the C, N, O, and the total C+N+O abundance for the six stars from the Yong and collaborators dataset which we can characterize in the \(\Delta_{\rm CB,V,I}\) vs. \(\Delta_{\rm U,I}\) ChM. We colored in blue and red the stars that based on our photometric tagging belong to the canonical and anomalous population. The C and O values of the two
Figure 5: _Left panels:_ \(\Delta(m_{\rm X}-m_{\rm F814W})\) between different populations in the _HST_ filters at a magnitude level \(m_{\rm F814W}=15.5\), with X=F275W, F336W, F438W, F606W, and F814W. From top to bottom, 1G and 2G, canonical and anomalous, AI and AII stellar populations are compared. _Right panels:_ same as left panels but for the ground-based filters. Here, X=U, B, V, R, and I.
populations span similar ranges, suggesting no significant variations between the two in these two elements, while the N abundance correlates with \(\Delta_{\rm CB,V,I}\) (Spearman's rank correlation coefficient 0.94), and becomes larger among anomalous stars. This leads also to a correlation with the total C+N+O, corroborating our results obtained through synthetic spectra.
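For reference, the quoted rank correlation can be computed directly with scipy; the function below only assumes two equal-length arrays for the stars in common with the spectroscopic sample.

```python
from scipy.stats import spearmanr

def rank_correlation(delta_c_bvi, n_fe):
    """Spearman rank correlation between the Delta_C(B,V,I) ChM coordinate
    and [N/Fe], for the stars in common with the spectroscopic sample."""
    rho, pval = spearmanr(delta_c_bvi, n_fe)
    return rho, pval
```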
## 5 Multiple Populations along the Sub-giant Branch and the Main Sequence
The RGB stars are fertile ground to study multiple populations. This is in part due to the fact that they are among the brightest GC stars and typically have low photometric errors. Moreover, thanks to their structure and atmospheric parameters, the luminosity and the colors of RGB stars can be very sensitive to the abundance of some light elements. But can we identify the counterparts of the populations defined in the previous Section even among fainter stars? In this Section, we analyze the SGB and the MS to explore the populations of NGC 1851 among these stars.
### The sub-giant branch of NGC 1851
Figure 1 reveals that NGC 1851 exhibits a split SGB and that the bright and the faint SGBs are the counterparts of the canonical and anomalous RGBs, respectively.
To identify the sub-populations of each SGB, we follow the procedure illustrated in Figure 7. We first select the bulk of bright and faint-SGB stars from the \(m_{\rm F336W}\) vs. \(m_{\rm F336W}\)-\(m_{\rm F814W}\) CMD displayed in panel a). For this purpose, we derive the fiducial lines of the two SGBs by selecting bonafide stars, calculating the median color and magnitude in different color bins, and fitting these pairs of points with cubic splines (blue and red lines in panel a)). We calculate the maximum and minimum magnitudes of both fiducials (aqua bullets) and use them to select bright- and faint-SGB stars at similar evolutionary stages (black points in Figure 7a) as the ones between the two lines that cross the bluest and reddest pair of aqua points.
To better investigate multiple populations among each SGB, we apply to the stellar colors and magnitudes the transformations by Milone et al. (2009, see their Appendix A) that allow us to define the reference frame ('abscissa', 'ordinate') shown in Figure 7b). In this diagram, the brown lines defined in panel a) are horizontal at 'ordinate' 0 and 1 and the four aqua points have coordinates (0,0), (0,1), (1,0), and (1,1). The canonical and anomalous SGBs form the sequences centered around 'abscissa' 0 and 1, respectively. We then apply the method described in Section 3 to derive the 'abscissa' red and blue boundaries that we use to calculate the '\(\Delta\)abscissa' values plotted in panel c) against the 'ordinate'.
To improve the selection of SGB stars, we derive the histogram distribution of '\(\Delta\)abscissa' (see panel d)) and fit it with a function given by the sum of two Gaussians by means of least squares. The two components of the best-fit function are represented in blue and red in Figure 7d). We exclude stars outside the external dot-dashed
Figure 6: _Panel a):_ \(I\) vs. \(C_{\rm B,V,I}\) pseudo-CMD. RGB stars with 11.8\(<I<\)17.2 are highlighted with black points, while the remaining stars are colored in grey. _Panel b):_ \(\Delta_{\rm CB,V,I}\) vs. \(\Delta_{\rm U,I}\) ChM for stars marked with black points in panel a). _Panels c1) and c2):_ \(\Delta_{\rm CB,V,I}\) kernel density distribution of canonical and anomalous stars and their sub-populations, respectively. _Panels d1)-d4):_ \(\Delta_{\rm CB,V,I}\) vs. C, N, O, and C+N+O abundances for stars in common with the Yong et al. (2015) dataset. Blue and red points highlight the stars that, according to their position on the \(\Delta_{\rm CB,V,I}\) vs. \(\Delta_{\rm U,I}\) ChM, are canonical and anomalous, respectively.
lines, which are obtained by shifting the centers of the blue and red Gaussian functions by three times their standard deviations. The central line separates the regions populated by the bulk of canonical and anomalous SGB stars and corresponds to the '\(\Delta\)abscissa' value at the minimum of the bi-Gaussian function. We use these lines to define the blue and red regions in panel c), which contain our sample of canonical and anomalous stars, respectively. We exploit these regions to evaluate the fraction of canonical and anomalous SGB stars. To do that, we measure the number of stars within each interval and then we correct it by means of ASs (see Section 2.1), repeating the procedure applied in Section 3. The resulting ratios obtained by analyzing both the _HST_ and the ground-based catalogs are listed in Table 1 and are consistent with the values inferred from the RGB stars.
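A possible implementation of this bi-Gaussian separation is sketched below; the number of histogram bins and the initial guesses are arbitrary choices, while the 3-sigma outer boundaries and the minimum of the bi-Gaussian as the separating line follow the description above.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar

def bi_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

def split_sgb(dabscissa, nbins=30):
    """Fit the 'Delta abscissa' histogram with two Gaussians; return the
    separating value (minimum of the bi-Gaussian between the two peaks)
    and the outer 3-sigma boundaries."""
    hist, edges = np.histogram(dabscissa, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [hist.max(), np.percentile(dabscissa, 25), 0.1,
          hist.max() / 3.0, np.percentile(dabscissa, 75), 0.1]
    popt, _ = curve_fit(bi_gaussian, centers, hist, p0=p0, maxfev=20000)
    a1, mu1, s1, a2, mu2, s2 = popt
    lo, hi = sorted([mu1, mu2])
    cut = minimize_scalar(lambda v: bi_gaussian(v, *popt),
                          bounds=(lo, hi), method='bounded').x
    outer = (lo - 3.0 * abs(s1 if mu1 < mu2 else s2),
             hi + 3.0 * abs(s2 if mu1 < mu2 else s1))
    return cut, outer, popt
```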
In panels e1) and e2), we plot the \(m_{\rm F275W}\)-\(m_{\rm F336W}\) vs. \(m_{\rm F336W}\)-\(m_{\rm F438W}\) two-color diagrams of canonical and anomalous stars, respectively, highlighting in black the SGB stars within the two regions introduced in panel c) and in gray the MS and RGB stars that belong to the same branch (selected by eye). As proven by Milone et al. (2012c), this two-color diagram is an efficient tool to disentangle stellar populations with different C, N, and O abundances, since the F275W, F336W, and F438W filters encompass the OH, NH, and CH and NH absorption bands, respectively, and it is sensitive to the same chemical variations of the \(\Delta_{\rm CF275W,F336W,F438W}\) index in the ChM. For that, stars with smaller \(\Delta_{\rm CF275W,F336W,F438W}\) (hence larger C and O and smaller N) have smaller \(m_{\rm F336W}\)-\(m_{\rm F438W}\) and larger \(m_{\rm F275W}\)-\(m_{\rm F336W}\) than the stars with larger \(\Delta_{\rm CF275W,F336W,F438W}\) (with smaller C and O and larger N). Stellar populations with different C, N, and O abundances form discrete sequences that run parallel on the two-color diagram. In both panels, each SGB splits into two sequences, which are connected to two separate RGB sequences. By coloring as in Figure 3 the RGB populations identified in Section 3, it is clear that the two SGB sequences in panel e1) are the counterparts of the canonical 1G and 2G populations, and that the two anomalous SGBs in panel e2) are linked to the AI and AII RGB stars. This provides independent confirmation of the quadrimodality observed in Section 3.
We then apply, to the ground-based \(U\) vs. \(U\)-\(I\) CMD, the same procedure to define a sample of canonical and anomalous SGB stars. However, with the available filters, it is not possible to build a diagram able to reveal the four sub-populations, preventing us from identifying the SGB
Figure 7: _Panel a)_: \(m_{\rm F336W}\) vs. \(m_{\rm F336W}\)-\(m_{\rm F814W}\) CMD zoomed around the SGB. The blue and red lines indicate the fiducials of the canonical and anomalous SGB stars, respectively, while the aqua points represent their median brightest and faintest magnitude. The two brown lines delimit the considered SGB sample of stars. _Panel b)_: ‘ordinate’ vs. ‘abscissa’ diagram of SGB stars, where lines, symbols, and colors have the same meaning as the previous panel (see text for details). _Panel c)_: ‘verticalized’ ‘ordinate’ vs. ‘abscissa’ diagram, where the three vertical black dot-dashed lines delimit the region within which canonical and anomalous stars lie, colored in blue and red, respectively. _Panel d)_: histogram (in grey) and best-fit Gaussian functions of the two SGB populations (colored as in panel c)). _Panels e1) and e2)_: \(m_{\rm F275W}\)-\(m_{\rm F336W}\) vs. \(m_{\rm F336W}\)-\(m_{\rm F438W}\) two-color diagrams for stars inside the blue and red regions identified in panel c), respectively (black dots). Blue and red lines connect these two regions to their respective two-color diagram. Grey points represent the MS and RGB continuations of each SGB (selected on the CMD). RGB stars tagged with the ChM presented in Section 3 are color-coded as in Figure 3. Error bars are shown in purple.
counterpart of 1G, 2G, AI, and AII stars in the outer part of the cluster.
### Main Sequence
To investigate the canonical and anomalous MS stars, we consider the \(m_{\rm F814W}\) vs. \(m_{\rm F336W}-m_{\rm F814W}\) CMD and exclude the stars within the innermost 0.7 arcmin to mitigate the effect of crowding on photometry. We show a zoom of this CMD on the MS in the left panel of Figure 8, where the continuation of the two SGBs is visible in the upper MS, with the bluest and reddest MSs connected to the canonical and anomalous SGBs, respectively.
To further investigate the double MS, we define blue and red boundaries of MS stars between \(18.9<m_{\rm F814W}<20.5\) mag and derive their verticalized color \(\Delta(m_{\rm F336W}-m_{\rm F814W})\) (see Section 3 for details). The result is plotted in seven different magnitude bins in the middle panels, while in the right panels, we show the histogram (in gray) and the kernel density (in aqua) distribution of \(\Delta(m_{\rm F336W}-m_{\rm F814W})\). For each bin, we highlight in pink the distribution of observational errors derived through AS tests, arbitrarily centered at the maximum of the kernel distribution, and the Bi-modality Coefficient (BC\({}^{2}\); SAS Institute Inc. Staff 1988) of the \(\Delta(m_{\rm F336W}-m_{\rm F814W})\) distribution of stars. According to the BC criterion, a distribution is considered bimodal if its value exceeds the critical threshold \(BC_{\rm crit}=0.555\).
Footnote 2: \(BC=\frac{g_{1}^{2}+1}{g_{2}+\frac{3(n-1)^{2}}{(n-2)(n-3)}}\), where \(g_{1}\) and \(g_{2}\) indicate the skewness of the distribution and its excess of kurtosis, and \(n\) is the number of considered points.
Moving from brighter to fainter magnitudes, we notice that: (i) the bimodality becomes less and less clear-cut, as shown by the decrease of the BC, and (ii) the color distribution becomes narrower even if the error increases. These facts agree with two distinct MSs that merge going through fainter magnitudes, with a statistically significant bimodality (i.e., BC\(>\)0.555) down to \(\sim\)20.05 mag.
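For completeness, the bimodality coefficient of footnote 2 can be evaluated with scipy's bias-corrected sample moments, as in the following sketch.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def bimodality_coefficient(x):
    """Sample bimodality coefficient (SAS convention); values above the
    critical threshold BC_crit = 0.555 indicate a bimodal distribution."""
    x = np.asarray(x, dtype=float)
    n = x.size
    g1 = skew(x, bias=False)                    # sample skewness
    g2 = kurtosis(x, fisher=True, bias=False)   # excess kurtosis
    return (g1 ** 2 + 1.0) / (g2 + 3.0 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
```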
## 6 The radial distribution of multiple stellar populations
To investigate the radial distribution of multiple populations in NGC 1851, we divided the FoV into five (four) circular regions that
Figure 8: _Left panel: \(m_{\rm F814W}\) vs. \(m_{\rm F336W}-m_{\rm F814W}\) CMD of stars in the HST catalog outside the innermost 0.7’. Blue and red lines represent the boundaries used to verticalize the color distribution (see text for details), while the brown dot-dashed horizontal lines define the magnitude interval considered in our MS analysis. Middle panels: verticalized \(\Delta(m_{\rm F336W}-m_{\rm F814W})\) distribution of MS stars in the \(18.9<m_{\rm F814W}<20.5\) interval, divided into 7 magnitude bins. Right panels: \(\Delta(m_{\rm F336W}-m_{\rm F814W})\) histogram (in grey) and kernel density (in aqua) distributions of MS stars in each bin defined by the middle panels. The pink line represents the distribution expected from observational errors._
include the same number of RGB (SGB) stars. We derived the fractions of stars in each population by applying to each region the methods of Sections 3 and 5.1 for RGB and SGB stars, respectively.
As shown in Figure 9, the fractions of canonical and anomalous stars are constant at the 1-\(\sigma\) level over the entire FoV, and such result is obtained from both RGB (top panels) and SGB stars (bottom panels). We perform a p-value test to infer the probability that the observed behavior is produced by a flat distribution. The derived p-values are 0.92 and 0.29 for RGB and SGB stars, respectively, which strongly support the flat-trend hypothesis (which would be disproved at values \(<\)0.05).
Since the farthest bin covers the whole ground-based radial range, we also consider this catalog alone and divide it into two equal-number bins to explore with higher radial resolution the trend outside \(\sim\)2 arcmin, as represented in the right panels, finding again no significant radial variation. In each figure, the grey dot-dashed vertical lines represent the core, half-mass, and tidal radius of NGC 1851\({}^{3}\).
Footnote 3: According to Harris (1996, 2010 edition), the values of the core, half-mass, and tidal radius are 0.09, 0.51, and 6.52 arcmin, respectively. To study the cluster halo (see Section 7.1), we want to be as conservative as possible in selecting stars that lie outside the tidal radius. For that, we follow the approach by Marino et al. (2014) and consider as the tidal radius the largest estimate present in the literature, which is from Trager et al. (1993) and amounts to 11.7 arcmin.
Figure 10 explores the radial distributions of the four populations. We considered different pairs of populations and derived their fractions in equal-number bins. Moreover, we further divided the ground-based field into two circular regions with the same number of stars. The left column represents the radial behavior of the 2G, AI, and AII star fractions relative to the 1G population, revealing that 2G and AII stars are more centrally concentrated than 1G stars, while no variation appears between 1G and AI stars. The central column compares the AI population to 2G and AII stars. Their ratios decrease when moving toward larger radii, suggesting that AI stars, like the 1G, are more spatially diffuse than the 2G and AII stars. Finally, in the right column, we compare the 2G with the AII population, detecting no radial difference.
Finally, we derive the global fraction of the different populations spotted in NGC 1851. To do that, we convolve, from the center to the tidal radius, the radial trends illustrated in Figures 9 and 10 with the best-fit King profile (King, 1962) derived by Harris (1996, 2010 edition), to account for the radial density distribution of the cluster stars. Our resulting fractions, derived from RGB stars, are listed in Table 1. To estimate the uncertainties, we simulate 10,000 radial distributions by scattering the observed radial trends by their errors. Then, we repeat the procedure to infer the global ratios for each simulated sample and take as our uncertainties the 68th percentiles of the resulting distributions of global fractions.
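A schematic version of this weighting and of the bootstrap of the uncertainties is given below; the grid resolution and error model are assumptions, and the numbers in the commented call are placeholders echoing the radii quoted in the text rather than actual measurements.

```python
import numpy as np

def king_surface_density(r, rc, rt):
    """King (1962) surface-density profile (arbitrary normalization)."""
    term = 1.0 / np.sqrt(1.0 + (r / rc) ** 2) - 1.0 / np.sqrt(1.0 + (rt / rc) ** 2)
    return np.where(r < rt, term ** 2, 0.0)

def global_fraction(bin_edges, frac, frac_err, rc, rt, n_boot=10000, seed=1):
    """Weight binned population fractions by the expected number of cluster
    stars in each annulus and bootstrap the measurement errors."""
    frac, frac_err = np.asarray(frac, float), np.asarray(frac_err, float)
    r = np.linspace(0.0, rt, 5000)
    dr = r[1] - r[0]
    dens = king_surface_density(r, rc, rt) * 2.0 * np.pi * r
    weights = np.array([dens[(r >= lo) & (r < hi)].sum() * dr
                        for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])
    weights /= weights.sum()
    rng = np.random.default_rng(seed)
    sims = rng.normal(frac, frac_err, size=(n_boot, frac.size)) @ weights
    return float(weights @ frac), float(np.percentile(np.abs(sims - np.median(sims)), 68.27))

# Placeholder call (radial bins in arcmin, fractions illustrative):
# f_glob, f_err = global_fraction([0.0, 1.5, 6.0, 11.7], [0.70, 0.71, 0.73],
#                                 [0.02, 0.03, 0.04], rc=0.09, rt=11.7)
```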
## 7 Spatial distribution
To investigate the 2D spatial distribution of multiple populations, we applied to the canonical and anomalous RGB and SGB stars the method by Cordoni et al. (2020, see their Section 3), which is based on a 2D kernel smoothing of the coordinate distribution (\(\Delta\)RA and \(\Delta\)DEC).
The resulting smoothed 2D distributions of canonical and anomalous stars in the ground-based catalog are portrayed in panels a1) and b1) of Figure 11, respectively. We then compute their isodensity lines and fit them with ellipses by means of least squares, using the Halir & Flusser (1998) algorithm. The best-fitting ellipses are displayed in panels a2) and b2), where we highlight the major axis of each of them (grey lines) and the average center (aqua bullet). Panels a3) and a4) represent the 2D kernel-smoothed distribution in the innermost \(\sim\)1.5 arcmin, obtained with _HST_ data, and the corresponding best-fitting ellipses.
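The smoothing step can be reproduced with a Gaussian KDE; rather than the Halir & Flusser algebraic ellipse fit used here, the sketch below estimates the ellipticity and position angle of one isodensity contour from its second moments, which is only a simplified stand-in.

```python
import numpy as np
from scipy.stats import gaussian_kde

def isodensity_ellipticity(dra, ddec, level_frac=0.5, ngrid=200, tol=0.02):
    """Smooth the (Delta RA, Delta DEC) distribution with a Gaussian KDE and
    estimate ellipticity (1 - b/a) and position angle of one isodensity
    contour from the second moments of the grid points lying near it."""
    kde = gaussian_kde(np.vstack([dra, ddec]))
    x = np.linspace(dra.min(), dra.max(), ngrid)
    y = np.linspace(ddec.min(), ddec.max(), ngrid)
    xx, yy = np.meshgrid(x, y)
    dens = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
    near = np.abs(dens - level_frac * dens.max()) < tol * dens.max()
    px, py = xx[near] - xx[near].mean(), yy[near] - yy[near].mean()
    evals, evecs = np.linalg.eigh(np.cov(np.vstack([px, py])))
    b, a = np.sqrt(evals)                                   # minor, major
    pa = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))   # major-axis PA
    return 1.0 - b / a, pa
```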
This figure highlights some differences between the two populations. Canonical stars exhibit a nearly circular distribution over the entire FoV and their position angles are poorly constrained. Conversely, the anomalous population shows a circular distribution in the innermost areas only, whereas for radial distances larger than \(\sim 3.5\) arcmin the anomalous stars exhibit higher ellipticity values
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & [O/Fe] & [Na/Fe] & [Ba/Fe] & [Fe/H] & Fraction & Fraction & Fraction \\ & & & & & (\(<\)1.5 arcmin) & (\(>\)1.5 arcmin) & (global) \\ \hline \multirow{3}{*}{CANONICAL} & \multirow{3}{*}{0.21 \(\pm\) 0.03} & \multirow{3}{*}{0.09 \(\pm\) 0.04} & \multirow{3}{*}{0.46 \(\pm\) 0.03} & \multirow{3}{*}{-1.15 \(\pm\) 0.01} & \multirow{3}{*}{0.701 \(\pm\) 0.014} & \multirow{3}{*}{0.721 \(\pm\) 0.031} & \multirow{3}{*}{0.705 \(\pm\) 0.029} \\ & & & & & (0.706 \(\pm\) 0.027) & (0.728 \(\pm\) 0.031) & (0.720 \(\pm\) 0.030) \\ \cline{1-1} \cline{5-7} & & & & & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) \\ \hline \multirow{3}{*}{1G} & \multirow{3}{*}{0.12 \(\pm\) 0.03} & \multirow{3}{*}{0.02 \(\pm\) 0.05} & \multirow{3}{*}{0.40 \(\pm\) 0.04} & \multirow{3}{*}{-1.16 \(\pm\) 0.01} & \multirow{3}{*}{0.368 \(\pm\) 0.018} & \multirow{3}{*}{0.437 \(\pm\) 0.039} & \multirow{3}{*}{0.330 \(\pm\) 0.038} \\ & & & & & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) \\ \hline \multirow{3}{*}{2G} & \multirow{3}{*}{-0.05 \(\pm\) 0.04} & \multirow{3}{*}{0.24 \(\pm\) 0.03} & \multirow{3}{*}{0.50 \(\pm\) 0.03} & \multirow{3}{*}{-1.15 \(\pm\) 0.01} & \multirow{3}{*}{0.632 \(\pm\) 0.018} & \multirow{3}{*}{0.563 \(\pm\) 0.039} & \multirow{3}{*}{0.670 \(\pm\) 0.038} \\ \cline{1-1} \cline{5-7} & & & & & & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) \\ \hline \multirow{3}{*}{ANOMALOUS} & \multirow{3}{*}{-0.06 \(\pm\) 0.04} & \multirow{3}{*}{0.36 \(\pm\) 0.05} & \multirow{3}{*}{0.74 \(\pm\) 0.06} & \multirow{3}{*}{-1.14 \(\pm\) 0.02} & \multirow{3}{*}{0.299 \(\pm\) 0.014} & \multirow{3}{*}{0.279 \(\pm\) 0.031} & \multirow{3}{*}{0.295 \(\pm\) 0.029} \\ & & & & & (\(>\)2.94 \(\pm\) 0.027) & (0.272 \(\pm\) 0.031) & (0.280 \(\pm\) 0.030) \\ \cline{1-1} \cline{5-7} & & & & & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) \\ \cline{1-1} \cline{5-7} & & & & & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) & (\(>\)1.5 arcmin) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average chemical abundances (from Carretta et al., 2011) of the populations photometrically tagged among RGB stars and their fractions inferred from _HST_ photometry (within the innermost 1.5 arcmin), from ground-based photometry (outside 1.5 arcmin), and over the whole cluster field from its center to the tidal radius. Values inside brackets indicate, when present, the analogous fraction estimated from the SGB.
than the canonical stars4. All the best-fitting ellipses that describe the anomalous stars share similar orientations, with an average position angle of \(\sim\)30\({}^{\circ}\). These results are illustrated in panel c), in which we plot the ellipticity of each best-fitting ellipse against its major axis, showing that outside \(\sim\)3.5 arcmin the anomalous population is more elliptical at a 1-\(\sigma\) level than the canonical one. The uncertainty associated with the ellipticity is derived by simulating 1,000 samples of stars with the same number of stars and spatial distribution as the canonical and anomalous populations. For each simulation, we measured the ellipticities with the same method used for the real stars. The error associated with each measurement is then calculated as the 68\({}^{\rm th}\) percentile of the corresponding simulated ellipticity distribution. Finally, we notice an overdensity of anomalous stars in the north-eastern quadrant, which forms the elongation in the contour plot in panel b1) around \((\Delta RA,\Delta DEC)\sim(3.5,3.0)\) arcmin. Indeed, the fraction of ground-based-field anomalous stars in this quadrant is significantly higher than average, being 0.38\(\pm\)0.05 against the overall 0.28\(\pm\)0.03.
Footnote 4: The ellipticity is defined as \(1-\frac{b}{a}\), where \(a\) and \(b\) are the major and minor axis, respectively.
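A possible implementation of this error estimate is sketched below; `measure_ellipticity` is a simplified stand-in for the full contour-plus-ellipse-fit measurement, and bootstrap resampling of the observed positions is used here in place of the simulated catalogues adopted in the text, so the numbers it returns are illustrative only.

```python
import numpy as np

def measure_ellipticity(xy):
    """Simplified stand-in: 1 - b/a from the covariance of the (2, N) positions."""
    eigval = np.linalg.eigvalsh(np.cov(xy))
    a, b = np.sqrt(eigval[::-1])
    return 1.0 - b / a

rng = np.random.default_rng(1)
# Observed (Delta RA, Delta DEC) positions of one population, shape (2, N); synthetic here.
observed = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.2], [1.2, 2.0]], size=1500).T
e_obs = measure_ellipticity(observed)

# 1,000 samples with the same number of stars; bootstrap resampling of the positions.
n_sim, n_stars = 1000, observed.shape[1]
e_sim = np.empty(n_sim)
for k in range(n_sim):
    idx = rng.integers(0, n_stars, n_stars)
    e_sim[k] = measure_ellipticity(observed[:, idx])

# 68th percentile of the simulated ellipticity scatter quoted as the uncertainty.
e_err = np.percentile(np.abs(e_sim - e_obs), 68)
print(f"ellipticity = {e_obs:.3f} +/- {e_err:.3f}")
```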
We repeat the analysis by considering the 1G and 2G canonical subpopulations on the RGB (the separation between these two populations among SGB stars is not clear enough for a reliable quantitative analysis). The low number of stars leads to poor statistics, hence large uncertainties in our ellipticity measurements. With that in mind, we still point out that no significant difference emerged between the average ellipticities of the two spatial distributions, for which we obtain 0.128\(\pm\)0.069 and 0.086\(\pm\)0.062 for the 1G and 2G stellar populations, respectively.
### Stars outside the tidal radius
An intriguing feature of NGC 1851 is the presence of a halo of extratidal stars that surrounds the cluster up to \(\sim\)500 pc, as discovered by Olszewski et al. (2009) and further explored over larger scales by Carballo-Bello et al. (2018); Kuzma et al. (2018); Ibata et al. (2021).
The recent Gaia DR3 data allow us to reach the outermost areas of the cluster, exploring its stellar halo. We apply the following criteria to the Gaia catalog to identify halo stars: (i) we consider only sources with \(G<\)20 to exclude the ones with low signal-to-noise ratio; (ii) we use the astrometric_gof, the renormalized unit weight error (RUWE), and the parallax diagnostics provided in the Gaia catalog to select only the sources with high-quality astrometry; (iii) we analyze the Gaia proper motions and select the stars within a radius of 0.9 mas yr\({}^{-1}\) centered on the average proper motion; (iv) we select by eye in the \(G\) vs. \(G_{\rm BP}\)-\(G_{\rm RP}\) CMD the stars that lie on the MS-SGB-RGB-HB evolutionary sequence and are therefore reasonable cluster members. We show, in panel a1) of Figure 12, the \(G\) vs. \(G_{\rm BP}\)-\(G_{\rm RP}\) CMD of NGC 1851. We highlight in grey all the stars that fulfill (i), (ii), and (iii), while stars in the halo (i.e., outside the tidal radius) that pass also the CMD selection criterion are represented with black points. Azure crosses indicate the stars outside the tidal radius that, according to our fourth criterion, are excluded from belonging to the cluster.
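Criteria (i)-(iii) could be applied to a Gaia DR3 table along the following lines. The column names are the standard Gaia archive ones, but the helper `select_members`, its RUWE and parallax thresholds, and the use of only two of the three astrometric diagnostics are illustrative assumptions (the text does not quote the adopted thresholds), and the CMD-based criterion (iv), performed by eye, is not reproduced.

```python
import numpy as np
import pandas as pd

def select_members(gaia: pd.DataFrame, pm_centre, pm_radius=0.9, plx_max=0.5):
    """Schematic version of criteria (i)-(iii) applied to a Gaia DR3 table.

    `gaia` is assumed to carry the standard archive columns; `pm_centre` is the
    cluster mean proper motion (pmra, pmdec) in mas/yr taken from the literature.
    The RUWE and parallax thresholds below are illustrative assumptions.
    """
    bright = gaia["phot_g_mean_mag"] < 20.0                            # (i) G < 20
    clean = (gaia["ruwe"] < 1.4) & (gaia["parallax"].abs() < plx_max)  # (ii) astrometric quality
    dpm = np.hypot(gaia["pmra"] - pm_centre[0], gaia["pmdec"] - pm_centre[1])
    return gaia[bright & clean & (dpm < pm_radius)]                    # (iii) proper-motion cut
```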
We consider a circular region of the sky extending up to 80 arcmin (\(\sim\)260 pc) from the cluster center, and after applying these strict selection criteria we still detect NGC 1851 stars all over this FoV, thus confirming the presence of stars outside the tidal radius. Furthermore, for some of the black stars radial velocity measurements are also available in Gaia DR3, which can serve as a further diagnostic for cluster membership. Based on spectroscopic results in the literature (e.g., Sollima et al., 2012; Marino et al., 2014), we consider as cluster members the stars with a radial velocity between 300 and 350 km s\({}^{-1}\). The stars that, in addition to the aforementioned selection criteria, also fulfill the radial-velocity criterion are encircled in aqua, and can be observed up to a radius of \(\sim\)38'.
To estimate the contamination of our halo-star sample by field stars with proper motions and CMD positions similar to those of NGC 1851 stars, we consider an annulus with the same area as the halo field but
Figure 9: _Top panels:_ radial trend of the canonical and anomalous RGB star fractions, colored in blue and red respectively, in the _HST_ and ground-based combined catalog (left panel) and the ground-based catalog only (right panel). Filled and open dots represent measurements obtained from the _HST_ and ground-based catalog, respectively. _Bottom panels:_ same but for SGB stars. The three vertical dot-dashed lines highlight the core, half-mass, and tidal radius values.
located further from the cluster center, between 140 and 160 arcmin, where we expect a negligible presence of cluster stars. The CMD of this field is represented in panel a2). From the tidal radius up to 80 arcmin, we find 140 halo stars and 1,256 field stars. In the outer annulus, the number of field stars is comparable (1,401), while only 35 stars share the same colors, magnitudes, and proper motions as cluster members. These results prove that our sample of halo stars is not consistent with being made up of field stars only (as in the outer annulus). Specifically, by assuming a uniform distribution in the sky, we expect the contamination from field stars to be about 25% (35/140).
We use ground-based photometry to identify extratidal canonical and anomalous stars and explore their distribution across the FoV. The FoV with available \(U\) and \(I\) photometry covers only a part of the NGC 1851 halo. Specifically, the catalog includes stars within \(\sim\)20 arcmin to the east and south, and within 13 and 18 arcmin to the north and west, respectively, that pass the selection criteria described in Section 2.
A similar investigation was performed by Marino et al. (2014), who inferred s-process element abundances through spectroscopy in the southern area of the halo, finding that 15 of these stars have radial velocities and metallicities consistent with the cluster and that they all share the s-process element abundances of the canonical stars. We show in panel b) of Figure 12 the \(I\) vs. \(U-I\) CMD, where black points and azure crosses represent the extratidal stars that we included in and excluded from our sample of cluster stars, respectively, based on their CMD position. Magenta starred symbols highlight the halo stars of the Marino and collaborators sample that are present in our catalog and pass the photometric quality criteria. Canonical and anomalous halo SGB and RGB stars are indicated with blue and red open triangles, respectively. While the former are identified by their position on the CMD, the latter were classified through the \(\Delta_{\rm C\,U,B,I}\) vs. \(\Delta_{\rm U,I}\) ChM (in panel c)), derived as in Section 3 and extended down to \(I=18\) mag. Here, we identify one probable canonical RGB star, consistent with belonging to the 1G population, and two anomalous AII stars. We consider only these three stars as reliable canonical and anomalous RGB stars. We calculate the coordinates relative to the cluster center, \(\Delta\)DEC and \(\Delta\)RA, and show in panel d) the \(\Delta\)DEC vs. \(\Delta\)RA diagram of the halo population that we identified from Gaia data. Finally, panel e) illustrates a zoom of the region covered by well-measured stars from the Stetson et al. (2019) catalog. In agreement with the result from Marino et al. (2014), we do not detect anomalous stars in the southern part of the halo below \(\Delta\mathrm{DEC}\sim-10\) arcmin (with two of these stars being tagged both by spectroscopy and photometry), while along other directions we identified five probable anomalous stars.
This result, even though based on a limited number of stars, suggests an uneven distribution of anomalous stars in the halo, with a lack of them in the south and southeast directions. Noticeably, this is qualitatively consistent with the findings shown in Figure 11, where the anomalous population is less extended along these directions.
## 8 Summary and Conclusions
We used multi-wavelength photometry from _HST_, Gaia DR3, and ground-based facilities to disentangle and characterize the stellar populations of the Type II GC NGC 1851. The multiple populations are analyzed over a wide area that ranges from the cluster center to the outskirts. Our main results are summarized in the following:
* Both _HST_ and ground-based photometry reveal that the distribution of stars along the ChM of both canonical and anomalous RGB stars is bimodal. The canonical population comprises the s-poor stars while the anomalous population hosts the s-rich stars discovered by Yong et al. (2008). The canonical and anomalous stars can be followed continuously along the RGB, SGB, and upper MS, where they define two distinct sequences that merge around one F814W magnitude below the turnoff.
Figure 10: _Left panels:_ radial distribution of the fraction of the 2G (top), AI (middle), and AII (bottom) populations with respect to the number of 1G stars. _Middle panels:_ same as the left panels but for the 2G and AII populations with respect to AI stars. _Right panel:_ fraction of AII stars with respect to the 2G population.
* Based on the ChMs, we identified the stellar populations within the canonical and anomalous populations. The canonical population hosts the distinct groups of 1G and 2G stars, typically observed in Type I GCs. These two populations have different abundances of helium, carbon, nitrogen, and oxygen. Neither 1G nor 2G stars are chemically homogeneous. Similarly, the anomalous RGB hosts two main populations, AI and AII, with different light-element abundances.
* To constrain the overall CNO abundance of canonical and anomalous stars we compared their observed colors with the colors derived from synthetic spectra with different contents of carbon, nitrogen, and oxygen. We found that canonical and anomalous stars share similar average abundances of carbon, while the anomalous stars are enhanced in [N/Fe] by \(\sim\)1.0 dex and slightly depleted in oxygen by \(\sim\)0.1 dex. Hence, the anomalous stars have a CNO content enhanced by 0.35\(\pm\)0.10 dex with respect to the canonical population. Our results, which are based on multi-band photometry, confirm the findings by Yong et al. (2015), who obtained similar conclusions by using high-resolution spectra.
* We investigated the radial distribution of the distinct population fractions up to the tidal radius. We find that the canonical and anomalous stars share the same radial distribution. We instead find that 2G stars are more centrally concentrated than the 1G, and AII stars are more centrally concentrated than AI stars. We did not detect significant differences between the 1G and AI populations, or between the 2G and AII populations.
* We then exploited the radial trend of the different population fractions to measure the global fraction of the different populations reported in Table 1. The global fractions of the four disentangled populations with respect to the total number of canonical and anomalous stars are \(f^{\rm G}_{\rm 1G}=0.229\pm 0.030\), \(f^{\rm G}_{\rm 2G}=0.474\pm 0.030\), \(f^{\rm G}_{\rm AI}=0.027\pm 0.030\), and \(f^{\rm G}_{\rm AII}=0.270\pm 0.030\).
* Canonical and anomalous stars differ in their 2D spatial distributions. The isodensity contours of canonical stars have nearly circular shapes (ellipticity of \(\sim\)0.1) in the entire FoV. The contours of anomalous stars deviate from a circular-like shape outside the innermost three arcmin, increasing in ellipticity up to \(\sim\)0.3, with their best-fit ellipses oriented along the north-east/south-west direction. Moreover, there is a hint of an overdensity of anomalous stars in the northeast direction, where their fraction increases by \(\sim\)10% with respect to the average, which, as shown in Section 6, is significant at a 2\(\sigma\) level.
By combining the analysis of the radial and spatial 2D distribution of canonical and anomalous stars, we found that their overall fractions
Figure 11: _Panels a1) and b1):_ spatial distribution of canonical and anomalous stars in the ground-based FoV, represented in blue and red color scales, respectively. Dark grey lines are the isodensity contours. _Panels a2) and b2):_ best-fit ellipses of the canonical (in blue) and anomalous (in red) isodensity lines. Grey straight lines represent the major-axis direction of each ellipse, while aqua dots display the averaged ellipse centers. _Panels a3), b3), a4), and b4):_ same as panels a1), b1), a2), and b2) but for stars in the _HST_ FoV (within the black boxes in panels a1) and a2)). _Panel c):_ ellipticity of canonical and anomalous stars with respect to the major axis of their isodensity-contour best-fit ellipses. The two dash-dotted lines represent the core and the half-mass radius.
do not vary within the tidal radius, in agreement with the findings by Milone et al. (2009), but the uneven distribution of the anomalous population introduces local gradients. In particular, their drop in the south/southeast outer field of the cluster is qualitatively consistent with the results by Zoccali et al. (2009), who detected a gradient by studying a similar field.
* We identify NGC 1851 stars outside the tidal radius, thus confirming previous results (Olszewski et al., 2009; Sollima et al., 2012; Marino et al., 2014; Kuzma et al., 2018). By using Gaia DR3 data, we detect a stellar halo up to 80 arcmin from the cluster center. We identified 14 canonical and five anomalous probable cluster members outside the tidal radius (radial distances between \(\sim\)12 and \(\sim\)20 arcmin) thanks to the available ground-based photometry. The tagging of canonical and anomalous stars outside the tidal radius corroborates the idea that anomalous stars nearly disappear along the south/southeast direction (Marino et al., 2014), but are still visible in other directions. Since the available observations allow us to separate these two populations only up to about 20 arcmin, and the halo extends up to (at least) 80 arcmin, it is clear that extending this analysis to larger radii is mandatory to shed light on this phenomenon.
We conclude by providing some considerations, although strictly qualitative, about the formation of anomalous stars in Type II GCs. Two main ideas are particularly appealing based on our observational constraints.
The first one predicts that Type II clusters result from a merging between two (or more) initially separated Type I GCs (Carretta et al., 2010; Bekki & Tsujimoto, 2016). According to this idea, they form
Figure 12: _Panels a1) and a2):_ Gaia \(G\) vs. \(G_{\rm BP}\)–\(G_{\rm RP}\) CMD of stars within 80 arcmin from the cluster center and from the FoV dominated by field stars, respectively. Stars that pass the photometric diagnostics and the proper-motion selection are marked with gray points. Black points and azure crosses represent the extratidal stars that are consistent with belonging to NGC 1851 and to the field according to the CMD selection, respectively. Stars with radial velocity measurements consistent with the cluster motion are encircled in aqua (see text for details). _Panel b):_ \(I\) vs. \(U\)-\(I\) CMD from ground-based photometry. Grey and black points and azure crosses have the same meaning as in panels a1) and a2). Halo canonical and anomalous stars are displayed with blue and red triangles, respectively, while magenta starred symbols display the stars in common with the work by Marino et al. (2014). _Panel c):_ \(\Delta_{\rm C\,U,B,I}\) vs. \(\Delta_{U,I}\) ChM for RGB stars within the pink box in panel b). _Panel d):_ \(\Delta\)DEC vs. \(\Delta\)RA positions of stars in the Gaia FoV, color-coded as in panels a1) and a2). _Panel e):_ zoom of the Gaia \(\Delta\)DEC vs. \(\Delta\)RA diagram within the pink rectangle, representing the position of stars in the ground-based FoV. The brown circle in panels d) and e) indicates the tidal radius.
within the same dwarf galaxy, develop their own 1G-2G patterns, and then spiral into the nuclear region of the host galaxy, merging into one. Finally, the galaxy is accreted by the Milky Way, which strips its stars leaving only the naked nucleus, i.e., the Type II GC. Here, the chemical differences between the canonical and anomalous stars would arise as a result of the chemical evolution of the dwarf galaxy. Indeed, the cluster in which anomalous stars were born would have formed later, thus when the star-forming gas in the host galaxy had a different chemical composition. Bekki & Tsujimoto (2016) performed simulations to show that the iron and s-process differences observed in M22 could be achieved within a few hundred Myr, hence before the dwarf disruption. This idea naturally accounts for the presence of two anomalous populations, AI and AII, which would be the first and second generation of an initially separate GC. Moreover, the 1G-2G and AI-AII patterns also share similar relative chemical differences and radial distributions, which would indicate that they are produced by the same mechanisms. On the other hand, the large C+N+O difference observed between canonical and anomalous stars in NGC 1851 is not straightforward in this scenario, which would require excessively long timescales to produce it (Bekki & Tsujimoto 2016, see their Section 4.2). Finally, we notice that AI stars would have a rather extreme chemical composition for being first-generation stars, having [Na/Fe] and [O/Fe] intermediate between 1G and 2G stars, and it is not clear whether the chemical evolution of a host dwarf galaxy could account, on the required timescales, for such chemical differences.
The second scenario, proposed by D'Antona et al. (2016) and D'Ercole et al. (2016), is an extension of the AGB scenario (e.g., D'Ercole et al. 2008). Here, Type II GCs experienced a prolonged star formation with respect to Type I GCs, allowing subsequent stellar generations to form. After the formation of 2G stars, the explosions of delayed SN II in binaries destroy the cooling flow and halt star formation, but because their frequency is not as high as in the single SN II epoch, they are not strong enough to push the intra-cluster medium (formed by pristine material and AGB ejecta) out of the cluster proximity. Type II GCs, differently from Type I, can re-accrete this gas several Myr later, when the delayed SN II events become rare. This would be possible if these clusters were particularly massive at this epoch or if they are the nuclei of disrupted dwarf galaxies. The re-accreted material would be contaminated by SN II ejecta, thus enriched in iron. In this time span, \(\sim\)3.5-4 \(M_{\odot}\) AGB stars pollute the intra-cluster medium with their winds, injecting material strongly affected by the third dredge-up and hence enriching the surroundings in total CNO and s-process elements. In this mixed medium, anomalous stars would form and would be enriched in total CNO, s-process elements, and/or [Fe/H] depending on the influence of the different polluters within a given GC (like the number of delayed SN II events). If the mixing between different ejecta is inhomogeneous, it is possible to develop a Na-O anticorrelation among anomalous stars (D'Ercole et al. 2016, see their Section 4.2), thus producing the observed AI and AII populations.
Both scenarios agree on the possibility that Type II GCs may be remnants of a larger structure, like a dwarf galaxy. The presence of a halo of stars more extended than the tidal radius of NGC 1851 could be a sign that this cluster was originally a larger structure. An extensive study of the halo, aimed at identifying the populations that compose it, will provide additional constraints on the origin of NGC 1851 and, possibly, of other Type II GCs.
Our results provide new constraints and challenges for the Type II GC formation scenarios. To unveil the origin of these structures, further work investigating anomalous stars in a wider sample of clusters, combining photometry, spectroscopy, and theoretical modeling, is mandatory.
## Acknowledgements
We thank the anonymous referee for the valuable comments. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research innovation programme (Grant Agreement ERC-StG 2016, No 716082 'GALFOR', PI: Milone, [http://progetti.dfa.unipd.it/GALFOR](http://progetti.dfa.unipd.it/GALFOR)) and from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 101034319 and from the European Union - NextGenerationEU, beneficiary: Ziliotto. SJ acknowledges support from the NRF of Korea (2022R1A2C3002992, 2022R1A6A1A03053472). APM, MT, and ED acknowledge support from MIUR through the FARE project R164RM933XW SEMPLICE (PI: Milone). APM and ED have been supported by MIUR under PRIN program 2017Z2HSMF (PI: Bedin). FD and PV acknowledge the support received from the PRIN INAF 2019 grant Obfu 1.05.01.85.14 ("Building up the halo: chemo-dynamical tagging in the age of large surveys", PI. S. Lucatello) and the INAF-GTO-GRANTS 2022 ("Understanding the formation of globular clusters with their multiple stellar generations", PI. A. F. Marino). ZO acknowledges this research was supported by an Australian Government Research Training Program (RTP) Scholarship. AK and ZO were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
## Data Availability
The data underlying this article will be shared upon reasonable request to the corresponding author.
|
2309.10046 | Separation properties for positive-definite functions on locally compact quantum groups and for associated von Neumann algebras | Using Godement mean on the Fourier-Stieltjes algebra of a locally compact quantum group we obtain strong separation results for quantum positive-definite functions associated to a subclass of representations, strengthening for example the known relationship between amenability of a discrete quantum group and existence of a net of finitely supported quantum positive-definite functions converging pointwise to $I$. We apply these results to show that von Neumann algebras of unimodular discrete quantum groups enjoy a strong form of non-$w^*$-CPAP, which we call the matrix $\epsilon$-separation property. | Jacek Krajczok, Adam Skalski | 2023-09-18T18:03:45Z | http://arxiv.org/abs/2309.10046v1
###### Abstract.
Using Godement mean on the Fourier-Stieltjes algebra of a locally compact quantum group we obtain strong separation results for quantum positive-definite functions associated to a subclass of representations, strengthening for example the known relationship between amenability of a discrete quantum group and existence of a net of finitely supported quantum positive-definite functions converging pointwise to \(\mathds{1}\). We apply these results to show that von Neumann algebras of unimodular discrete quantum groups enjoy a strong form of non-\(w^{*}\)-CPAP, which we call the matrix \(\varepsilon\)-separation property.
Key words and phrases: Locally compact quantum group; positive-definite function; approximation property; von Neumann algebra. 2020 Mathematics Subject Classification: Primary 46L65; Secondary 43A35, 46L89.
## 1. Introduction
The connection between properties of positive-definite functions on a locally compact group \(G\), geometric properties of \(G\), and approximation properties of operator algebras associated with \(G\) is well-known and forms one of the key aspects of analytic geometric group theory. The situation is most satisfactory for discrete groups, where for example injectivity (in other words, \(w^{*}\)-CPAP) of the group von Neumann algebra is equivalent to the existence of a net of finitely supported normalised positive-definite functions convergent pointwise to \(\mathds{1}\), and further to the amenability of the group in question (see for example [BO]). A similar correspondence holds for the Haagerup property, with the finitely supported positive-definite functions replaced by those which vanish at infinity ([CCJV], [Cho]).
Analogues of these statements remain true for locally compact quantum groups in the sense of [KV], with the most satisfactory equivalences available for (unimodular) discrete quantum groups (see [Bra] and references therein). Once again a central role is played by nets of 'positive-definite functions' on a locally compact quantum group \(\mathbb{G}\) which have good decay properties and in the limit approximate the constant function \(\mathds{1}\). The corresponding operator algebraic picture concerns studying normal unital completely positive (UCP) maps on a von Neumann algebra which are'small' in a certain sense, and yet in the limit approximate the identity operator.
The main question studied in this paper is the possibility of weakening the limit property above. Instead of approximating the identity operator (or the constant function \(\mathds{1}\)) we want to ask what happens if we can only achieve being 'uniformly separated from \(0\) in the limit'. Motivated by the classical results of [Der] (see also [Boz]), which used the Godement mean of [God] to characterise properties of some subclasses of positive-definite functions, we obtain the first of the main results of the paper, which shows that the existence of a net of ('compactly supported') quantum positive-definite functions on \(\mathbb{G}\) which in the limit is '\(\varepsilon\)-away from \(0\)'
already implies the amenability of \(\mathbb{G}\). We also obtain a corresponding result for the Haagerup property of [DFSW]. For simplicity we formulate these below only for discrete quantum groups.
**Theorem A**.: _Let \(\mathbb{G}\) be a discrete quantum group. If \(\mathbb{G}\) is not amenable (respectively, not Haagerup), then there is no \(\varepsilon>0\) and no net \((f_{i})_{i\in I}\) of normalised finitely supported (respectively, vanishing at infinity) positive definite functions on \(\mathbb{G}\) such that_
\[\forall_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})}\exists_{i_{0}\in I }\forall_{i\geq i_{0}}\quad p_{\alpha}f_{i}\geq\varepsilon p_{\alpha}.\]
In the next step we turn to the analogous operator algebraic question. Recall that a von Neumann algebra \(\mathrm{M}\) is said to have the _\(w^{*}\)-CPAP_ (equivalently, by [Con], to be injective) if there exists a net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) of normal, finite rank, UCP maps on \(\mathrm{M}\) which approximate the identity map in the pointwise-ultraweak topology.
We say that a von Neumann algebra \(\mathrm{M}\) has the _matrix \(\varepsilon\)-separation property_ (for a fixed \(\varepsilon\in(0,1)\)) if for every net \((\Phi_{i})_{i\in I}\) of normal, finite rank, UCP maps on \(\mathrm{M}\) there is a Hilbert space \(\mathsf{H}\) and \(x\in\mathrm{M}\,\bar{\otimes}\,\mathrm{B}(\mathsf{H}),\omega\in(\mathrm{M}\,\bar{\otimes}\,\mathrm{B}(\mathsf{H}))_{*}\) with \(\|x\|=\|\omega\|=1\) such that \(\limsup_{i\in I}|\langle x-(\Phi_{i}\otimes\mathrm{id})(x),\omega\rangle|>\varepsilon\). It is thus easy to see that if \(\mathrm{M}\) has the matrix \(\varepsilon\)-separation property it cannot be injective. The converse statement - for a fixed \(\varepsilon\) - appears however non-obvious (an easy 'diagonal' argument shows that every non-injective von Neumann algebra must have the matrix \(\varepsilon\)-separation property for _some_ \(\varepsilon\in(0,1)\)).
**Theorem B**.: _Let \(\mathbb{G}\) be a compact quantum group of Kac type, \(\varepsilon\in(0,1)\). Then \(\mathrm{L}^{\infty}(\mathbb{G})\) has the matrix \(\varepsilon\)-separation property if and only if \(\mathrm{L}^{\infty}(\mathbb{G})\) is non-injective._
The above theorem also has a Haagerup property counterpart. It can be strengthened for those compact quantum groups which admit a uniform bound on the dimension of irreducible representations; in particular we obtain the following result (dropping the word 'matrix' from the \(\varepsilon\)-separation property amounts to setting \(\mathsf{H}=\mathbb{C}\) in the definition above).
**Corollary C**.: _Let \(\Gamma\) be a discrete group, \(\varepsilon\in(0,1)\). Then \(\mathrm{vN}(\Gamma)\) has the \(\varepsilon\)-separation property if and only if \(\mathrm{vN}(\Gamma)\) is non-injective._
This naturally leads to the question of whether the same equivalence persists for general von Neumann algebras; we formulate it at the end of the paper, providing also certain equivalent reformulations.
The specific plan of the paper is as follows: in the second section we first recall certain preliminary facts concerning locally compact quantum groups and establish a simple technical lemma, and then we pass to studying generalized quantum Fourier-Stieltjes algebras in the spirit of [Eym] or [KL]. Also here the Godement mean makes an appearance and we show in Proposition 2.8 how its behaviour relates to properties of the locally compact quantum group in question. Section 3 is devoted to studying separation properties for quantum positive-definite functions associated to specific classes of representations. We first establish a general approximation result for such functions in Proposition 3.1, and then use it to prove the main result of this Section, providing a sufficient condition for non-vanishing of the Godement mean on a specific generalized Fourier-Stieltjes algebra, Theorem 3.5. Together with the facts shown in Section 2 this implies several corollaries; in particular Theorem A above follows from Corollaries 3.6, 3.7 and 3.8. Finally in Section 4 we study separation properties for von Neumann
algebras of discrete quantum groups. We begin by looking at separation conditions expressed in terms of quantum Herz-Schur multipliers (Proposition 4.1), use it to motivate the definition of (matrix) \(\varepsilon\)-separation property (Definition 4.2) and discuss the easy consequences. We then consider specifically unimodular discrete quantum groups, first assuming that we have a bound on the size of irreducibles in Theorem 4.6 and then dropping this assumption in Theorems 4.7 and 4.8. In particular here we prove Theorem B (and Corollary C follows from Theorem 4.6).
## 2. Preliminaries and quantum Fourier-Stieltjes algebras
We will work in the setting of locally compact quantum groups, introduced by Kustermans and Vaes [KV] (see also [VD]). By definition, any locally compact quantum group \(\mathbb{G}\) comes together with a von Neumann algebra \(\mathrm{L}^{\infty}(\mathbb{G})\), _comultiplication_\(\Delta\colon\mathrm{L}^{\infty}(\mathbb{G})\to\mathrm{L}^{\infty}(\mathbb{G}) \,\bar{\otimes}\mathrm{L}^{\infty}(\mathbb{G})\) which is a normal, unital \(*\)-homomorphism and two n.s.f. weights \(\varphi,\psi\) which are called _Haar integrals_ and satisfy left (resp. right) invariance conditions. The predual of \(\mathrm{L}^{\infty}(\mathbb{G})\) is denoted by \(\mathrm{L}^{1}(\mathbb{G})\) and the GNS Hilbert space of \(\varphi\) is \(\mathrm{L}^{2}(\mathbb{G})\) - it can be also identified with the GNS Hilbert space of \(\psi\). The corresponding GNS map is denoted \(\Lambda_{\varphi}\colon\mathfrak{N}_{\varphi}\to\mathrm{L}^{2}(\mathbb{G})\). With any locally compact quantum group one can associate its dual \(\widehat{\mathbb{G}}\) and by construction \(\mathrm{L}^{2}(\widehat{\mathbb{G}})\) is equal to \(\mathrm{L}^{2}(\mathbb{G})\). The assignment \(\mathbb{G}\mapsto\widehat{\mathbb{G}}\) extends the classical Pontryagin duality of locally compact abelian groups and is itself a duality in the sense that the dual of \(\widehat{\mathbb{G}}\) is canonically isomorphic with \(\mathbb{G}\). We will follow the convention which favours left objects over the right ones.
An important result in the theory states the existence of the _Kac-Takesaki operator_ \(\mathrm{W}\in\mathrm{L}^{\infty}(\mathbb{G})\,\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\). It is a unitary operator which implements comultiplication via \(\Delta(x)=\mathrm{W}^{*}(\mathds{1}\otimes x)\mathrm{W}\) for \(x\in\mathrm{L}^{\infty}(\mathbb{G})\). One can construct a weak\({}^{*}\)-dense \(\mathrm{C}^{*}\)-subalgebra of \(\mathrm{L}^{\infty}(\mathbb{G})\) via \(\mathrm{C}_{0}(\mathbb{G})=\overline{\{(\mathrm{id}\otimes\omega)\mathrm{W}\,|\,\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\}}\). Then \(\mathrm{W}\in\mathrm{M}(\mathrm{C}_{0}(\mathbb{G})\otimes\mathrm{C}_{0}(\widehat{\mathbb{G}}))\) and comultiplication restricts to a non-degenerate \(*\)-homomorphism \(\mathrm{C}_{0}(\mathbb{G})\to\mathrm{M}(\mathrm{C}_{0}(\mathbb{G})\otimes\mathrm{C}_{0}(\mathbb{G}))\). We write \(\mathrm{C}_{b}(\mathbb{G})\) for \(\mathrm{M}(\mathrm{C}_{0}(\mathbb{G}))\). The \(\mathrm{C}^{*}\)-algebra introduced above has a universal counterpart, \(\mathrm{C}_{0}^{u}(\mathbb{G})\) (see [Kus]). It is equipped with its own comultiplication and the _reducing map_ \(\Lambda_{\mathbb{G}}\colon\mathrm{C}_{0}^{u}(\mathbb{G})\to\mathrm{C}_{0}(\mathbb{G})\), a surjective \(*\)-homomorphism commuting with respective comultiplications. The Kac-Takesaki operator admits several lifts, in particular the right-universal version \(\mathbb{W}\in\mathrm{M}(\mathrm{C}_{0}(\mathbb{G})\otimes\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}}))\) with similar properties to \(\mathrm{W}\). Both operators are related via the formula \((\mathrm{id}\otimes\Lambda_{\widehat{\mathbb{G}}})\mathbb{W}=\mathrm{W}\). One says that \(\mathbb{G}\) is _compact_ if \(\mathrm{C}_{0}(\mathbb{G})\) is unital (equivalently, the Haar integrals are states - and thus coincide); we then simply write \(\mathrm{C}(\mathbb{G})\) instead of \(\mathrm{C}_{0}(\mathbb{G})\). Further \(\mathbb{G}\) is _discrete_ if \(\widehat{\mathbb{G}}\) is compact. In this case \(\widehat{\mathbb{G}}\) is said to be _Kac_ if its Haar integral is tracial; equivalently, \(\mathbb{G}\) is _unimodular_, i.e. its left and right Haar integrals coincide.
A _(unitary) representation_ of \(\mathbb{G}\) on a Hilbert space \(\mathsf{H}\) is a unitary element \(U\in\mathrm{M}(\mathrm{C}_{0}(\mathbb{G})\otimes\mathcal{K}(\mathsf{H}))\) which satisfies \((\Delta\otimes\mathrm{id})(U)=U_{13}U_{23}\), where we use the standard leg-numbering notation. _Coefficients_ of \(U\) are then elements of the form \((\mathrm{id}\otimes\omega_{\xi,\eta})(U^{*})\in\mathrm{C}_{b}(\mathbb{G})\,( \xi,\eta\in\mathsf{H})\) - we adopt this definition because of our convention concerning Fourier algebra. There is a one-to-one correspondence between representations \(U\) and non-degenerate \(*\)-representations \(\phi_{U}\colon\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})\to\mathrm{B}(\mathsf{H})\), given by the formula \(U=(\mathrm{id}\otimes\phi_{U})(\mathbb{W})\). We shall write \(\mathrm{Rep}(\mathbb{G})\) for the family of unitary equivalence classes of unitary representations of a locally compact quantum group \(\mathbb{G}\). We will follow the conventional abuse of notation and identify representation with its unitary equivalence class. Given two representations \(U,V\) we say that \(U\) is weakly contained in \(V\) (written \(U\preceq V\)) if \(\phi_{U}\) is weakly contained in \(\phi_{V}\) (written as \(\phi_{U}\preceq\phi_{V}\)), i.e.
when \(\ker(\phi_{V})\subset\ker(\phi_{U})\) (see also [Fell, Theorem 1.2] for other equivalent conditions). We will always assume that \(\operatorname{Rep}(\mathbb{G})\) is equipped with the Fell topology. Note that if \(\mathbb{G}\) is discrete we have \(\operatorname{c}_{0}(\mathbb{G})=\bigoplus_{\alpha\in\operatorname{Irr}( \widehat{\mathbb{G}})}\operatorname{M}_{\operatorname{dim}(\alpha)}\), where \(\operatorname{Irr}(\widehat{\mathbb{G}})\) denotes the set of equivalence classes of _irreducible_ unitary representations of \(\widehat{\mathbb{G}}\). On the other hand if \(\mathbb{G}\) is compact, the coefficients of all irreducible unitary representations of \(\mathbb{G}\) span a canonical dense Hopf *-subalgebra of \(\operatorname{C}(\mathbb{G})\), denoted \(\operatorname{Pol}(\mathbb{G})\).
We will also work with another subclass of locally compact quantum groups, called _algebraic quantum groups_. These are defined via a multiplier Hopf\({}^{*}\)-algebra \((\mathfrak{C}_{c}^{\infty}(\mathbb{G}),\Delta)\) and a left Haar integral which satisfy certain conditions, see [KVD]. Every algebraic quantum group gives rise to a locally compact quantum group in such a way that \(\mathfrak{C}_{c}^{\infty}(\mathbb{G})\) is dense in \(\operatorname{C}_{0}(\mathbb{G})\) and the respective comultiplications agree. In particular, compact and discrete quantum groups are special cases of algebraic quantum groups - the corresponding multiplier Hopf\({}^{*}\)-algebras are given by respectively \(\operatorname{Pol}(\mathbb{G})\) and \(\operatorname{c}_{c}(\mathbb{G})=alg-\bigoplus_{\alpha\in\operatorname{Irr}( \widehat{\mathbb{G}})}\operatorname{M}_{\operatorname{dim}(\alpha)}\).
We end the preliminary part with a general technical fact, which is easy and likely well-known. As we could not find an exact reference, we provide a proof.
**Proposition 2.1**.: _Suppose \(X\) is a dual Banach space and that \(Y\) is a closed subspace of \(X\) whose unit ball is weak\({}^{*}\)-dense in the unit ball of \(X\). Then we can identify \(X_{*}\) isometrically with a weak\({}^{*}\)-dense subspace of \(Y^{*}\); moreover the unit ball of \(X_{*}\) is weak \({}^{*}\)-dense in the unit ball of \(Y^{*}\). If \(Y\) is a \(C^{*}\)-algebra which is weak\({}^{*}\)-dense in a von Neumann algebra \(X\), then the positive part of the unit ball of \(X_{*}\) is weak \({}^{*}\)-dense in the positive part of the unit ball of \(Y^{*}\)._
Proof.: Indeed, given \(\phi\in X_{*}\) we can always restrict it to \(Y\); this procedure is injective, as \(Y\) is weak\({}^{*}\)-dense, and isometric due to the assumption of the weak\({}^{*}\)-density of the unit ball of \(Y\) in the unit ball of \(X\). On the other hand, given a contractive \(\omega\in Y^{*}\) we can first extend it by Hahn-Banach to a contractive \(\tilde{\omega}\in X^{*}\) and then approximate \(\tilde{\omega}\) in weak\({}^{*}\)-topology by contractive functionals in \(X_{*}\) using Goldstine's theorem; this does the job.
For the last part note first that due to Kaplansky Theorem the density of the balls holds automatically. Moreover we can work with states and then the only non-trivial element is the fact that we can approximate states on \(X\) by states in \(X_{*}\). Let then \(\phi\in X^{*}\) be a state and let \((\phi_{i})_{i\in\mathcal{I}}\) be a net of contractive non-zero elements in \(X_{*}\) convergent to \(\phi\) in weak\({}^{*}\)-topology. Note that we must have \(\lim_{i\in\mathcal{I}}\|\phi_{i}\|=1=\|\phi\|\) (as \(1\geq\|\phi_{i}\|\geq|\phi_{i}(\mathds{1})|\xrightarrow[i\in\mathcal{I}]{}\phi (\mathds{1})=1\)). By [Tak, Proposition 4.11] the absolute value of \(\phi_{i}\) - which is a contractive non-zero positive element of \(X_{*}\) - converges in the weak\({}^{*}\)-topology of \(X^{*}\) to the absolute value of \(\phi\) (i.e. to \(\phi\)). Then \((|\phi_{i}|(\mathds{1})^{-1}|\phi_{i}|)_{i\in\mathcal{I}}\) is the desired net.
### Quantum Fourier-Stieltjes algebras
Let us start with the following definition, in the spirit of [Eym, Definition 2.2].
**Definition 2.2**.: Suppose that \(\emptyset\neq S\subset\operatorname{Rep}(\mathbb{G})\). Denote by \(B_{S}(\mathbb{G})\) the set of all coefficients of unitary representations weakly contained in \(S\) (equivalently coefficients of representations in \(\overline{S}\)):
\[B_{S}(\mathbb{G})=\{(\operatorname{id}\otimes\omega_{\xi,\eta})(U^{*})\,|\,U \preceq S,\xi,\eta\in\mathsf{H}_{U}\}\subset\operatorname{C}_{b}(\mathbb{G}).\]
On the other hand define \(J_{S}=\bigcap_{U\preceq S}\operatorname{Ker}(\phi_{U})\) and let \(\operatorname{C}_{0}^{S}(\widehat{\mathbb{G}})=\operatorname{C}_{0}^{u}( \widehat{\mathbb{G}})/J_{S}\).
Note that in particular _the Fourier-Stieltjes algebra of_ \(\mathbb{G}\), going back at least to \([\operatorname{Daw}_{1}]\), is \(B(\mathbb{G})=B_{\operatorname{Rep}(\mathbb{G})}(\mathbb{G})=\{(\operatorname{id}\otimes\omega_{\xi,\eta})(U^{*})\,|\,U\in\operatorname{Rep}(\mathbb{G}),\xi,\eta\in\mathsf{H}_{U}\}\). We shall denote by \(\lambda\) the left regular representation of \(\mathbb{G}\) and write \(B_{\lambda}(\mathbb{G})\) for \(B_{\{\mathrm{W}\}}(\mathbb{G})\). We will also need to consider
\(S_{mix}\), the collection of all _mixing_ representations of \(\mathbb{G}\), i.e. those all of whose coefficients belong to \(\mathrm{C}_{0}(\mathbb{G})\) (see [10, Definition 4.1]). We will write \(B_{0}(\mathbb{G})\) for \(B_{S_{mix}}(\mathbb{G})\).
It is also easy to see - directly from the definitions - that \(J_{S}=\bigcap_{U\in S}\mathrm{Ker}(\phi_{U})\) and that \(\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})\) can be alternatively described as a completion of \(\mathrm{L}_{1}^{\sharp}(\mathbb{G})\) (say viewed as a \({}^{*}\)-subalgebra of \(\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})\)) with respect to the norm \(\|f\|_{S}=\sup_{U\in S}\|\phi_{U}(f)\|\). If \(\mathbb{G}\) is discrete we can also describe \(\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})\) as the analogous completion of \(\mathrm{Pol}(\widehat{\mathbb{G}})\). Using the fact that a direct sum of representations weakly contained in \(S\) is also weakly contained in \(S\) we deduce that \(B_{S}(\mathbb{G})\) is a vector subspace of \(\mathrm{C}_{b}(\mathbb{G})\).
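For orientation, in the classical case where \(\mathbb{G}=G\) is a locally compact group one has \(\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})=\mathrm{C}^{*}(G)\), and for \(S=\{\mathrm{W}\}\) the above definitions read

\[J_{S}=\ker(\phi_{\mathrm{W}}),\qquad\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})=\mathrm{C}^{*}_{r}(G),\qquad B_{\lambda}(G)=\{\text{coefficients of representations weakly contained in }\lambda\},\]

so that \(B_{\lambda}(G)\) is the classical reduced Fourier-Stieltjes algebra of \(G\); this special case is recorded only as an illustration.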
Note that given \(U\preceq S\) and \(\xi,\eta\in\mathsf{H}_{U}\) the functional \(\omega_{\xi,\eta}\circ\phi_{U}\in\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})^{*}\) vanishes on \(J_{S}\) and hence defines a new functional on the quotient space \(\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})=\mathrm{C}_{0}^{u}(\widehat{\mathbb{ G}})/J_{S}\). We will often abuse the notation and denote it again by \(\omega_{\xi,\eta}\circ\phi_{U}\).
**Lemma 2.3**.: _Suppose that \(\emptyset\neq S\subset\mathrm{Rep}(\mathbb{G})\). The formula_
\[(\mathrm{id}\otimes\omega_{\xi,\eta})(U^{*})\mapsto\omega_{\xi,\eta}\circ \phi_{U},\ \ \ U\preceq S,\xi,\eta\in\mathsf{H}_{U}\]
_defines a linear bijection between \(B_{S}(\mathbb{G})\) and \(\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})^{*}\)._
Proof.: Suppose that \(U,V\preceq S\) and \(\xi,\eta\in\mathsf{H}_{U}\), \(\xi^{\prime},\eta^{\prime}\in\mathsf{H}_{V}\),
\[(\mathrm{id}\otimes\omega_{\xi,\eta})(U^{*})=(\mathrm{id}\otimes\omega_{\xi^ {\prime},\eta^{\prime}})(V^{*}).\]
Then we have - understanding the functionals \(\omega_{\xi,\eta}\circ\phi_{U}\) and \(\omega_{\xi^{\prime},\eta^{\prime}}\circ\phi_{V}\) as functionals on \(\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})\) - the following equality:
\[(\mathrm{id}\otimes\omega_{\xi,\eta}\circ\phi_{U})(V\!\!\!W^{*})=(\mathrm{id} \otimes\omega_{\xi^{\prime},\eta^{\prime}}\circ\phi_{V})(V\!\!\!W^{*}).\]
As left slices of \(V\!\!\!W^{*}\) are dense in \(\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})\) by [11, Proposition 4.2], we have \(\omega_{\xi,\eta}\circ\phi_{U}=\omega_{\xi^{\prime},\eta^{\prime}}\circ\phi_{V}\). We view the latter as bounded functionals on \(\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})\), but as \(U,V\preceq S\), they also descend to (equal) functionals in \(\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})^{*}\). This implies that the map introduced in the lemma is well-defined and injective.
As it is clearly linear, to verify surjectivity it suffices to consider states on \(\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})\). Every such state \(\omega\) is of the form \(\omega_{\xi,\xi}\circ\pi\), where \(\pi\colon\,\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})\to\mathrm{B}(\mathsf{H})\) is a representation, and \(\xi\in\mathsf{H}\). But then we can consider \(\pi\circ q_{S}\colon\,\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})\to\mathrm{B}( \mathsf{H})\) and note that \(\pi\circ q_{S}=\phi_{V}\) for \(V\in\mathrm{Rep}(\mathbb{G})\), \(V\preceq S\). Naturally we have \(\omega=\omega_{\xi,\xi}\circ\phi_{V}\).
The result of the last lemma allows us to introduce the norm on \(B_{S}(\mathbb{G})\) induced by the norm of \(\mathrm{C}_{0}^{S}(\widehat{\mathbb{G}})^{*}\), namely set
\[\|a\|_{B_{S}(\mathbb{G})}=\|\omega_{\xi,\eta}\circ\phi_{U}\|_{\mathrm{C}_{0}^{S }(\widehat{\mathbb{G}})^{*}}\ \ \ (a=(\mathrm{id}\otimes\omega_{\xi,\eta})(U^{*})\in B_{S}(\mathbb{G})).\]
It is also worth noting that in the case where \(S=\mathrm{Rep}(\mathbb{G})\) the inverse of the map above is given simply by the formula
\[\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})^{*}\ni\mu\mapsto(\mathrm{id}\otimes\mu)(\mathbb{W}^{*})\in B(\mathbb{G}). \tag{2.1}\]
Recall the definition of the Fourier algebra (for the early definitions in the quantum group context see for example [11] or [10], although these papers use a different convention; here we follow rather [11]):
\[\mathrm{A}(\mathbb{G})=\{(\mathrm{id}\otimes\omega_{\xi,\eta})(\mathrm{W}^{*}) \,|\,\xi,\eta\in\mathrm{L}^{2}(\mathbb{G})\}\subset\mathrm{C}_{0}(\mathbb{G}). \tag{2.2}\]
It is well-known that the formula appearing in Lemma 2.3 identifies \(\mathrm{A}(\mathbb{G})\) with \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\); this can be proved following the same lines as above, using the fact that \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) acts on \(\mathrm{L}^{2}(\mathbb{G})\) in a standard form. Again we view \(\mathrm{A}(\mathbb{G})\) as a normed space equipped with the norm of \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\).
**Lemma 2.4**.: _Let \(S,T\subset\mathrm{Rep}(\mathbb{G})\), \(\lambda\in S\subset T\). We have natural isometric inclusions \(\mathrm{A}(\mathbb{G})\subset B_{\lambda}(\mathbb{G})\subset B_{S}(\mathbb{G })\subset B_{T}(\mathbb{G})\subset B(\mathbb{G})\). Moreover for \(f\in B(\mathbb{G})\subset\mathrm{C}_{b}(\mathbb{G})\) we have \(\|f\|_{\mathrm{C}_{b}(\mathbb{G})}\leq\|f\|_{B(\mathbb{G})}\)._
Proof.: The first part is an easy consequence of Proposition 2.1 (applied to \(Y=\mathrm{C}_{0}(\widehat{\mathbb{G}})\) and \(X=\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\)), Lemma 2.3 and properties of dual spaces of quotient \(\mathrm{C}^{*}\)-algebras. The second follows from the formula (2.1).
An element \(x\) in \(B(\mathbb{G})\) is said to be _positive-definite_ if it is positive as a functional on \(\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})\). For an extended discussion of this notion and equivalent characterizations see [DS]; note in particular that it makes sense to talk of positive-definite functions in \(\mathrm{L}^{\infty}(\mathbb{G})\) - which then automatically turn out to belong to \(B(\mathbb{G})\), at least if \(\mathbb{G}\) is coamenable. Note however that the authors of [DS] call the elements as above rather 'completely positive-definite functions'.
We shall say that a (quantum) positive-definite function is _normalised_ if the associated functional is a state. Note that by [DFSW, Lemma 4.3] positive-definite functions in \(\mathrm{C}_{0}(\mathbb{G})\) automatically belong to \(B_{0}(\mathbb{G})\) (as the relevant GNS representations are mixing).
The next statement is well-known for discrete quantum groups.
**Proposition 2.5**.: _Let \(\mathbb{G}\) be an algebraic quantum group. Then_
1. \(\mathfrak{C}_{c}^{\infty}(\mathbb{G})\subset\mathrm{A}(\mathbb{G})\)_;_
2. \(\Lambda_{\varphi}(\mathfrak{C}_{c}^{\infty}(\mathbb{G}))=\Lambda_{\psi}( \mathfrak{C}_{c}^{\infty}(\mathbb{G}))=\Lambda_{\widehat{\varphi}}(\mathfrak{C }_{c}^{\infty}(\widehat{\mathbb{G}}))=\Lambda_{\widehat{\psi}}(\mathfrak{C}_ {c}^{\infty}(\widehat{\mathbb{G}}))\)_;_
3. \(\mathrm{A}(\mathbb{G})\) _coincides with the linear span of_ (\(\mathrm{A}(\mathbb{G})\)-) _norm limits of normalised positive definite functions in_ \(\mathfrak{C}_{c}^{\infty}(\mathbb{G})\)_._
Proof.: From [KVD, Lemma 2.3], we have that for every \(a,b\in\mathfrak{C}_{c}^{\infty}(\mathbb{G})\)
\[(\mathrm{id}\otimes\omega_{\Lambda_{\varphi}(b),\Lambda_{\varphi}(a)})( \mathrm{W})=(\mathrm{id}\otimes\varphi)(\Delta(b^{*})(\mathds{1}\otimes a)),\]
so that
\[(\mathrm{id}\otimes\omega_{\Lambda_{\varphi}(a),\Lambda_{\varphi}(b)})( \mathrm{W}^{*})=(\mathrm{id}\otimes\varphi)((\mathds{1}\otimes a^{*})\Delta( b)). \tag{2.3}\]
By the axioms of algebraic quantum groups we have
\[\mathfrak{C}_{c}^{\infty}(\mathbb{G})=\mathrm{span}\{(\mathrm{id}\otimes \varphi)((\mathds{1}\otimes a^{*})\Delta(b))\,|\,a,b\in\mathfrak{C}_{c}^{ \infty}(\mathbb{G})\}\]
hence the first claim follows from equation (2.3).
Recall that one can identify \(\mathfrak{C}_{c}^{\infty}(\widehat{\mathbb{G}})\) with the space of functionals \(\{a\varphi\,|\,a\in\mathfrak{C}_{c}^{\infty}(\mathbb{G})\}\) on \(\mathfrak{C}_{c}^{\infty}(\mathbb{G})\). Then \(\Lambda_{\widehat{\psi}}(a\varphi)=\Lambda_{\varphi}(a)\) ([KVD, Page 1077]), which readily implies \(\Lambda_{\widehat{\psi}}(\mathfrak{C}_{c}^{\infty}(\widehat{\mathbb{G}}))= \Lambda_{\varphi}(\mathfrak{C}_{c}^{\infty}(\mathbb{G}))\). Next, by [KVD, Definition 8.13] for any \(z\in\mathbb{C}\), the operator \(\delta^{z}\) is a multiplier of \(\mathfrak{C}_{c}^{\infty}(\mathbb{G})\) and \(\Lambda_{\psi}(a)=\Lambda_{\varphi}(a\delta^{1/2})\) by [KVD, Page 1134] (see also the proof of [KVD, Lemma 9.15]). This shows that \(\Lambda_{\varphi}(\mathfrak{C}_{c}^{\infty}(\mathbb{G}))=\Lambda_{\psi}( \mathfrak{C}_{c}^{\infty}(\mathbb{G}))\). Since the dual of an algebraic quantum group is also an algebraic quantum group, this finishes the claim.
It follows from the very construction that the image of \(\Lambda_{\varphi}|_{\mathfrak{C}_{c}^{\infty}(\mathbb{G})}\) is dense in \(\mathrm{L}^{2}(\mathbb{G})\). Thus we can approximate in norm any vector state \(\omega_{\xi}\) on \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) for \(\xi\in\mathrm{L}^{2}(\widehat{\mathbb{G}})\) by vector states associated with vectors in \(\Lambda_{\varphi}(\mathfrak{C}_{c}^{\infty}(\mathbb{G}))\). Together with the previous paragraph it shows that
each normalised positive-definite function in \(\mathrm{A}(\mathbb{G})\) is a norm limit of normalised positive-definite functions in \(\mathfrak{C}_{c}^{\infty}(\mathbb{G})\), which together with the polarisation identity proves the third part of the lemma.
Note that the Fourier-Stieltjes algebra \(B(\mathbb{G})\) is closed with respect to natural left and right actions of \(\mathrm{L}^{1}(\mathbb{G})\) induced from \(\mathrm{L}^{\infty}(\mathbb{G})\) (see [DSV, Proof of Proposition 2.15]). The existence of the _Godement mean_ ([God]) for quantum \(B(\mathbb{G})\), i.e. a specific bi-invariant linear contractive unital functional \(M\colon B(\mathbb{G})\to\mathbb{C}\) of norm \(1\), was established in [DSV, Proposition 2.15] (see also [DD]). Note the following consequence of the proof of [DSV, Proposition 2.15].
**Proposition 2.6**.: _Let \(\mathbb{G}\) be a locally compact quantum group and let \(U\) be a unitary representation of \(\mathbb{G}\) on a Hilbert space \(\mathsf{H}\). Then the following are equivalent:_
1. _the Godement mean vanishes on all matrix coefficients of_ \(U\)_;_
2. \(U\) _admits no invariant vectors._
Proof.: Indeed, the proof of [DSV, Proposition 2.15] shows that for all \(\xi,\eta\in\mathsf{H}\) we have
\[M((\mathrm{id}\otimes\omega_{\xi,\eta})(U^{*}))=\langle\xi|p\eta\rangle,\]
where \(p\in\mathrm{B}(\mathsf{H})\) is the projection onto the invariant vectors of \(U\).
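As a classical illustration of this formula, suppose \(\mathbb{G}=G\) is a locally compact abelian group and \(f\) is a normalised positive-definite function on \(G\), so that \(f(g)=\int_{\widehat{G}}\chi(g)\,d\mu(\chi)\) for a probability measure \(\mu\) on the dual group. Then \(f\) is a matrix coefficient of the representation of \(G\) on \(\mathrm{L}^{2}(\widehat{G},\mu)\) acting by multiplication by characters, with cyclic vector \(\mathds{1}\); the invariant vectors are exactly the functions supported on the trivial character, so the formula above gives

\[M(f)=\mu(\{\mathds{1}_{\widehat{G}}\}),\]

which vanishes precisely when \(\mu\) has no atom at the trivial character.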
The following is an immediate consequence of Proposition 2.6 and the transitivity of the weak containment relation.
**Proposition 2.7**.: _Let \(S\) be a non-empty collection of representations of \(\mathbb{G}\). Then the following are equivalent:_
1. _the Godement mean vanishes on_ \(B_{S}(\mathbb{G})\)_;_
2. \(S\) _does not weakly contain the trivial representation._
Proof.: Indeed, if (1) does not hold then by the previous proposition there is a representation of \(\mathbb{G}\) which is weakly contained in \(S\) and contains the trivial representation. Thus \(S\) weakly contains the trivial representation.
On the other hand if \(S\) weakly contains the trivial representation, then \(\mathds{1}\in B_{S}(\mathbb{G})\) and we have \(M(\mathds{1})=1\).
The last proposition allows us to characterise for a given \(\mathbb{G}\) the coamenability of the dual, and the Haagerup property via the properties of the Godement mean. See [Der, Proposition 2] for classical analogues and recall that we write \(B_{\lambda}(\mathbb{G})\) for \(B_{\{\mathrm{W}\}}(\mathbb{G})\) and \(B_{0}(\mathbb{G})\) for \(B_{S_{mix}}(\mathbb{G})\).
**Proposition 2.8**.: _Let \(\mathbb{G}\) be a locally compact quantum group. Then_
1. \(\widehat{\mathbb{G}}\) _is not coamenable if and only if the Godement mean vanishes on_ \(B_{\lambda}(\mathbb{G})\)_;_
2. \(\mathbb{G}\) _does not have the Haagerup property if and only if the Godement mean vanishes on_ \(B_{0}(\mathbb{G})\)_._
Proof.: The first statement follows from Proposition 2.7 and [BT, Theorem 3.1]. The second follows from Proposition 2.7 and the very definition of the Haagerup property for quantum groups, [DFSW, Definition 6.1].
## 3. Separation properties for positive-definite functions on locally compact quantum groups
This section is devoted to establishing various separation properties for quantum positive-definite functions associated with particular classes of representations.
We begin by providing several equivalent descriptions of (normalised) positive-definite functions in \(B_{S}(\mathbb{G})\) under a mild assumption on the set \(S\). Note that the left regular representation \(\mathrm{W}\) is (up to equivalence) equal to a multiple of itself, hence the proposition below applies to the set of positive definite functions in \(B_{\lambda}(\mathbb{G})\) with norm bounded by \(1\).
**Proposition 3.1**.: _Let \(\emptyset\neq S\subset\mathrm{Rep}(\mathbb{G})\) be a set closed under finite direct sums. The following convex sets are equal:_
1. _the set of positive definite functions_ \(a\) _in_ \(B_{S}(\mathbb{G})\) _with_ \(\|a\|_{B_{S}(\mathbb{G})}\leq 1\)_;_
2. \(\{(\mathrm{id}\otimes\omega_{\xi})(U^{*})\,|\,U\preceq S,\xi\in\mathsf{H}_{U}, \|\xi\|\leq 1\}\)_;_
3. \(\{a\in B_{S}(\mathbb{G})\,|\,a\) is a weak\({}^{*}\) limit of a net of positive definite functions of the form \((\mathrm{id}\otimes\omega_{\xi})(U^{*})\), where \(U\in S,\xi\in\mathsf{H}_{U},\|\xi\|\leq 1\}\)_;_
4. \(\{a\in B_{S}(\mathbb{G})\,|\,a\) is a weak\({}^{*}\) limit of a net of positive definite functions of the form \((\mathrm{id}\otimes\omega)(U^{*})\), where \(U\preceq S\), \(\omega\in\mathrm{B}(\mathsf{H}_{U})_{*}^{+},\|\omega\|\leq 1\}\)_;_
5. \(\{a\in\mathrm{C}_{b}(\mathbb{G})\,|\,a\) is a strict limit of a net of positive definite functions of the form \((\mathrm{id}\otimes\omega_{\xi})(U^{*})\), where \(U\in S,\xi\in\mathsf{H}_{U},\|\xi\|\leq 1\}\)_;_
6. \(\{a\in\mathrm{C}_{b}(\mathbb{G})\,|\,a\) is a strict limit of a net of positive definite functions of the form \((\mathrm{id}\otimes\omega)(U^{*})\), where \(U\preceq S\), \(\omega\in\mathrm{B}(\mathsf{H}_{U})_{*}^{+},\|\omega\|\leq 1\}\)_._
Furthermore for each \(i\in I\) we have
\[1\geq\|\xi\|_{\mathsf{H}_{U}}=\|\omega_{\xi}\circ\phi_{U}\|_{\mathrm{C}^{u}_{0}( \widehat{\mathbb{G}})^{*}}=\big{\|}\!\sum_{j=1}^{N_{i}}\omega_{\xi_{i,j}}\circ \phi_{U_{i,j}}\big{\|}_{\mathrm{C}^{u}_{0}(\widehat{\mathbb{G}})^{*}}=\sum_{j=1 }^{N_{i}}\|\xi_{i,j}\|^{2}=\|\zeta_{i}\|^{2},\]
which ends the claim.
To see that (3) is contained in (5), take \(a\in B_{S}(\mathbb{G})\) which is a weak\({}^{*}\) limit of a normalised net \(\big{(}(\mathrm{id}\otimes\omega_{\xi_{i}})(U_{i}^{*})\big{)}_{i\in I}\) in \(B_{S}(\mathbb{G})\), where \(U_{i}\in S,\xi_{i}\in\mathsf{H}_{U_{i}}\), \(\|\xi_{i}\|\leq 1\). That means that we can find \(U\preceq S,\xi,\eta\in\mathsf{H}_{U}\) so that \(a=(\mathrm{id}\otimes\omega_{\xi,\eta})(U^{*})\) and \(\omega_{\xi_{i}}\circ\phi_{U_{i}}\xrightarrow[i\in I]{}\omega_{\xi,\eta}\circ \phi_{U}\) in the weak\({}^{*}\) topology of \(\mathrm{C}^{u}_{0}(\widehat{\mathbb{G}})^{*}\). By [RV, Theorem 4.6] we deduce that \(\big{(}(\mathrm{id}\otimes\omega_{\xi_{i}})(U_{i}^{*})\big{)}_{i\in I}\) converges strictly to \(a\). An analogous argument shows that (4) is contained in (6).
It remains to show that the set in (6) is contained in (1). Take \(a\in\mathrm{C}_{b}(\mathbb{G})\) which is a strict limit of a net \(\big{(}(\mathrm{id}\otimes\omega_{i})(U_{i}^{*})\big{)}_{i\in I}\), where \(U_{i}\preceq S\) and \(\|\omega_{i}\|\leq 1\). We want to show that \(a\) is a positive definite function in \(B_{S}(\mathbb{G})\) with \(\|a\|_{B_{S}(\mathbb{G})}\leq 1\). Consider the net of contractive functionals \((\omega_{i}\circ\phi_{U_{i}})_{i\in I}\) on \(\mathrm{C}^{u}_{0}(\widehat{\mathbb{G}})\). After passing to a subnet we can assume that there is \(\omega\in\mathrm{C}^{u}_{0}(\widehat{\mathbb{G}})^{*}\) such that \(\omega_{i}\circ\phi_{U_{i}}\xrightarrow[i\in I]{}\omega\) weak\({}^{*}\). Clearly \(\omega\) is positive and \(\|\omega\|\leq 1\). Let \((\pi_{\omega},\zeta_{\omega},\mathsf{H}_{\omega})\) be the GNS representation for \(\omega\). We claim that \(\pi_{\omega}\preceq S\) and \(a=(\mathrm{id}\otimes\omega_{\zeta_{\omega}}\circ\pi_{\omega})(\mathbb{W}^{*})\), which will end the proof. The first property follows again by [Dix, Proposition 3.4.9]. Denote \(b=(\mathrm{id}\otimes\omega_{\zeta_{\omega}}\circ\pi_{\omega})(\mathbb{W}^{*})\in\mathrm{C}_{b}(\mathbb{G})\) and take \(\rho\in\mathrm{L}^{1}(\mathbb{G})\subset\mathrm{C}_{0}(\mathbb{G})^{*},c\in\mathrm{C}_{0}(\mathbb{G})\). We have
\[\langle\rho c,a\rangle =\langle\rho,ca\rangle=\lim_{i\in I}\langle\rho,c(\mathrm{id} \otimes\omega_{i})(U_{i}^{*})\rangle=\lim_{i\in I}\langle\omega_{i}\circ\phi _{U_{i}},(\rho c\otimes\mathrm{id})(\not{\!\!W}^{*})\rangle\] \[=\langle\omega_{\zeta_{\omega}}\circ\pi_{\omega},(\rho c\otimes \mathrm{id})(\not{\!\!W}^{*})\rangle=\langle\rho c,b\rangle.\]
Note that above we used the fact that \(\rho c\in\mathrm{L}^{1}(\mathbb{G})\), so that \((\rho c\otimes\mathrm{id})(\not{\!\!W}^{*})\in\mathrm{C}^{u}_{0}(\widehat{ \mathbb{G}})\). Since the set of functionals of the form \(\rho c\) as above is dense in \(\mathrm{L}^{1}(\mathbb{G})\), we obtain \(a=b\), which ends the proof.
**Corollary 3.2**.: _Let \(\mathbb{G}\) be a locally compact quantum group and \(V\subset\mathrm{L}^{2}(\mathbb{G})\) a dense subspace. The following convex sets are equal:_
1. _the set of positive definite functions_ \(a\) _in_ \(B_{\lambda}(\mathbb{G})\) _with_ \(\|a\|_{B_{\lambda}(\mathbb{G})}\leq 1\)_;_
2. \(\{a\in B_{\lambda}(\mathbb{G})\,|\,a\,\text{ is a weak${}^{*}$ limit of a net of positive definite functions}\) \(\text{ of the form }(\mathrm{id}\otimes\omega_{\xi})(\mathrm{W}^{*})\) where \(\,\xi\in V,\|\xi\|\leq 1\}\)_;_
3. \(\{a\in\mathrm{C}_{b}(\mathbb{G})\,|\,a\,\text{ is a strict limit of a net of positive definite functions}\) \(\text{ of the form }(\mathrm{id}\otimes\omega_{\xi})(\mathrm{W}^{*})\) where \(\,\xi\in V,\|\xi\|\leq 1\}\)_._
Proof.: The above result follows directly from Proposition 3.1 for \(S=\{\mathrm{W}\}\) and density of \(V\) in \(\mathrm{L}^{2}(\mathbb{G})\).
The next result should be compared to Proposition 2.5.
**Proposition 3.3**.: _Let \(\mathbb{G}\) be an algebraic quantum group. Then \(B_{\lambda}(\mathbb{G})\subset\mathrm{C}_{b}(\mathbb{G})\) coincides with the linear span of strict limits of normalised positive definite functions in \(\mathfrak{C}^{\infty}_{c}(\mathbb{G})\)._
Proof.: It suffices to show that positive-definite functions in \(B_{\lambda}(\mathbb{G})\) of norm not greater than \(1\) are precisely strict limits of positive definite functions in \(\mathfrak{C}^{\infty}_{c}(\mathbb{G})\) of norm not greater than \(1\). This follows from Corollary 3.2 applied to \(V=\Lambda_{\varphi}(\mathfrak{C}^{\infty}_{c}(\mathbb{G}))\) and from [KVD, Lemma 2.3], arguing again as in the proof of Proposition 2.5.
**Remark 3.4**.: It is worth noting that in spite of Proposition 3.3 the space \(B_{\lambda}(\mathbb{G})\) need not be closed under strict limits; indeed, this is not the case whenever \(\mathbb{G}\) is a non-amenable discrete group (it suffices to consider any net of finitely supported functions converging pointwise to \(\mathds{1}\)).
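To illustrate the last point in the classical case (a standard example, recorded here only for orientation and not used elsewhere): let \(\Gamma\) be a non-amenable discrete group and let \((F_{i})_{i\in I}\) be the net of finite subsets of \(\Gamma\), directed by inclusion. The indicator functions \(\mathds{1}_{F_{i}}\) are finitely supported, hence belong to \(\mathrm{A}(\Gamma)\subset B_{\lambda}(\Gamma)\), and
\[\|(\mathds{1}-\mathds{1}_{F_{i}})x\|_{\infty}\xrightarrow[i\in I]{}0\qquad(x\in\mathrm{c}_{0}(\Gamma)),\]
so that \(\mathds{1}_{F_{i}}\to\mathds{1}\) strictly in \(\mathrm{C}_{b}(\Gamma)=\ell^{\infty}(\Gamma)\); on the other hand \(\mathds{1}\notin B_{\lambda}(\Gamma)\), since the constant function \(\mathds{1}\) belongs to the reduced Fourier-Stieltjes algebra precisely when \(\Gamma\) is amenable.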
The next theorem is our main result in this section. We show that, roughly speaking, if \(\widehat{\mathbb{G}}\) is not coamenable, one cannot find a net of normalised positive-definite functions in \(B_{\lambda}(\mathbb{G})\) which would be "eventually separated from \(0\)". To make this statement precise, we use an auxiliary strictly dense subspace \(A\subset\mathrm{C}_{b}(\mathbb{G})\).
**Theorem 3.5**.: _Let \(\mathbb{G}\) be a locally compact quantum group, \(\emptyset\neq S\) a subset of \(\mathrm{Rep}(\mathbb{G})\) and \(A\subset\mathrm{C}_{b}(\mathbb{G})\) a strictly dense subspace. If there exists \(\varepsilon>0\) and a net \((f_{i})_{i\in I}\) of normalised positive definite functions in \(B_{S}(\mathbb{G})\) such that_
\[\forall_{a\in A}\exists_{i_{0}\in I}\forall_{i\geq i_{0}}\quad a^{*}f_{i}a \geq\varepsilon a^{*}a\ \ \mathrm{in}\ \mathrm{C}_{b}(\mathbb{G}), \tag{3.3}\]
_then the Godement mean does not vanish on \(B_{S}(\mathbb{G})\)._
Proof.: Assume that there is such \(\varepsilon>0\) and a net \((f_{i})_{i\in I}\). For each \(i\in I\), write \(f_{i}=(\mathrm{id}\otimes\omega_{i}\circ\pi_{i})(\mathcal{W}^{*})\) for a state \(\omega_{i}\in\mathrm{B}(\mathsf{H}_{i})_{*}\) and a representation \(\pi_{i}\) weakly contained in \(S\). Since \(f_{i}\) are normalised, we can pass to a subnet \((\omega_{j}\circ\pi_{j})_{j\in J}\) in \(\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})^{*}\) which converges weak\({}^{*}\) to a positive functional \(\omega\in\mathrm{C}_{0}^{u}(\widehat{\mathbb{G}})^{*}\). Let \((\zeta_{\omega},\pi_{\omega},\mathsf{H}_{\omega})\) be the GNS representation for \(\omega\). As in the proof of Proposition 3.1 we use [Dix, Proposition 3.4.9] to deduce that since all \(\pi_{j}\) are weakly contained in \(S\), the same is true for \(\pi_{\omega}\). Consequently \(b=(\mathrm{id}\otimes\omega)(\mathcal{W}^{*})\in B_{S}(\mathbb{G})\). Choose \(a\in A\) and \(\nu\in\mathrm{L}^{1}(\mathbb{G})^{+}\). We have
\[\langle\nu,a^{*}ba\rangle =\langle a\nu a^{*},b\rangle=\langle\omega,(a\nu a^{*}\otimes \mathrm{id})(\mathcal{W}^{*})\rangle=\lim_{j\in J}\langle\omega_{j}\circ\pi_{ j},(a\nu a^{*}\otimes\mathrm{id})(\mathcal{W}^{*})\rangle\] \[=\lim_{j\in J}\langle a\nu a^{*},f_{j}\rangle=\lim_{j\in J} \langle\nu,a^{*}f_{j}a\rangle\geq\liminf_{j\in J}\langle\nu,\varepsilon a^{*} a\rangle=\langle\nu,\varepsilon a^{*}a\rangle.\]
Since \(\nu\geq 0\) is arbitrary, we obtain \(a^{*}ba\geq\varepsilon a^{*}a\) in \(\mathrm{L}^{\infty}(\mathbb{G})\), hence in \(\mathrm{C}_{b}(\mathbb{G})\). Since \(a\in A\) was arbitrary, strict density of \(A\) in \(\mathrm{C}_{b}(\mathbb{G})\) implies \(b\geq\varepsilon\mathds{1}\). Indeed, take \(\xi=c\xi_{0}\) for \(c\in\mathrm{C}_{0}(\mathbb{G}),\xi_{0}\in\mathrm{L}^{2}(\mathbb{G})\) and \((a_{k})_{k\in K}\) a net in \(A\) which converges strictly to \(\mathds{1}\). Then
\[0\leq\langle\xi|(a^{*}_{k}ba_{k}-\varepsilon a^{*}_{k}a_{k})\xi\rangle=\langle a_{k}c\xi_{0}|(b-\varepsilon\mathds{1})a_{k}c\xi_{0}\rangle\xrightarrow[k\in K]{}\langle c\xi_{0}|(b-\varepsilon\mathds{1})c\xi_{0}\rangle.\]
As this holds for all vectors \(\xi=c\xi_{0}\) in a dense subset of \(\mathrm{L}^{2}(\mathbb{G})\), we can deduce that \(b\geq\varepsilon\mathds{1}\). Applying Godement mean \(M\) to both sides of this inequality gives \(M(b)\geq\varepsilon\).
The above theorem together with Proposition 2.8 immediately yields the following two corollaries.
**Corollary 3.6**.: _Let \(\mathbb{G}\) be a locally compact quantum group and \(A\subset\mathrm{C}_{b}(\mathbb{G})\) a strictly dense subspace. If \(\widehat{\mathbb{G}}\) is not coamenable, then there is no \(\varepsilon>0\) and no net \((f_{i})_{i\in I}\) of normalised positive definite functions in \(B_{\lambda}(\mathbb{G})\) such that_
\[\forall_{a\in A}\exists_{i_{0}\in I}\forall_{i\geq i_{0}}\quad a^{*}f_{i}a\geq \varepsilon a^{*}a\ \ \mathrm{in}\ \mathrm{C}_{b}(\mathbb{G}). \tag{3.4}\]
**Corollary 3.7**.: _Let \(\mathbb{G}\) be a locally compact quantum group and \(A\subset\mathrm{C}_{b}(\mathbb{G})\) a strictly dense subspace. If \(\mathbb{G}\) does not have the Haagerup property then there is no \(\varepsilon>0\) and no net \((f_{i})_{i\in I}\) of normalised positive definite functions in \(B_{0}(\mathbb{G})\) such that_
\[\forall_{a\in A}\exists_{i_{0}\in I}\forall_{i\geq i_{0}}\quad a^{*}f_{i}a\geq \varepsilon a^{*}a\ \ \mathrm{in}\ \mathrm{C}_{b}(\mathbb{G}). \tag{3.5}\]
In what follows we formulate certain consequences of the above corollaries.
**Corollary 3.8**.: _Let \(\mathbb{G}\) be a locally compact quantum group, \(\varepsilon>0\) and \((f_{i})_{i\in I}\) a net of normalised positive definite functions in \(B_{\lambda}(\mathbb{G})\). Assume that one of the following is true:_
1. \(\mathbb{G}\) _is arbitrary and_ \(A=\operatorname{A}(\mathbb{G})\) _or_ \(A=\{(\operatorname{id}\otimes\omega)(\operatorname{W}^{*})\,|\,\omega\in \operatorname{L}^{1}_{\sharp}(\widehat{\mathbb{G}})\}\)_;_
2. \(\mathbb{G}\) _is a discrete quantum group and_ \(A\) _is equal to_ \(\operatorname{c}_{c}(\mathbb{G})\)_, the algebraic direct sum_ \(\mathrm{alg}\text{-}\bigoplus_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})}\operatorname{B}(\mathsf{H}_{\alpha})\)_;_
3. _more generally,_ \(\mathbb{G}\) _is an algebraic quantum group and_ \(A=\mathfrak{C}_{c}^{\infty}(\mathbb{G})\)_._
_If the condition (3.4) holds, then \(\widehat{\mathbb{G}}\) is coamenable._
Let us note that in cases \((2),(3)\), we can interpret condition (3.4) as an inequality "\(f_{i}\geq\varepsilon 1\)" which holds pointwise-eventually (resp. eventually uniformly on compact sets). A similar corollary is true in the case of Haagerup property.
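In case (2) this can be spelled out block by block; we record the routine reformulation for convenience, as it is also the form in which the condition will appear in Section 4. For a discrete quantum group \(\mathbb{G}\) every \(f\in\mathrm{C}_{b}(\mathbb{G})=\ell^{\infty}\text{-}\bigoplus_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})}\operatorname{B}(\mathsf{H}_{\alpha})\) commutes with the central projections \(p_{\alpha}\), and condition (3.4) for \(A=\operatorname{c}_{c}(\mathbb{G})\) is equivalent to the following: for every \(\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})\) there is \(i_{0}\in I\) with
\[f_{i}p_{\alpha}\geq\varepsilon p_{\alpha}\qquad(i\geq i_{0}).\]
Indeed, taking \(a=p_{\alpha}\) in (3.4) gives the displayed inequality, while conversely any \(a\in\operatorname{c}_{c}(\mathbb{G})\) is supported on finitely many blocks, so that \(a^{*}f_{i}a=\sum_{\alpha}(p_{\alpha}a)^{*}(f_{i}p_{\alpha})(p_{\alpha}a)\geq\varepsilon a^{*}a\) once \(i\) dominates the finitely many relevant indices \(i_{0}\).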
Finally let us record the corresponding corollary for classical locally compact groups, whose 'non-amenable' part can also be deduced from [Der].
**Corollary 3.9**.: _Let \(G\) be a locally compact group. If \(G\) is not amenable (respectively, does not have the Haagerup property) then for no \(\varepsilon>0\) can we find a net \((f_{i})_{i\in I}\) of normalised positive-definite functions which are compactly supported (respectively, belong to \(\operatorname{C}_{0}(G)\)) and on every compact subset of \(G\) are eventually greater than \(\varepsilon\)._
Proof.: Recall first that, as already mentioned above, compactly supported positive-definite functions automatically belong to \(\operatorname{A}(G)\subset B_{\lambda}(G)\) (as shown already in [Eym]) and positive-definite functions in \(\operatorname{C}_{0}(G)\) automatically belong to \(B_{0}(G)\) by [DFSW, Lemma 4.3].
We can apply Corollaries 3.6 and 3.7 with \(A=\operatorname{C}_{c}(G)\). In the commutative case the inequality appearing in these corollaries then amounts to saying that given a supposed net of positive-definite functions \((f_{i})_{i\in I}\), for every compact set \(Z\subset G\) we have \(i_{Z}\in I\) such that for each \(i\geq i_{Z}\) we have \(f_{i}|_{Z}\geq\varepsilon\).
Note that for \(G\) discrete the last condition (being on every compact subset of \(G\) eventually greater than \(\varepsilon\)) means simply that for every point \(t\in G\) we have \(\limsup_{i\in I}f_{i}(t)\geq\varepsilon\).
## 4. Separation properties for von Neumann algebras of discrete quantum groups
In this section we let \(\mathbb{G}\) be a compact quantum group. If \(\mathbb{G}\) is coamenable, then the von Neumann algebra \(\operatorname{L}^{\infty}(\mathbb{G})\) is injective, i.e. has \(w^{*}\)-CPAP ([BMT, Theorem 1.1]), and we can find a net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) of normal, finite rank UCP maps \(\operatorname{L}^{\infty}(\mathbb{G})\to\operatorname{L}^{\infty}(\mathbb{G})\) which converges to the identity in the point-\(w^{*}\) topology. In fact, we can assume that the maps \(\Phi_{\lambda}\) are _quantum Herz-Schur multipliers_ (also known as _adjoints of centralisers_), i.e. they are given by \(\Phi_{\lambda}=(\omega_{\lambda}\otimes\operatorname{id})\Delta\) for some states \(\omega_{\lambda}\in\operatorname{L}^{1}(\mathbb{G})\).
We will now exploit the separation results for quantum positive-definite functions obtained in the previous section to obtain their analogues on the von Neumann algebraic level. This will in particular strengthen the result mentioned above.
Before we formulate the first proposition, let us make some preliminary observations. Recall that by Lemma 2.3 we have a linear bijection \(B_{\lambda}(\widehat{\mathbb{G}})\simeq\operatorname{C}(\mathbb{G})^{*}\) and any \(a\in B_{\lambda}(\widehat{\mathbb{G}})\) can be written as \(a=(\operatorname{id}\otimes\omega_{\xi,\eta})(U^{*})\) for some unitary representation \(U\preceq\operatorname{W}^{\widehat{\mathbb{G}}}\) and vectors \(\xi,\eta\in\mathsf{H}_{U}\). As \(U\preceq\operatorname{W}^{\widehat{\mathbb{G}}}\), the functional \(\omega_{\xi,\eta}\circ\phi_{U}\in\operatorname{C}^{u}(\mathbb{G})^{*}\) is well defined on \(\operatorname{C}^{u}(\mathbb{G})/\ker\lambda_{\mathbb{G}}=\operatorname{C}(\mathbb{G})\) and we can write \(a=(\operatorname{id}\otimes\omega)(\operatorname{W}^{\widehat{\mathbb{G}}*})\) for some (not necessarily normal) functional \(\omega\in\operatorname{C}(\mathbb{G})^{*}\) - the
image of \(\omega_{\xi,\eta}\circ\phi_{U}\). Furthermore, as \(B_{\lambda}(\widehat{\mathbb{G}})\subset\mathrm{M}^{l}_{cb}(\mathrm{A}(\widehat{ \mathbb{G}}))\), we can use the associated normal CB map \(\Theta^{l}(a)\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\mathbb{G}))\). According to [Bra, Proposition 4.8] (see also [Daw\({}_{2}\)]), this map is given by
\[\Theta^{l}(a)\colon\mathrm{L}^{\infty}(\mathbb{G})\ni x\mapsto(\mathrm{id} \otimes\omega_{\xi,\eta})(U(x\otimes\mathds{1})U^{*})\in\mathrm{L}^{\infty}( \mathbb{G}).\]
Combining these two properties gives us
\[\Theta^{l}(a)\colon\mathrm{L}^{\infty}(\mathbb{G})\ni x\mapsto(\mathrm{id} \otimes\omega)(\mathrm{W}^{\widehat{\mathbb{G}}}(x\otimes\mathds{1})\mathrm{W} ^{\widehat{\mathbb{G}}*})\in\mathrm{L}^{\infty}(\mathbb{G}), \tag{4.1}\]
where we interpret \(\mathrm{W}^{\widehat{\mathbb{G}}},\mathrm{W}^{\widehat{\mathbb{G}}*}\) as elements of \(\mathrm{M}(\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}))\otimes\mathrm{C}(\mathbb{G}))\), so that \(\mathrm{W}^{\widehat{\mathbb{G}}}(x\otimes\mathds{1})\mathrm{W}^{\widehat{ \mathbb{G}}*}\in\mathrm{M}(\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}))\otimes \mathrm{C}(\mathbb{G}))\) and (4.1) makes sense.
Here and below \(\|\cdot\|_{2}\) denotes the \(\mathrm{L}^{2}\)-norm on \(\mathrm{L}^{\infty}(\mathbb{G})\) induced by the Haar integral, i.e. \(\|x\|_{2}=h(x^{*}x)^{1/2}\), \(x\in\mathrm{L}^{\infty}(\mathbb{G})\). One can view the next proposition (and most results in this section) as a way of separating finite rank maps from the identity using \(x\in\mathrm{L}^{\infty}(\mathbb{G})\) and \(\omega\in\mathrm{L}^{1}(\mathbb{G})\).
**Proposition 4.1**.: _Let \(\mathbb{G}\) be a compact quantum group which is not coamenable and let \(\varepsilon\in(0,1)\). Suppose that \((\Phi_{\lambda})_{\lambda\in\Lambda}\) is a net of normal, UCP quantum Herz-Schur multipliers \(\Phi_{\lambda}\colon\mathrm{L}^{\infty}(\mathbb{G})\to\mathrm{L}^{\infty}( \mathbb{G})\), given by \(\Phi_{\lambda}=\Theta^{l}(a_{\lambda})\) for \(a_{\lambda}\in B_{\lambda}(\widehat{\mathbb{G}})\). Then there exist \(x\in\mathrm{Pol}(\mathbb{G})\) and \(\omega\in\mathrm{L}^{1}(\mathbb{G})\) with \(\|x\|_{2}=\|\omega\|=1\) such that \(\limsup_{\lambda\in\Lambda}|\langle x-\Phi_{\lambda}(x),\omega\rangle|>\varepsilon\)._
Proof.: For each \(\alpha\in\mathrm{Irr}(\mathbb{G})\) choose a representative \(U^{\alpha}\in\alpha\) and an orthonormal basis \(\{\xi_{i}^{\alpha}\}_{i=1}^{\dim(\alpha)}\) in \(\mathsf{H}_{\alpha}\). Assume furthermore that \(\uprho_{\alpha}\)-operators (see [NT, Section 1.7]) are diagonal with respect to the chosen basis, with eigenvalues \(\{\uprho_{\alpha,i}\}_{i=1}^{\dim(\alpha)}\).
Assume by contradiction that for all \(x\in\mathrm{Pol}(\mathbb{G}),\rho\in\mathrm{L}^{1}(\mathbb{G})\) with \(\|x\|_{2}=\|\rho\|=1\) we have
\[\limsup_{\lambda\in\Lambda}|\langle x-\Phi_{\lambda}(x),\rho\rangle|\leq\varepsilon. \tag{4.2}\]
Fix \(\lambda\in\Lambda\). As recalled above (equation (4.1)) the map \(\Phi_{\lambda}=\Theta^{l}(a_{\lambda})\) is given by
\[\Phi_{\lambda}(x)=(\mathrm{id}\otimes\omega_{\lambda})(\mathrm{W}^{\widehat{ \mathbb{G}}}(x\otimes\mathds{1})\mathrm{W}^{\widehat{\mathbb{G}}*})\]
for \(x\in\mathrm{L}^{\infty}(\mathbb{G})\) and some functionals \(\omega_{\lambda}\in\mathrm{C}(\mathbb{G})^{*}\). As \(\Phi_{\lambda}\) is UCP, \(\omega_{\lambda}\) is a state. Consequently
\[\langle\Phi_{\lambda}(x),\rho\rangle=\omega_{\lambda}\big{(}(\rho\otimes \mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}}(x\otimes\mathds{1})\mathrm{W}^{ \widehat{\mathbb{G}}*})\big{)}\quad(\rho\in\mathrm{L}^{1}(\mathbb{G})) \tag{4.3}\]
and \((\rho\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}}(x\otimes\mathds{1})\mathrm{W}^{\widehat{\mathbb{G}}*})\in\mathrm{M}(\mathrm{C}(\mathbb{G}))=\mathrm{C}(\mathbb{G})\). For each \(\lambda\in\Lambda\), we can find a net \((\omega_{\lambda,i})_{i\in I}\) of normal states in \(\mathrm{L}^{1}(\mathbb{G})\) such that \(\omega_{\lambda,i}\xrightarrow[i\in I]{}\omega_{\lambda}\) pointwise on \(\mathrm{C}(\mathbb{G})\). Using the fact that \(\mathrm{L}^{\infty}(\mathbb{G})\subset\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\) is standard and approximating further, we can assume that \(\omega_{\lambda,i}=\omega_{\xi_{\lambda,i}}\) for norm \(1\) vectors \(\xi_{\lambda,i}\in\Lambda_{h}(\mathrm{Pol}(\mathbb{G}))\). Define \(\Phi_{\lambda,i}=(\omega_{\lambda,i}\otimes\mathrm{id})\Delta\). Observe using (4.3) that for any \(x\in\mathrm{L}^{\infty}(\mathbb{G}),\rho\in\mathrm{L}^{1}(\mathbb{G})\)
\[\lim_{i\in I}\langle x-\Phi_{\lambda,i}(x),\rho\rangle =\lim_{i\in I}\langle x-(\omega_{\lambda,i}\otimes\mathrm{id}) \Delta(x),\rho\rangle=\langle x,\rho\rangle-\lim_{i\in I}\langle\omega_{ \lambda,i},(\rho\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}}(x \otimes\mathds{1})\mathrm{W}^{\widehat{\mathbb{G}}*})\rangle\] \[=\langle x,\rho\rangle-\langle\omega_{\lambda},(\rho\otimes \mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}}(x\otimes\mathds{1})\mathrm{W}^{ \widehat{\mathbb{G}}*})\rangle=\langle x-\Phi_{\lambda}(x),\rho\rangle.\]
Consequently (after passing to a new net), we obtain a net of finite rank UCP maps \((\widetilde{\Phi}_{\lambda})_{\lambda\in\Lambda}\) which still satisfies (4.2) for all \(x\in\mathrm{Pol}(\mathbb{G}),\rho\in\mathrm{L}^{1}(\mathbb{G})\) with \(\|x\|_{2}=\|\rho\|=1\) with each \(\widetilde{\Phi}_{\lambda}\) given by \(\widetilde{\Phi}_{\lambda}=\Theta^{l}(\widetilde{a}_{\lambda})\) where \(\widetilde{a}_{\lambda}=(\mathrm{id}\otimes\widetilde{\omega}_{\lambda})( \mathrm{W}^{\widehat{\mathbb{G}}*})\) for some \(\widetilde{\omega}_{\lambda}=\omega_{\xi_{\lambda}}\), \(\xi_{\lambda}\in\Lambda_{h}(\mathrm{Pol}(\mathbb{G}))\).
Explicitly, the map \(\widetilde{\Phi}_{\lambda}\) is given by \(\widetilde{\Phi}_{\lambda}=(\widetilde{\omega}_{\lambda}\otimes\mathrm{id})\Delta\). Next we need to correct functionals \(\widetilde{\omega}_{\lambda}\).
Pick \(m_{\mathbb{R}}\in\mathrm{L}^{\infty}(\mathbb{R})^{*}\), a mean on \(\mathbb{R}\), and define (cf. [Tom])
\[\omega_{\lambda}^{(1)}\colon\,\mathrm{L}^{\infty}(\mathbb{G})\ni y\mapsto m_{ \mathbb{R}}\big{(}\mathbb{R}\ni t\mapsto\widetilde{\omega}_{\lambda}(\tau_{t}^ {\mathbb{G}}(y))\in\mathbb{C}\big{)}\in\mathbb{C}\quad(\lambda\in\Lambda).\]
Since \(\widetilde{\Phi}_{\lambda}\) is finite rank and acts via \(\widetilde{\Phi}_{\lambda}(U_{i,j}^{\alpha})=\sum_{k=1}^{\dim(\alpha)}\widetilde{\omega}_{\lambda}(U_{i,k}^{\alpha})U_{k,j}^{\alpha}\), and since the \(U_{i,j}^{\alpha}\) form a linearly independent set, we can conclude that there is a finite set \(F_{\lambda}\subset\mathrm{Irr}(\mathbb{G})\) so that \(\widetilde{\omega}_{\lambda}(U_{i,j}^{\alpha})=0\) for all \(\alpha\in F_{\lambda}^{c}=\mathrm{Irr}(\mathbb{G})\setminus F_{\lambda}\), \(1\leq i,j\leq\dim(\alpha)\). Enlarge \(F_{\lambda}\) so that it is closed under taking the contragredient representation. Next, as \(\tau_{t}^{\mathbb{G}}(U_{i,j}^{\alpha})=\big{(}\frac{\rho_{\alpha,i}}{\rho_{\alpha,j}}\big{)}^{it}U_{i,j}^{\alpha}\) we still have \(\omega_{\lambda}^{(1)}(U_{i,j}^{\alpha})=0\) for \(\alpha\in F_{\lambda}^{c}\) and consequently \(\omega_{\lambda}^{(1)}\) is a normal state. Finally, define normal states
\[\omega_{\lambda}^{(2)}=\tfrac{1}{2}(\omega_{\lambda}^{(1)}+\omega_{\lambda}^{( 1)}\circ R_{\mathbb{G}})\in\mathrm{L}^{1}(\mathbb{G})\]
(observe that \(\omega_{\lambda}^{(1)}\circ R_{\mathbb{G}}\) is a normal state since \(R_{\mathbb{G}}\) is positive and normal) and normalised positive definite functions
\[f_{\lambda}=(\mathrm{id}\otimes\omega_{\lambda}^{(2)})(\mathrm{W}^{\widehat{ \mathbb{G}}*}).\]
Since also \(\omega_{\lambda}^{(2)}(U_{i,j}^{\alpha})=0\) whenever \(\alpha\in F_{\lambda}^{c}\), we have \(f_{\lambda}\in\mathrm{c}_{c}(\widehat{\mathbb{G}})\). Let us argue that \(f_{\lambda}\) are self-adjoint. Observe first that since \(\omega_{\lambda}^{(1)}\circ\tau_{t}^{\mathbb{G}}=\omega_{\lambda}^{(1)}\), \((R_{\widehat{\mathbb{G}}}\otimes R_{\mathbb{G}})(\mathrm{W}^{\widehat{ \mathbb{G}}})=\mathrm{W}^{\widehat{\mathbb{G}}}\) and \((\tau_{t}^{\widehat{\mathbb{G}}}\otimes\tau_{t}^{\mathbb{G}})(\mathrm{W}^{ \widehat{\mathbb{G}}})=\mathrm{W}^{\widehat{\mathbb{G}}}\) for all \(t\in\mathbb{R}\), we have
\[(\mathrm{id}\otimes\omega_{\lambda}^{(1)})(\mathrm{W}^{\widehat{ \mathbb{G}}*}) =S_{\widehat{\mathbb{G}}}\big{(}(\mathrm{id}\otimes\omega_{\lambda}^{(1 )})(\mathrm{W}^{\widehat{\mathbb{G}}})\big{)}=R_{\widehat{\mathbb{G}}}\tau_{- i/2}^{\widehat{\mathbb{G}}}\big{(}(\mathrm{id}\otimes\omega_{\lambda}^{(1)})( \mathrm{W}^{\widehat{\mathbb{G}}})\big{)}\] \[=R_{\widehat{\mathbb{G}}}\big{(}(\mathrm{id}\otimes\omega_{ \lambda}^{(1)})(\mathrm{W}^{\widehat{\mathbb{G}}})\big{)}=(\mathrm{id} \otimes\omega_{\lambda}^{(1)}\circ R_{\mathbb{G}})(\mathrm{W}^{\widehat{ \mathbb{G}}}).\]
Consequently
\[f_{\lambda} =\tfrac{1}{2}\big{(}(\mathrm{id}\otimes\omega_{\lambda}^{(1)})( \mathrm{W}^{\widehat{\mathbb{G}}*})+(\mathrm{id}\otimes\omega_{\lambda}^{(1)} \circ R_{\mathbb{G}})(\mathrm{W}^{\widehat{\mathbb{G}}*})\big{)}\] \[=\tfrac{1}{2}\big{(}(\mathrm{id}\otimes\omega_{\lambda}^{(1)} \circ R_{\mathbb{G}})(\mathrm{W}^{\widehat{\mathbb{G}}})+(\mathrm{id}\otimes \omega_{\lambda}^{(1)}\circ R_{\mathbb{G}})(\mathrm{W}^{\widehat{\mathbb{G}}})^ {*}\big{)}\]
is self-adjoint.
Since \(0<\varepsilon<1\), we can choose \(\varepsilon^{\prime},\varepsilon^{\prime\prime}\) satisfying \(\varepsilon<\varepsilon^{\prime}<\varepsilon^{\prime\prime}<1\). To obtain a contradiction, we will use point (2) of Corollary 3.8. Namely, let us argue that for any \(a\in\mathrm{c}_{c}(\widehat{\mathbb{G}})\) there is \(\lambda_{0}\in\Lambda\) so that for \(\lambda\geq\lambda_{0}\)
\[a^{*}f_{\lambda}a\geq(1-\varepsilon^{\prime\prime})a^{*}a. \tag{4.4}\]
This will give us a contradiction. First observe that to get (4.4) it is enough to consider central projections \(a=p_{\alpha}\) for \(\alpha\in\mathrm{Irr}(\mathbb{G})\). Indeed, write \(a=\sum_{\alpha\in F}ap_{\alpha}\) for a finite set \(F\subset\mathrm{Irr}(\mathbb{G})\) and assume that (4.4) holds for all \(p_{\alpha}\,(\alpha\in F)\) and corresponding \(\lambda_{\alpha}\). Choose \(\lambda_{0}\in\Lambda\) such that \(\lambda_{0}\geq\lambda_{\alpha}\,(\alpha\in F)\). Then for \(\lambda\geq\lambda_{0}\) we have
\[a^{*}f_{\lambda}a=\sum_{\alpha\in F}(a^{*}p_{\alpha})(p_{\alpha}^{*}f_{ \lambda}p_{\alpha})(ap_{\alpha})\geq\sum_{\alpha\in F}(a^{*}p_{\alpha})((1- \varepsilon^{\prime\prime})p_{\alpha}^{*}p_{\alpha})(ap_{\alpha})=(1- \varepsilon^{\prime\prime})a^{*}a.\]
Thus it is enough to prove (4.4) for \(a=p_{\alpha}\). Fix \(\alpha\in\mathrm{Irr}(\mathbb{G})\), non-zero vectors \(\xi,\eta,\zeta\in\mathsf{H}_{\alpha}\) with \(\|\eta\|=1\) and consider elements \(x=\frac{U^{\alpha}_{\zeta,\eta}}{\|U^{\alpha}_{\zeta,\eta}\|_{2}},\rho=\frac{h(U^{\alpha*}_{\xi,\eta}\cdot)}{\|h(U^{\alpha*}_{\xi,\eta}\cdot)\|}\). The assumption (4.2) (or
rather its variant for the modified maps \(\widetilde{\Phi}_{\lambda}\)) implies in particular that
\[\limsup_{\lambda\in\Lambda}\bigl{|}h\bigl{(}U^{\alpha*}_{\xi,\eta}(U^{\alpha}_{ \zeta,\eta}-\widetilde{\Phi}_{\lambda}(U^{\alpha}_{\zeta,\eta}))\bigr{)}\bigr{|} \leq\|U^{\alpha}_{\zeta,\eta}\|_{2}\|h(U^{\alpha*}_{\xi,\eta})\|\,\varepsilon, \tag{4.5}\]
i.e. there exists \(\lambda_{0}\in\Lambda\) (depending on \(\xi,\eta,\zeta\)) so that for \(\lambda\geq\lambda_{0}\)
\[\bigl{|}h\bigl{(}U^{\alpha*}_{\xi,\eta}(U^{\alpha}_{\zeta,\eta}-\widetilde{ \Phi}_{\lambda}(U^{\alpha}_{\zeta,\eta}))\bigr{)}\bigr{|}\leq\|U^{\alpha}_{ \zeta,\eta}\|_{2}\|h(U^{\alpha*}_{\xi,\eta})\|\,\varepsilon^{\prime}.\]
The left hand side of the above inequality is equal to
\[\bigl|h\bigl(U^{\alpha*}_{\xi,\eta}(U^{\alpha}_{\zeta,\eta}-\widetilde{\Phi}_{\lambda}(U^{\alpha}_{\zeta,\eta}))\bigr)\bigr|=\Bigl|\tfrac{\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle}{\dim_{q}(\alpha)}-\sum_{m=1}^{\dim(\alpha)}\widetilde{\omega}_{\lambda}(U^{\alpha}_{\zeta,\xi_{m}^{\alpha}})\tfrac{\langle\xi_{m}^{\alpha}|\rho_{\alpha}^{-1}\xi\rangle}{\dim_{q}(\alpha)}\Bigr|=\tfrac{1}{\dim_{q}(\alpha)}\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\widetilde{\omega}_{\lambda}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|.\]
Hence we obtain
\[\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\widetilde{\omega}_{\lambda}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|\leq\dim_{q}(\alpha)\|U^{\alpha}_{\zeta,\eta}\|_{2}\|h(U^{\alpha*}_{\xi,\eta})\|\,\varepsilon^{\prime}\leq\varepsilon^{\prime}\sqrt{\langle\xi|\rho_{\alpha}^{-1}\xi\rangle\,\langle\zeta|\rho_{\alpha}^{-1}\zeta\rangle} \tag{4.6}\]
for fixed \(\xi,\zeta\) and all \(\lambda\geq\lambda_{0}\). Next we obtain a bound for \(\omega_{\lambda}^{(1)}\). Assume additionally that \(\xi,\zeta\in 1_{\{c\}}(\rho_{\alpha})\mathsf{H}_{\alpha}\) for some \(c\in\operatorname{Sp}(\rho_{\alpha})\). Recall that \(\tau_{t}^{\mathbb{G}}(U^{\alpha}_{\xi,\eta})=U^{\alpha}_{\rho_{\alpha}^{-it} \xi,\rho_{\alpha}^{-it}\eta}\), \(t\in\mathbb{R}\). Consequently using the inequality (4.6) we have
\[\begin{split}\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\omega_{\lambda}^{(1)}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|&=\bigl|m_{\mathbb{R}}\bigl(\mathbb{R}\ni t\mapsto\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\widetilde{\omega}_{\lambda}(U^{\alpha}_{\rho_{\alpha}^{-it}\zeta,\rho_{\alpha}^{-1-it}\xi})\in\mathbb{C}\bigr)\bigr|\\ &=\bigl|m_{\mathbb{R}}\bigl(\mathbb{R}\ni t\mapsto\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\widetilde{\omega}_{\lambda}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\in\mathbb{C}\bigr)\bigr|\\ &=\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\widetilde{\omega}_{\lambda}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|\leq\varepsilon^{\prime}\sqrt{\langle\xi|\rho_{\alpha}^{-1}\xi\rangle\langle\zeta|\rho_{\alpha}^{-1}\zeta\rangle}\end{split} \tag{4.7}\]
for \(\xi,\zeta\) in the same eigenspace of \(\rho_{\alpha}\) and all \(\lambda\geq\lambda_{0}\). The next step is to obtain a bound for \(\omega_{\lambda}^{(2)}\). Let again \(\xi,\zeta\in 1_{\{c\}}(\rho_{\alpha})\mathsf{H}_{\alpha}\) for a fixed \(c\in\operatorname{Sp}(\rho_{\alpha})\). Using (4.7) and the fact that \(\omega_{\lambda}^{(1)}\) is a state invariant under scaling group we deduce that
\[\begin{split}\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\omega_{\lambda}^{(2)}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|&\leq\tfrac{1}{2}\bigl(\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\omega_{\lambda}^{(1)}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|+\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\omega_{\lambda}^{(1)}(R_{\mathbb{G}}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi}))\bigr|\bigr)\\ &=\tfrac{1}{2}\bigl(\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\omega_{\lambda}^{(1)}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|+\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\omega_{\lambda}^{(1)}(U^{\alpha*}_{\rho_{\alpha}^{-1}\xi,\zeta})\bigr|\bigr)\\ &=\tfrac{1}{2}\bigl(\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\omega_{\lambda}^{(1)}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|+\bigl|\langle\rho_{\alpha}^{-1}\xi|\zeta\rangle-\omega_{\lambda}^{(1)}(U^{\alpha}_{\rho_{\alpha}^{-1}\xi,\zeta})\bigr|\bigr)\\ &=\tfrac{1}{2}\bigl(\bigl|\langle\zeta|\rho_{\alpha}^{-1}\xi\rangle-\omega_{\lambda}^{(1)}(U^{\alpha}_{\zeta,\rho_{\alpha}^{-1}\xi})\bigr|+\bigl|\langle\xi|\rho_{\alpha}^{-1}\zeta\rangle-\omega_{\lambda}^{(1)}(U^{\alpha}_{\xi,\rho_{\alpha}^{-1}\zeta})\bigr|\bigr)\\ &\leq\varepsilon^{\prime}\sqrt{\langle\xi|\rho_{\alpha}^{-1}\xi\rangle\langle\zeta|\rho_{\alpha}^{-1}\zeta\rangle}.\end{split} \tag{4.8}\]
Now, pick \(0<\delta<1\) so that \(6\delta+\varepsilon^{\prime}\leq\varepsilon^{\prime\prime}\). For a fixed \(c\in\operatorname{Sp}(\rho_{\alpha})\) choose a finite set \(\{\theta_{k}\}_{k=1}^{N}\) in the sphere of \(1_{\{c\}}(\rho_{\alpha})\mathsf{H}_{\alpha}\) which forms a \(\frac{\delta}{\|\rho_{\alpha}\|\|\rho_{\alpha}^{-1}\|}\)-net. Since \(N<+\infty\), iterating (4.8) we can find \(\lambda_{1}\geq\lambda_{0}\) (depending on \(c\)) so that for all \(1\leq k,k^{\prime}\leq N,\lambda\geq\lambda_{1}\) we have
\[\bigl{|}\langle\theta_{k}|\rho_{\alpha}^{-1}\theta_{k^{\prime}}\rangle-\omega_{ \lambda}^{(2)}(U^{\alpha}_{\theta_{k},\rho_{\alpha}^{-1}\theta_{k^{\prime}}}) \bigr{|}\leq\varepsilon^{\prime}\sqrt{\langle\theta_{k}|\rho_{\alpha}^{-1} \theta_{k}\rangle\langle\theta_{k^{\prime}}|\rho_{\alpha}^{-1}\theta_{k^{ \prime}}\rangle}. \tag{4.9}\]
Now choose any norm \(1\) vector \(\theta\in 1_{\{c\}}(\rho_{\alpha})\mathsf{H}_{\alpha}\) and \(1\leq k\leq N\) so that \(\|\theta-\theta_{k}\|\leq\frac{\delta}{\|\rho_{\alpha}\|\|\rho_{\alpha}^{-1}\|}\). We obtain
\[\big{|}\langle\theta|\rho_{\alpha}^{-1}\theta\rangle-\omega_{ \lambda}^{(2)}(U_{\theta,\rho_{\alpha}^{-1}\theta}^{\alpha})\big{|} \leq 4\|\theta-\theta_{k}\|\|\rho_{\alpha}^{-1}\|+\big{|} \langle\theta_{k}|\rho_{\alpha}^{-1}\theta_{k}\rangle-\omega_{\lambda}^{(2)}(U _{\theta_{k},\rho_{\alpha}^{-1}\theta_{k}}^{\alpha})\big{|}\] \[\leq 4\tfrac{\delta}{\|\rho_{\alpha}\|}+\varepsilon^{\prime} \langle\theta_{k}|\rho_{\alpha}^{-1}\theta_{k}\rangle\leq 4\tfrac{\delta}{\|\rho_{ \alpha}\|}+\varepsilon^{\prime}(2\tfrac{\delta}{\|\rho_{\alpha}\|}+\langle \theta|\rho_{\alpha}^{-1}\theta\rangle)\] \[\leq 6\tfrac{\delta}{\|\rho_{\alpha}\|}+\varepsilon^{\prime} \langle\theta|\rho_{\alpha}^{-1}\theta\rangle. \tag{4.10}\]
Inequality (4.10) holds for fixed \(c\), but as these come from a finite set \(\operatorname{Sp}(\rho_{\alpha})\), we can find \(\lambda_{2}\in\Lambda\) such that the above inequality holds for all \(c\in\operatorname{Sp}(\rho_{\alpha})\), \(\lambda\geq\lambda_{2}\).
We already proved that \(f_{\lambda}p_{\alpha}=(\operatorname{id}\otimes\omega_{\lambda}^{(2)})( \operatorname{W}^{\widehat{\operatorname{G}}*})p_{\alpha}\) is self-adjoint. Now we claim that
\[f_{\lambda}p_{\alpha}\geq(1-\varepsilon^{\prime\prime})p_{\alpha}.\]
We can think of both operators as acting on \(\mathsf{H}_{\alpha}\). Observe that the operator \(f_{\lambda}p_{\alpha}\) is block-diagonal, i.e. for \(c\in\operatorname{Sp}(\rho_{\alpha})\) we have \((f_{\lambda}p_{\alpha})1_{\{c\}}(\rho_{\alpha})=1_{\{c\}}(\rho_{\alpha})(f_{ \lambda}p_{\alpha})\) - this follows from the fact that \(f_{\lambda}\) is invariant under the scaling group. Let \(\mu\in\mathbb{R}\) be an eigenvalue of \(f_{\lambda}p_{\alpha}\). Then \(\mu\leq 1\). Since \(f_{\lambda}p_{\alpha}\) is block-diagonal, we can find a corresponding eigenvector \(\theta\in\mathsf{H}_{\alpha}\) of norm \(1\) which is included in some \(1_{\{c\}}(\rho_{\alpha})\mathsf{H}_{\alpha}\). Then
\[\mu=\langle\theta|(f_{\lambda}p_{\alpha})\theta\rangle=(\omega_{\theta} \otimes\omega_{\lambda}^{(2)})(\operatorname{W}^{\widehat{\operatorname{G}}*} )=(\omega_{\lambda}^{(2)}\otimes\omega_{\theta})(\operatorname{W}^{\widehat{ \operatorname{G}}})=\omega_{\lambda}^{(2)}(U_{\theta,\theta}^{\alpha}),\]
hence using (4.10)
\[|\mu-1|=|\omega_{\lambda}^{(2)}(U_{\theta,\theta}^{\alpha})-\langle\theta| \theta\rangle|=c|\omega_{\lambda}^{(2)}(U_{\theta,\rho_{\alpha}^{-1}\theta}^{ \alpha})-\langle\theta|\rho_{\alpha}^{-1}\theta\rangle|\leq c\big{(}6\tfrac{ \delta}{\|\rho_{\alpha}\|}+\varepsilon^{\prime}\langle\theta|\rho_{\alpha}^{-1 }\theta\rangle\big{)}\leq 6\delta+\varepsilon^{\prime}\leq\varepsilon^{\prime\prime}.\]
Consequently for \(\lambda\geq\lambda_{2}\)
\[f_{\lambda}p_{\alpha}\text{ is self-adjoint},\quad\operatorname{Sp}(f_{ \lambda}p_{\alpha})\subset[1-\varepsilon^{\prime\prime},1]\quad\Rightarrow \quad f_{\lambda}p_{\alpha}\geq(1-\varepsilon^{\prime\prime})p_{\alpha}.\]
As \(0<\varepsilon^{\prime\prime}<1\) and the \(f_{\lambda}\) are normalised and positive definite, Corollary 3.8 (2), applied to the discrete quantum group \(\widehat{\mathbb{G}}\), would force \(\mathbb{G}\) to be coamenable, which gives us a contradiction.
Proposition 4.1 holds in the full generality of (possibly non-Kac type) compact quantum groups. Its downside, however, is that it does not say anything about \(\operatorname{L}^{\infty}(\mathbb{G})\) as a von Neumann algebra if we forget about the quantum group structure, as we had to assume that the maps \(\Phi_{\lambda}\) are associated to multipliers from the space \(B_{\lambda}(\widehat{\mathbb{G}})\). Another downside is that we only have control over the \(\operatorname{L}^{2}\)-norm of the operator \(x\) in the statement, not its operator norm. In the following results we will prove separation results which are similar in spirit, but in which these problems are remedied under additional assumptions. First let us introduce some convenient terminology.
Recall that for any von Neumann algebra \(\operatorname{M}\) we have a canonical completely isometric identification \(\operatorname{CB}(\operatorname{M})=(\operatorname{M}_{*}\,\widehat{\otimes}\,\operatorname{M})^{*}\), where \(\widehat{\otimes}\) is the projective tensor product of operator spaces ([ER, Corollary 7.1.5]).
**Definition 4.2**.: Let \(\operatorname{M}\) be a von Neumann algebra and \(\varepsilon\in(0,1)\).
* We say that \(\operatorname{M}\) has the _\(\varepsilon\)-separation property_ if for every net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) of normal, finite rank, UCP maps on \(\operatorname{M}\) there is \(x\in\operatorname{M}\) and \(\omega\in\operatorname{M}_{*}\) with \(\|x\|=\|\omega\|=1\) such that \(\limsup_{\lambda\in\Lambda}|\langle x-\Phi_{\lambda}(x),\omega\rangle|>\varepsilon\).
* We say that \(\operatorname{M}\) has the _matrix \(\varepsilon\)-separation property_ if for every net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) of normal, finite rank, UCP maps on \(\operatorname{M}\) there is a Hilbert space \(\mathsf{H}\) and \(x\in\operatorname{M}\,\bar{\otimes}\,\operatorname{B}(\mathsf{H}),\omega\in(\operatorname{M}\,\bar{\otimes}\,\operatorname{B}(\mathsf{H}))_{*}\) with \(\|x\|=\|\omega\|=1\) such that \(\limsup_{\lambda\in\Lambda}|\langle x-(\Phi_{\lambda}\otimes\operatorname{id})(x),\omega\rangle|>\varepsilon\).
We begin with some easy observations.
**Remark 4.3**.: Fix \(\varepsilon\in(0,1)\). If \({\rm M}\) has the \(\varepsilon\)-separation property, then it has the \(\varepsilon^{\prime}\)-separation property for all \(0<\varepsilon^{\prime}<\varepsilon\). The analogous statement holds for the matrix \(\varepsilon\)-separation property. Further, the matrix \(\varepsilon\)-separation property, formally weaker than the \(\varepsilon\)-separation property, implies that \({\rm M}\) does not have \(w^{*}\)-CPAP.
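Let us briefly justify the last claim (with the conventions of this section, i.e. with \(w^{*}\)-CPAP witnessed by normal, finite rank UCP maps): if \((\Phi_{\lambda})_{\lambda\in\Lambda}\) is a net of such maps converging to the identity in the point-\(w^{*}\) topology, then for every \(n\in\mathbb{N}\), \(x\in\operatorname{M}\otimes\operatorname{M}_{n}\) and \(\omega\in(\operatorname{M}\otimes\operatorname{M}_{n})_{*}\) we have
\[\langle x-(\Phi_{\lambda}\otimes\operatorname{id})(x),\omega\rangle\xrightarrow[\lambda\in\Lambda]{}0,\]
since \((\Phi_{\lambda}\otimes\operatorname{id})(x)\to x\) entrywise in the \(w^{*}\) topology. Thus no \(\varepsilon>0\) can witness the matrix \(\varepsilon\)-separation property (in the equivalent form (1) of Lemma 4.4 below) for this particular net.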
The next lemma shows that in the definition of the matrix \(\varepsilon\)-separation property one can replace \({\rm B}({\sf H})\) by matrices (or arbitrary von Neumann algebras), which justifies the proposed terminology. Furthermore, we provide an equivalent formulation of this property which does not refer to any additional von Neumann algebra.
**Lemma 4.4**.: _Let \({\rm M}\) be a von Neumann algebra, \(\varepsilon\in(0,1)\) and \((\Phi_{\lambda})_{\lambda\in\Lambda}\) a net of normal UCP maps on \({\rm M}\). The following are equivalent:_
1. _there is a natural number_ \(n\in{\mathbb{N}}\)_,_ \(x\in{\rm M}\otimes{\rm M}_{n}\) _and_ \(\omega\in({\rm M}\otimes{\rm M}_{n})_{*}\) _with_ \(\|x\|=\|\omega\|=1\) _such that_ \(\limsup_{\lambda\in\Lambda}|\langle x-(\Phi_{\lambda}\otimes{\rm id})(x), \omega\rangle|>\varepsilon\)_;_
2. _there is a Hilbert space_ \({\sf H}\)_,_ \(x\in{\rm M}\,\widetilde{\otimes}\,{\rm B}({\sf H})\) _and_ \(\omega\in({\rm M}\,\widetilde{\otimes}\,{\rm B}({\sf H}))_{*}\) _with_ \(\|x\|=\|\omega\|=1\) _such that_ \(\limsup_{\lambda\in\Lambda}|\langle x-(\Phi_{\lambda}\otimes{\rm id})(x), \omega\rangle|>\varepsilon\)_;_
3. _there is a von Neumann algebra_ \({\rm N}\)_,_ \(x\in{\rm M}\,\widetilde{\otimes}\,{\rm N}\) _and_ \(\omega\in({\rm M}\,\widetilde{\otimes}\,{\rm N})_{*}\) _with_ \(\|x\|=\|\omega\|=1\) _such that_ \(\limsup_{\lambda\in\Lambda}|\langle x-(\Phi_{\lambda}\otimes{\rm id})(x), \omega\rangle|>\varepsilon\)_;_
4. _there is_ \(\Omega\in{\rm M}_{*}\,\widehat{\otimes}\,{\rm M}\) _with_ \(\|\Omega\|=1\) _such that_ \(\limsup_{\lambda\in\Lambda}|\langle{\rm id}-\Phi_{\lambda},\Omega\rangle|>\varepsilon\)_._
_Consequently, \({\rm M}\) has the matrix \(\varepsilon\)-separation property if, and only if for every net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) of normal, finite rank, UCP maps on \({\rm M}\) there is \(\Omega\in{\rm M}_{*}\,\widehat{\otimes}\,{\rm M}\) with \(\|\Omega\|=1\) such that \(\limsup_{\lambda\in\Lambda}|\langle{\rm id}-\Phi_{\lambda},\Omega\rangle|>\varepsilon\)._
Sketch of a proof.: Implications \((1)\Rightarrow(2)\Rightarrow(3)\) are trivial. To see that (3) implies (1), observe first that it is enough to consider \({\rm N}={\rm B}({\sf H})\), as normal functionals can be extended to superalgebras without increasing norms. Next, use the identification \(({\rm M}\,\widetilde{\otimes}\,{\rm B}({\sf H}))_{*}={\rm M}_{*}\,\widehat{ \otimes}\,{\rm B}({\sf H})_{*}\), where \(\widehat{\otimes}\) is the projective operator space tensor product (see [ER, Theorem 7.2.4]) and approximate \(\omega\) by a finite sum of simple tensors. To finish the proof, use the fact that any normal functional on \({\rm B}({\sf H})\) is given by a \({\rm Tr}(T\cdot)\) for a trace class operator \(T\) and finite rank operators are dense in the space of trace class operators.
Next we show that \((1)\Rightarrow(4)\Rightarrow(2)\).
Assume that (1) holds with \(x\in{\rm M}\,\otimes{\rm M}_{n},\omega\in({\rm M}\,\otimes{\rm M}_{n})_{*}\). Define \(\Omega_{\omega,x}\in{\rm CB}({\rm M})^{*}\) via \(\langle\Omega_{\omega,x},T\rangle=\langle(T\otimes{\rm id})(x),\omega\rangle\), \(T\in{\rm CB}({\rm M})\). The functional \(\Omega_{\omega,x}\) cannot be zero, as then we would have \(\langle x-(\Phi_{\lambda}\otimes{\rm id})(x),\omega\rangle=0\) for each \(\lambda\in\Lambda\). On the other hand \(\|\Omega_{\omega,x}\|\leq\|x\|\|\omega\|=1\). Since \({\rm M}_{n}\) is finite dimensional, we can write \(\omega=\sum_{i=1}^{N}\omega_{1,i}\otimes\omega_{2,i}\) for some \(\omega_{1,i}\in{\rm M}_{*},\omega_{2,i}\in{\rm M}_{n}^{*}\), \(1\leq i\leq N\). We have
\[\langle\Omega_{\omega,x},T\rangle=\sum_{i=1}^{N}\langle(T\otimes{\rm id})(x), \omega_{1,i}\otimes\omega_{2,i}\rangle=\sum_{i=1}^{N}\langle T(({\rm id}\otimes \omega_{2,i})(x)),\omega_{1,i}\rangle\quad(T\in{\rm CB}({\rm M})),\]
which shows that \(\Omega_{\omega,x}\in{\rm M}_{*}\,\odot\,{\rm M}\subset{\rm M}_{*}\,\widehat{ \otimes}\,{\rm M}\). To finish this part of proof, define \(\Omega=\frac{\Omega_{\omega,x}}{\|\Omega_{\omega,x}\|}\).
Implication \((4)\Rightarrow(2)\) follows closely the proof of [DKV, Proposition 3.9], hence we provide only a sketch. Assume that we are given \(\Omega\in{\rm M}_{*}\,\widehat{\otimes}\,{\rm M}\) as in (4), and let \(\varepsilon^{\prime}\) be such that \(\limsup_{\lambda\in\Lambda}|\langle{\rm id}-\Phi_{\lambda},\Omega\rangle|>\varepsilon^{\prime}>\varepsilon\). According to [ER, Theorem 10.2.1] we can find infinite matrices \(\alpha\in{\rm M}_{1,\infty\times\infty},\beta\in{\rm K}_{\infty}({\rm M}_{*}),\gamma\in{\rm K}_{\infty}({\rm M}),\alpha^{\prime}\in{\rm M}_{\infty\times\infty,1}\) such that \(\Omega=\alpha(\beta\otimes\gamma)\alpha^{\prime}\) and \(\|\alpha\|\|\beta\|\|\gamma\|\|\alpha^{\prime}\|\leq\frac{\varepsilon^{\prime}}{\varepsilon}\). Write these matrices as \(\alpha=[\alpha_{1,(i,j)}]_{(i,j)\in{\mathbb{N}}^{\times 2}}\), etc. (so that \(\Omega=\sum_{i,j,k,l=1}^{\infty}\alpha_{1,(i,j)}(\beta_{i,k}\otimes\gamma_{j,l})\alpha^{\prime}_{(k,l),1}\)) and let \(e_{i,j}\,(i,j\in{\mathbb{N}})\) be the matrix units in \({\rm B}(\ell^{2})\). One can check that \([e_{j,i}]_{i,j=1}^{\infty}\) is a well defined infinite matrix of norm \(1\) in \({\rm M}_{\infty}(T(\ell^{2}))\), where \(T(\ell^{2})\simeq\)
\(\operatorname{B}(\ell^{2})_{*}\) is the space of trace class operators. Notice that \(\gamma\in\operatorname{K}_{\infty}(\operatorname{M})=\operatorname{M}\otimes \operatorname{K}_{\infty}\subset\operatorname{M}\bar{\otimes}\operatorname{B}( \ell^{2})\) and define \(\omega_{0}=\alpha(\beta\otimes[e_{j,i}]_{j,i=1}^{\infty})\alpha^{\prime}\in \operatorname{M}_{*}\widehat{\otimes}\operatorname{B}(\ell^{2})_{*}=( \operatorname{M}\bar{\otimes}\operatorname{B}(\ell^{2}))_{*}\). Unwinding the definitions, one finds that \(\langle\gamma-(\Phi_{\lambda}\otimes\operatorname{id})(\gamma),\omega_{0} \rangle=\langle\operatorname{id}-\Phi_{\lambda},\Omega\rangle\). Setting \(x=\frac{\gamma}{\|\gamma\|},\omega=\frac{\omega_{0}}{\|\omega_{0}\|}\) finishes the proof, as \(\|\gamma\|\cdot\|\omega_{0}\|\leq\frac{\varepsilon^{\prime}}{\varepsilon}\), so that \(|\langle x-(\Phi_{\lambda}\otimes\operatorname{id})(x),\omega\rangle|\geq \frac{\varepsilon}{\varepsilon^{\prime}}|\langle\operatorname{id}-\Phi_{ \lambda},\Omega\rangle|\) for each \(\lambda\in\Lambda\).
**Proposition 4.5**.: _Let \(\operatorname{M}\) be a non-injective von Neumann algebra. Then \(\operatorname{M}\) has the \(\varepsilon\)-separation property for some \(0<\varepsilon<1\)._
Proof.: Assume by contradiction that \(\operatorname{M}\) does not have \(\varepsilon\)-separation property for any \(0<\varepsilon<1\). That is, for all \(0<\varepsilon<1\) there is a net \((\Phi_{\varepsilon,\lambda})_{\lambda\in\Lambda_{\varepsilon}}\) of normal, finite rank UCP maps such that
\[\limsup_{\lambda\in\Lambda_{\varepsilon}}\bigl{|}\langle x-\Phi_{\varepsilon, \lambda}(x),\omega\rangle\bigr{|}\leq\varepsilon \tag{4.11}\]
for all \(x\in\operatorname{M},\omega\in\operatorname{M}_{*}\) with \(\|x\|=\|\omega\|=1\). Next we construct a new net of normal, finite rank, UCP maps on \(\operatorname{M}\), \((\Psi_{F,G,\varepsilon})_{(F,G,\varepsilon)}\) where \(F\subset\operatorname{M},G\subset\operatorname{M}_{*}\) are finite non-empty sets and \(0<\varepsilon<1\). We declare \((F,G,\varepsilon)\leq(F^{\prime},G^{\prime},\varepsilon^{\prime})\) if and only if \(F\subset F^{\prime},G\subset G^{\prime}\) and \(\varepsilon\geq\varepsilon^{\prime}\). For such a triple we choose \(\Psi_{F,G,\varepsilon}=\Phi_{\varepsilon,\lambda}\) with \(\lambda\in\Lambda_{\varepsilon}\) such that
\[\bigl{|}\bigl{\langle}x-\Psi_{F,G,\varepsilon}(x),\omega\bigr{\rangle}\bigr{|} =\bigl{|}\bigl{\langle}x-\Phi_{\varepsilon,\lambda}(x),\omega\bigr{\rangle} \bigr{|}\leq 2\varepsilon\|x\|\|\omega\|\quad(x\in F,\omega\in G).\]
The index \(\lambda\) as above exists due to (4.11). Now it is easy to see that the net \((\Psi_{F,G,\varepsilon})_{(F,G,\varepsilon)}\) implements the \(w^{*}\)-CPAP of \(\operatorname{M}\), which gives us a contradiction.
In view of the above, the focus will from now on be on finding, for a non-injective von Neumann algebra, an explicit set of \(\varepsilon\in(0,1)\) for which the \(\varepsilon\)-separation property holds. We do not know whether the \(\varepsilon\)-separation property in fact depends on \(\varepsilon\); see the discussion at the end of the paper.
For a compact quantum group \(\mathbb{G}\), let us denote \(\mathbf{N}_{\mathbb{G}}=\sup_{\alpha\in\operatorname{Irr}(\mathbb{G})} \dim(\alpha)\in\mathbb{N}\cup\{+\infty\}\).
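For example, if \(\Gamma\) is a discrete group and \(\mathbb{G}=\widehat{\Gamma}\), then all irreducible representations of \(\mathbb{G}\) are one-dimensional (they are indexed by the elements of \(\Gamma\)), so that
\[\mathbf{N}_{\widehat{\Gamma}}=1,\]
and the theorem below applies to every \(\varepsilon\in(0,1)\) in this case; cf. the comment after its proof.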
**Theorem 4.6**.: _Let \(\mathbb{G}\) be a compact quantum group such that \(\mathbf{N}_{\mathbb{G}}<+\infty\). If \(\mathbb{G}\) is not coamenable, then \(\operatorname{L}^{\infty}(\mathbb{G})\) has the \(\varepsilon\)-separation property for all \(0<\varepsilon<\frac{1}{\mathbf{N}_{\mathbb{G}}}\)._
Proof.: By [KS, Theorem 4.3] we know that \(\mathbb{G}\) is of Kac type. Since \(\mathbb{G}\) is of Kac type, with any normal, finite rank UCP map \(\Phi\colon\operatorname{L}^{\infty}(\mathbb{G})\to\operatorname{L}^{\infty}( \mathbb{G})\) we can associate a quantum Herz-Schur multiplier \(\widetilde{\Phi}\) in a canonical way. As this construction is quite well-known, we will only present the relevant formulas. Consider the normal UCP map ([Bra, Section 7.1])
\[\Delta^{\sharp}\colon\operatorname{L}^{\infty}(\mathbb{G})\bar{\otimes} \operatorname{L}^{\infty}(\mathbb{G})\ni U_{i,j}^{\alpha}\otimes U_{k,l}^{ \beta}\mapsto\delta_{\alpha,\beta}\delta_{j,k}\tfrac{U_{i,l}^{\alpha}}{\dim( \alpha)}\in\operatorname{L}^{\infty}(\mathbb{G}); \tag{4.12}\]
in particular \(\Delta^{\sharp}\Delta=\operatorname{id}\). Now define
\[\widetilde{\Phi}=\Delta^{\sharp}(\Phi\otimes\operatorname{id})\Delta\colon \operatorname{L}^{\infty}(\mathbb{G})\to\operatorname{L}^{\infty}(\mathbb{G}).\]
\(\widetilde{\Phi}\) is a normal UCP map and \(\widetilde{\Phi}=\Theta^{l}(a)\) for some \(a\in\operatorname{A}(\widehat{\mathbb{G}})\) - for the proof, see [Bra, Section 6.3.2].
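The identity \(\Delta^{\sharp}\Delta=\operatorname{id}\) used above can be verified directly on matrix coefficients (we record this for convenience; only (4.12) and \(\Delta(U_{i,j}^{\alpha})=\sum_{k}U_{i,k}^{\alpha}\otimes U_{k,j}^{\alpha}\) are needed):
\[\Delta^{\sharp}\Delta(U_{i,j}^{\alpha})=\Delta^{\sharp}\Big(\sum_{k=1}^{\dim(\alpha)}U_{i,k}^{\alpha}\otimes U_{k,j}^{\alpha}\Big)=\sum_{k=1}^{\dim(\alpha)}\tfrac{U_{i,j}^{\alpha}}{\dim(\alpha)}=U_{i,j}^{\alpha}\qquad(\alpha\in\operatorname{Irr}(\mathbb{G}),\ 1\leq i,j\leq\dim(\alpha)),\]
and since both \(\Delta^{\sharp}\Delta\) and \(\operatorname{id}\) are normal, they agree on all of \(\operatorname{L}^{\infty}(\mathbb{G})\).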
Take \(0<\varepsilon<\frac{1}{\mathbf{N}_{\mathbb{G}}}\). Assume by contradiction that the claim does not hold, i.e. there is a net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) of normal, UCP, finite rank maps \(\operatorname{L}^{\infty}(\mathbb{G})\to\operatorname{L}^{\infty}(\mathbb{G})\) such that for all \(x\in\operatorname{L}^{\infty}(\mathbb{G}),\omega\in\operatorname{L}^{1}( \mathbb{G})\) with \(\|x\|=\|\omega\|=1\) we have
\[\limsup_{\lambda\in\Lambda}|\langle x-\Phi_{\lambda}(x),\omega\rangle|\leq\varepsilon. \tag{4.13}\]
For each \(\lambda\in\Lambda\) set
\[\widetilde{\Phi}_{\lambda}=\Delta^{\sharp}(\Phi_{\lambda}\otimes\operatorname{ id})\Delta=\Theta^{l}(a_{\lambda})\]
for some multipliers \(a_{\lambda}\in\mathrm{A}(\widehat{\mathbb{G}})\). We can write them as \(a_{\lambda}=(\omega_{\lambda}\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}})=( \mathrm{id}\otimes\omega_{\lambda})(\mathrm{W}^{\widehat{\mathbb{G}}*})\) for vector states \(\omega_{\lambda}=\omega_{\xi_{\lambda}}\in\mathrm{L}^{1}(\mathbb{G})\), then \(\widetilde{\Phi}_{\lambda}=(\omega_{\lambda}\otimes\mathrm{id})\Delta\). For \(n\in\mathbb{N}\) choose norm \(1\) vectors \(\xi_{\lambda,n}\in\Lambda_{h}(\mathrm{Pol}(\mathbb{G}))\) such that \(\|\xi_{\lambda}-\xi_{\lambda,n}\|\leq\frac{1}{n}\) and let \(\widetilde{\Phi}_{\lambda,n}=(\omega_{\xi_{\lambda,n}}\otimes\mathrm{id})\Delta\). These are normal UCP quantum Herz-Schur multipliers. Furthermore, \(\widetilde{\Phi}_{\lambda,n}\) are finite rank since \(\xi_{\lambda,n}\in\Lambda_{h}(\mathrm{Pol}(\mathbb{G}))\). We will obtain a contradiction with Proposition 4.1. For this, we need to show that for all \(x\in\mathrm{Pol}(\mathbb{G}),\omega\in\mathrm{L}^{1}(\mathbb{G})\) with \(\|x\|_{2}=\|\omega\|=1\) we have
\[\limsup_{(\lambda,n)\in\Lambda\times\mathbb{N}}|\langle x-\widetilde{\Phi}_{ \lambda,n}(x),\omega\rangle|\leq\mathbf{N}_{\mathbb{G}}\varepsilon \tag{4.14}\]
(note that \(\mathbf{N}_{\mathbb{G}}\varepsilon<1\)). However, an inspection of the proof shows that it is enough to prove (4.14) for \(x=\frac{U_{\xi,\eta}^{\alpha}}{\|U_{\xi,\eta}^{\alpha}\|_{2}},\omega=\frac{h(U_{\zeta,\eta}^{\alpha*}\cdot)}{\|h(U_{\zeta,\eta}^{\alpha*}\cdot)\|}\) and fixed \(\alpha\in\mathrm{Irr}(\mathbb{G}),\xi,\eta,\zeta\in\mathsf{H}_{\alpha}\setminus\{0\}\). Indeed, the argument leading to (4.5) was the only place in the proof of Proposition 4.1 where the assumption was used. Thus let us show (4.14) for this pair \(x,\omega\). Fix \((\lambda,n)\in\Lambda\times\mathbb{N}\) and note that
\[|\langle x-\widetilde{\Phi}_{\lambda,n}(x),\omega\rangle|\leq |\langle((\omega_{\xi_{\lambda}}-\omega_{\xi_{\lambda,n}}) \otimes\mathrm{id})\Delta(x),\omega\rangle|+|\langle x-\widetilde{\Phi}_{ \lambda}(x),\omega\rangle|\] \[\leq\frac{2\|x\|}{n}+|\langle((\mathrm{id}-\Phi_{\lambda}) \otimes\mathrm{id})\Delta(x),\omega\circ\Delta^{\sharp}\rangle|\]
Further
\[\begin{split}|\langle((\mathrm{id}&-\Phi_{\lambda})\otimes\mathrm{id})\Delta(x),\omega\circ\Delta^{\sharp}\rangle|\\ &=\frac{1}{\|U_{\xi,\eta}^{\alpha}\|_{2}\|h(U_{\zeta,\eta}^{\alpha*}\cdot)\|}\Big|\sum_{m=1}^{\dim(\alpha)}\langle(U_{\xi,\xi_{m}^{\alpha}}^{\alpha}-\Phi_{\lambda}(U_{\xi,\xi_{m}^{\alpha}}^{\alpha}))\otimes U_{\xi_{m}^{\alpha},\eta}^{\alpha},h(U_{\zeta,\eta}^{\alpha*}\cdot)\circ\Delta^{\sharp}\rangle\Big|\\ &=\frac{1}{\|U_{\xi,\eta}^{\alpha}\|_{2}\|h(U_{\zeta,\eta}^{\alpha*}\cdot)\|}\Big|\sum_{m=1}^{\dim(\alpha)}\langle\big(U_{\xi,\xi_{m}^{\alpha}}^{\alpha}-\sum_{a,b=1}^{\dim(\alpha)}\dim(\alpha)h(U_{\xi_{a}^{\alpha},\xi_{b}^{\alpha}}^{\alpha*}\Phi_{\lambda}(U_{\xi,\xi_{m}^{\alpha}}^{\alpha}))U_{\xi_{a}^{\alpha},\xi_{b}^{\alpha}}^{\alpha}\big)\otimes U_{\xi_{m}^{\alpha},\eta}^{\alpha},h(U_{\zeta,\eta}^{\alpha*}\cdot)\circ\Delta^{\sharp}\rangle\Big|\\ &=\frac{1}{\|U_{\xi,\eta}^{\alpha}\|_{2}\|h(U_{\zeta,\eta}^{\alpha*}\cdot)\|}\Big|\sum_{m=1}^{\dim(\alpha)}\frac{1}{\dim(\alpha)}\langle U_{\xi,\eta}^{\alpha}-\sum_{a=1}^{\dim(\alpha)}\dim(\alpha)h(U_{\xi_{a}^{\alpha},\xi_{m}^{\alpha}}^{\alpha*}\Phi_{\lambda}(U_{\xi,\xi_{m}^{\alpha}}^{\alpha}))U_{\xi_{a}^{\alpha},\eta}^{\alpha},h(U_{\zeta,\eta}^{\alpha*}\cdot)\rangle\Big|\\ &=\frac{1}{\|U_{\xi,\eta}^{\alpha}\|_{2}\|h(U_{\zeta,\eta}^{\alpha*}\cdot)\|}\Big|\frac{\langle\xi|\zeta\rangle\|\eta\|^{2}}{\dim(\alpha)}-\sum_{m,a=1}^{\dim(\alpha)}h(U_{\xi_{a}^{\alpha},\xi_{m}^{\alpha}}^{\alpha*}\Phi_{\lambda}(U_{\xi,\xi_{m}^{\alpha}}^{\alpha}))\frac{\langle\xi_{a}^{\alpha}|\zeta\rangle\|\eta\|^{2}}{\dim(\alpha)}\Big|\\ &=\frac{1}{\|U_{\xi,\eta}^{\alpha}\|_{2}\|h(U_{\zeta,\eta}^{\alpha*}\cdot)\|}\Big|\frac{\langle\xi|\zeta\rangle\|\eta\|^{2}}{\dim(\alpha)}-\sum_{m=1}^{\dim(\alpha)}h(U_{\zeta,\xi_{m}^{\alpha}}^{\alpha*}\Phi_{\lambda}(U_{\xi,\xi_{m}^{\alpha}}^{\alpha}))\frac{\|\eta\|^{2}}{\dim(\alpha)}\Big|\\ &=\frac{\|\eta\|^{2}}{\|\xi\|\|\eta\|\|h(U_{\zeta,\eta}^{\alpha*}\cdot)\|\dim(\alpha)^{1/2}}\Big|\langle\xi|\zeta\rangle-\sum_{m=1}^{\dim(\alpha)}h(U_{\zeta,\xi_{m}^{\alpha}}^{\alpha*}\Phi_{\lambda}(U_{\xi,\xi_{m}^{\alpha}}^{\alpha}))\Big|.\end{split}\]
Combining the two formulas displayed above we obtain
\[|\langle x-\widetilde{\Phi}_{\lambda,n}(x),\omega\rangle| \leq\tfrac{2\|x\|}{n}+\tfrac{\|\eta\|}{\|\xi\|\|h(U^{\alpha*}_{ \zeta,\eta})\|\dim(\alpha)^{1/2}}\sum_{m=1}^{\dim(\alpha)}\big{|}\tfrac{ \langle\xi|\zeta\rangle}{\dim(\alpha)}-\langle\Phi_{\lambda}(U^{\alpha}_{ \xi,\xi_{m}^{\alpha}}),h(U^{\alpha*}_{\zeta,\xi_{m}^{\alpha}}\cdot)\rangle\big{|}\] \[=\tfrac{2\|x\|}{n}+\tfrac{\|\eta\|}{\|\xi\|\|h(U^{\alpha*}_{\zeta, \eta})\|\dim(\alpha)^{1/2}}\sum_{m=1}^{\dim(\alpha)}\big{|}\langle U^{\alpha}_ {\xi,\xi_{m}^{\alpha}}-\Phi_{\lambda}(U^{\alpha}_{\xi,\xi_{m}^{\alpha}}),h(U^{ \alpha*}_{\zeta,\xi_{m}^{\alpha}}\cdot)\rangle\big{|}.\]
Consequently by (4.13)
\[\limsup_{(\lambda,n)\in\Lambda\times\mathbb{N}}|\langle x-\widetilde{\Phi}_{ \lambda,n}(x),\omega\rangle|\leq\tfrac{\|\eta\|}{\|\xi\|\|h(U^{\alpha*}_{\zeta,\eta})\|\dim(\alpha)^{1/2}}\sum_{m=1}^{\dim(\alpha)}\|U^{\alpha}_{\xi,\xi_{m} ^{\alpha}}\|\|h(U^{\alpha*}_{\zeta,\xi_{m}^{\alpha}}\cdot)\|\varepsilon. \tag{4.15}\]
Now we use the assumption \(\mathbf{N}_{\mathbb{G}}<+\infty\) to obtain a lower bound on the norm of the functional \(h(U^{\alpha*}_{\zeta,\eta}\cdot)\). Since
\[\|h(U^{\alpha*}_{\zeta,\eta}\cdot)\|\geq\tfrac{|h(U^{\alpha*}_{\zeta,\eta}U^{ \alpha}_{\zeta,\eta})|}{\|U^{\alpha}_{\zeta,\eta}\|}=\tfrac{\|\zeta\|^{2}\|\eta \|^{2}}{\dim(\alpha)\|U^{\alpha}_{\zeta,\eta}\|},\]
in our situation we have
\[\tfrac{1}{\|h(U^{\alpha*}_{\zeta,\eta})\|}\leq\tfrac{\dim(\alpha)\|U^{\alpha}_ {\zeta,\eta}\|}{\|\zeta\|^{2}\|\eta\|^{2}}\leq\frac{\mathbf{N}_{\mathbb{G}}\| \zeta\|\,\|\eta\|}{\|\zeta\|^{2}\|\eta\|^{2}}=\frac{\mathbf{N}_{\mathbb{G}}}{ \|\zeta\|\,\|\eta\|}.\]
Combining this with inequality (4.15) and standard inequalities \(\|U^{\alpha}_{\xi,\xi_{m}^{\alpha}}\|\leq\|\xi\|\), \(\|h(U^{\alpha*}_{\zeta,\xi_{m}^{\alpha}}\cdot)\|\leq\|U^{\alpha}_{\zeta,\xi_{ m}^{\alpha}}\|_{2}=\tfrac{\|\zeta\|}{\dim(\alpha)^{1/2}}\) we get
\[\limsup_{(\lambda,n)\in\Lambda\times\mathbb{N}}|\langle x-\widetilde{\Phi}_{ \lambda,n}(x),\omega\rangle|\leq\tfrac{\|\eta\|\mathbf{N}_{\mathbb{G}}}{\|\xi\| \,\|\zeta\|\,\|\eta\|\dim(\alpha)^{1/2}}\sum_{m=1}^{\dim(\alpha)}\|\xi\| \tfrac{\|\zeta\|}{\dim(\alpha)^{1/2}}\varepsilon=\mathbf{N}_{\mathbb{G}}\varepsilon\]
which shows (4.14) and ends the proof.
Theorem 4.6 in particular applies to \(\mathbb{G}=\widehat{\Gamma}\), where \(\Gamma\) is a discrete group; this is Corollary C of the introduction.
In the last result of this section we show that if we consider instead the matrix \(\varepsilon\)-separation property, then we can drop the assumption that the dimensions of the irreducible representations of \(\mathbb{G}\) are bounded (keeping only the Kac type assumption) and obtain a result valid for all \(\varepsilon\in(0,1)\).
**Theorem 4.7**.: _Let \(\mathbb{G}\) be a compact quantum group of Kac type which is not coamenable. Then \(\mathrm{L}^{\infty}(\mathbb{G})\) has the matrix \(\varepsilon\)-separation property for all \(\varepsilon\in(0,1)\)._
Proof.: The argument will follow the lines of the proof of Theorem 4.6. Assume by contradiction that there is \(0<\varepsilon<1\) and a net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) of normal, finite rank, UCP maps on \(\mathrm{L}^{\infty}(\mathbb{G})\) such that
\[\limsup_{\lambda\in\Lambda}|\langle x-(\Phi_{\lambda}\otimes\mathrm{id})(x), \omega\rangle|\leq\varepsilon\]
for all \(n\in\mathbb{N}\) and \(x\in\mathrm{L}^{\infty}(\mathbb{G})\otimes\mathrm{M}_{n},\omega\in(\mathrm{L}^ {\infty}(\mathbb{G})\otimes\mathrm{M}_{n})_{*}\) with \(\|x\|=\|\omega\|=1\). Thus in particular for any \(\alpha\in\mathrm{Irr}(\mathbb{G})\) and \(x=U^{\alpha},\omega=(h\otimes\mathrm{tr})(U^{\alpha*}\cdot)\) we have
\[\limsup_{\lambda\in\Lambda}\big{|}(h\otimes\mathrm{tr})\big{(}U^{\alpha*}(U^{ \alpha}-(\Phi_{\lambda}\otimes\mathrm{id})(U^{\alpha}))\big{)}\big{|}\leq\varepsilon \tag{4.16}\]
(where \(\mathrm{tr}\) is the normalised trace on \(\mathrm{M}_{n}\)). Recall the map \(\Delta^{\sharp}\) of (4.12) and define again for each \(\lambda\in\Lambda\)
\[\widetilde{\Phi}_{\lambda}=\Delta^{\sharp}(\Phi_{\lambda}\otimes\mathrm{id})\Delta.\]
Using the fact that \(\Delta^{\sharp}=\Delta^{-1}\mathbb{E}\), where \(\mathbb{E}\colon\,\mathrm{L}^{\infty}(\mathbb{G})\bar{\otimes}\,\mathrm{L}^{ \infty}(\mathbb{G})\to\Delta(\mathrm{L}^{\infty}(\mathbb{G}))\) is the normal \(h\otimes h\) -preserving conditional expectation, we compute as follows:
\[(h\otimes\mathrm{tr})(U^{\alpha*}(\widetilde{\Phi}_{\lambda} \otimes\mathrm{id})(U^{\alpha})) =(h\otimes h\otimes\mathrm{tr})(\Delta\otimes\mathrm{id})\big{(} U^{\alpha*}(\widetilde{\Phi}_{\lambda}\otimes\mathrm{id})(U^{\alpha})\big{)}\] \[=(h\otimes h\otimes\mathrm{tr})\big{(}(\Delta\otimes\mathrm{id}) (U^{\alpha*})(\mathbb{E}\otimes\mathrm{id})\big{(}(\Phi_{\lambda}\otimes \mathrm{id})(U^{\alpha})_{13}U_{23}^{\alpha}\big{)}\big{)}\] \[=(h\otimes h\otimes\mathrm{tr})\big{(}(\mathbb{E}\otimes\mathrm{ id})\big{(}(\Delta\otimes\mathrm{id})(U^{\alpha*})(\Phi_{\lambda}\otimes \mathrm{id})(U^{\alpha})_{13}U_{23}^{\alpha}\big{)}\big{)}\] \[=(h\otimes h\otimes\mathrm{tr})\big{(}U_{23}^{\alpha*}U_{13}^{ \alpha*}(\Phi_{\lambda}\otimes\mathrm{id})(U^{\alpha})_{13}U_{23}^{\alpha} \big{)}\] \[=(h\otimes\mathrm{tr})(U^{\alpha*}(\Phi_{\lambda}\otimes\mathrm{ id})(U^{\alpha})).\]
In the above calculation we have used Tomiyama's theorem and the fact that \(h\) is tracial. Consequently, from (4.16) we obtain
\[\limsup_{\lambda\in\Lambda}\big{|}(h\otimes\mathrm{tr})\big{(}U^{\alpha*}(U^{ \alpha}-(\widetilde{\Phi}_{\lambda}\otimes\mathrm{id})(U^{\alpha}))\big{)} \big{|}\leq\varepsilon. \tag{4.17}\]
Next, let us express explicitly the second factor appearing above:
\[(h\otimes\mathrm{tr})\big{(}U^{\alpha*}(\widetilde{\Phi}_{\lambda }\otimes\mathrm{id})(U^{\alpha})\big{)} =\sum_{i,j,k,l=1}^{\dim(\alpha)}(h\otimes\mathrm{tr})\big{(}U_{i, j}^{\alpha*}\widetilde{\Phi}_{\lambda}(U_{k,l}^{\alpha})\otimes e_{j,i}^{ \alpha}e_{k,l}^{\alpha}\big{)}\] \[=\tfrac{1}{\dim(\alpha)}\sum_{i,j=1}^{\dim(\alpha)}h\big{(}U_{i, j}^{\alpha*}\widetilde{\Phi}_{\lambda}(U_{i,j}^{\alpha})\big{)}.\]
We thus obtain from (4.17) that
\[\limsup_{\lambda\in\Lambda}\big{|}1-\tfrac{1}{\dim(\alpha)}\sum_{i,j=1}^{\dim (\alpha)}h(U_{i,j}^{\alpha*}\widetilde{\Phi}_{\lambda}(U_{i,j}^{\alpha}))\big{|} \leq\varepsilon.\]
Each \(\widetilde{\Phi}_{\lambda}\) is a normal UCP quantum Herz-Schur multiplier associated with a function \(a_{\lambda}\in\mathrm{A}(\widehat{\mathbb{G}})\), again by [Bra, Section 6.3.2]. As in the proof of Theorem 4.6, we can approximate each \(\widetilde{\Phi}_{\lambda}\) (in the completely bounded norm) by a sequence \((\widetilde{\Phi}_{\lambda,m})_{m=1}^{\infty}\) of normal UCP quantum Herz-Schur multipliers associated with positive-definite functions \(a_{\lambda,m}\in\mathrm{c}_{c}(\widehat{\mathbb{G}})\). Increasing \(\varepsilon\) to \(\varepsilon^{\prime}\in(0,1)\), we then obtain, again for each \(\alpha\in\mathrm{Irr}(\mathbb{G})\),
\[\limsup_{(\lambda,m)\in\Lambda\times\mathbb{N}}\big{|}1-\tfrac{1}{\dim(\alpha )}\sum_{i,j=1}^{\dim(\alpha)}h(U_{i,j}^{\alpha*}\widetilde{\Phi}_{\lambda,m}( U_{i,j}^{\alpha}))\big{|}\leq\varepsilon^{\prime}. \tag{4.18}\]
Note that we have for each \(\alpha\in\mathrm{Irr}(\mathbb{G})\), \(1\leq i,j\leq\dim(\alpha),\lambda\in\Lambda,m\in\mathbb{N}\)
\[\widetilde{\Phi}_{\lambda,m}(U_{i,j}^{\alpha})=\sum_{k=1}^{\dim(\alpha)}(a_{ \lambda,m})_{i,k}^{\alpha}U_{k,j}^{\alpha}, \tag{4.19}\]
so that (4.18) simplifies to
\[\limsup_{(\lambda,m)\in\Lambda\times\mathbb{N}}\big{|}1-\tfrac{1}{\dim(\alpha )}\sum_{k=1}^{\dim(\alpha)}(a_{\lambda,m})_{k,k}^{\alpha}\big{|}\leq\varepsilon ^{\prime}. \tag{4.20}\]
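The simplification used here is the following identity, recorded for convenience; it follows from (4.19) together with the orthogonality relations \(h(U_{i,j}^{\alpha*}U_{k,l}^{\alpha})=\frac{\delta_{i,k}\delta_{j,l}}{\dim(\alpha)}\), the latter valid as \(\mathbb{G}\) is of Kac type:
\[\tfrac{1}{\dim(\alpha)}\sum_{i,j=1}^{\dim(\alpha)}h\big(U_{i,j}^{\alpha*}\widetilde{\Phi}_{\lambda,m}(U_{i,j}^{\alpha})\big)=\tfrac{1}{\dim(\alpha)}\sum_{i,j,k=1}^{\dim(\alpha)}(a_{\lambda,m})_{i,k}^{\alpha}\,h\big(U_{i,j}^{\alpha*}U_{k,j}^{\alpha}\big)=\tfrac{1}{\dim(\alpha)^{2}}\sum_{i,j=1}^{\dim(\alpha)}(a_{\lambda,m})_{i,i}^{\alpha}=\tfrac{1}{\dim(\alpha)}\sum_{k=1}^{\dim(\alpha)}(a_{\lambda,m})_{k,k}^{\alpha}.\]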
Finally we average once again (from the other side), setting for each \(\lambda\in\Lambda,m\in\mathbb{N}\)
\[\hat{\Phi}_{\lambda,m}=\Delta^{\sharp}(\mathrm{id}\otimes\widetilde{\Phi}_{ \lambda,m})\Delta,\]
cf. [DKV, Proposition 6.8]. Clearly \(\hat{\Phi}_{\lambda,m}\) is a normal UCP map. A direct calculation using (4.19) shows that we have for each \(\alpha\in\operatorname{Irr}(\mathbb{G})\), \(1\leq i,j\leq\dim(\alpha)\)
\[\hat{\Phi}_{\lambda,m}(U_{i,j}^{\alpha}) =\Delta^{\sharp}\big{(}\sum_{k=1}^{\dim(\alpha)}U_{i,k}^{\alpha} \otimes\widetilde{\Phi}_{\lambda,m}(U_{k,j}^{\alpha})\big{)}=\Delta^{\sharp} \big{(}\sum_{k=1}^{\dim(\alpha)}U_{i,k}^{\alpha}\otimes\sum_{l=1}^{\dim(\alpha) }(a_{\lambda,m})_{k,l}^{\alpha}U_{l,j}^{\alpha}\big{)}\] \[=\tfrac{1}{\dim(\alpha)}\sum_{k=1}^{\dim(\alpha)}(a_{\lambda,m}) _{k,k}^{\alpha}U_{i,j}^{\alpha}.\]
Thus each \(\hat{\Phi}_{\lambda,m}\) is a central quantum Herz-Schur multiplier, associated to a positive-definite function \(b_{\lambda,m}\in\mathcal{Z}\mathrm{c}_{c}(\widehat{\mathbb{G}})\) given by
\[b_{\lambda,m}=\sum_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})}\big{(} \tfrac{1}{\dim(\alpha)}\sum_{k=1}^{\dim(\alpha)}\big{(}a_{\lambda,m})_{k,k}^{ \alpha}\big{)}p_{\alpha}.\]
Finally define \(c_{\lambda,m}=\tfrac{1}{2}(b_{\lambda,m}+b_{\lambda,m}^{*})\in\mathcal{Z} \mathrm{c}_{c}(\widehat{\mathbb{G}})\). This is again a central, real-valued positive-definite function on \(\widehat{\mathbb{G}}\), associated to the quantum Herz-Schur multiplier \(\tfrac{1}{2}(\hat{\Phi}_{\lambda,m}+\hat{\Phi}_{\lambda,m}\circ R_{\mathbb{G}})\). Inequality (4.20) implies that
\[\limsup_{(\lambda,m)\in\Lambda\times\mathbb{N}}\,(1-c_{\lambda,m}^{\alpha}) \leq\varepsilon^{\prime},\]
which in turn shows the coamenability of \(\mathbb{G}\) via Corollary 3.8 (2).
Note that Theorem 4.7 can be reformulated in the spirit of Corollary C, leading to Theorem B from the introduction.
Proof of Theorem B.: This is an immediate consequence of Remark 4.3, Theorem 4.7 and the fact that \(\mathbb{G}\) is coamenable if and only if \(\mathrm{L}^{\infty}(\mathbb{G})\) is injective, which goes back to [Rua] (see also [Bra]).
The analogue of Theorem 4.7 holds also for the Haagerup property. We formalise it in the next theorem.
**Theorem 4.8**.: _Let \(\mathbb{G}\) be a compact quantum group of Kac type and let \(\mathrm{M}=\mathrm{L}^{\infty}(\mathbb{G})\). Fix \(\varepsilon\in(0,1)\). Suppose that there exists a net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) of normal, \(h\)-preserving UCP maps on \(\mathrm{M}\) which have compact \(\mathrm{L}^{2}\)-implementations (in the sense of [Jol]), such that for every \(n\in\mathbb{N}\), \(x\in\mathrm{M}\otimes\mathrm{M}_{n},\omega\in(\mathrm{M}\otimes\mathrm{M}_{n} )_{*}\) with \(\|x\|=\|\omega\|=1\) we have \(\limsup_{\lambda\in\Lambda}|\langle x-(\Phi_{\lambda}\otimes\mathrm{id})(x), \omega\rangle|\leq\varepsilon\). Then \(\widehat{\mathbb{G}}\) has the Haagerup property._
Proof.: As the proof is very similar to that above (and in fact even simpler), we only outline the main steps. We again begin by passing from the net \((\Phi_{\lambda})_{\lambda\in\Lambda}\) to \((\widetilde{\Phi}_{\lambda})_{\lambda\in\Lambda}\) by averaging; the proof of [DFSW, Theorem 7.7] shows that the positive-definite functions associated to the resulting multipliers belong to \(\mathrm{c}_{0}(\widehat{\mathbb{G}})\). Averaging again and then symmetrizing gives a net of normalized real-valued positive-definite functions which in the limit are '\(\varepsilon\)-close to \(\mathds{1}\)'. This contradicts Corollary 3.7.
We finish the paper by stating two natural open questions.
**Question 4.9**.: _Let \(\mathrm{M}\) be a von Neumann algebra and let \(\varepsilon\in(0,1)\)._
* _Is the_ \(\varepsilon\)_-separation property of_ \(\mathrm{M}\) _equivalent to the matrix_ \(\varepsilon\)_-separation property of_ \(\mathrm{M}\)_?_
* _Is the matrix_ \(\varepsilon\)_-separation property of_ \(\mathrm{M}\) _equivalent to non-injectivity of_ \(\mathrm{M}\)_?_
Our results show that the answer to the first question above is positive for von Neumann algebras of discrete groups and that the answer to the second question is positive for von Neumann algebras of unimodular discrete quantum groups. Note that in view of Proposition 4.5 positive answers to the questions above are equivalent to a negative answer to the following question.
**Question 4.10**.: _Let \(\mathrm{M}\) be a von Neumann algebra._
* _Does the_ \(\varepsilon\)_-separation property of_ \(\mathrm{M}\) _depend on_ \(\varepsilon\in(0,1)\)_?_
One could of course formulate variants of the above questions relevant for the von Neumann algebraic Haagerup property.
Acknowledgments. J.K. was partially supported by EPSRC grants EP/T03064X/1 and EP/T030992/1. A.S. was partially supported by the National Science Center (NCN) grant no. 2020/39/I/ST1/01566. A.S. would also like to express his gratitude to Matthew Daws and Christian Voigt for making possible his visit to Glasgow in May 2023, where some of the work on this paper was done.
|
2309.06624 | Defining the Entropy and Internal Energy of a Monetary Schelling model
through the Energy States of Individual Agents | This work investigates a modified Schelling model within the scope and aims
of Social Physics. The main purpose is to see how the concepts of potential
and kinetic energy can be represented within a computational sociological
system. A monetary value is assigned to all the agents in the Monetary
Schelling model, together with a set of dynamics for how the money is spent upon
agent position changes and gradually lost. The introduction of the potential and
kinetic energy allows for the entropy to be calculated based upon the
distribution of the agent energies as well as the internal energy of the
system at each time point. The results show how the movements of the agents
produce identity satisfactions with their neighbors decreasing the internal
energy of the system along with the decay in the monetary holdings. Simulations
are run where agents are provided monetary values at fixed intervals and this
causes a subset of the agents to mobilize and explore new positions for
satisfaction and increases the entropy with the internal energy removing the
system from the fixed point. | George-Rafael Domenikos, Tyler Laurie, Sahar Awaji, Alexander V. Mantzaris | 2023-09-12T22:26:17Z | http://arxiv.org/abs/2309.06624v1 | Defining the Entropy and Internal Energy of a Monetary Schelling model through the Energy States of Individual Agents
###### Abstract
This work investigates a modified Schelling model within the scope and aims of Social Physics. The main purpose is to see how the concepts of potential and kinetic energy can be represented within a computational sociological system. A monetary value is assigned to all the agents in the Monetary Schelling model, together with a set of dynamics for how the money is spent upon agent position changes and gradually lost. The introduction of the potential and kinetic energy allows the entropy to be calculated based upon the distribution of the agent energies, as well as the internal energy of the system at each time point. The results show how the movements of the agents produce identity satisfaction with their neighbors, decreasing the internal energy of the system along with the decay in the monetary holdings. Simulations are run where agents are provided monetary values at fixed intervals, and this causes a subset of the agents to mobilize and explore new positions for satisfaction, increasing the entropy and the internal energy and removing the system from the fixed point.
## 1 Introduction
Socio physics is a field with the goal of understanding social processes and phenomena using methodological approaches derived from or inspired by the science of physics [30, 31]. The aim is not to ignore the work done or ongoing in the social sciences, but to augment it with different tools that help dissect the granularity of the intricate processes under study. Such processes include political polarization [11], policy making [12], morality consensus development [8] and many others. These processes pose interesting challenges to researchers as they display non-linear phenomena which are difficult to model, as noted in the seminal work of [5; 2] studying self-organized criticality. Physics has developed many approaches to studying complex processes which accurately predict their behaviors, such as that of transistors [21], where high degrees of confidence are placed in the models. Understanding these processes is all the more important as urbanization is on the increase globally [7].
The work presented here proposes novel methodological advancements in Socio Physics using a modification of the well-known Schelling model of segregation [25; 26]. When Schelling introduced this model it involved a 13x13 grid where each grid cell could be occupied by at most a single hypothetical agent. There were empty cells and each agent had one of two different identity labels. Agents contain a binary state variable of identity satisfaction whose value is defined by having a sufficient number of adjacent cells occupied by agents of the same identity label. If the threshold for the number of homogeneous neighboring agents is met the agent _remains_ in the same position, or else it will move to another position. This is the fundamental dynamic of the Schelling model and is evaluated for all the agents by choosing agents in a random order at each application of the dynamic. Modifications and generalizations of this canonical Schelling model exist [24] producing interesting behaviors.
The Schelling model is one of the first sociological models explored in a computational environment, although Schelling originally conducted the simulations by hand. Similarly, the Schelling model is one of the first to be explored as having dynamics that are analogues of physical processes. A parallel with the Ising model of magnetism [16; 4; 6] was drawn, since the atomic units in the Ising model also take a summation of their adjacent neighbor states, of spins rather than identities. The fundamental connection between this sociological model and the physical model is explored in [29]. The canonical model provides a framework which can be used to explore a myriad of phenomena, such as in [22], which investigates the self-organized temperature of a Schelling dynamic with Ising model descriptions.
An important practical study using the Schelling model, although void of physical analogues, is that of [13; 14], which found from empirical observation that the wealth of citizens plays a large role in urban mobilizations. Using these findings, the work of [17] proposes a dual-dynamic Schelling model that uses the identity dynamic and, in parallel, a monetary dynamic acting upon a financial store held by the agents, with which the entropy trace of the model can be explored. In contrast to the finding that the canonical Schelling model has a decreasing entropy trace as more agents enter the remain state ([19]), the results of [17] show that the entropy trace increases overall due to the monetary dynamics, therefore correcting the decreasing entropy trace. In [19; 17] the entropy is computed using Boltzmann statistics, where the distribution of the macrostates is found via microstate sampling. This involves a considerable number of Monte Carlo samples to capture the mode density and surrounding regions.
Important recent work in this field includes [23], which discusses a fundamental modeling paradigm for the formation of consensus. In terms of socio-physics it provides novel insight into how physical concepts like magnetization can be incorporated into a model of social processes. In the work of [27] the reader can find fundamental developments for the field of socio-physics, in that the concepts of percolation and nucleation are introduced into a simulated model of social interactions. Percolation is an important concept, as that paper describes, since the thresholds allow phase transitions, which are evident in social processes, to be modeled; this helps drive the investigation of how an understanding of physics can help model even those systems. The work of [20] provides a deep investigation into policy networks for a subset of European countries and how power can concentrate, which is relevant to the modeling paradigm utilized in this study. A key point in that paper is the formation of ties and their association with a concentration of power when a complex network is produced. This also sets a foundation for the definition of variables which will be treated thermodynamically in this work and in the future. The work here, based on the Schelling model, utilizes the concept of the 'agent' for each resident, and as a system it is an agent-based model (ABM) [10]. The work of [15] shows the nonlinear nature of agents' collective behavior in relation to polarization and how institutions can affect the trajectory of such trends.
The goal of this paper within the context of socio physics is threefold. First, to present a Monetary Schelling model for which the energy, kinetic and potential, is computed for each agent at each iteration (time step). Second, to calculate the entropy of the model based upon the distribution of the energies of the agents (sum of kinetic and potential energy) after discretization of the energy values; such an approach avoids the costly Monte Carlo sampling scheme which was previously used. Third, to calculate the internal energy of the model based upon the energies of the agents and the probability of their presence. The introduction of these quantities into the model allows for a deeper physical interpretation of the Schelling model and its dynamics, as well as an exploration of the intricacies of how the monetary and identity dynamics cooperate in order to produce an overall stable system dynamic. Among the modifications to the canonical Schelling model, agents upon movement subtract a portion of their monetary value and distribute it uniformly to the agents in the new neighborhood (as in [17]); agents at each iteration lose a percentage of their monetary value which is not absorbed by other agents; and in order to move positions an agent must have a kinetic energy larger than a minimum threshold.
Although this work relies on the basic Schelling construction where agents move on a grid (lattice), the work of [33] develops a methodology which allows the entropy of a network to be computed. Such networks do not follow the rigid uniform pattern offered by a grid and can apply directly to complex real situations. In relation to previous work, this work offers the exploration of the internal energy and kinetic energy of the system, which has not yet been explored for a model such as the Schelling model (the Monetary Schelling model in this case).
The Methodology section presents the definitions of the dynamics, and the Results section the simulation trajectories of the quantities defined for the model. A key finding is that the entropy trace results conform with the results of [19] and that the internal energy of the model follows a trajectory of decay. The simulations also explore the situation where the agents are provided a uniform monetary addition every 200 iterations. This injection allows the model to deviate from the fixed resting point and re-establish the previous dynamics, providing insight into how financial policy can affect urban mobilizations. The implementation has been done using Julia Lang [3] as it offers computational efficiency and clear syntax for the representation of the equations defined.
## 2 Methodology
This section defines the quantities and values which parameterize the dynamics of the Monetary Schelling model explored in this work. The key aspects which deviate from the canonical Schelling model [25] are that agents contain a monetary store which can increase or decrease through environment injections, movements that induce costs, loss over time to the environment, and the spreading of some expenses to the neighborhood. On top of the monetary quantities and the introduced dynamics, physical quantities are defined based on the state of the agents at each time point: kinetic and potential energy. This allows the entropy of the system at each time point to be calculated using the distribution of the energies among the agents, in the same manner as is done in the kinetic theory of gases. Using the discretized probability distribution of the agent energies, the internal energy of the system is then calculated.
The potential energy \(V\) for each agent at position \((i,j)\) at simulation time point \(t\) can be found using the agent's _remain_ status \(r_{i,j,t}\in\{0,1\}\):
\[V_{i,j,t}=\begin{cases}0&\text{if }r_{i,j,t-1}=1,\\ 1&\text{if }r_{i,j,t-1}=0.\end{cases} \tag{1}\]
This represents the intuition that if an agent remains in the same position between time steps then its potential energy is zero, as it is not mobile. As will be seen, this remain state \(r_{i,j,t}\) is not only dependent upon the local neighbor homogeneity count, as in the classic canonical Schelling model, but also upon there being a sufficient kinetic energy value. Therefore the remain state \(r_{i,j,t}\) depends upon both the local identity differential and a minimum monetary value (kinetic energy expenditure threshold).
The kinetic energy value for each agent at each time point \((i,j,t)\) is based upon the value of the monetary store it holds, \(m_{i,j,t}\):
\[K_{i,j,t}=\frac{2\,m_{i,j,t}}{\max\big(\mathbf{m}_{i,j,t=0}:\ \forall\,i,j\ \text{such that}\ \mathbf{m}_{i,j}\neq\emptyset\big)}. \tag{2}\]
Here, \(\mathbf{m}\) represents the vector of all the agent monetary store values. The denominator of this equation finds the maximum monetary value among all the agents at the start of the simulation. This type of normalization is done in order to have comparable magnitudes between the kinetic energy and the potential energy. As will be seen, agent movements spread the movement costs to their neighbors, causing a decrease in the maximum monetary value held by agents at later time points and resulting in \(K_{i,j,t}\) typically being less than 1. Another aspect is that, although the kinetic energy depends on the agent's time-varying monetary state, the normalization constant is not a function of time. The value of \(K_{i,j}\) lies in \([0,2]\) due to the constant factor 2 included in the numerator, which is chosen to facilitate the movements of the agents. If this factor were not included then the kinetic energy could not surpass the value of 1, which is the maximum value the potential energy can take. The kinetic energy value must at times be greater than the potential energy in order to ensure movements. A larger value for this factor can be selected, which would give the agents a greater amount of freedom of movement, and the modeler can change this value if needed.
The identity homogeneity satisfaction criterion of the classic canonical Schelling model is represented by \(l(i,j,t)>h_{i,j}\). The number of local homogeneous agents neighboring the position \((i,j)\) at time point \(t\) is given by \(l(i,j,t)\) and the threshold for that grid position is given by \(h_{i,j}\). The threshold value depends upon the grid position since the corners and edges of the grid require fewer homogeneous neighbors (as implemented in the code running the simulations). This proposed model introduces a quantity, \(\Delta\epsilon=0.2\), that is the minimum kinetic energy cost required by agents in order to change grid positions. In this Monetary Schelling model the remain state for an agent over time, \(r_{i,j,t}\), is given by:
\[r_{i,j,t}=(K_{i,j,t}>\Delta\epsilon)\wedge(l(i,j,t)<h_{i,j}). \tag{3}\]
This states that whether an agent remains in the same position or moves depends upon both the local identity homogeneity and whether the agent contains enough kinetic energy to surpass the barrier cost (akin to an activation energy, or potential barrier) \(\Delta\epsilon=0.2\). In the implementation, agents which do not remain in their current position move to a random grid position rather than scanning the grid for a position where identity satisfaction would be reached; this keeps the model closer to a physical system without guidance on the particles.
Agents are exposed to a movement-cost dynamic which affects their monetary store value. When an agent moves, its stored monetary value, and consequently its kinetic energy, is reduced with each movement. This reduction in the monetary value is the monetary equivalent of the kinetic energy cost, obtained through the inverse of the kinetic energy relation:
\[m_{i,j,t}=m_{i,j,t-1}-K^{-1}(\Delta v). \tag{4}\]
This dynamic based on the value \(\Delta v\) can be seen as a type of monetary quantum. The reduction in monetary value is then uniformly distributed amongst the agent's neighbors, which then have their monetary value (and kinetic energy) increased. Agents also undergo a monetary decrease on each iteration, independently of other agents, as an analogue of the expenses residents are expected to have over time beyond the costs related to relocation. This is modeled by having the monetary holding of each agent reduced by 5%, and this monetary loss is not given to neighbors but simply subtracted from the system (akin to heat radiation). \(K^{-1}(\Delta v)\) is the inverse function applied to the value \(\Delta v\), so that what is returned is the monetary value that corresponds to the smallest amount of kinetic energy which can be exchanged. In this approach the model introduces the concept of a 'monetary quantum', the smallest amount of money that can be exchanged, as an analogue of the smallest amount of kinetic energy that can be exchanged in a physical system. In terms of the simulated reality experienced by the hypothetical residential agents, this corresponds to a minimal transaction fee on all transactions.
The energy of each agent at every time point in the simulation is defined by the aggregate of the potential and kinetic energy of the agents:
\[E_{i,j,t}=V_{i,j,t}+K_{i,j,t}. \tag{5}\]
This represents the total energy of each agent during the course of the simulation.
The probability of the energy states is required in order to calculate the entropy and internal energy of the complete system at each time point. Since the energies are on a continuous domain, they are discretized first. Each energy state value is indexed by \(n\), with \(n_{total}\) the total number of discrete energy states of all the agents at each time point, considered independently. The discretization uses fixed energy bins set at \(E_{n}\in\{0,0.2,0.4,\ldots,999.8,1000\}\) and a bin distance threshold of \(\epsilon=0.1\):
\[p_{t}(E_{n})=\frac{\sum_{i=1}^{N}\sum_{j=1}^{N}(|E_{i,j,t}-E_{n}|\leq\epsilon \wedge E_{i,j,t}\neq\emptyset)}{\|N\|}. \tag{6}\]
Here \(\|N\|\) is the total number of agents in the system which remains constant. \(p_{t}(E_{n})\) is a probability mass function over the discrete domain of possible energy states \(E_{n}\).
With the probabilities of the energy state values at each time point, the entropy can be calculated from this distribution using the standard formulation [28]:
\[S_{t}=-\sum_{n=1}^{n_{total}}p_{t}(E_{n})ln(p_{t}(E_{n})). \tag{7}\]
The value of \(S_{t}\) can be understood as a measure of the uniformity of the energy values across the domain: larger \(S_{t}\) values result from the agents occupying a broad range of \(E_{n}\) values, and \(S_{t}\) is low when the agents occupy a limited number of \(E_{n}\) values. How entropy relates to spatial configurations of agents on a grid can be understood thoroughly from the detailed work of [32]. Over the simulation iterations, the entropy decreases as the system converges towards homogeneity, and it increases if the agents begin to occupy a wide range of energy states. \(S_{t}\) can then provide insight into the stability of the system by assessing the range of different energy states the society residents occupy. This approach bypasses the need to produce a lengthy Monte Carlo estimate from numerous independent microstate samples in order to find a distribution over the macrostates, as is done in [19, 17] for Schelling-based models, and in [9, 18] for non-Schelling computational sociological models.
The internal energy of the system at each time point is defined by using the distribution of the energies and the energy values themselves:
\[U_{t}=\sum_{n=1}^{n_{total}}p_{t}(E_{n})E_{n}. \tag{8}\]
This quantity helps to understand whether the system will remain energetic in the future or not. Systems with zero internal energy are at a stable equilibrium or resting point.
## 3 Results
### Initial Investigation
Figure 1 shows four plots of a simulation of the Monetary Schelling model for 800 iterations (iterations marked on the horizontal axis). The plot on the top left shows the percentage of agents at each iteration which will 'remain' (Eq 3) in their grid position according to the criteria of the Monetary Schelling model discussed in the Methodology. It must be noted that in this model, although all the agents converge to a remain state in their current position, the reason may be either Schelling identity satisfaction or an insufficient amount of monetary holdings to provide the necessary kinetic energy \(\Delta v\) required to change positions. It can be seen how even in this model almost all the agents quickly arrive at a remain state from the random initialization, as noted originally by Thomas Schelling [25]. The top right plot shows the value of the overall monetary disparity accumulated across all the agents in the model. This measure takes the absolute monetary value difference between an agent and all adjacent neighbors, without any normalization, so as to resemble physical energetic calculations. The bottom left plot shows how the model entropy for the Monetary Schelling model changes value with the newly proposed methodology (Eq 7). The eventual decrease in entropy confirms the findings in [19], where the entropy of the system decreases as the system arrives at a more 'organized' agent identity configuration. The bottom right plot shows how the internal energy (Eq 8) of the Monetary Schelling model can be monitored. The monotonic decrease is due to agents reaching identity satisfaction (less potential energy) and holding a reduced amount of money (less kinetic energy). Monetary holdings are driven to zero in this simulation, as the agents spend a portion of their holdings regardless of movement and there is no dynamic for re-introducing monetary values to the agents. This shows how the model arrives at a static equilibrium similar to physical systems, since there is no energy left in the system participants (agents).
Figure 2 shows a different set of analytic measurements of the same system simulation presented in Figure 1. The plot on the top left shows the value of the Gini coefficient computed on the potential energy (Eq 1) at each time point. The increase continues until there is only a single agent which has an opportunity to move (maximum energetic inequality). This can come from an agent not being identity satisfied, but it must also have sufficient monetary value to move (Eq 3). The top right plot shows the Gini coefficient for the kinetic energy (Eq 2), which is not tied to the potential energy (Eq 1) as it decreases at a later iteration. The decrease happens because of the spending at each iteration regardless of movements. It continues to grow until only a single agent has a non-zero equivalent monetary value. The bottom left plot shows the percentage of agents which will remain in their positions in the next iteration. The bottom right plot is the mean kinetic energy of the agents.
### Monetary Injections
The results of the simulations shown in this subsection demonstrate the effects of 'injecting' monetary values into each of the agents uniformly at every 200 iterations (100,000 to each agent). The purpose is to re-stimulate the system from the fixed point of stable equilibrium (resting point) and observe the trends which take effect.
Figure 1: The horizontal axis on each plot shows the iteration of the Monetary Schelling model. Top-left: the percentage of agents which remain in their position due to Schelling identity satisfaction. Top-right: the overall aggregate monetary disparity between each agent and their neighbors. Bottom-left: the model entropy calculated from the probability distribution of the agent energies (Eq 7). Bottom-right: the internal energy of the system based upon the probability of an energy and that energy value (Eq 8). The system arrives to rest and stabilizes around a fixed point as agents remain in their positions.
Figure 2: The top plots show the Gini coefficient values for the potential energies of the agents (Eq 1) (based upon their remain values) and the kinetic energies (Eq 2) (based on the amount of money held by each agent). The bottom left plot shows the percentage of agents which are in the remain state during the simulation and the bottom right plot the kinetic energy mean at each iteration. This shows that there is a phase where most agents no longer change positions and gradually lose their kinetic energy over time due to holding less monetary value.

Figure 3 shows in the top left plot the remain ratio, which does not display monotonicity with the monetary injections. The configuration of the agents does not allow all of them to find identity satisfaction, and previously they would stop moving (remaining) due to insufficient funds. With the injections the unsatisfied subset of agents begins to mobilize again, as their kinetic energy is sufficient to overcome \(\Delta v\) (Eq 3). With each injection fewer agents mobilize, as some eventually find new homogeneous clusters of their own group ([9]). The top right plot shows the aggregate monetary disparity between the agents and their adjacent neighbors. The injections cause a monetary disparity since agents which still move hold less money than their neighbors. The relative size of these spikes mirrors the dip sizes in the top left plot, so that they decrease as the number of mobile agents decreases, there being a smaller set of identity-unsatisfied agents. The bottom left plot shows how the entropy of the model is affected by the monetary injections. The entropy has a decreasing trend due to the distribution of the total agent energy probabilities accumulating into a single discrete bin. The injections, allowing a portion of agents to deviate from this uniformity, cause the distribution to span multiple probability bins and therefore to have a non-zero entropy. The model internal energy shows a similar spiking, but the size of the spikes is not dependent upon a distributional disparity but only upon the energy of the agents, which is uniform at each injection. The initial internal energy value is the highest since the potential energy among the agents is highest, even though the monetary injections compensate for decreases in the kinetic energy.
Figure 3: The horizontal axis on each plot shows the iteration of the Monetary Schelling model. The top-left plot shows the percentage of agents which remain in their position due to Schelling identity satisfaction. Top-right is the overall aggregate monetary disparity between each agent and their neighbors. Bottom-left is the model entropy trajectory (Eq 7) which is seen to reflect the size of the set of the mobile agents over time. Bottom-right shows the internal energy of the system (Eq 8) based upon the probability of an energy and the energy value which is also related to the number of agents which are mobilized.

Figure 4 has plots analogous to Figure 2, showing the effect of monetary injections on the system. The top left plot shows that the equality of the potential energy is disrupted since the agents which still seek identity satisfaction will move while they hold sufficient funds. The top right shows how the injections affect the inequality of the kinetic energies. The kinetic energy inequality spikes right before almost all the agents reach the lowest possible monetary holding bin value, so that only a single agent remains with any monetary holding (maximum monetary and kinetic energy inequality). The bottom left shows the agent remain percentage over the simulation trajectory. The bottom right shows the spikes in the model kinetic energies due to the monetary injections every 200 iterations and how they decrease as the agents hold less monetary value. It can therefore be seen that a system which lacks an injection of energy (monetary values here) degenerates towards the fixed point of the system.
Figure 4: The top plots show the Gini coefficient values for the potential energies of the agents (based upon their remain values, Eq 1), and the Gini of kinetic energies (based on the amount of money held by each agent, Eq 2). The bottom left plot shows the percentage of agents which are in the remain state during the simulation and the bottom right plot the kinetic energy mean at each iteration. This shows that in a phase where most agents no longer change positions and gradually lose their kinetic energy over time due to holding less monetary value the internal energy will descend to zero.

Figures 3 and 4 display a key outcome of these simulations: the behavior of the entropy calculated from the energy states of the system is similar to the entropy trace as it would be defined by the canonical entropy formulation from the microstates of the system [19]. From the interpretation of the entropy values it can be understood that, as the entropy decreases, the number of microstates which can occupy the state of the system decreases as well, and this provides a different progression metric than other Schelling trajectory methods based upon cluster densities [24]. This shows that the rule set of the Schelling model is not performing randomized state allocations, which would result in the state oscillating around the largest microstate mode.
A key contribution of this paper is the use of energy states, based on the kinetic and the potential energy, for the definition of the entropy. In thermodynamics, the entropy is defined on the energy states of the atoms. The aim of this work is to establish a statistical/informational description of a system that not only describes its agent behaviors directly, but is also compatible with thermodynamics. By utilising this derivation of the entropy, the rest of the thermodynamical quantities such as the energy or the temperature can be defined in follow-up work, allowing a model to draw from the scientific knowledge of thermodynamics. In the canonical derivation of the entropy, since it is defined directly on the location microstates, the calculated probabilities could not be utilised for a calculation of an internal energy even if an energy state were subsequently defined. As such, any definition of macroscopic energy values in such a system would not be self-consistent with the entropy of the system and no conclusions could be drawn. Only by defining these microscopic energy states of the kinetic and potential energy of the system can a definition of the macroscopic energy value be developed. Thus the fact that the entropy of the system (calculated upon the energy microstates) behaves similarly to the canonical entropy assures that this definition holds true and can be utilised further in other socio physics models.
## 4 Discussion
This work presents a modified Schelling model which includes a monetary dynamic. In the paper it is referred to as the Monetary Schelling model, as agents require a certain amount of monetary value to change grid position and a cost is incurred for each movement. A certain amount of monetary value is also removed at each iteration, so that over time the agents' monetary stores move towards zero. The overarching goal of the paper is to provide an incremental advancement on the goals of the field of Socio Physics: the agents have a potential energy and a kinetic energy defined based on their state at each iteration. Using these energies, the entropy of the model at each time point as well as the internal energy is defined. This provides a novel approach to calculating the entropy for a computational model of a social process.
The results showcase how agents cease to explore grid positions for identity satisfaction when they hold an insufficient amount of monetary value, and how monetary injections are required for further exploration of new positions. The injections demonstrate that, without them, the system moves towards a fixed stable state as the kinetic energy (based on the monetary holdings of the agents) is lost to a process of monetary decrease (akin to a loss by thermal radiation). It can be seen how the formulations of the kinetic and potential energy of agents can be defined and incorporated into the modified Schelling model, where the entropy is based upon those energy state values. In terms of policy, the applicability of such a modeling approach can inspire the computation of energy within groups that have power brokers between organizations with competitive alignments [1].
|
2309.11358 | On Green's function embedding using sum-over-pole representations | In Green's function theory, the total energy of an interacting many-electron
system can be expressed in a variational form using the Klein or Luttinger-Ward
functionals. Green's function theory also naturally addresses the case where
the interacting system is embedded into a bath. This latter can then act as a
dynamical (i.e., frequency-dependent) potential, providing a more general
framework than that of conventional static external potentials. Notably, the
Klein functional includes a term of the form $\text{Tr}_\omega
\text{Ln}\left\{G_0^{-1}G\right\}$, where $\text{Tr}_\omega$ is the frequency
integration of the trace operator. Here, we show that using a sum-over-pole
representation for the Green's functions and the algorithmic-inversion method
one can obtain in full generality an explicit analytical expression for
$\text{Tr}_\omega \text{Ln}\left\{G_0^{-1}G\right\}$. This allows one, e.g., to
derive a variational expression for the Klein functional in the presence of an
embedding bath, or to provide an explicit expression of the RPA correlation
energy in the framework of the optimized effective potential. | Andrea Ferretti, Tommaso Chiarotti, Nicola Marzari | 2023-09-20T14:40:57Z | http://arxiv.org/abs/2309.11358v1 | # On Green's function embedding using sum-over-pole representations
###### Abstract
In Green's function theory, the total energy of an interacting many-electron system can be expressed in a variational form using the Klein or Luttinger-Ward functionals. Green's function theory also naturally addresses the case where the interacting system is embedded into a bath. This latter can then act as a dynamical (i.e., frequency-dependent) potential, providing a more general framework than that of conventional static external potentials. Notably, the Klein functional includes a term of the form \(\mathrm{Tr}_{\omega}\mathrm{Ln}\left\{G_{0}^{-1}G\right\}\), where \(\mathrm{Tr}_{\omega}\) is the frequency integration of the trace operator. Here, we show that using a sum-over-pole representation for the Green's functions and the algorithmic-inversion method one can obtain in full generality an explicit analytical expression for \(\mathrm{Tr}_{\omega}\mathrm{Ln}\left\{G_{0}^{-1}G\right\}\). This allows one, e.g., to derive a variational expression for the Klein functional in the presence of an embedding bath, or to provide an explicit expression of the RPA correlation energy in the framework of the optimized effective potential.
## I Introduction
Electronic-structure simulations based on density-functional theory (DFT) [1] are today widely exploited [2] in condensed-matter physics, quantum chemistry, or materials modelling [3]. Even if DFT can in principle be used to access any observable of an interacting system as a functional of the density [3; 4; 5], currently available functionals and approximations are mostly limited to the ground-state total energy (and, in turn, to its derivatives wrt external parameters) and to observables connected to the charge density. Instead, electronic excitations are typically addressed by extensions of the basic theory, such as time-dependent DFT [6; 7; 8] or ensemble DFT [9; 10; 11]. Notably, all these approaches are equipped with a variational principle which allows one to determine the basic quantity of the theory (e.g. the density in DFT or its time-dependent version in TD-DFT) for the systems studied.
Conversely, Green's function (GF) methods [5; 12] such as the GW approximation and its combination with the Bethe-Salpeter equation (BSE) [13; 14; 15; 16], are commonly used to address charged and neutral excitations. Nevertheless, the one-particle GF can also be used to access the ground-state total energy [5; 17] (e.g., via the Galitskii-Migdal expression). Variationality of the total energy wrt the one-particle Green's function can be recovered by using the Luttinger-Ward or Klein (LWK) functionals [18; 19; 20; 21], which become stationary when evaluated at the interacting Green's function of the system. Examples include applications to atoms and molecules [22; 23; 24; 25], to Hubbard chains [26; 27], or to the homogeneous electron gas [28; 29; 30]. When the Klein functional is combined with an optimized effective potential approach [31; 32] one obtains the linearized Sham-Schluter equation [33; 34], which can be used to derive advanced KS-DFT functionals from diagrammatic approximations, such as the EXX+RPA exchange-correlation functional [5; 12; 17; 35; 36; 37; 38]. Notably, the Klein functional features a term of the form \(\int\frac{d\omega}{2\pi}\mathrm{Tr}\mathrm{Ln}\{G_{0}^{-1}G\}\) (see in Sec. II for more details), which is quite cumbersome to be evaluated numerically and needs dedicated treatment [25; 26]. The LW functional displays similar issues. In passing we also note that besides DFT-based and GF methods, other orbital-dependent or dynamical approaches [3] addressing excitations are available, including dynamical mean field theory (DMFT) [39], spectral potentials [40; 41], or Koopmans-compliant functionals [42; 43; 41].
Importantly, dynamical potentials can be naturally employed to describe embedding situations, where the system of interest is placed in contact with an external bath. In these cases, for non-interacting systems, the embedded GF can be calculated by adding an embedding self-energy [5; 12], which has the form of a non-local and dynamical potential, to the pristine Hamiltonian. This approach has been successfully exploited, e.g., in the description of semi-infinite systems (surface Green's function) and applied to simulations of quantum transport through nanojunctions [45; 46; 47; 48; 49]. When particle interactions are considered, the situation becomes more complex, but the assumption of dealing with a non-interacting bath [45] allows one to treat the problem similarly to the non-interacting case. Approaches such as DMFT [39], which is a dynamical method targeting both total energies and spectral properties, exploit the embedding of an interacting impurity model to describe the electron-electron self-energy of strongly interacting systems.
In general, the use of dynamical potentials (e.g., originating from many-body perturbation theory [5; 12], embedding, or spectral potentials [3; 40; 41]) in electronic
structure methods is a challenge by itself. Indeed, the frequency representation of propagators (or dynamical potentials) is non trivial [30; 50; 51] with viable approaches ranging from discretized frequency grids (both on the real or imaginary axis) to the use of meromorphic functions and Pade approximants [52; 53], or imaginary-time treatments [53]. Moreover, the solution of the resulting Dyson equation (which can be cast in the form of a non-linear eigenvalue problem [54]) adds further numerical and conceptual complications (including multiple solutions and non-orthonormality of the eigenvectors [5; 15; 54]). In order to address this problem, we have recently exploited the combination of a sum-over-poles (SOP) representation for the propagators, with the algorithmic-inversion method (AIM) [30; 50; 51] to exactly solve the Dyson equation resulting from dynamical potentials.
In this work, by taking advantage of the SOP-AIM approach [30; 50; 51], we first derive an analytical expression for terms of the form \(\mathrm{Tr}_{\omega}\mathrm{Ln}\left\{G_{0}^{-1}G\right\}\), such as those appearing in the Klein functional, that is valid in the general case of interacting propagators. Next, we exploit this result to (\(i\)) recover an exact expression [35] for the RPA correlation energy [37; 21; 38], and to (\(ii\)) obtain a Klein functional valid in the case of embedding where the system of interest is coupled to a non-interacting bath.
The paper is organized as follows. In Sec. II we present the theoretical framework used throughout the work. Next, in Sec. III we derive an analytical expression for \(\mathrm{Tr}_{\omega}\mathrm{Ln}\left\{G_{0}^{-1}G\right\}\). Finally, in Sec. IV we apply the newly derived result first to evaluate the RPA correlation energy, and then to the embedding of the Klein functional. Complementary details about Green's function embedding and TrLn terms are provided in Appendix A and Appendix B, respectively.
## II Theoretical framework
In this Section we present the theoretical framework underpinning the use of Green's function methods to describe an interacting system in the presence of a non-interacting bath; additional details are provided in Appendix A. We consider a closed quantum system \(C\) that is partitioned into two subsystems, \(S\) and \(B\), such that, in terms of degrees of freedom, one has \(C=S\cup B\). Particle interactions are present but limited to subsystem \(S\) only, leaving subsystem \(B\) as a non-interacting bath. All single particle operators, including Hamiltonians, self-energies, and Green's function, become 2\(\times\)2 block matrices, indexed according to the \(S\) and \(B\) subsystems. As detailed in Fig. 1, \(h_{0}\) represents the non-interacting Hamiltonian of the two systems without coupling, while \(H_{0}\) is the non-interacting Hamiltonian of \(C\) when the coupling \(V\) is included. Eventually, self-energy terms accounting for the particle-particle interaction are included. As discussed in App. A, since interactions are only present within \(S\), one can show that the corresponding self-energy is limited to the same subsystem. Moreover, since \(h_{0B}\) is non-interacting, without loss of generality we may take it diagonal on the chosen basis, such that \(h_{0B}=\mathrm{diag}(\Omega_{1},\dots,\Omega_{n},\dots)\).
Within the above definitions, and following Fig. 1, one can define the Green's functions for the whole system \(C\), at different levels of description (non-interacting and uncoupled, non-interacting and coupled, interacting in \(S\) and coupled), according to:
\[g_{0}(\omega) = \left[\omega I-\mathrm{diag}(h_{0S},h_{0B})\right]^{-1}=\left[ \omega I-h_{0}\right]^{-1}\] \[G_{0}(\omega) = \left[\omega I-H_{0}\right]^{-1}\] \[G(\omega) = \left[\omega I-H_{0}-\Sigma(\omega)\right]^{-1} \tag{1}\]
(time-ordered offsets from the real axis are left implicit). We note that when \(G\) is the physical GF, then \(\Sigma=\Sigma_{\mathrm{Hxc}}\) is the interaction self-energy (accounting for Hartree, exchange, and correlation terms). Nevertheless, in the following we will also consider cases where \(G\) is a trial GF, as discussed, e.g., in Sec. III.2. In these cases, \(\Sigma=\widetilde{\Sigma}\) just collects a set of degrees of freedom useful to represent \(G\) via
\[G=G_{0}+G_{0}\widetilde{\Sigma}G. \tag{2}\]
Within this construction, the self-energy will also be constrained to have non-zero matrix elements only within subsystem \(S\), which can be seen as a domain definition for the set of trial \(G\)'s.
Figure 1: Upper panel: Partitioning of the closed system \(C\) into the subparts \(S\) (interacting, as indicated by the wiggly line) and \(B\) (non-interacting). The Hamiltonian and self-energy blocks and the coupling \(V\) of the two subsystems are also indicated. Bottom panel: Sketch view of the three different Hamiltonians and Green’s functions involved in the discussion of embedding. Left: \(S\) and \(B\) are non-interacting and uncoupled; Central: \(S\) and \(B\) are non-interacting but coupled; Right: \(S\) is interacting and coupled to the non-interacting \(B\).

By focusing on the subsystem \(S\) and making reference to the theory of Green's function embedding, the \(S\) blocks of the above GFs are obtained as:
\[g_{0S}(\omega) = \left[\omega I_{S}-h_{0S}\right]^{-1},\] \[G_{0S}(\omega) = \left[\omega I_{S}-h_{0S}-\Delta v_{S}(\omega)\right]^{-1},\] \[G_{S}(\omega) = \left[\omega I_{S}-h_{0S}-\Delta v_{S}(\omega)-\Sigma(\omega) \right]^{-1}, \tag{3}\]
where \(\Delta v_{S}\) is an embedding self-energy due to the bath [47; 49; 5; 12; 46]:
\[\Delta v_{S}(\omega)=Vg_{0B}(\omega)V^{\dagger}=\sum_{n}\frac{R_{n}}{\omega- \Omega_{n}\pm i0^{+}}, \tag{4}\]
which acts as a correction to the external potential of \(S\).
The total energy of the closed system \(C\) can be obtained variationally, e.g., via the Klein functional [19; 21], reading
\[E^{K}[G] = \mathrm{Tr}_{\omega}\mathrm{Ln}\left\{G_{0}^{-1}G\right\}+ \mathrm{Tr}_{\omega}H_{0}G_{0}\] \[+ \mathrm{Tr}_{\omega}\left[I-G_{0}^{-1}G\right]+\Phi_{\mathrm{Hxc }}[G],\]
where \(\Phi_{\mathrm{Hxc}}[G]\) is a functional [18; 19; 20] to be approximated that is related to the interaction self-energy as
\[\frac{\delta\Phi_{\mathrm{Hxc}}[G]}{\delta G}=\frac{1}{2\pi i}\Sigma_{\mathrm{ Hxc}}[G]. \tag{6}\]
With the above definitions, one can show [5; 12] that the gradient of the Klein functional is zero for the GF \(G\) that satisfies the self-consistent Dyson equation
\[G=G_{0}+G_{0}\Sigma_{\mathrm{Hxc}}[G]G. \tag{7}\]
### Sum-over-poles and algorithmic inversion
In order to make progress in the numerical exploitation of the above described techniques, in the following we make use of the concept of sum-over-poles (SOP) [26; 27; 30; 50; 52] to represent propagators, combined with that of the algorithmic-inversion method (AIM) to solve Dyson-like equations. In practice, this amounts to writing propagators and self-energies using discrete poles and residues (meromorphic representation [26]) as
\[G_{0}(\omega) = \sum_{n}\frac{A_{n}^{0}}{\omega-\epsilon_{n}^{0}\pm i0^{+}}, \tag{8}\] \[G(\omega) = \sum_{s}\frac{A_{s}}{\omega-\epsilon_{s}\pm i0^{+}},\] (9) \[\Sigma(\omega) = \Sigma_{0}+\sum_{n}\frac{\Gamma_{n}}{\omega-\omega_{n}\pm i0^{+}}, \tag{10}\]
which could be seen also as discrete Lehmann representations [52]. Recently, SOPs have also been used to represent the screened Coulomb interaction in the context of GW leading to the multi-pole approximation (MPA) [55; 56]. For simplicity, in this work we assume all residues and poles to be Hermitian and real, respectively. In the above expressions, \(G_{0}\) is a non-interacting Green's function (GF) obtained from the single-particle Hamiltonian \(h_{0}\),
\[h_{0}|\phi_{n}^{0}\rangle=\epsilon_{n}^{0}|\phi_{n}^{0}\rangle,\qquad A_{n}^{0 }=|\phi_{n}^{0}\rangle\langle\phi_{n}^{0}|, \tag{11}\]
while \(G\) is an interacting or embedded GF, obtained from \(G_{0}\) by a Dyson equation involving \(\Sigma\), i.e. \(G=[\omega I-h_{0}-\Sigma(\omega)]^{-1}\).
Having assumed discrete and real poles for \(\Sigma\) and \(G_{0}\) (and Hermitian residues) implies [50; 54] that \(G\) also has real discrete poles and that the residues can be written as
\[\left[h_{0}+\Sigma(\epsilon_{s})\right]|f_{s}\rangle=\epsilon_{s}|f_{s}\rangle,\qquad A_{s}=|f_{s}\rangle\langle f_{s}|, \tag{12}\]
where the normalization of \(|f_{s}\rangle\) is defined according to
\[\langle f_{s}|f_{s}\rangle = Z_{s}=1+\langle f_{s}|\dot{\Sigma}(\epsilon_{s})|f_{s}\rangle \leq 1, \tag{13}\] \[\sum_{s}|f_{s}\rangle\langle f_{s}| = I, \tag{14}\]
i.e., the \(|f_{s}\rangle\) are complete though not linearly independent nor orthonormalized (see also Ref. [51]), where we have used \(\dot{\Sigma}(\omega)=\partial\Sigma(\omega)/\partial\omega\). In writing the expressions above the Dyson equation has been mapped to a non-linear eigenvalue problem involving rational functions [50; 54]. Moreover, noting that the residues of \(G\) in Eq. (9) are positive semi-definite (PSD) by construction, the residues \(\Gamma_{n}\) of \(\Sigma\) are also forced to be PSD Hermitian operators. In fact, given
\[A(\omega) = \frac{1}{2\pi i}\left[G(\omega)-G^{\dagger}(\omega)\right] \mathrm{sign}(\mu-\omega),\] \[\Gamma(\omega) = \frac{1}{2\pi i}\left[\Sigma(\omega)-\Sigma^{\dagger}(\omega) \right]\mathrm{sign}(\mu-\omega), \tag{15}\] \[A(\omega) = G(\omega)\Gamma(\omega)G^{\dagger}(\omega), \tag{16}\]
(the last identity coming from the Dyson equation), the positive semi-definiteness of \(A\) is equivalent [57; 12] (i.e., if and only if) to that of \(\Gamma\).
Next, given \(G_{0}\) and \(\Sigma\) represented as SOPs, it is possible to explicitly evaluate the coefficients of the GF \(G\) solving the related Dyson equation. This approach, termed algorithmic-inversion method (AIM) [50], maps the non-linear eigenvalue problem of the Dyson equation into a linear eigen-problem in a larger space. Algebraically, this can be seen as the consequence of identifying the interaction self-energy as an embedding self-energy [see Eqs. (29-30)], and then solving the Hamiltonian problem in the larger subspace; details are provided in Ref. [50]. We also note that similar techniques have been used in the context of dynamical mean-field theory [58; 59; 60], lattice Hamiltonians [26], and, more recently, within the GW and Bethe-Salpeter equation formalism [61; 62].
## III Analytical evaluation of TrLn terms
As a technical prerequisite for this work, and as a relevant result in itself, in this Section we focus on integrals
of the form:
\[\Delta E_{K} = \mathrm{Tr}_{\omega}\mathrm{Ln}\left\{G_{0}^{-1}G\right\} \tag{17}\] \[= \int\frac{d\omega}{2\pi i}\,e^{i\omega 0^{+}}\,\mathrm{TrLn} \left\{G_{0}^{-1}(\omega)G(\omega)\right\}.\]
By representing the Green's functions \(G_{0}\) and \(G\) in the above equation as SOPs according to Eqs. (8-9), one can derive a general analytical expression for \(\Delta E_{K}\) of Eq. (17), as shown below.
In order to do this, we will make use of some common operator and matrix identities, that we report below for completeness. For instance, we will use the following identity:
\[\mathrm{TrLn}(A)=\mathrm{Ln}\det(A). \tag{18}\]
Bearing Eq. (18) in mind, the following relations also hold:
\[\det(AB) = \det(A)\det(B), \tag{19}\] \[\mathrm{TrLn}(AB) = \mathrm{TrLn}(A)+\mathrm{TrLn}(B). \tag{20}\]
Moreover, given an operator \(A\) represented in the form
\[A=\left[\begin{array}{cc}S&V_{1}\\ V_{2}^{\dagger}&B\end{array}\right], \tag{21}\]
its determinant can be expressed according to [63]:
\[\det(A)=\det(B)\cdot\det(S-V_{1}B^{-1}V_{2}^{\dagger}), \tag{22}\]
which is a result reminiscent of techniques used in GF embedding, presented in Sec. II.
### Special case: non-interacting \(G\)
As a first step, we consider the case of both \(G_{0}\) and \(G\) in Eq. (17) being non-interacting GFs corresponding to mean-field Hamiltonians \(h_{0}\) and \(h_{1}\), defined as:
\[h_{i}=\sum_{m}\,|\phi_{m}^{i}\rangle\epsilon_{m}^{i}\langle\phi_{m}^{i}|. \tag{23}\]
This means that both \(G_{0}\) and \(G\) are diagonal on single-particle orthonormal basis sets (\(|\phi_{m}^{0}\rangle\) and \(|\phi_{m}^{1}\rangle\)), that can be used to evaluate the traces. Importantly, we assume that the number of occupied electrons is the same for \(G_{0}\) and \(G\). By considering Eq. (20) and taking \(A=G_{0}^{-1}\) and \(B=G\), one can write the \(\Delta E_{K}\) integral as
\[\Delta E_{K} = \int\frac{d\omega}{2\pi i}e^{i\omega 0^{+}}\left[-\mathrm{TrLn}G_{0}+ \mathrm{TrLn}G\right], \tag{24}\] \[= \int\frac{d\omega}{2\pi i}e^{i\omega 0^{+}}\Big{[}\mathrm{Ln} \frac{\Pi_{m}^{\mathrm{all}}(\omega-\epsilon_{m}^{0}\pm i0^{+})}{\Pi_{m}^{ \mathrm{all}}(\omega-\epsilon_{m}^{1}\pm i0^{+})}\Big{]}. \tag{25}\]
The label "all" in the product means that both occupied and empty poles are considered. In order to evaluate the integral using residues, the contour needs to be closed in the upper half plane, the enclosed poles corresponding to occupied states of both \(G_{0}\) and \(G\). Since the number of occupied poles of both systems is the same, the integral \(\Delta E_{K}\) can be re-written as
\[\Delta E_{K} = \sum_{m}^{\mathrm{occ}}\,\oint_{\Gamma_{m}}\frac{dz}{2\pi i} \mathrm{Ln}\,\frac{z-\epsilon_{m}^{0}-i0^{+}}{z-\epsilon_{m}^{1}-i0^{+}}, \tag{26}\]
with an example of a \(\Gamma_{m}\) contour represented in Fig. 3 of App. B.1. The analytical expression for contour integrals such as those appearing in Eq. (26) is provided in Eq. (B1) of App. B.1. Taking advantage of that expression, we recover the well-known result [5; 12; 24; 35]:
\[\Delta E_{K}=\sum_{m}^{\mathrm{occ}}\,\left[n_{m}^{1}\epsilon_{m}^{1}-n_{m}^{ 0}\epsilon_{m}^{0}\right], \tag{27}\]
where we have made the eigenvalue multiplicities \(n_{m}^{i}\) explicit and limited the sum to distinct multiplets.
### General case: interacting \(G\)
Next, in this Section we consider the case of Eq. (17) with a fully interacting \(G\). Without loss of generality, we can define a self-energy connecting \(G\) and \(G_{0}\) by a Dyson equation, by writing:
\[\Sigma(\omega)=G_{0}^{-1}-G^{-1}. \tag{28}\]
It is important to note that such a self-energy is not necessarily physical (i.e. it may not originate from perturbation theory or from a functional formulation), but is rather an auxiliary mathematical object. Since \(G_{0},G\) and \(\Sigma\) are connected by a Dyson equation, and having assumed discrete poles for both \(G_{0}\) and \(G\) (which are then meromorphic functions of the frequency), \(\Sigma\) also has discrete poles. We are therefore in a position to use the SOP representations given in Eqs. (8-10). In what follows we assume that single-particle operators are represented on a truncated basis set, thereby mapping them to finite-dimensional matrices.
As discussed in Sec. II.1, the residues \(\Gamma_{n}\) of \(\Sigma\) are semi-positive definite (stemming from the semi-positive definiteness of the spectral function of \(G\)) and, following Refs. [30; 50], one can introduce \(V_{n}\) such that
\[\Gamma_{n}=V_{n}^{\dagger}V_{n}. \tag{29}\]
In doing so, \(V_{n}\) can be taken, e.g., to be the square root of \(\Gamma_{n}\) or to be a lower-rank rectangular matrix (when represented on a basis) if \(\Gamma_{n}\) is low-rank. By doing this, \(G\) can be seen as the GF of an embedded system (index 0, below), coupled to an external bath. Indeed, by defining the inverse resolvent (\(\omega I-\mathcal{H}\)) of the whole auxiliary
system as
\[\omega I-\mathcal{H} = \left[\begin{array}{cccc}\omega I-h_{0}&V_{1}&V_{2}&\cdots\\ V_{1}^{\dagger}&(\omega-\omega_{1})I&&\\ V_{2}^{\dagger}&&(\omega-\omega_{2})I&\\ \vdots&&&\ddots\end{array}\right], \tag{30}\] \[= \left[\begin{array}{cc}S&V\\ V^{\dagger}&B\end{array}\right],\]
one can immediately verify that the self-energy in Eq. (10) is the embedding self-energy for the zeroth-block subsystem \(S\) (in the following, calligraphic operators such as \(\mathcal{H}\) refer to the enlarged auxiliary space). This construction is the same used in the framework of the algorithmic inversion method [30; 50], used to solve Dyson equations involving propagators represented as SOP and presented in Sec. II.1.
We can now apply the identity in Eq. (22) to the matrix in Eq. (30), obtaining:
\[\det(\omega I-\mathcal{H}) = \det(B)\times\det\big{(}S-VB^{-1}V^{\dagger}\big{)} \tag{31}\] \[= \prod_{n}(\omega-\omega_{n})^{r_{n}}\times\det(\omega I-h_{0}-\Sigma),\]
where \(r_{n}\) is the rank of the \(\Gamma_{n}\) matrix. The above equation can be recast in the following form:
\[\det\,G(\omega) = \prod_{n}(\omega-\omega_{n})^{r_{n}}\times\det(\omega I-\mathcal{ H})^{-1}, \tag{32}\] \[= \frac{\prod_{n}^{\rm all}(\omega-\omega_{n})^{r_{n}}}{\prod_{s}^ {\rm all}(\omega-\epsilon_{s})^{n_{s}}}, \tag{33}\]
where we have exploited the fact that the poles of \(G\) are also eigenvalues of \(\mathcal{H}\) for the whole system, and made the multiplicities \(n_{s}\) explicit.
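To make the algebra above concrete, the following Python sketch (with an arbitrary toy choice of \(h_{0}\), \(\omega_{n}\) and rank-one residues \(\Gamma_{n}=V_{n}^{\dagger}V_{n}\)) builds the enlarged Hamiltonian of Eq. (30) and checks Eqs. (32)-(33) at a complex test frequency; it is an illustration only, not the actual AIM implementation of Refs. [30; 50].

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3                                    # size of the physical (zeroth-block) space

h0 = rng.standard_normal((N, N)); h0 = (h0 + h0.T) / 2
poles = [-1.0, 2.0]                      # self-energy poles w_n
Vs = [rng.standard_normal((1, N)) for _ in poles]   # rank-1 couplings, Gamma_n = Vn^T Vn

def sigma(w):
    """SOP self-energy Sigma(w) = sum_n Vn^dagger Vn / (w - w_n)."""
    return sum(V.T @ V / (w - wn) for V, wn in zip(Vs, poles))

# Enlarged Hamiltonian: one bath block of size r_n per self-energy pole, cf. Eq. (30)
dim = N + sum(V.shape[0] for V in Vs)
H = np.zeros((dim, dim))
H[:N, :N] = h0
off = N
for V, wn in zip(Vs, poles):
    r = V.shape[0]
    H[:N, off:off + r] = -V.T            # so that (wI - H) carries +V blocks as in Eq. (30)
    H[off:off + r, :N] = -V
    H[off:off + r, off:off + r] = wn * np.eye(r)
    off += r

eps = np.linalg.eigvalsh(H)              # poles eps_s of G

w = 0.7 + 0.3j                           # test frequency away from the real axis
detG_direct = 1.0 / np.linalg.det(w * np.eye(N) - h0 - sigma(w))
detG_poles = np.prod([w - wn for wn in poles]) / np.prod(w - eps)
print(np.allclose(detG_direct, detG_poles))   # True up to round-off
```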
Combining Eq. (24) with the identity connecting TrLn to Ln det, Eq. (18), we obtain:
\[\Delta E_{K} = \int\frac{d\omega}{2\pi i}e^{i\omega 0^{+}}\left[\mathrm{Ln}\det G-\mathrm{Ln}\det G_{0} \right], \tag{34}\] \[= \int\frac{d\omega}{2\pi i}e^{i\omega 0^{+}}\mathrm{Ln}\left[\frac{ \prod_{n}^{\rm all}(\omega-\omega_{n})^{r_{n}}\prod_{m}^{\rm all}(\omega- \epsilon_{m}^{0})^{n_{m}^{0}}}{\prod_{s}^{\rm all}(\omega-\epsilon_{s})^{n_{s }}}\right].\] \[= \int\frac{d\omega}{2\pi i}\,e^{i\omega 0^{+}}\,\mathrm{TrLn} \left\{\mathcal{G}_{0}^{-1}(\omega)\mathcal{G}(\omega)\right\}.\]
In the last equation, \(\mathcal{G}\) and \(\mathcal{G}_{0}\) are the GFs of the auxiliary system obtained with and without including the coupling matrices \(V\) in \(\mathcal{H}\), respectively. A counting of the degrees of freedom shows that the cardinality of \(\{\epsilon_{s}\}\) is equal to that of \(\{\epsilon_{m}^{0}\}\cup\{\omega_{n}\}\), as also shown by the embedding construction in Eq. (30). Nevertheless, only occupied poles (i.e. poles above the real axis) count in the integral.
If the _number of such poles in the numerator and in the denominator is the same_, by exploiting Eq. (26) we obtain the final result:
\[\Delta E_{K}=\sum_{s}^{\rm occ}n_{s}\epsilon_{s}-\left[\sum_{m}^{\rm occ}n_{m}^ {0}\epsilon_{m}^{0}+\sum_{n}^{\rm occ}r_{n}\omega_{n}\right]. \tag{36}\]
This expression is the first key result of the present work. The condition of having the same number of occupied states in the numerator and denominator in the second line of Eq. (34) is equivalent to having the same number of occupied states before and after the switch-on of the coupling matrix elements \(V\). This condition, therefore, encodes charge conservation within the closed system \(C=S\cup B\). In App. B.3 we also provide a generalization of Eq. (36) where both propagators in the TrLn term are interacting (or embedded).
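For reference, a direct transcription of Eq. (36) in Python could look as follows; the only ingredient added here is that occupied poles are identified as those lying below a chemical potential \(\mu\) (equivalently, those displaced above the real axis in the time-ordered propagators), which is an assumption of this sketch rather than part of the formula itself.

```python
def delta_e_k(eps_s, n_s, eps0_m, n0_m, omega_n, r_n, mu=0.0):
    """Direct transcription of Eq. (36).

    eps_s, n_s   -- poles of G (eigenvalues of the enlarged Hamiltonian) and multiplicities
    eps0_m, n0_m -- poles of G0 and their multiplicities
    omega_n, r_n -- poles of the self-energy and the ranks of their residues
    mu           -- chemical potential separating occupied from empty poles (assumption)
    """
    occ = lambda e: e < mu
    term_G  = sum(n * e for e, n in zip(eps_s, n_s) if occ(e))
    term_G0 = sum(n * e for e, n in zip(eps0_m, n0_m) if occ(e))
    term_S  = sum(r * w for w, r in zip(omega_n, r_n) if occ(w))
    return term_G - (term_G0 + term_S)
```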
At this point it is worth discussing alternative approaches existing in the literature aimed at evaluating terms of the form \(\mathrm{Tr}_{\omega}\mathrm{Ln}\left\{G_{0}^{-1}G_{1}\right\}\). For instance, in a series of papers, Dahlen and co-workers [23; 24; 25] first rewrite the TrLn term of the Luttinger-Ward functional by factorizing the static part of the self-energy \(\Sigma_{x}\), and then recast [25] the integral for numerical integration over the imaginary axis. Along the same lines, in App. B.2 we provide a scheme for numerical integration of the TrLn terms that we have used in the present work to numerically validate analytical expressions such as Eq. (36). In Ref. [26], Friesen and co-workers (who also adopt a meromorphic, i.e. SOP in our language, representation for the propagators) first handle the \(\Sigma_{x}\) term as in Refs. [23; 24; 25] and then numerically evaluate the residual contribution to the integral using a coupling-constant integration. In Ref. [35], Ismail-Beigi discusses the RPA correlation energy in the context of Green's function theory, and, exploiting algebraic techniques similar to those employed in this work, provides an analytical expression involving the poles of the independent-particle and RPA response functions. We discuss the RPA correlation energy in Sec. IV.1 where we re-derive Ismail-Beigi's expression by means of the present formalism. Additionally, Aryasetiawan et al. [64] write the RPA correlation energy in a form similar to that of Ref. [25] and App. B.2 for numerical evaluation along the imaginary axis.
## IV Applications
Having derived an analytical expression for the TrLn terms defined by Eq. (17), in this Section we present two applications. First we focus on the calculation of the RPA correlation energy, providing a re-derivation of a result already known in the literature [35], and then apply the formalism to analyze and partition the Klein functional in the presence of embedding.
### RPA correlation energy and plasmons
In the context of Green's function methods, the RPA correlation energy is written as [5; 12; 35; 36; 37; 38; 17]:
\[P(\mathbf{x}_{1},\mathbf{x}_{2},\omega) = \int\frac{d\omega^{\prime}}{2\pi i}\,G(\mathbf{x}_{1},\mathbf{x}_{2},\omega+\omega^{\prime})G(\mathbf{x}_{2},\mathbf{x}_{1},\omega^{\prime}) \tag{37}\] \[\Phi^{\rm RPA}[P] = -\frac{1}{2}{\rm Tr}_{\omega}\Big{\{}\sum_{n=2}^{\infty}\frac{1}{n}\left[vP(\omega)\right]^{n}\Big{\}}\] \[= +\frac{1}{2}{\rm Tr}_{\omega}{\rm Ln}\big{\{}I-vP(\omega)\big{\}}+\frac{1}{2}{\rm Tr}_{\omega}\left\{vP\right\},\] \[= \Delta\Phi^{\rm RPA}_{1}+\Delta\Phi^{\rm RPA}_{2}, \tag{38}\]
where the irreducible polarizability \(P\) is either evaluated using the Kohn-Sham Green's function \(G_{s}\) in the optimized-effective-potential (OEP) method [31], or by an interacting Green's function (e.g. at the level of self-consistent GW) when making stationary the Klein or Luttinger-Ward functionals [5; 12; 18; 19; 20]. By considering the Dyson equation
\[\chi(\omega)=P(\omega)+P(\omega)v\chi(\omega), \tag{39}\]
connecting the irreducible and reducible polarizabilities (\(P\) and \(\chi\), respectively), one obtains
\[I-vP=\epsilon=\chi^{-1}P, \tag{40}\]
which can be used in the first term \(\Delta\Phi^{\rm RPA}_{1}\) of Eq. (38), leading to:
\[\Phi^{\rm RPA}[P]=-\frac{1}{2}{\rm Tr}_{\omega}{\rm Ln}\big{\{}\chi P^{-1} \big{\}}+\frac{1}{2}{\rm Tr}_{\omega}\left\{vP\right\}. \tag{41}\]
By considering \(\chi\) and \(P\) as two interacting single-particle propagators, we can apply Eqs. (B8)-(B9) with \(\Sigma_{21}=v\) in view of Eq. (39). This means that the poles of the two self-energies cancel out identically and therefore do not contribute to the evaluation of the TrLn term. In turn, we obtain:
\[\Delta\Phi^{\rm RPA}_{1} = -\frac{1}{2}\sum_{p}^{\rm occ}\left[n_{p}\Omega_{p}-n_{p}^{0} \Omega_{p}^{0}\right], \tag{42}\] \[= \frac{1}{2}\sum_{p}^{\Omega_{p}>0}\left[n_{p}\Omega_{p}-n_{p}^{0 }\Omega_{p}^{0}\right],\]
where \(\Omega_{p}\) and \(\Omega_{p}^{0}\) are the poles of \(\chi\) and \(P\) respectively, and we have considered that each time-ordered polarizability has poles at \(\pm|\Omega_{p}^{(0)}|\), the negative ones being those above the real axis and contributing to the integral. Degeneracies of the poles (\(n_{p}\) and \(n_{p}^{0}\)) have been marked explicitly.
We now turn to the evaluation of the second term, \(\Delta\Phi^{\rm RPA}_{2}\) in Eq. (38). The irreducible polarizability \(P\) can be represented as a sum-over-poles according to:
\[P(\omega)=\sum_{p}^{\Omega_{p}>0}\left[\frac{|t_{p}\rangle\langle t_{p}|}{ \omega-\Omega_{p}^{0}+i0^{+}}-\frac{|t_{p}\rangle\langle t_{p}|}{\omega+ \Omega_{p}^{0}-i0^{+}}\right], \tag{43}\]
where \(\langle\mathbf{x}|t_{p}\rangle=\phi_{c}(\mathbf{x})\phi_{v}^{*}(\mathbf{x})\), with \(p=(c,v)\) referring to a pair of conduction and valence single-particle orbitals, respectively. With the above definitions, one obtains:
\[\Delta\Phi^{\rm RPA}_{2}=-\frac{1}{2}\sum_{p}^{\Omega_{p}>0}\langle t_{p}|v|t _{p}\rangle, \tag{44}\]
which completes the evaluation of the RPA correlation energy, consistently with existing literature. In particular, we have recovered Eq. (23) of Ref. [35].
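As an illustrative summary of Eqs. (42)-(44), the RPA correlation energy can be assembled from the pole data alone; the sketch below is a hypothetical helper (not code from Ref. [35]) that simply adds the two contributions.

```python
import numpy as np

def rpa_correlation_energy(Omega, n, Omega0, n0, t_v_t):
    """Assemble Eqs. (42) and (44) from pole data.

    Omega, n   -- positive poles of the reducible polarizability chi and multiplicities
    Omega0, n0 -- positive poles of the irreducible polarizability P and multiplicities
    t_v_t      -- values <t_p| v |t_p> for the independent-particle transitions p
    """
    dPhi1 = 0.5 * (np.dot(n, Omega) - np.dot(n0, Omega0))  # Eq. (42)
    dPhi2 = -0.5 * np.sum(t_v_t)                            # Eq. (44)
    return dPhi1 + dPhi2
```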
### Embedding of the Klein functional
The main goal of the present Section is to study the Klein functional in the presence of an embedding scheme such as the one described in Sec. II and App. A, in order to derive, as demonstrated below, a variational partition of the total energy. To do so we begin by partitioning each term appearing in the Klein functional given by Eq. (5). Notably, the functional depends on a trial Green's function \(G\) that, according to Eq. (2), we represent by means of a self-energy \(\widetilde{\Sigma}\) constrained to be localized on the subsystem \(S\). As discussed in Sec. II, this represents a definition of the domain of the trial GF \(G\).
As far as \(\Phi_{\rm Hxc}\) is concerned, the partition is already in place, since the particle-particle interaction is only present in \(S\). Therefore one has
\[\Phi_{\rm Hxc}[G]=\Phi_{\rm Hxc}[G_{S}]. \tag{45}\]
This can be understood, e.g., diagrammatically: since the bare interaction lines only connect points in the \(S\) subsystem, every interaction vertex is located in \(S\). This is further discussed in App. A.
Figure 2: RPA exchange and correlation energy represented by means of Feynman diagrams.
Next we consider the \(\mathrm{Tr}_{\omega}\left\{H_{0}G_{0}\right\}\) term, which is the non-interacting energy of the closed \(C=S\cup B\) system, and can be partitioned as
\[\mathrm{Tr}_{\omega}\left\{H_{0}G_{0}\right\} = \mathrm{Tr}_{\omega}^{S}\left\{(h_{0S}+\Delta v_{S})G_{0S}\right\} \tag{46}\] \[+ \mathrm{Tr}_{\omega}^{B}\left\{(h_{0B}+\Delta v_{B})G_{0B}\right\}\] \[= \sum_{s}^{\mathrm{occ}}\epsilon_{s}^{0}, \tag{47}\]
where \(\epsilon_{s}^{0}\) are the eigenvalues of the non-interacting problem for \(C\), \(H_{0}|\phi_{s}\rangle=\epsilon_{s}^{0}|\phi_{s}\rangle\).
Coming to the next term, the following chain of identities also holds
\[\mathrm{Tr}_{\omega}\left\{I-G_{0}^{-1}G\right\} = -\mathrm{Tr}_{\omega}\left\{\widetilde{\Sigma}G\right\} \tag{48}\] \[= -\mathrm{Tr}_{\omega}^{S}\left\{\widetilde{\Sigma}_{S}G_{S}\right\}\] \[= \mathrm{Tr}_{\omega}^{S}\left\{I_{S}-G_{0S}^{-1}G_{S}\right\},\]
where we have represented the trial \(G\) according to Eq. (2), and limited \(\widetilde{\Sigma}\) to have non-zero matrix elements only in \(S\) and a regular propagator-like analytical structure featuring time-ordering and simple (first-order) poles. Indeed, the last step is valid because of the following equation:
\[G_{S}=G_{0S}+G_{0S}\widetilde{\Sigma}_{S}G_{S}. \tag{49}\]
The last and most interesting term in Eq. (5) is \(\mathrm{Tr}_{\omega}\mathrm{Ln}G_{0}^{-1}G\), which can be evaluated using Eq. (36):
\[\mathrm{Tr}_{\omega}\mathrm{Ln}\left\{G_{0}^{-1}G\right\} = \sum_{s}^{\mathrm{occ}}\epsilon_{s}-\sum_{n}^{\mathrm{occ}} \epsilon_{n}^{0}-\sum_{n}^{\mathrm{occ}}\mathrm{poles}(\widetilde{\Sigma}) \tag{50}\] \[= \mathrm{Tr}_{\omega}^{S}\mathrm{Ln}\left\{G_{0S}^{-1}G_{S}\right\},\]
where we have used the fact that \(\sum_{s}\epsilon_{s}=\sum\mathrm{poles}(G_{S})\), \(\sum_{n}\epsilon_{n}^{0}=\sum\mathrm{poles}(G_{0S})\). Using the notation introduced in Eqs. (3-4) where \(\Omega_{n}\) are the poles of the embedding self-energy, one can show that the term \(\sum_{n}\Omega_{n}=\sum\mathrm{poles}(\Delta v_{S})\) does not explicitly appear because the embedding self-energy is used in the evaluation of both the \(G_{0S}\) and \(G_{S}\) Green's functions. Multiplicities have been kept implicit in the sums over eigenvalues.
Alternatively, the same result can be obtained directly from the use of Eq. (18) and the identity concerning the determinant of block matrices, Eq. (22). In particular, from
\[G^{-1}(\omega)=\left[\begin{array}{cc}\omega I_{S}-h_{0S}-\widetilde{\Sigma }&-V\\ -V^{\dagger}&\omega I_{B}-h_{0B}\end{array}\right] \tag{51}\]
one gets
\[\mathrm{det}G^{-1} = \mathrm{det}g_{0B}^{-1}\times\mathrm{det}G_{S}^{-1}, \tag{52}\] \[\mathrm{det}G_{0}^{-1} = \mathrm{det}g_{0B}^{-1}\times\mathrm{det}G_{0S}^{-1}, \tag{53}\]
which gives
\[\mathrm{TrLn}\left\{G_{0}^{-1}G\right\} = -\mathrm{Ln}\ \mathrm{det}g_{0B}^{-1}-\mathrm{Ln}\ \mathrm{det}G_{S}^{-1} \tag{54}\] \[+\mathrm{Ln}\ \mathrm{det}g_{0B}^{-1}+\mathrm{Ln}\ \mathrm{det}G_{0S}^{-1}\] \[= \mathrm{Ln}\ \mathrm{det}G_{0S}^{-1}G_{S},\]
the last line being equivalent to the result to be proven.
We are now in a position to put all terms together to obtain:
\[E_{K}[G] = \mathrm{Tr}_{\omega}^{S}\mathrm{Ln}\left\{G_{S}G_{0S}^{-1}\right\} +\sum_{s}^{\mathrm{occ}}\epsilon_{s}^{0} \tag{55}\] \[+ \mathrm{Tr}_{\omega}^{S}\left\{I_{S}-G_{0S}^{-1}G_{S}\right\}+ \Phi_{\mathrm{Hxc}}[G_{S}].\]
Next, the first term on the rhs can be further rewritten using:
\[\mathrm{Tr}_{\omega}^{S}\mathrm{Ln}\left\{G_{S}G_{0S}^{-1}\right\} = \mathrm{Tr}_{\omega}^{S}\mathrm{Ln}\left\{G_{S}g_{0S}^{-1}\,g_{0S }G_{0S}^{-1}\right\} \tag{56}\] \[= \mathrm{Tr}_{\omega}^{S}\mathrm{Ln}\left\{G_{S}g_{0S}^{-1}\right\}\] \[- \mathrm{Tr}_{\omega}^{S}\mathrm{Ln}\left\{G_{0S}g_{0S}^{-1}\right\},\]
\[\mathrm{Tr}_{\omega}^{S}\mathrm{Ln}\left\{G_{0S}g_{0S}^{-1}\right\} = \sum_{s}^{\mathrm{occ}}\epsilon_{s}^{0}-\sum_{s}^{\mathrm{occ}} \tilde{\epsilon}_{s}^{0}-\sum_{n}^{\mathrm{occ}}\Omega_{n} \tag{57}\] \[= \sum_{s}^{\mathrm{occ}}\epsilon_{s}^{0}-\mathrm{Tr}_{\omega}^{S} \left\{h_{0S}g_{0S}\right\}\] (58) \[- \mathrm{Tr}_{\omega}^{B}\left\{h_{0B}g_{0B}\right\},\]
where the eigenvalues \(\tilde{\epsilon}_{s}^{0}\) refer to subsystem \(S\) in the absence of coupling to \(B\).
Eventually, this leads to the final result for the partitioning of the Klein energy functional:
\[E_{K}[G] = E_{K}^{S}[G_{S}]+\mathrm{Tr}_{\omega}^{B}\left\{h_{0B}g_{0B} \right\}, \tag{59}\] \[E_{K}^{S}[G_{S}] = \mathrm{Tr}_{\omega}^{S}\mathrm{Ln}\left\{G_{S}g_{0S}^{-1}\right\} +\mathrm{Tr}_{\omega}^{S}\left\{h_{0S}g_{0S}\right\}\] (60) \[+ \mathrm{Tr}_{\omega}^{S}\left\{I_{S}-g_{0S}^{-1}G_{S}\right\}+ \mathrm{Tr}_{\omega}^{S}\left\{\Delta v_{S}G_{S}\right\}\] \[+ \Phi_{\mathrm{Hxc}}[G_{S}].\]
This is the second key result of the present paper, implying that \(E_{K}^{S}[G_{S}]\) is stationary for the \(G_{S}\) that solves the embedding Dyson equation, namely:
\[2\pi i\frac{\delta E_{K}^{S}[G_{S}]}{\delta G_{S}}=G_{S}^{-1}-g_{0S}^{-1}+ \Delta v_{S}+\Sigma_{\mathrm{Hxc}}[G_{S}]=0, \tag{61}\]
showing that the partition of the Klein energy is exact and also variational as far as subsystem \(S\) is concerned.
Interestingly, we note that an equation formally equivalent to Eq. (60) has been used by Savrasov and Kotliar in Refs. [39; 65] to express the grand-potential of a quantum system in the presence of an external local and dynamical potential coupled to the local Green's function. In the present context, that role is played by \(\Delta v_{S}\), here originating from an embedding procedure. Moreover, the embedding construction allows us to further inspect the physical nature of the energy terms in Eqs. (59-60). In particular, the complement energy \(\mathrm{Tr}_{\omega}^{B}\left\{h_{0B}g_{0B}\right\}\) (i.e. the energy that needs to be added to \(E_{K}^{S}[G_{S}]\) to give the total energy of the closed system \(C\), \(E_{K}[G]\)) is that of the non-interacting and uncoupled bath. This means that all effects of the coupling \(V\) need to be absorbed in
\(E_{K}^{S}[G_{S}]\) to allow for variationality. This is at variance with other possible partitions of the \(C\) total energy (such as, e.g., those suggested by the Galitskii-Migdal expression).
## V Conclusions
In this work, and within the framework of Green's function methods, we address the use of the Klein functional when embedding an interacting system \(S\) into a non-interacting bath \(B\). Exploiting a sum-over-pole (SOP) representation for the propagators, and taking advantage of the algorithmic-inversion method (AIM) introduced to solve Dyson-like equations involving SOP propagators [30; 50], we have first derived an exact analytical expression to evaluate terms of the form \(\text{Tr}_{\omega}\text{Ln}\left\{G_{0}^{-1}G\right\}\). Notably, such terms appear in the Klein and Luttinger-Ward functionals [18; 19; 5; 12; 20] as well as in other common many-body quantities such as the RPA correlation energy [17; 35; 36; 37; 38; 5; 12]. In this respect, the analytical expression obtained represents the first key result of the paper.
Next, we have used the above analytical result to partition the Klein functional of an embedded system into two contributions, one associated to the subsystem \(S\) and one to the non-interacting bath \(B\). Importantly, the energy associated to \(S\) is also variational as a functional of the \(S\) Green's function \(G_{S}\), with the functional gradient becoming zero for the physical embedded \(G_{S}\). This is the second main result of the work. Lastly, we have also exploited the analytical result for the TrLn terms to recover an exact analytical expression for the RPA correlation energy known in the literature [35].
## VI Acknowledgments
We thank Prof. Marco Gibertini and Prof. Lucia Reining for useful discussions on the subject. We also thank Matteo Quinzi for reading the manuscript and for providing further numerical validation for some of the analytical results presented.
## Appendix A Green's function embedding and perturbation theory
In this Appendix we discuss the construction of many-body perturbation theory (MBPT) to include particle-interaction effects in the Green's function in the presence of embedding. We consider the case of fermions at \(T=0\), for simplicity. As mentioned in Sec. II and sketched in Fig. 1, we consider a closed quantum system \(C\) partitioned into two sub-units, \(C=S\cup B\), interacting via a coupling potential \(V\), with particle interactions confined to the \(S\) region and \(B\) being a non-interacting bath. The particle-particle interaction can be written in the usual form of a two-body potential:
\[V_{ee} = \frac{1}{2}\int d\mathbf{x}d\mathbf{x}^{\prime}\,\hat{\psi}^{ \dagger}(\mathbf{x})\hat{\psi}^{\dagger}(\mathbf{x}^{\prime})\,v_{\text{int}}( \mathbf{x},\mathbf{x}^{\prime})\,\hat{\psi}(\mathbf{x}^{\prime})\hat{\psi}( \mathbf{x}), \tag{A1}\] \[v_{\text{int}}(\mathbf{x},\mathbf{x}^{\prime})\neq 0\qquad \text{for}\quad\mathbf{x},\mathbf{x}^{\prime}\in S,\]
where the constraint on \(v_{\text{int}}(\mathbf{x},\mathbf{x}^{\prime})\) expresses the fact that the interaction is present only in the \(S\) region.
Within the above definitions, the perturbation expansion for the Green's function of the closed system \(C\) leads to [17; 5; 12]:
\[iG(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})=\sum_{n=0}^{ \infty}\,\frac{(-i)^{n}}{n!}\int_{-\infty}^{+\infty}dt_{1}\ldots dt_{n}\] \[\quad\times\frac{\langle\Phi_{0}|\mathcal{T}\Big{[}\hat{V}_{ee}(t _{1})\ldots\hat{V}_{ee}(t_{n})\,\hat{\psi}(\mathbf{x},t)\hat{\psi}^{\dagger}( \mathbf{x}^{\prime},t^{\prime})\Big{]}|\Phi_{0}\rangle}{\langle\Phi_{0}|\hat{S }|\Phi_{0}\rangle}, \tag{A2}\]
\[\langle\Phi_{0}|\hat{S}|\Phi_{0}\rangle=\sum_{n=0}^{\infty}\,\frac {(-i)^{n}}{n!}\int_{-\infty}^{+\infty}dt_{1}\ldots dt_{n}\] \[\quad\times\langle\Phi_{0}|\mathcal{T}\Big{[}\hat{V}_{ee}(t_{1}) \ldots\hat{V}_{ee}(t_{n})\,\Big{]}|\Phi_{0}\rangle. \tag{A3}\]
First we focus on \(G^{S}\), i.e., on the case when \(\mathbf{x},\mathbf{x}^{\prime}\) are located in \(S\). Since \(\hat{V}_{ee}\) only contains field operators related to subspace \(S\), all self-energy diagrams resulting from Eq. (A2) have only vertices within the subsystem \(S\). Similarly, if we consider \(G\) in the general case (end points either in \(B\) or \(S\)), \(B\) points will be present only in disconnected diagrams (to be dropped) or in the external ends of the connected diagrams, which do not enter the proper self-energy. Therefore, the interaction self-energy is zero for matrix elements outside the \(S\) block, as shown in Fig. 1.
So far, perturbation theory in terms of the bare Green's function \(G_{0}\) has been addressed, with \(\Sigma^{S}[G_{0}]=\Sigma^{S}[G_{0}^{S}]\). Nevertheless, one can perform the usual steps [17; 5; 12] in passing from bare diagrams involving \(G_{0}\) to skeleton diagrams involving \(G\), leading to:
\[\Sigma^{S}[G]=\Sigma^{S}[G^{S}], \tag{A4}\]
where we can substitute \(G^{S}\) for \(G\) because of the localization of the bare interaction, Eq. (A1). A similar reasoning can be applied to the \(\Phi\) functional to obtain \(\Phi_{\text{Hxc}}[G]=\Phi_{\text{Hxc}}[G^{S}]\). In summary, within the non-interacting bath condition, the interaction self-energy \(\Sigma^{S}\) has a perturbation expansion structurally identical to the one usually developed for closed systems [17; 5; 12], and does not make any reference to the \(B\) unit, i.e. all diagrams develop within \(S\), as if \(S\) were disconnected from \(B\). Of course, \(G^{S}\) is then calculated in the presence of the bath, e.g. via embedding self-energies, which in turn make the effect of the interaction spread all over the system. Notably, the Anderson impurity model [5; 39; 66]
can be seen as a special case of the above setting. Indeed, the exact electron-electron self-energy of the model is localized on the impurity [66] (\(S\) in our notation), and can be computed, e.g., using bare perturbation theory [67; 68; 69; 39] involving \(G_{0}^{S}\).
As a relevant point for the present discussion, the use of the skeleton perturbation theory and the Luttinger-Ward functional has been recently questioned [70; 71; 72; 73; 74; 75], leading to a discussion about the domain of the trial \(G\) and the appearance of multiple solutions of the non-linear Dyson equation involving \(\Sigma[G]\) (see e.g. Ref. [75] for additional details). For the sake of the present work, we assume to be in the situation where perturbation theory does not pose convergence problems and one is able to discriminate physical from unphysical solutions when needed.
## Appendix B Complements on TrLn terms
### Notable integrals
In this Section we provide a detailed derivation of the expression
\[I=\oint_{\Gamma}\frac{dz}{2\pi i}\,\text{Ln}\,\frac{z-a}{z-b}=b-a, \tag{B1}\]
where both \(a,b\) are assumed to be real numbers. Making reference to Fig. 3, the contour integral can be split into four contributions, labelled \(\Gamma_{1}-\Gamma_{4}\), such that \(I=I_{1}+I_{2}+I_{3}+I_{4}\), with \(I_{i}=\int_{\Gamma_{i}}[...]\).
Let us first consider \(I_{1}\), where we assume that \(\Gamma_{1}\) corresponds to the pole in \(a\). Using the parametrization \(z=a+Re^{i\theta}\) one has:
\[I_{1} = \int_{\Gamma_{1}}\frac{dz}{2\pi i}\text{Ln}\frac{z-a}{z-b}, \tag{B2}\] \[= -R\int_{0}^{2\pi}\frac{d\theta}{2\pi}e^{i\theta}\text{Ln}\left[ 1+\frac{a-b}{Re^{i\theta}}\right],\]
which goes to zero in the limit \(R\to 0\), e.g. in view of \(R\text{ln}(1/R)\to 0\). A similar argument holds for \(I_{3}\), so that we have \(I_{1,3}\to 0\) when \(R\to 0\). Coming to the remaining paths, we have
\[I_{2+4}=\frac{1}{2\pi i}\left[-\int_{a+R}^{b-R}\!\!\!\!dz^{+}+\int_{a+R}^{b-R }\!\!\!\!dz^{-}\right]\,\text{Ln}\frac{z-a}{z-b}, \tag{B3}\]
where \(dz^{+}\) and \(dz^{-}\) refer to the upper (\(\Gamma_{4}\)) and lower (\(\Gamma_{2}\)) branch, respectively. The real part of the logarithm function does not contribute (the two branches cancel out), while the imaginary part does. Indeed, choosing the branch cut of the complex Log going from \(0\) to \(+\infty\), one obtains:
\[I_{2+4}=\frac{1}{2\pi}(\pi-0+2\pi-\pi)(b-a)=b-a, \tag{B4}\]
which completes the derivation of Eq. (B1).
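Equation (B1) can also be verified numerically, e.g. by integrating the principal logarithm along any closed contour that encloses the segment \([a,b]\) without crossing it (so that no branch cut is met); a minimal Python sketch with arbitrary values of \(a\) and \(b\):

```python
import numpy as np

a, b = -1.3, 0.7                        # branch points on the real axis
c, rho = (a + b) / 2, 5.0               # circle centred between them, enclosing [a, b]

M = 4000                                # periodic trapezoidal rule
theta = 2 * np.pi * np.arange(M) / M
z = c + rho * np.exp(1j * theta)
f = np.log((z - a) / (z - b))           # principal log, continuous on this circle
dz = 1j * rho * np.exp(1j * theta)      # dz/dtheta
I = np.sum(f * dz) * (2 * np.pi / M) / (2j * np.pi)

print(I.real, b - a)                    # the two values agree to high accuracy
```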
### Computational evaluation of TrLn terms
In order to develop a form of Eq. (17) suitable for numerical evaluation, which we have used, e.g., to compare with the analytical results of this work, we follow some of the ideas from App. B of Ref. [25]. We start by re-writing Eq. (34) by rotating the integration onto the imaginary axis:
\[\Delta E_{K} = \int_{+i\infty}^{-i\infty}\frac{dz}{2\pi i}\text{Ln}\left[\frac{ \det G(z)}{\det G_{0}(z)}\right],\] \[= \int_{-\infty}^{+\infty}\frac{dx}{2\pi}\text{Ln}\left[\frac{\det G (ix)}{\det G_{0}(ix)}\right],\] \[= \int_{0}^{+\infty}\frac{dx}{2\pi}\bigg{[}\text{Ln}\det G(ix)+ \text{Ln}\det^{*}\!G(ix)\] \[\qquad\qquad-\text{Ln}\det G_{0}(ix)-\text{Ln}\det^{*}\!G_{0}(ix) \bigg{]},\] \[= \int_{0}^{+\infty}\frac{dx}{2\pi}\bigg{[}\ln\!\big{|}\det G(ix) \big{|}^{2}-\ln\!\big{|}\text{det}G_{0}(ix)\big{|}^{2}\bigg{]}.\]
In deriving these equations we have made use of the relations \(G(-ix)=G(ix)^{\dagger}\) and \(\det M^{\dagger}=(\det M)^{*}\). The last expression is suited for numerical evaluation, which we performed using a tangent grid on the imaginary axis.
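For completeness, a minimal Python transcription of the last line above could read as follows; the tangent map \(x=\tan t\) compresses the semi-infinite axis onto \(t\in(0,\pi/2)\), and the two arguments are generic vectorized callables returning \(\det G(z)\) and \(\det G_{0}(z)\) at complex frequency \(z\) (how these determinants are obtained is left unspecified here).

```python
import numpy as np

def delta_e_k_imaginary_axis(det_G, det_G0, n_pts=400):
    """Midpoint-rule evaluation of the imaginary-axis integral on a tangent grid."""
    t = (np.arange(n_pts) + 0.5) * (np.pi / 2) / n_pts   # midpoints in t
    x = np.tan(t)                                        # grid points on the imaginary axis
    w = (np.pi / 2) / n_pts / np.cos(t) ** 2             # quadrature weights, dx = dt / cos^2(t)
    integrand = (np.log(np.abs(det_G(1j * x)) ** 2)
                 - np.log(np.abs(det_G0(1j * x)) ** 2))
    return np.sum(w * integrand) / (2 * np.pi)
```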
### TrLn term with two interacting Green's functions
As anticipated in Sec. III.2, Eq. (36) can be further generalized to the case of TrLn computed for two interacting GFs, \(G_{1}\) and \(G_{2}\). As a first step we make reference to an arbitrary non-interacting \(G_{0}\) by exploiting the identity in Eq. (20),
\[\text{Tr}_{\omega}\text{Ln}\left\{G_{1}^{-1}G_{2}\right\} = \text{Tr}_{\omega}\text{Ln}\left\{G_{0}^{-1}G_{2}\right\} \tag{B5}\] \[- \text{Tr}_{\omega}\text{Ln}\left\{G_{0}^{-1}G_{1}\right\}.\]
Next we can connect \(G_{1,2}\) to \(G_{0}\) via Dyson-like equations, by writing:
\[G_{1} = G_{0}+G_{0}(\Sigma_{1}-v_{0})G_{1}, \tag{B6}\] \[G_{2} = G_{0}+G_{0}(\Sigma_{2}-v_{0})G_{2}, \tag{B7}\]
Figure 3: Illustration of the contour used in Eq. (B1) and its decomposition into simple paths, \(\Gamma_{1}-\Gamma_{4}\).
where \(\Sigma_{i}\) are suitable self-energy operators. Upon defining \(\Sigma_{21}=\Sigma_{2}-\Sigma_{1}\), the above equations give:
\[G_{2}=G_{1}+G_{1}\Sigma_{21}G_{2}. \tag{B8}\]
We can now evaluate Eq. (B5) by means of Eq. (36), obtaining:
\[\Delta E_{K} = \left[\sum_{s}^{\rm occ}n_{s}^{(2)}\epsilon_{s}^{(2)}-\sum^{\rm occ }{\rm poles}(\Sigma_{2})\right] \tag{B9}\] \[- \left[\sum_{s}^{\rm occ}n_{s}^{(1)}\epsilon_{s}^{(1)}-\sum^{\rm occ }{\rm poles}(\Sigma_{1})\right].\]
|
2309.09517 | FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural
Networks | Federated training of Graph Neural Networks (GNN) has become popular in
recent years due to its ability to perform graph-related tasks under data
isolation scenarios while preserving data privacy. However, graph heterogeneity
issues in federated GNN systems continue to pose challenges. Existing
frameworks address the problem by representing local tasks using different
statistics and relating them through a simple aggregation mechanism. However,
these approaches suffer from limited efficiency from two aspects: low quality
of task-relatedness quantification and inefficacy of exploiting the
collaboration structure. To address these issues, we propose FedGKD, a novel
federated GNN framework that utilizes a novel client-side graph dataset
distillation method to extract task features that better describe
task-relatedness, and introduces a novel server-side aggregation mechanism that
is aware of the global collaboration structure. We conduct extensive
experiments on six real-world datasets of different scales, demonstrating our
framework's outperformance. | Qiying Pan, Ruofan Wu, Tengfei Liu, Tianyi Zhang, Yifei Zhu, Weiqiang Wang | 2023-09-18T06:55:14Z | http://arxiv.org/abs/2309.09517v3 | # FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural Networks
###### Abstract.
Federated training of Graph Neural Networks (GNN) has become popular in recent years due to its ability to perform graph-related tasks under data isolation scenarios while preserving data privacy. However, graph heterogeneity issues in federated GNN systems continue to pose challenges. Existing frameworks address the problem by representing local tasks using different statistics and relating them through a simple aggregation mechanism. However, these approaches suffer from limited efficiency from two aspects: low quality of task-relatedness quantification and inefficacy of exploiting the collaboration structure. To address these issues, we propose FedGKD, a novel federated GNN framework that utilizes a novel client-side graph dataset distillation method to extract task features that better describe task-relatedness, and introduces a novel server-side aggregation mechanism that is aware of the global collaboration structure. We conduct extensive experiments on six real-world datasets of different scales, demonstrating our framework's outperformance.
## 1. Introduction
Federated training of Graph Neural Networks (GNN) has gained considerable attention in recent years due to its ability to apply a widely used privacy-preserving framework called Federated Learning (FL) (21, 31) to GNN training. This approach facilitates
training round, enabling efficient similarity computation between clients while sufficiently incorporating local task information. The task relator first constructs a collaboration network from the distilled task features and relates tasks through a novel aggregation mechanism based on the network's _global connectivity_, which measures the task-relatedness in a global sense. As a summary, our contributions are:
* We propose a task feature extractor based on a novel dynamic graph data distillation method, representing each local task with a distilled synthetic graph generated from all the local model weights trained at each round. The task features extracted from distilled graphs contain both data and model information, while also allowing for efficient evaluation of task-relatedness.
* We propose a task relator that constructs a collaboration network from the distilled graphs and relates tasks by operating a novel kernelized attentive aggregation mechanism upon local weights that encodes the global connectivity of the collaboration network.
* We conduct extensive experiments to validate that our framework consistently outperforms state-of-the-art personalized GNN frameworks on six real-world datasets of varying scales under both overlapping and non-overlapping settings.
The paper proceeds as follows. In Section 2, we present a review of related works. This is followed by the introduction of the preliminaries on GNN and FL in Section 3. Next, in Section 4, we introduce and formulate the problem of federated GNN. The detailed design of the framework to solve this problem is presented in Section 5. In Section 6, we present the experimental results. Finally, Section 7 concludes the paper.
## 2. Related Work
### Personalized Federated Learning
The learning procedures of homogeneous federated learning [21; 31] are often considered as special forms of distributed optimization algorithms like local SGD [41]. However, these methods have been shown to suffer from client heterogeneity in terms of both convergence [22] and client-side generalization [7; 48]. Personalized federated learning approaches have primarily focused on addressing the latter issue by incorporating adaptation strategies that can be deployed at the client side, server side, or both. **Client-side adaptation** methods typically utilize parameter decoupling paradigms that enable flexible aggregation of partial parameters [1; 32] or control the optimization of local objectives by regularizing towards the global optimum [27]. However, these methods often overlook the overall collaboration structure among the clients [3; 48]. It has been shown that with a correctly-informed collaboration structure that precisely describes the task-relatedness between clients, simple procedures can achieve minimax optimal performance [48]. On the other hand, **server-side adaptation** methods aim to measure the task-relatedness among clients and derive refined aggregation mechanisms. [3] utilize tools from transfer learning theory to conduct an estimating procedure that clusters clients into subgroups. [43; 46] propose to optimize collaboration among clients on-the-fly by solving a quadratic program at the server-side during each aggregation step. It is important to note that server-side adaptation methods often involve the transmission of additional information other than the model parameters.
### Federated Graph Representation Learning
In [33], the authors showed that naively applying FedAvg to distributed GNN training will result in irreducible error under distinct client-side graph structures, which hampers convergence. A recent line of work has been attempting to adopt personalization strategies for federated learning of graph neural networks. For instance, [37] uses client-side adaptation by sharing only a sub-structure of the client-side GNN. [47] equips each client with an auxiliary neighborhood generation task. [42] applies a server-side adaptation strategy that dynamically clusters clients using intermediate gradient updates. Moreover, [2] combines client-side and server-side adaptation methods and measures task similarity using GNN outputs based on a common input random graph.
### Dataset Distillation
The method of dataset distillation [40] was originally proposed as a way to improve training efficiency by distilling a large dataset into a significantly smaller one while keeping model performance almost intact. Later developments generalized the approach to graph-structured data [19; 20]. A notable property of dataset distillation is that the distilled datasets are observed to exhibit good privacy protection against empirically constructed adversaries [9]. This empirical property has also led to innovations in one-shot federated learning [49], which is very different from the setups in PFL and is considered an orthogonal application.
## 3. Preliminaries
In this section, we provide a brief introduction to two key concepts in our paper: Graph Neural Network and Federated Learning, which are presented in separate subsections.
### Graph Neural Network
Consider a graph \(G=(V,E)\), where \(V\) represents the node set and \(E\) represents the edge set. The graph is associated with a node feature matrix \(\mathbf{X}\in\mathbb{R}^{|V|\times D}\). We can use Graph Neural Networks (GNN) to embed nodes in the graph with low dimensional vectors. An \(L\)-layer GNN in the message passing form [12; 44] is recursively defined in (1), where \(\mathbf{h}_{u}^{l}\) represents the embedding of node \(u\) output from the \(l\)-th GNN layer, \(UPD\) is a function that generates embeddings based on the former layer outputs, \(AGG\) is a function that aggregates the embeddings together and \(\mathcal{N}(u)\) represents the set of neighboring nodes of node \(u\).
\[\mathbf{h}_{u}^{l}=UPD^{l}[\mathbf{h}_{u}^{l-1},AGG^{l}(\{\mathbf{h}_{v}^{l-1}:v\in\mathcal{N}(u)\})]. \tag{1}\]
In general, the outputs of \(L\)-th GNN layers are passed through a customized READOUT layer to accomplish various graph-related tasks. For instance, in node classification tasks, a simple READOUT layer can be selected as a linear layer with the number of categories as the output dimension.
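As an illustration of Eq. (1), the sketch below implements one such layer with mean aggregation for \(AGG\) and a linear-plus-ReLU map for \(UPD\); these specific choices are ours and are only meant to make the recursion concrete.

```python
import numpy as np

def gnn_layer(H, adj, W_self, W_neigh):
    """One message-passing layer in the spirit of Eq. (1)."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    agg = adj @ H / deg                                   # AGG: mean over neighbours
    return np.maximum(H @ W_self + agg @ W_neigh, 0.0)    # UPD: linear + ReLU

# Toy usage: 5 nodes, input dimension 4, hidden dimension 8
rng = np.random.default_rng(0)
adj = (rng.random((5, 5)) < 0.4).astype(float)
np.fill_diagonal(adj, 0.0)
adj = np.maximum(adj, adj.T)                              # undirected toy graph
X = rng.standard_normal((5, 4))
H1 = gnn_layer(X, adj, rng.standard_normal((4, 8)), rng.standard_normal((4, 8)))
```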
### Federated Learning
Federated Learning (FL) was introduced to address data isolation issues while preserving privacy [31]. It enables collaborative training among clients without exposing raw datasets. A typical FL framework consists of three stages: (1) _model initialization_, where the
server broadcasts initial model weights to all clients; (2) _local training_, where each client trains a local model using the initial model weights and its own dataset, and uploads the local model to the server; (3) _global aggregation_, wherein the server aggregates the local models into one or more new models and broadcasts the result to each client to initiate the next training round. The FL procedure typically alternates between stage (2) and (3) until convergence.
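The three stages can be summarized by the following schematic Python loop; the `local_train` routine here is a deliberately simplified placeholder (a few gradient steps on a client-specific quadratic loss) standing in for any real local optimizer.

```python
import numpy as np

def local_train(w, client_target, steps=5, lr=0.1):
    # Placeholder local training: gradient steps on 0.5 * ||w - client_target||^2
    for _ in range(steps):
        w = w - lr * (w - client_target)
    return w

def federated_training(client_targets, init_w, rounds=10):
    w_global = init_w                                          # stage (1): model initialization
    for _ in range(rounds):
        local_models = [local_train(w_global.copy(), t)        # stage (2): local training
                        for t in client_targets]
        w_global = np.mean(local_models, axis=0)               # stage (3): global aggregation
    return w_global

w = federated_training([np.array([1.0, 0.0]), np.array([0.0, 1.0])], np.zeros(2))
```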
## 4. Problem Formulation
A personalized federated GNN framework aims to collaboratively learn local GNN models in a privacy-preserving manner. This allows the local models to fit their respective local datasets while leveraging information from other clients to improve sample efficiency. In the system, there are \(n\) clients and one server. Each client stores a local graph dataset \(G_{i}=(V_{i},E_{i},\mathbf{X}_{i})\), where \(V_{i}\) represents the node set, \(E_{i}\) represents the edge set, \(\mathbf{X}_{i}\in\mathbb{R}^{|V_{i}|\times D}\) contains the node features, and \([n]\) denotes the set of positive integers from \(1\) to \(n\). Within the system, the \(n\) clients train \(n\) GNN models \(f(G_{i};\mathbf{W}_{i}),i\in[n]\), with the same structure but different parameters \(\mathbf{W}_{i},i\in[n]\). Additionally, we assume that the prediction function \(f=g\circ h\) is composed of an \(L\)-layer message passing GNN module \(h\) with a hidden dimension of \(d\), and a READOUT module \(g\) that maps the node embedding extracted by \(h\) to downstream task predictions. The goal of a personalized federated GNN framework is formulated as
\[\min_{\mathbf{W}_{i}\in\Omega_{i}i\in[n]}\sum_{i=1}^{n}\mathcal{L}(f(G_{i}; \mathbf{W}_{i}),\mathbf{y}_{i}) \tag{2}\]
where \(\mathcal{L}\) is the loss function, and \(\mathbf{y}_{i}\) contains all the labels belonging to client \(i\). To encourage collaboration among clients, the parameter spaces \(\Omega_{i},i\in[n]\), are usually assumed to be related (Kang et al., 2017). The precise structure of this relation is sometimes implicit and instead reflected in the optimization procedure (Beng et al., 2017). Under the graph representation learning setup, we require \(\Omega_{i}\) to capture the relatedness between corresponding tasks, taking into account the topological structure of the graph (Kang et al., 2017), as well as feature and label information. Additionally, we impose an extra constraint to ensure that local models do not deviate significantly from each other. This is achieved by adding a proximal regularization term that prevents overfitting to local data, which has been shown to benefit many federated learning procedures (Kang et al., 2018; Wang et al., 2019).
## 5. Design
### Overview
In this section, we provide a detailed introduction to our personalized federated graph learning framework, which aims to address two major problems:
* How to extract task features from the local dataset \(G_{i}\) and local model parameters \(\mathbf{W}_{i}\)?
* How to relate local tasks with each other using the task features to aggregate \(\mathbf{W}_{i}\)?
To address the first question, we propose a feature extractor based on dataset distillation, as illustrated in Fig. 1(a), that captures all the information within the local model. The feature extractor generates a small graph in each round based on the current local model weights. To mitigate graph heterogeneity, the server distributes a common initial graph to all clients, preventing significant deviations among the distilled graphs.
To address the second question, we draw insights from recent advancements in kernel formulations of self-attention (Kang et al., 2017; Wang et al., 2019). We view the personalized aggregation process as an attentive mechanism operating on the _collaboration network_ among clients. We observe that several contemporary aggregation schemes overlook the global task relatedness. Leveraging tools from kernel theory, we derive a refined aggregation scheme based on an exponential kernel construction that effectively incorporates global information, as shown in Fig. 1(b).
### Task Feature Extractor
#### 5.2.1. Motivation
It is well known in the theory of multi-task learning (Kang et al., 2017; Li et al., 2018) that correct specifications of task relatedness may fundamentally impact the model performance, which has also been recently discovered in PFL (Wang et al., 2019). In principle, the ideal characterization of a (local) graph representation learning task would be either the _joint_ distribution of the local graph, feature and label variables, or the Bayes optimal learner derived from the joint distribution (Wang et al., 2019). However, none of this information is available during FL, and various surrogates have been proposed in the context of graph PFL that extract _task features_ from the (local) empirical distribution and the learned model.
Figure 1. Overview of two modules in the proposed FedGKD framework.
The most ad-hoc solution is to use weights [29, 46] and gradients [34], which are typically high-dimensional (random) vectors. However, computing their relations using metrics like Euclidean or cosine similarity can be unreliable due to the curse of dimensionality phenomenon [15], as empirically validated in [2]. As a notable state-of-the-art model, FedPUB [2] uses low-dimensional graph embeddings that are produced by passing a shared random graph between clients. However, since the embedding computation only involves message passing GNN layers, the resulting embeddings are _incomplete_, as they fail to represent the READOUT layer that follows these GNN layers. The READOUT layer encodes label-related information. This limitation is significant when two datasets share similar graph distributions but have divergent label distributions. This can result in two local models with similar GNN layer weights but different READOUT layer weights. In such cases, the embeddings, which are outputs of similar GNN layers, cannot distinguish between the two datasets. We conducted a small experiment to validate this point. We visualized the embeddings for GCN layers trained on two datasets with similar graph distributions but divergent label distributions in Fig. 2. To be more specific, we trained node embeddings on the original Cora graph [45] and a revised Cora graph in which the label \(y_{v}\in[C]\) of any node \(v\) is modified to \(C+1-y_{v}\), where \(C=7\) is the number of classes in Cora. The results show that the two embedding spaces are similar, as vertices belonging to the same community are located in similar positions in the space, as shown in Fig. 2.
To address the challenges encountered in PFL frameworks, we leverage graph dataset distillation, a method that simultaneously compresses the local data distribution and the learned local model into a size-controlled small dataset that is comparable across all client tasks. In the following sections, we will introduce dataset distillation and explain how we incorporate it into our framework.
#### 5.2.2 Dataset Distillation
Dataset distillation [40] (DD) is a centralized knowledge distillation method that aims to distill large datasets into smaller ones. For client \(i\), a distilled dataset \((G_{i}^{s},\mathbf{y}_{i}^{s})\) is defined such that a neural network model trained on \(G_{i}^{s}\) can achieve comparable performance to the one trained on the original dataset \(G_{i}\), as formulated in (3).
\[\min_{G_{i}^{s},\mathbf{y}_{i}^{s}}\mathcal{L}(f(G_{i};\mathbf{W}_{i}^{s}),\mathbf{y}_{i})\quad\text{s.t. }\mathbf{W}_{i}^{s}=\operatorname*{arg\,min}_{\mathbf{W}_{i}^{\prime}}\mathcal{L}(f(G_{i}^{s};\mathbf{W}_{i}^{\prime}),\mathbf{y}_{i}^{s}) \tag{3}\]
According to previous studies [20], many datasets can be distilled into condensed ones with sizes that are only around 1% of the original dataset while still preserving model performance. Moreover, it has been empirically reported that distilled datasets offer good privacy protection [9].
Based on this observation, we propose using _statistics of the distilled local datasets_ as features that describe local tasks and obtain task-relatedness by evaluating the similarities between distilled dataset characteristics. As a straightforward adaptation of vanilla DD to federated settings, we may conduct isolated distillation steps _before_ the federated training and fix the estimated task-relatedness during federated training. This strategy could be implemented using off-the-shelf DD algorithms on graphs [19, 20]. However, the quality of the distilled local datasets may be affected by (local) sample quality and quantity. Since PFL approaches typically improve local performance, we propose a refinement of the aforementioned _static distillation_ strategy that lets each client \(i\) re-distill its local dataset into \((G_{i}^{s,t},\mathbf{y}_{i}^{s,t})\) at every round \(t\in[T]\), with the corresponding distillation objective at round \(t\) being:
\[\min_{G_{i}^{s,t},\mathbf{y}_{i}^{s,t}}\mathcal{L}(f(G_{i}^{s,t};\mathbf{W}_{i}^{t}),\mathbf{y}_{i}^{s,t}). \tag{4}\]
Apart from its capability to adapt to the federated learning procedure, the objective (4) is computationally more efficient than the vanilla DD objective (3) as it avoids the bi-level optimization problem, which is difficult to solve [40]. Instead, the objective (4) leverages the strength of the federated learning process, which usually produces well-performing intermediate models after a few rounds of aggregation. We refer to (4) as a _dynamic distillation_ strategy. Next, we present a detailed implementation of the proposed dynamic distillation procedure.
#### 5.2.3 Implementation
There are two algorithmic goals regarding the implementation of (4): Firstly, the distilled datasets should allow efficient similarity comparisons. Secondly, the problem should be efficiently solved so that the extra computation cost for each client is controllable. Note that both goals are non-trivial since the optimization involves a graph-structured object that is not affected by permutations, resulting in alignment issues when performing similarity computation. The solution is detailed in Algorithm 2 in Appendix A. Specifically, at each round \(t\in[T]\), the size of the distilled graph across all clients will be fixed at \(m\times C\), where \(m\) represents the number of representative nodes in each category. The server first initializes node features \(\mathbf{X}_{0}\), with each entry drawn independently from a standard Gaussian distribution \(\mathcal{N}(0,1)\). The initial labels \(\mathbf{y}_{0}\) are set to ensure that there are \(m\) nodes belonging to each category. The tuple \((\mathbf{X}_{0},\mathbf{y}_{0})\) is broadcast to each client as the initial value of their local objectives, while the construction of the distilled graph structure is left to the clients' side to reduce communication cost. This common initialization technique alleviates the alignment issue between distilled graphs.
Figure 2. Embedding spaces (depicted using the two leading principal components) trained on the same graph with two divergent label assignments: vertices belonging to the same community have the same color.
After each client receives the initial features \(\mathbf{X}_{0}\) and labels \(\mathbf{y}_{0}\), it begins to update the features and labels. Since directly optimizing the graph structure (i.e., over the space of possible binary adjacency matrices) is computationally intractable, we use the following simple generative model that describes the relationship between node features and edge adjacency for the distilled graph: for a pair of nodes \(u\) and \(v\) of the distilled graph with features \(\mathbf{x}_{u}^{s}\) and \(\mathbf{x}_{v}^{s}\), the probability of them being adjacent is given by
\[\mathbb{P}[\mathbf{A}_{uv}^{s}=1]=\frac{e^{(\mathbf{x}_{u}^{s},\mathbf{x}_{v}^{s})-\gamma}}{1+e^{(\mathbf{x}_{u}^{s},\mathbf{x}_{v}^{s})-\gamma}}, \tag{5}\]
where \(\gamma>0\) is a hyperparameter that controls edge sparsity. 1 Construction of the distilled graph involves sampling from the above distribution, which is not differentiable. Hence we adopt the Gumbel-softmax mechanism [18; 30] to generate approximate yet differentiable samples. In particular, for each \(u,v\), we first draw two independent samples \(\omega\) and \(\omega^{\prime}\) from the standard Gumbel distribution. Next, we compute the following approximation:
Footnote 1: This construction is inherently _homophilic_. In principle, one could propose more sophisticated generative mechanisms with learnable parameters, but this may increase the computational cost of distillation. Experimentally, we have found this simple construction to be quite effective.
\[p_{uv}(\tau_{g})=\frac{e^{\left((\mathbf{x}_{u}^{s},\mathbf{x}_{v}^{s})-\gamma+\omega-\omega^{\prime}\right)/\tau_{g}}}{1+e^{\left((\mathbf{x}_{u}^{s},\mathbf{x}_{v}^{s})-\gamma+\omega-\omega^{\prime}\right)/\tau_{g}}}, \tag{6}\]
which admits the distributional limit \(\lim_{\tau_{g}\to 0}p_{uv}(\tau_{g})\overset{d}{=}\mathbf{A}_{uv}^{s}\). In practice, we use the straight-through trick [18] to obtain discrete samples from (6) while allowing smooth differentiation. We denote the distilled graph as \(G^{s}=(\mathbf{X}^{s},\mathbf{P})\), with \(\mathbf{P}\) being the (approximate) adjacency matrix whose entries are derived from (6).
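A possible PyTorch realization of Eqs. (5)-(6) with the straight-through estimator is sketched below; symmetrizing the result into an undirected, loop-free graph is an extra choice made here for illustration and is not prescribed by the formulas above.

```python
import torch

def sample_adjacency(X_s, gamma=1.0, tau_g=0.5, hard=True):
    """Differentiable edge sampling following Eqs. (5)-(6)."""
    logits = X_s @ X_s.T - gamma                             # (x_u, x_v) - gamma
    u1 = torch.rand_like(logits).clamp_min(1e-10)
    u2 = torch.rand_like(logits).clamp_min(1e-10)
    g1, g2 = -torch.log(-torch.log(u1)), -torch.log(-torch.log(u2))   # Gumbel(0, 1) noise
    p = torch.sigmoid((logits + g1 - g2) / tau_g)            # Eq. (6)
    if hard:
        p = (p > 0.5).float() + p - p.detach()               # straight-through: hard forward, soft backward
    p = torch.triu(p, diagonal=1)                            # drop self-loops and the lower triangle
    return p + p.T                                           # symmetric (undirected) adjacency

X_s = torch.randn(6, 8, requires_grad=True)                  # distilled node features
P = sample_adjacency(X_s)                                    # approximate adjacency matrix
```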
We utilize the local model weights to assess how well the distilled dataset fits the model and update \(\mathbf{X}^{s}\) and \(\mathbf{y}^{s}\) based on the same classification loss as each client's local learning objective. In practice, we have found that a few steps of gradient updates suffice for the learning performance. After obtaining the distilled graph, we extract the task feature \(\mathbf{M}_{i}^{t}\) for client \(i\) at round \(t\) as follows:
\[\mathbf{M}_{i}^{t}=\left[\mathbf{X}_{i}^{s,t}\,\|\,\mathbf{H}_{i}^{s,t}\right],\quad\mathbf{H}_{i}^{s,t}=h(\mathbf{G}_{i}^{s,t},\mathbf{W}_{i}^{t}). \tag{7}\]
Note that although the distilled labels are not included in the task feature, the label information is fused into \(\mathbf{X}^{s}\) during the distillation process. We will present an empirical study regarding other potential choices of task feature maps in section 6.2.7.
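Eq. (7) can be read as "embed the distilled graph with the current local weights, then concatenate". A minimal sketch follows; `TinyGNN` is a stand-in for the client's actual two-layer GCN with READOUT, not its real architecture.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """Stand-in for the local model h(.; W): one mean-aggregation layer with a linear transform."""
    def __init__(self, d: int, D: int):
        super().__init__()
        self.lin = nn.Linear(d, D)

    def forward(self, X: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        A_hat = A + torch.eye(A.size(0))                  # add self-loops
        A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)    # row-normalise the adjacency
        return torch.relu(self.lin(A_hat @ X))

def task_feature(X_s: torch.Tensor, P: torch.Tensor, model: nn.Module) -> torch.Tensor:
    """M_i^t = [X_i^{s,t} || H_i^{s,t}], Eq. (7): distilled features concatenated with their embeddings."""
    return torch.cat([X_s, model(X_s, P)], dim=1)         # shape (m*C, d + D)

M = task_feature(torch.randn(35, 128), torch.eye(35), TinyGNN(128, 64))
```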
### Task relator
#### 5.3.1 Motivation
We represent the estimated relationship among tasks using a _time-varying collaboration network_\(G_{c}^{t}=(V_{c},\mathbf{R}^{t})\), where \(t\in[T]\), \(V_{c}=[n]\), and \(\mathbf{R}^{t}\in\mathbb{R}^{n\times n}\) represents the time-dependent task relation matrix. The entry \(r_{ij}^{t}\) measures the task-relatedness between client \(i\) and client \(j\), obtained by computing similarities of their corresponding task features \(\mathbf{M}_{i}^{t}\) and \(\mathbf{M}_{j}^{t}\). This idea has been adopted in some recent PFL proposals [6; 46].
Since the matrix \(\mathbf{R}^{t}\) encodes pairwise relationships among client tasks, it offers great flexibility in defining personalized aggregation protocols. We formulate the protocols as the following expectation:
\[\overline{\mathbf{W}_{i}^{t}}\leftarrow\mathbb{E}_{j\sim q_{i}}\left[\mathbf{W}_{j}^{t}\right] \tag{8}\]
where \(q_{i}\) is a client-specific distribution over \([n]\); the trivial case of a uniform distribution corresponds to the aggregation rule in FedAVG. The above formulation is closely connected to the self-attention mechanism [39]. In particular, inspired by recent developments that generalize self-attention using kernel theory [38], we parameterize \(q_{i}\) using a kernel-induced distribution:
\[q_{i}[j]=\frac{k(i,j)}{\sum_{j^{\prime}\in[n]}k(i,j^{\prime})}, \tag{9}\]
where \(k(\cdot,\cdot)\) is a kernel function.
The most straightforward choice would be the softmax kernel [8], which uses the exponentiated edge weights \(k(i,j)=e^{r_{ij}}\). However, this method disregards all other weights, yielding a kernel that only accounts for _local connectivity_ in the collaboration network and overlooks _global connectivity_. We illustrate this point with the example in Fig. 3, where a collaboration network with three vertices has weighted links satisfying \(r_{12}<r_{23}=r_{13}\). If we directly use \(r_{ij}\) and normalize, the averaged local model weights \(\overline{\mathbf{W}_{1}^{t}}\) will be very close to \(\mathbf{W}_{3}^{t}\) and far from \(\mathbf{W}_{2}^{t}\). However, the relation between nodes 1 and 2 is much stronger than the quantity \(r_{12}\) alone indicates, as they are also linked by a two-hop path comprising the two heavily-weighted edges \((1,3)\) and \((2,3)\). A recent work [6] attempts to capture information beyond local task pairs by incorporating a GNN-like mechanism over a sparsified collaboration network, integrating more information through a few rounds of message passing. While this approach extends the scope of similarity evaluation, it still operates in a _local sense_ due to the finite number of message-passing rounds and the inherent oversmoothing phenomenon. To address this limitation, our framework proposes a novel kernel function that incorporates global connectivity in full: it aggregates connectivity at all hop counts from 1 to infinity while favoring connections with fewer hops.
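A small numerical illustration of this point, using the matrix-exponential construction introduced below (Eq. 11); the specific weights are made up for the three-node example of Fig. 3 and are not taken from the paper.

```python
import torch

# Toy collaboration network from Fig. 3: the direct link (1,2) is weak,
# but clients 1 and 2 are joined by a strong two-hop path through client 3.
R = torch.tensor([[1.0, 0.1, 0.9],
                  [0.1, 1.0, 0.9],
                  [0.9, 0.9, 1.0]])

S = torch.matrix_exp(R)            # global connectivity obtained from all path lengths
print(R[0, 1] / R[0, 2])           # direct relatedness ratio r_12 / r_13 = 0.11
print(S[0, 1] / S[0, 2])           # global ratio s_12 / s_13 is noticeably larger,
                                   # because the path 1-3-2 contributes to s_12
```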
#### 5.3.2 Construction of the task relator
According to the previous discussions, implementing the task relator involves the design of two modules: A _collaboration graph construction procedure_ based on the extracted task features \(\{M_{i}^{t}\}_{i\in V_{c}}\) and a _global-connectivity-aware aggregation mechanism_.
To form the collaboration graph, we use the following feature-wise average correlation as the pairwise task-relatedness:
\[r_{ij}^{t}=\frac{1}{d+D}\sum_{k=1}^{d+D}\mathsf{corr}\left(\mathbf{M}_{i}^{t}[:,k],\mathbf{M}_{j}^{t}[:,k]\right), \tag{10}\]
where \(\mathsf{corr}\) stands for Pearson's correlation coefficient [26]. Next we discuss the construction of the global-connectivity-aware aggregation mechanism. Inspired by the property of exponential kernels [25] of translating local structure into global structure, we use an element-wise exponentiated matrix exponential with two temperature parameters \(\tau\) and \(\tau_{S}\):
\[k(i,j)=e^{\tau_{S}s_{ij}},\qquad\mathbf{S}=\{s_{ij}\}_{i\in V_{c},j\in V_{c}}=e^{\tau\mathbf{R}}=\sum_{k=0}^{\infty}\frac{1}{k!}(\tau\mathbf{R})^{k}. \tag{11}\]
Figure 3: Comparison of local and global connectivity
From the right-hand side of (11), we may interpret \(s_{ij}\) as encoding the relatedness of clients \(i\) and \(j\) via infinitely many rounds of message passing, thereby reflecting their _global connectivity_ structure. Then, \(k(i,j)\) maps the global connectivity into the positive range \(\mathbb{R}^{+}\) for further normalization into the client-specific distribution \(q_{i}\) over \([n]\). The parameter \(\tau\) is used to control the level of personalization, where \(\tau\to 0\) indicates no personalization (FedAVG) and \(\tau\to\infty\) indicates local training. The parameter \(\tau_{S}\) strikes a balance between the contributions of local and global information. It is worth noting that other notions of global connectivity, such as effective resistance (Beng et al., 2017), are also applicable; however, we stick to the matrix exponential because of its mild requirements on the local similarity matrix \(\mathbf{R}\). We prove in Appendix D that the function \(k\) in (11) is a valid kernel over the domain \([n]\).
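Putting Eqs. (8)-(11) together, a sketch of the server-side task relator is given below: pairwise task-relatedness via column-wise Pearson correlation, global connectivity via the matrix exponential, and a kernel-normalized weighted average of client parameters. The function names and the flattened-parameter representation are assumptions for illustration, not the released code.

```python
import torch

def relation_matrix(task_feats):
    """R with r_ij = average column-wise Pearson correlation between M_i and M_j, Eq. (10)."""
    n = len(task_feats)
    R = torch.zeros(n, n)
    std = lambda M: (M - M.mean(0)) / (M.std(0, unbiased=False) + 1e-8)
    for i in range(n):
        for j in range(n):
            R[i, j] = (std(task_feats[i]) * std(task_feats[j])).mean()
    return R

def aggregate_weights(R, client_weights, tau=1.0, tau_s=1.0):
    """W_bar_i = E_{j~q_i}[W_j] with q_i induced by k(i,j) = exp(tau_s * s_ij), S = exp(tau * R)."""
    S = torch.matrix_exp(tau * R)                 # global connectivity (all hop counts), Eq. (11)
    K = torch.exp(tau_s * S)                      # element-wise exponentiation into R^+
    Q = K / K.sum(dim=1, keepdim=True)            # row-normalise into distributions q_i, Eq. (9)
    return [sum(Q[i, j] * client_weights[j] for j in range(len(client_weights)))
            for i in range(len(client_weights))]  # Eq. (8): one averaged weight vector per client

W = aggregate_weights(relation_matrix([torch.randn(35, 192) for _ in range(4)]),
                      [torch.randn(1000) for _ in range(4)])
```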
### Complexity considerations
In comparison to vanilla FedAVG, the proposed framework requires additional local computation, a server-side matrix exponential operation, and extra communication cost. Let us briefly discuss the complexity of these procedures. Firstly, the dataset distillation procedure operates on small-scale graphs, making its total computation cost negligible compared to local training. Secondly, the matrix exponential of \(\mathbf{R}\) has a time complexity of \(O(n^{3})\); this is controllable in practice since the number of clients, \(n\), is typically small or moderate. Finally, the extra communication cost per client per aggregation step depends on the formulation of the task feature map. According to (7), the extra communication cost is \(O(bmC(d+D))\), where \(b\) is the number of bits required to represent a floating-point number. This cost is comparable to the communication cost of a GNN; moreover, since GNN models are typically shallow, the communication cost of parameters is often dominated by the computation cost of local training. The communication cost can be further reduced if a more compact task feature, such as the distilled graph embedding, is used. We empirically investigate such alternatives in section 6.2.7.
## 6. Experiments
This section presents the empirical analysis of our framework, which includes a performance comparison, convergence analysis, and multiple ablation studies.
### Experiment Setup
#### 6.1.1. Datasets
We test the performance on six graph datasets of varying scales: Cora, CiteSeer, PubMed, Amazon Computers, Amazon Photo, and Ogbn-Arxiv (2017; 35; 45). To split each graph into subgraphs, we employ the Metis graph partition algorithm following the setup in (Beng et al., 2017). Each client stores one subgraph of the original graph. We conduct a node classification task by sampling the vertices into training, validation, and testing sets
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Dataset & \multicolumn{3}{c}{Cora} & \multicolumn{3}{c}{CiteSeer} & \multicolumn{3}{c}{PubMed} \\ \hline \# Clients & 10 & 30 & 50 & 10 & 30 & 50 & 10 & 30 & 50 \\ \hline Local & 46.88\(\pm\)1.23 & 66.45\(\pm\)0.81 & 70.32\(\pm\)0.68 & 51.42\(\pm\)1.75 & 59.06\(\pm\)1.64 & 61.40\(\pm\)1.45 & 76.75\(\pm\)0.20 & 77.46\(\pm\)0.20 & 76.02\(\pm\)0.33 \\ \hline FedAvg & 49.75\(\pm\)1.32 & 46.20\(\pm\)2.67 & 43.48\(\pm\)3.97 & 54.79\(\pm\)2.86 & 54.14\(\pm\)1.41 & 57.52\(\pm\)1.98 & 78.53\(\pm\)0.68 & 80.99\(\pm\)0.26 & 79.53\(\pm\)0.06 \\ FedProx & 50.79\(\pm\)2.00 & 54.72\(\pm\)5.21 & 62.11\(\pm\)2.02 & 56.31\(\pm\)5.81 & 59.41\(\pm\)0.68 & 63.29\(\pm\)1.21 & 77.32\(\pm\)0.88 & 80.99\(\pm\)0.51 & 79.60\(\pm\)0.21 \\ \hline FedPer & 52.83\(\pm\)0.55 & 67.15\(\pm\)0.85 & 70.27\(\pm\)0.34 & 57.14\(\pm\)1.45 & 62.21\(\pm\)1.80 & 63.26\(\pm\)1.95 & 79.85\(\pm\)0.31 & 80.59\(\pm\)0.06 & 80.28\(\pm\)0.13 \\ \hline FedPub & 52.58\(\pm\)1.51 & 67.30\(\pm\)0.99 & 42.81\(\pm\)5.70 & 56.06\(\pm\)2.29 & 62.12\(\pm\)0.49 & 64.18\(\pm\)1.88 & 79.70\(\pm\)0.21 & 80.97\(\pm\)0.22 & 80.56\(\pm\)0.23 \\ FedSage & 49.25\(\pm\)0.50 & 59.42\(\pm\)1.03 & 59.99\(\pm\)0.23 & 55.54\(\pm\)6.95 & 55.63\(\pm\)7.00 & 62.73\(\pm\)1.09 & 77.87\(\pm\)0.50 & 80.97\(\pm\)0.24 & 79.36\(\pm\)0.73 \\ GCFL & 49.52\(\pm\)0.33 & 46.78\(\pm\)4.32 & 45.55\(\pm\)6.03 & 56.03\(\pm\)2.04 & 53.91\(\pm\)0.38 & 56.43\(\pm\)0.41 & 76.03\(\pm\)2.04 & 79.58\(\pm\)0.13 & 78.68\(\pm\)0.15 \\ FedStar & 43.09\(\pm\)0.72 & 61.60\(\pm\)0.30 & 67.77\(\pm\)1.25 & 46.45\(\pm\)0.17 & 54.78\(\pm\)2.12 & 58.96\(\pm\)1.81 & 75.45\(\pm\)0.14 & 76.45\(\pm\)0.43 & 74.71\(\pm\)0.52 \\ \hline Ours & **53.26\(\pm\)1.42** & **67.88\(\pm\)1.09** & **70.41\(\pm\)0.51** & **58.19\(\pm\)1.82** & **62.30\(\pm\)1.33** & **64.58\(\pm\)0.55** & **79.90\(\pm\)0.53** & **81.65\(\pm\)0.34** & **80.82\(\pm\)0.20** \\ \hline \hline Dataset & \multicolumn{3}{c}{Amazon Photo} & \multicolumn{3}{c}{Amazon Computers} & \multicolumn{3}{c}{Ogbn Arxiv} \\ \hline \# Clients & 10 & 30 & 50 & 10 & 30 & 50 & 10 & 30 & 50 \\ \hline Local & 46.57\(\pm\)0.15 & 69.25\(\pm\)0.25 & 79.42\(\pm\)0.34 & 51.82\(\pm\)0.62 & 65.69\(\pm\)0.94 & 68.57\(\pm\)0.35 & 34.76\(\pm\)0.50 & 46.98\(\pm\)0.18 & 47.45\(\pm\)0.19 \\ \hline FedAvg & 43.10\(\pm\)2.68 & 44.75\(\pm\)4.82 & 46.38\(\pm\)1.07 & 44.45\(\pm\)0.26 & 52.93\(\pm\)1.05 & 53.91\(\pm\)0.57 & 41.40\(\pm\)0.46 & 44.22\(\pm\)0.80 & 43.74\(\pm\)2.60 \\ FedProx & 43.58\(\pm\)2.05 & 45.29\(\pm\)0.53 & 42.76\(\pm\)5.23 & 42.59\(\pm\)4.17 & 53.58\(\pm\)1.56 & 53.91\(\pm\)0.57 & 41.35\(\pm\)0.20 & 44.68\(\pm\)0.62 & 47.02\(\pm\)0.52 \\ \hline FedPer & 52.20\(\pm\)0.99 & 71.76\(\pm\)0.57 & 81.65\(\pm\)0.06 & 55.04\(\pm\)0.33 & 67.55\(\pm\)1.42 & 68.79\(\pm\)0.51 & 38.77\(\pm\)0.44 & 47.46\(\pm\)0.18 & 50.51\(\pm\)0.15 \\ \hline FedPub & 45.69\(\pm\)2.10 & 64.50\(\pm\)0.48 & 76.58\(\pm\)0.85 & 50.15\(\pm\)1.57 & 60.81\(\pm\)0.52 & 63.82\(\pm\)0.62 & 42.18\(\pm\)0.36 & 50.58\(\pm\)0.21 & 51.11\(\pm\)0.56 \\ FedSage & 47.79\(\pm\)0.76 & 58.26\(\pm\)2.35 & 58.99\(\pm\)1.58 & 47.98\(\pm\)0.84 & 56.82\(\pm\)0.43 & 63.13\(\pm\)0.86 & 42.18\(\pm\)0.11 & 45.43\(\pm\)0.40 & 46.08\(\pm\)0.27 \\ GCFL & 46.93\(\pm\)0.55 & 48.95
according to a 0.3/0.35/0.35 ratio before splitting. Appendix B.1 provides detailed descriptive statistics of the datasets.
#### 6.1.2 Baselines
We compared our framework with eight different baselines, categorized into four types: (1) **Local**, which serves as the standard baseline without federated learning; (2) Two traditional FL baselines, including **FedAvg**[31] and **FedProx**[28]; (3) One state-of-the-art personalized FL baseline, **FedPer**[1]; (4) Four state-of-the-art personalized federated GNN baselines, including **FedPub**[2], **FedSage**[47], **GCFL**[42], and **FedStar**[37]. For detailed introductions of the baselines, please refer to Appendix B.2.
#### 6.1.3 Implementation Details
We utilize a two-layer GCN [24] followed by a linear READOUT layer. The dimension of the embeddings is set to 128. To optimize learning, we employ the Adam optimizer with weight decay \(10^{-6}\) [23]. The smoothing parameter \(\tau_{g}\) in the Gumbel-softmax is set to 1, following the technique in [18]. To monitor the training progress, we use an early-stopping mechanism: if the validation accuracy decreases for 20 consecutive rounds, the FL framework stops immediately. Each experiment is conducted over 3 runs with different random seeds. All methods are implemented using PyTorch Geometric [11] on an NVIDIA Tesla V100 GPU. For further details, please refer to Appendix B.3.
### Results
#### 6.2.1 Performance Comparison
We evaluate the node classification performance of the various frameworks on six real-world datasets of differing scales. Tables 1 and 2 present the average test accuracy and its standard deviation for the overlapping and non-overlapping settings, respectively. Our framework consistently outperforms all other methods across all datasets. Traditional FL baselines such as FedAvg and FedProx, which lack local adaptation, are often inferior even to the Local baseline. Despite using local task statistics for personalization, the feature extraction schemes of FedPer, FedStar, FedSage, and GCFL are less effective, and these methods perform worse than our framework. FedPub also performs worse than our method, because its task descriptors omit the information contained in the local READOUT layers and it exploits the global collaboration structure less efficiently.
#### 6.2.2 Convergence Analysis
Fig.4 illustrates the convergence behavior of the average test accuracy over the first 100 rounds with 5 clients in the system. It is evident that our proposed framework exhibits a rapid convergence rate towards the highest average test accuracy. This can be attributed to the framework's ability to efficiently capture the local task features and identify pairwise relationships.
#### 6.2.3 Effects of the proposed task relator
The primary goal of this study is to investigate the benefits of incorporating global information. Specifically, we test FedGKD along with two
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Dataset & \multicolumn{3}{c}{Cora} & \multicolumn{3}{c}{CiteSeer} & \multicolumn{3}{c}{PubMed} \\ \hline \# Clients & 5 & 10 & 20 & 5 & 10 & 20 & 5 & 10 & 20 \\ \hline Local & 80.44\(\pm\)1.77 & 79.58\(\pm\)1.07 & 79.46\(\pm\)0.51 & 71.28\(\pm\)1.32 & 68.93\(\pm\)0.79 & 70.49\(\pm\)0.88 & 84.98\(\pm\)0.67 & 83.34\(\pm\)0.42 & 82.92\(\pm\)0.41 \\ \hline FedAvg & 72.91\(\pm\)6.59 & 69.23\(\pm\)1.03 & 47.79\(\pm\)2.61 & 72.43\(\pm\)0.99 & 70.33\(\pm\)1.37 & 67.18\(\pm\)1.17 & 82.02\(\pm\)0.15 & 83.17\(\pm\)1.35 & 76.50\(\pm\)0.20 \\ FedProx & 63.96\(\pm\)3.07 & 71.39\(\pm\)4.16 & 70.88\(\pm\)5.90 & 73.63\(\pm\)0.85 & 42.86\(\pm\)2.59 & 42.31\(\pm\)3.18 & 83.99\(\pm\)0.17 & 83.57\(\pm\)0.09 & 83.93\(\pm\)0.89 \\ \hline FedPer & 81.37\(\pm\)1.58 & 76.73\(\pm\)0.95 & 77.24\(\pm\)1.62 & 70.45\(\pm\)2.00 & 67.14\(\pm\)6.95 & 71.13\(\pm\)0.76 & 85.59\(\pm\)0.18 & 85.42\(\pm\)0.12 & 83.75\(\pm\)0.14 \\ \hline FedPub & 82.33\(\pm\)1.46 & 78.27\(\pm\)1.46 & 79.15\(\pm\)1.08 & 74.11\(\pm\)1.58 & 72.12\(\pm\)1.83 & 68.16\(\pm\)1.41 & 86.22\(\pm\)0.21 & 85.58\(\pm\)0.31 & 84.79\(\pm\)0.46 \\ FedSage & 72.07\(\pm\)0.36 & 69.66\(\pm\)0.27 & 59.28\(\pm\)0.38 & 70.64\(\pm\)3.04 & 65.54\(\pm\)6.95 & 63.02\(\pm\)1.49 & 84.64\(\pm\)0.60 & 83.39\(\pm\)1.29 & 84.92\(\pm\)0.45 \\ GCFL & 79.91\(\pm\)1.93 & 73.25\(\pm\)4.39 & 76.37\(\pm\)1.82 & 71.37\(\pm\)2.54 & 67.58\(\pm\)0.61 & 63.54\(\pm\)3.34 & 84.24\(\pm\)0.57 & 83.47\(\pm\)0.29 & 83.72\(\pm\)0.47 \\ FedStar & 79.33\(\pm\)0.69 & 78.26\(\pm\)0.22 & 80.40\(\pm\)0.30 & 69.47\(\pm\)1.77 & 70.25\(\pm\)1.26 & 68.50\(\pm\)0.68 & 81.96\(\pm\)0.96 & 81.39\(\pm\)0.17 & 80.15\(\pm\)0.66 \\ \hline Ours & **83.37\(\pm\)1.59** & **80.06\(\pm\)1.27** & **81.17\(\pm\)0.63** & **75.25\(\pm\)1.38** & **74.18\(\pm\)1.14** & **71.17\(\pm\)1.69** & **87.05\(\pm\)0.19** & **86.53\(\pm\)0.73** & **86.38\(\pm\)0.30** \\ \hline \hline Dataset & \multicolumn{3}{c}{Amazon Photo} & \multicolumn{3}{c}{Amazon Computers} & \multicolumn{3}{c}{Ogbn Arxiv} \\ \hline \# Clients & 5 & 10 & 20 & 5 & 10 & 20 & 5 & 10 & 20 \\ \hline Local & 77.97\(\pm\)0.29 & 86.14\(\pm\)1.05 & 86.37\(\pm\)0.17 & 65.90\(\pm\)0.29 & 74.41\(\pm\)1.51 & 81.81\(\pm\)0.50 & 56.93\(\pm\)0.89 & 56.54\(\pm\)0.37 & 57.79\(\pm\)0.89 \\ \hline FedAvg & 53.49\(\pm\)5.87 & 45.82\(\pm\)1.88 & 35.15\(\pm\)1.03 & 46.03\(\pm\)1.93 & 39.04\(\pm\)3.68 & 43.74\(\pm\)8.15 & 55.84\(\pm\)0.88 & 61.02\(\pm\)0.32 & 59.30\(\pm\)0.18 \\ FedProx & 71.08\(\pm\)3.11 & 56.78\(\pm\)4.31 & 44.61\(\pm\)5.89 & 37.72\(\pm\)0.94 & 36.44\(\pm\)0.35 & 36.89\(\pm\)0.27 & 62.05\(\pm\)1.10 & 61.77\(\pm\)0.78 & 57.79\(\pm\)0.26 \\ \hline FedPer & 68.19\(\pm\)1.68 & 77.15\(\pm\)0.14 & 78.96\(\pm\)0.68 & 64.30\(\pm\)0.34 & 64.47\(\pm\)0.20 & 70.44\(\pm\)0.57 & 61.57\(\pm\)0.50 & 61.52\(\pm\)0.37 & 62.73\(\pm\)0.26 \\ \hline FedPub & 86.76\(\pm\)1.71 & 87.80\(\pm\)2.44 & 88.72\(\pm\)3.09 & 68.65\(\pm\)2.53 & 77.02\(\pm\)0.87 & 80.71\(\pm\)0.79 & 67.50\(\pm\)0.32 & 66.80\(\pm\)0.32 & 62.11\(\pm\)0.56 \\ FedSage & 51.28\(\pm\)7.30 & 51.68\(\pm\)7.28 & 51.39\(\pm\)7.22 & 42.88\(\pm\)5.23 & 50.41\(\pm\)7.84 & 57.06\(\pm\)0.42 & 58.63\(\pm\)1.29 & 61.65\(\pm\)0.45 & 54.86\(\pm\)1.77 \\ GCFL & 68.17\(\pm\)8.37 & 82.74\(\pm\)3.15 & 57.55\(\pm\)2.28 & 55.36\(\pm\)
local variants obtained by replacing the matrix \(\mathbf{S}\) in (11), on the Cora dataset. The _local_ variant corresponds to the standard softmax kernel, i.e. \(\mathbf{S}=\mathbf{R}\). The _square_ variant corresponds to setting \(\mathbf{S}=\mathbf{R}^{2}\), which can be understood as performing two layers of message passing. As shown in Fig. 5, incorporating global information leads to improved federated learning performance, especially when the number of clients is large. This implies that inter-client relationships become more complicated, and the proposed task relator provides a more nuanced solution. Furthermore, we compare these variants with the FedPub framework and observe that, even when only the local connectivity computed from distilled datasets is used to relate tasks (instead of embeddings of a random graph input, as in FedPub), our framework outperforms FedPub. This suggests that distilled graphs are more representative than graph embeddings, owing to their incorporation of the READOUT layers.
#### 6.2.4. Effects of Sparsity-controlling Coefficient \(\gamma\)
We conduct an ablation study on Cora to assess the impact of the sparsity-control coefficient \(\gamma\) in distilled graphs; a larger \(\gamma\) yields a sparser distilled graph. We vary \(\gamma\) across \(\{10^{-3},0.75,1.5,2.5,5\}\). Our findings show that the optimal value of \(\gamma\) for our framework is \(0.75\). We hypothesize that the density of distilled graphs is driven by the constraint of containing few nodes, as a sparse small graph would have almost no connections. This observation is consistent with results from centralized graph distillation (Kipf and Welling, 2015).
#### 6.2.5. Effects of Temperature on Element-wise Exponential \(\tau_{\mathsf{S}}\)
We investigate the impact of varying the temperature of the element-wise exponential, \(\tau_{S}\), defined in (11), on the Cora dataset. \(\tau_{S}\) is a parameter that regulates the influence of the local model weights \(\mathbf{W}_{i}^{t}\) on the aggregated weights \(\overline{\mathbf{W}_{i}^{t}}\). In a federated GNN system with significantly heterogeneous local datasets, a large value of \(\tau_{S}\) is required to achieve optimal performance. This is supported by the results presented in Fig. 7. In Appendix B.1, we show that a larger number of clients results in more heterogeneous subgraphs within the system, necessitating a larger value of \(\tau_{S}\) to attain optimal performance in FedGKD.
#### 6.2.6. Effects of Temperature on Matrix Exponential \(\tau\)
We experiment with varying the value of \(\tau\) in the matrix exponential of the relation matrix, as defined in (11), on Cora. \(\tau\) is introduced to avoid singularity of the matrix exponential. As shown in Fig. 8, a large \(\tau\) may result in an extremely low-rank aggregation weight matrix, thereby deteriorating model performance. Therefore, it is essential to set an appropriate value of \(\tau\) to guarantee non-singularity.
#### 6.2.7. Effects of Task Features from Distilled Graphs
We experiment with multiple choices of statistics obtained from the distilled graphs to compute the pairwise task-relatedness in (10), on the Cora dataset. Fig. 9 shows that model performance is robust to the choice of statistics, although the concatenation of node features \(\mathbf{X}\) and embeddings \(\mathbf{H}\) slightly outperforms the alternatives. It is worth noting that, if communication overhead is a major concern, the extra communication cost can be further reduced by transmitting only \(\mathbf{H}\), or even only the distilled labels, at the price of a slight performance degradation.
#### 6.2.8. Additional experiments
We report an ablation study comparing the static and dynamic dataset distillation strategies in Appendix C. The results suggest that the dynamic strategy is preferable.
## 7. Conclusion
Our paper proposes a novel framework that overcomes the limitations of existing federated GNN frameworks in local task featuring and task relating. We utilize graph distillation for task featuring, and introduce a novel kernelized attentive aggregation mechanism over a collaboration network to incorporate global connectivity during model aggregation. Extensive experimental results demonstrate that our framework outperforms state-of-the-art methods.
Figure 4. Convergence plot for the non-overlapping setting with 5 clients. We visualize the first 100 communication rounds.
Figure 5. Effects of kernel functions. Figure 6. Effects of sparsity control coefficient \(\gamma\). Figure 7. Effects of temperature on element-wise exponential \(\tau_{\mathsf{S}}\). Figure 8. Effects of temperature on the matrix exponential of the local connectivity matrix, \(\tau\). Figure 9. Effects of task features from distilled graphs.
2310.00336 | DURENDAL: Graph deep learning framework for temporal heterogeneous
networks | Temporal heterogeneous networks (THNs) are evolving networks that
characterize many real-world applications such as citation and events networks,
recommender systems, and knowledge graphs. Although different Graph Neural
Networks (GNNs) have been successfully applied to dynamic graphs, most of them
only support homogeneous graphs or suffer from model design heavily influenced
by specific THNs prediction tasks. Furthermore, there is a lack of temporal
heterogeneous networked data in current standard graph benchmark datasets.
Hence, in this work, we propose DURENDAL, a graph deep learning framework for
THNs. DURENDAL can help to easily repurpose any heterogeneous graph learning
model to evolving networks by combining design principles from snapshot-based
and multirelational message-passing graph learning models. We introduce two
different schemes to update embedding representations for THNs, discussing the
strengths and weaknesses of both strategies. We also extend the set of
benchmarks for THNs by introducing two novel high-resolution temporal
heterogeneous graph datasets derived from an emerging Web3 platform and a
well-established e-commerce website. Overall, we conducted the experimental
evaluation of the framework over four temporal heterogeneous network datasets
on future link prediction tasks in an evaluation setting that takes into
account the evolving nature of the data. Experiments show the prediction power
of DURENDAL compared to current solutions for evolving and dynamic graphs, and
the effectiveness of its model design. | Manuel Dileo, Matteo Zignani, Sabrina Gaito | 2023-09-30T10:46:01Z | http://arxiv.org/abs/2310.00336v1 | # DURENDAL: Graph deep learning framework for temporal heterogeneous networks
###### Abstract
Temporal heterogeneous networks (THNs) are evolving networks that characterize many real-world applications such as citation and events networks, recommender systems, and knowledge graphs. Although different Graph Neural Networks (GNNs) have been successfully applied to dynamic graphs, most of them only support homogeneous graphs or suffer from model design heavily influenced by specific THNs prediction tasks. Furthermore, there is a lack of temporal heterogeneous networked data in current standard graph benchmark datasets. Hence, in this work, we propose DURENDAL, a graph deep learning framework for THNs. DURENDAL can help to easily repurpose any heterogeneous graph learning model to evolving networks by combining design principles from snapshot-based and multirelational message-passing graph learning models. We introduce two different schemes to update embedding representations for THNs, discussing the strengths and weaknesses of both strategies. We also extend the set of benchmarks for THNs by introducing two novel high-resolution temporal heterogeneous graph datasets derived from an emerging Web3 platform and a well-established e-commerce website. Overall, we conducted the experimental evaluation of the framework over four temporal heterogeneous network datasets on future link prediction tasks in an evaluation setting that takes into account the evolving nature of the data. Experiments show the prediction power of DURENDAL compared to current solutions for evolving and dynamic graphs, and the effectiveness of its model design.
## 1 Introduction
Graph neural networks (GNNs), as a powerful graph representation technique based on deep learning, have been successfully applied to many real-world static and heterogeneous graphs [1]. Recently, GNNs also attracted considerable research interest to learn, extract, and predict from evolving networks, which characterize many application domains, such as recommender systems [2], temporal knowledge graphs [3], or social network analysis [4]. However, the success of heterogeneous graph learning has not entirely transferred to temporal heterogeneous networks (THNs).
Current architectural designs for dynamic GNNs have been proposed for homogeneous graphs only. A few heterogeneous graph learning models try to extend the computation to handle the graphs' dynamics but suffer from limitations in model design, evaluation, and training strategies. Specifically, they struggle to incorporate state-of-the-art designs from static GNNs, limiting their performance. Their evaluation settings are fixed train-test splits, which do not fully reflect the evolving nature of the data, and commonly used training methods are not scalable. Furthermore, existing solutions for learning from THNs are heavily designed to solve a specific prediction task, i.e. knowledge base completion, making it hard to obtain general-purpose embedded representations for nodes, edges, and whole graphs.
To overcome the limitations described above, we propose DURENDAL, a graph deep learning framework for temporal heterogeneous networks. Inspired by the ROLAND [5] framework for dynamic homogeneous graphs, DURENDAL can help to easily repurpose any heterogeneous graph learning model to dynamic graphs, including training strategies and evaluation settings for evolving data. The ability to easily extend heterogeneous GNNs to the dynamic setting arises from a combination of model design principles. To handle dynamic aspects we consider the node embeddings at different GNN layers as hierarchical node states, recurrently updating them over time through customizable embedding modules. Additionally, to handle the heterogeneity, we introduce heterogeneous hierarchical node states and customizable semantic aggregation schemes. In this way, modern architectural design options such as skip connections or attention mechanisms can be easily incorporated. We propose two different update schemes for temporal heterogeneous node states discussing their strengths and drawbacks in terms of scalability, memory footprint, and learning power, allowing researchers to easily follow one of the two schemes according to the real application scenario they face.
We train DURENDAL using an incremental training procedure and using a live-update setting for the evaluation. We conducted experiments over four different THNs network datasets on future link prediction tasks. The four datasets were selected based on certain minimum requirements that they had to meet in order to serve as useful testing grounds for temporal heterogeneous graph learning models. Since current graph benchmarks for THNs are very limited, we also extend the set of benchmarks for THNs by introducing two novel high-resolution temporal heterogeneous graph datasets derived from an emerging Web3 platform and a well-established e-commerce website.
The experimental evaluation shows the prediction power of DURENDAL and the effectiveness of its model design and update schemes. DURENDAL achieves better performance compared to current solutions for dynamic graphs on three of the four datasets, which exhibit different time granularity, number of snapshots, and new incoming links. The effectiveness of the DURENDAL model design is shown by the increase in performance of state-of-the-art heterogeneous graph learning models repurposed in a dynamic setting with our framework, which also highlights the benefit of some modern architectural design options for GNNs. Lastly, we compare the two different DURENDAL update schemes with the ROLAND one, showing the improvements in the prediction performance of our schemes.
We summarize our main contributions as follows: _i)_ we propose a novel graph deep learning framework that allows an easy repurposing of any heterogeneous GNN to a dynamic setting; _ii)_ we introduce two different update schemes for obtaining temporal heterogeneous node embeddings, highlighting their strengths and weaknesses and their practical use scenarios; _iii)_ we define some minimal requirements datasets must satisfy to be useful testing grounds for temporal heterogeneous graph learning models, extending the set of benchmarks for THNs by introducing two novel high-resolution THN datasets; and _iv)_ we evaluate different types of approaches for dealing with THNs in the new live-update setting, enabling an assessment of the performances along the snapshots of the evolving networks.
## 2 Related work
**Temporal GNNs.** GNNs have been successfully applied to extract, learn, and predict from temporal networks as surveyed in [6]. Most of the works combine GNNs with recurrent models (e.g. a GRU Cell [7]): adopting GNN as a feature encoder [8], replacing linear layers in the RNN cells with GNN layers [9, 10, 11], or using RNNs to update the learned weights [12]. Other works combine GNN layers with temporal encoders [13] or extend the message-passing computation on temporal neighborhood [14, 15]. All these works have been proposed only for homogeneous graphs. Moreover, most have limitations in model design, evaluation, and training strategies, as shown in [5].
**Temporal Heterogenous GNNs.** Only a few works on heterogeneous graph deep learning try to extend the reasoning over temporal networks. For instance, [16] and [17] employ a recurrent event encoder to encode past facts and use a neighborhood aggregator to model the connection of facts at the same timestamp. [18], inspired by Transformer positional encoding methods, introduces a relative temporal encoding technique to handle dynamic graphs. [19] addressed the task of few-shot link prediction over temporal KGs using a meta-learning-based approach that builds representations of new nodes by aggregating features of existing nodes within a specific \(\Delta_{t}\) temporal neighborhood. Though these methods have empirically shown their prediction power, they struggle to easily incorporate state-of-the-art designs from static GNNs (e.g. skip connections), which are beneficial for GNN
architectural design [5, 20]. Furthermore, most of these works use only a fixed-split setting [5] to evaluate link prediction performance, or do not evaluate it at all. A fixed-split setting does not take into account the evolving nature of the data, as it trains the model on a large portion of the historical information and tests it only on the most recent timestamped information. In contrast, the recently proposed live-update setting [5], where models are trained and tested over time, can lead to a better evaluation of temporal graph learning models, since performance is measured on each test snapshot.
**Factorization-based models.** Factorization-based Models (FMs) have enjoyed enduring success in Knowledge Graph Completion (KGC) tasks, often outperforming GNNs [21]. Various FMs have been proposed for temporal KGs [3]. Despite the strong prediction power they reach with simple architectures and orders of magnitude fewer parameters than GNNs, they exhibit a few drawbacks: for instance, they struggle to incorporate node features, they work in transductive settings only, and they are heavily designed to cope only with KGC tasks.
DURENDAL differs from the above works by proposing a new update scheme for node embeddings that preserves heterogeneous information from the past and captures relational temporal dynamics. Moreover, unlike FM models, it can handle node features and inductive tasks, since it relies on GNN architectures. Lastly, DURENDAL can be trained and evaluated in a live-update setting [5] that takes into account the evolving nature of the data.
## 3 The proposed framework: DURENDAL
**Temporal heterogeneous graphs.** A heterogeneous graph, denoted as \(G=(V,E)\), consists of a set of nodes \(V\) and a set of links \(E\). A heterogeneous graph is also associated with a node-type \(\phi:V\mapsto A\) and a link-type \(\psi:E\mapsto R\) mapping functions, where \(A\) and \(R\) are the predefined sets of node and link types such that \(|A|+|R|>2\). Nodes can be paired with features related to a certain node type \(X_{a}=\{x_{v}\mid v\in V\land\ \phi(v)=a\}\). On the other hand, in a temporal graph, each node \(v\) has a timestamp \(\tau_{v}\) and each edge \(e\) has a timestamp \(\tau_{e}\). We focus on the snapshot-based representation of temporal graphs, at the basis of the definition of temporal heterogeneous graph. In fact, a temporal heterogeneous graph \(\mathcal{G}=\{G_{t}\}_{t=1}^{T}\) can be represented as a sequence of graph snapshots, where each snapshot is a heterogeneous graph \(G_{t}=(V_{t},E_{t})\) with \(V_{t}=\{v\in V|\tau_{v}=t\}\) and \(E_{t}=\{e\in E|\tau_{e}=t\}\).
**Heterogeneous GNNs.** The objective of a GNN is to learn node representations via an iterative aggregation of neighborhood messages. In heterogeneous graph learning, models exploit the highly multi-relational nature of the data, as well as the difference in the features related to each node type, to obtain better representations of nodes. Hence, in heterogeneous GNNs node embeddings are learned for each node type and messages are exchanged along each edge type. Then, the partial node representations derived for each edge type in which a node is involved are mixed together through an aggregation scheme. Formally, we denote by \(H^{(L)}=\{h_{v}^{(L)}\}_{v\in V}\) the embedding matrix for all the nodes after applying an \(L\)-layer GNN. The \(l\)-th layer of a heterogeneous GNN, \(H^{(l)}\), can be written as:
\[h_{v}^{(l)}=\bigoplus_{r\in R}f_{\theta}^{(l,r)}(h_{v}^{(l-1)},\{h_{w}^{(l-1) }:w\in\mathcal{N}^{(r)}(v)\})\]
where \(\mathcal{N}^{(r)}(v)\) denotes the neighborhood of \(v\in V\) under relation \(r\in R\), \(f_{\theta}^{(l,r)}\) denotes the message passing operator for layer \(l\) and relation \(r\), and \(\bigoplus\) is the aggregation scheme to use for grouping node embeddings generated by different relations. In the following sections, we will also refer to partial views of the embedding matrix w.r.t. types. Specifically, we will use \(H^{(l,r)}\) to denote the partial embeddings related to a relation type \(r\in R\) and \(H^{(l,a)}\) to denote the node embedding matrix related only to a specific node type \(a\in A\).
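To make the per-relation message passing and the semantic aggregation \(\bigoplus\) concrete, here is a deliberately small, dense-adjacency sketch in PyTorch; it assumes a single node type and uses a plain mean as the aggregation scheme, whereas real heterogeneous GNNs such as HAN use attention-based aggregation.

```python
import torch
import torch.nn as nn

class HeteroLayer(nn.Module):
    """One heterogeneous GNN layer: a message-passing operator f^{(l,r)} per relation, then an aggregation (here: mean)."""
    def __init__(self, relations, d_in, d_out):
        super().__init__()
        self.f = nn.ModuleDict({r: nn.Linear(d_in, d_out) for r in relations})

    def forward(self, h, adj):                      # adj[r] is the dense N x N adjacency of relation r
        partial = []
        for r, lin in self.f.items():
            A = adj[r] + torch.eye(h.size(0))       # self-loop so h_v^{(l-1)} itself contributes
            A = A / A.sum(dim=1, keepdim=True)
            partial.append(torch.relu(lin(A @ h)))  # partial embeddings H^{(l,r)}
        return torch.stack(partial).mean(dim=0)     # semantic aggregation over relation types

layer = HeteroLayer(["follow", "comment"], d_in=16, d_out=8)
h = layer(torch.randn(10, 16),
          {r: torch.bernoulli(torch.full((10, 10), 0.2)) for r in ["follow", "comment"]})
```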
**From heterogeneous GNNs to temporal heterogeneous GNNs.** Figure 1 shows the proposed DURENDAL framework to generalize any heterogeneous GNN to a dynamic setting. Following the ROLAND [5] model design principle, the node embeddings at different GNN layers are hierarchical node states which are recurrently updated over time through customizable embedding modules. To allow easy repurposing of any heterogeneous GNN to a temporal setting, we introduce heterogeneous hierarchical node states and customizable semantic aggregation schemes, which define how partial node representations for each relation type are aggregated. In this way, modern architectural design options such as skip connections or attention mechanisms can be easily incorporated. Node embeddings can be updated using a moving average, a two-layer MLP, or a GRU Cell. A suitable option for the
semantic aggregation scheme could involve semantic-level attention coefficients [22]. The forward computation of the \(l\)-th layer of DURENDAL at snapshot \(t\) for a node \(v\), \(h_{v_{t}}^{(l)}\), can be written as:
\[h_{v_{t}}^{(l)}=\bigoplus_{r\in R}\mathrm{UPDATE}(f_{\theta}^{(l,r)}(h_{v_{t}}^{ (l-1)},\{h_{w}^{(l-1)}:w\in\mathcal{N}^{(r,t)}(v)\}),h_{v_{t-1}}^{(l)}) \tag{1}\]
where UPDATE (UPD in Figure 1) is a custom update function and \(\mathcal{N}^{(r,t)}(v)\) is the neighbourhood of \(v\) on the relation \(r\) at time \(t\).
**Updating schemas: Update-Then-Aggregate and Aggregate-Then-Update.** As shown in Eq. 1, node states are first updated over time and then aggregated along the different semantic levels, i.e. relation types. We denote this solution as _Update-Then-Aggregate_ scheme - UTA. This scheme provides a rich representation of temporal heterogeneous information. Indeed, it captures relational temporal dynamics by preserving partial node states that are updated through several embedding modules, one for each relation type. Furthermore, thanks to the heterogeneous node states, it is more suited for continual learning [24] settings and it allows partial update scenarios, i.e. feeding the model with a new batch of data related to a specific subset of relations or node types. In contrast, an _Aggregate-Then-Update_ (ATU) scheme can be used to first aggregate the partial representation of nodes and then update the node states using a single update module. Formally, the forward computation of DURENDAL with the _Aggregate-Then-Update_ scheme can be written as:
\[h_{v_{t}}^{(l)}=\mathrm{UPDATE}(\bigoplus_{r\in R}f_{\theta}^{(l,r)}(h_{v_{t} }^{(l-1)},\{h_{w}^{(l-1)}:w\in\mathcal{N}^{(r,t)}(v)\}),h_{v_{t-1}}^{(l)}) \tag{2}\]
This second solution loses the heterogeneity of the information from the past because it updates the node embeddings only at the semantic-aggregated level. However, it is useful to reduce the memory footprint of the model when modeling relational temporal dynamics is not beneficial (see Appendix
Figure 1: DURENDAL model design. (a) Scheme of the computation behind a heterogeneous GNN layer. (b) Compact representation of scheme (a) within the GRAPHEDM paradigm [23]. (c) DURENDAL framework with the _Update-Then-Aggregate_ scheme: the orange layer (temporal layer) updates over time the hierarchical node state of each relation type (returned by the first two layers in (b)); then the aggregation scheme (yellow) is run on top of the temporal layer. In the _Aggregate-Then-Update_ scheme, the temporal layer and the aggregation scheme are swapped.
for use case examples). Moreover, utilizing a single embedding update module reduces the number of learnable parameters, thereby mitigating the model's susceptibility to overfitting.
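The difference between the two schemes reduces to where the temporal UPDATE is applied. Below is a minimal sketch of both, using a moving-average update (one of the options mentioned above) and a mean semantic aggregation; the per-relation state dictionary kept by UTA versus the single state kept by ATU is the point being illustrated, not the paper's actual modules.

```python
import torch

ema = lambda new, old, alpha=0.9: alpha * new + (1 - alpha) * old   # moving-average UPDATE
mean_agg = lambda tensors: torch.stack(tensors).mean(dim=0)         # semantic aggregation

def uta_step(partial_new, partial_old):
    """Update-Then-Aggregate (Eq. 1): one temporal update per relation, then aggregate."""
    updated = {r: ema(partial_new[r], partial_old[r]) for r in partial_new}
    return updated, mean_agg(list(updated.values()))                 # keeps per-relation states

def atu_step(partial_new, h_old):
    """Aggregate-Then-Update (Eq. 2): aggregate relations first, keep one aggregated state."""
    return ema(mean_agg(list(partial_new.values())), h_old)

rels = ["follow", "comment"]
new = {r: torch.randn(4, 8) for r in rels}
states, h_uta = uta_step(new, {r: torch.zeros(4, 8) for r in rels})
h_atu = atu_step(new, torch.zeros(4, 8))
```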
**Scalability.** To train DURENDAL, it is not necessary to keep the whole temporal heterogeneous network in memory. Indeed, we use the live-update setting for training DURENDAL. The live-update setting is an incremental training approach in which the model is fine-tuned and then tested on each snapshot. Hence, given a new graph snapshot \(G_{t}\), since the hierarchical node states \(H_{t-1}\) have already encoded information up to time \(t-1\), only \(G_{t}\), the GNN model, and \(H_{t-1}\) must be stored in CPU/GPU memory in order to train the model and make predictions at time \(t\). In addition, if we adopt the _Update-Then-Aggregate_ scheme, we can easily split the computation for each relation type from the input up to the aggregation layer. This splitting allows us to _i)_ parallelize the training procedure over the different semantic levels of the network; and _ii)_ keep in memory only the portion of the GNN model, node states, and new data related to a specific semantic level.
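Schematically, the incremental (live-update) procedure only ever needs the current snapshot and the previous node states in memory; `fine_tune` and `evaluate` below are placeholder callables used for illustration, not the released code.

```python
def live_update(model, snapshots, fine_tune, evaluate):
    """Fine-tune on G_t starting from H_{t-1}, then test on G_{t+1}, for every snapshot."""
    states, scores = None, []
    for t in range(len(snapshots) - 1):
        states = fine_tune(model, snapshots[t], states)             # uses only G_t, the model, H_{t-1}
        scores.append(evaluate(model, snapshots[t + 1], states))    # prediction on the next snapshot
    return scores                                                   # per-snapshot metrics (live-update)
```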
## 4 Temporal heterogeneous networks dataset
Here we present four THN datasets to evaluate the performance of graph machine-learning models on future link prediction tasks. The datasets serve as useful playgrounds for testing graph ML models because they provide high-resolution temporal heterogeneous information along multiple time snapshots. To the best of our knowledge, there are no current benchmark datasets for temporal heterogeneous graph learning.
**Dataset requirements.** We define some minimal requirements graph datasets must meet to be considered suitable for evaluating temporal heterogeneous graph learning. Specifically, we introduce three simple metrics measuring different properties of the data: _heterogeneity_, _temporality_, and _evolutivity_. Heterogeneity is the number of relation types available in the dataset, temporality is the number of graph snapshots, and evolutivity is the average number of new links per snapshot (i.e. \(\frac{1}{T-1}\sum_{t=1}^{T}|E_{t}|\)). We require a value of heterogeneity greater than or equal (g.e.q.) to two (by definition of heterogeneous graphs), temporality g.e.q. to four (the minimum number of snapshots that allows live-update evaluation [5]), and evolutivity g.e.q. to zero (i.e. edges have timestamps). Furthermore, we define _time-granularity_ as the duration of the time interval over which a graph snapshot is constructed, but we do not impose a minimum value for this metric.
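The three requirements can be checked mechanically from the snapshot edge sets; the small helper below is one way to do so (the averaging convention for evolutivity follows our reading of the formula above, where every edge of snapshot \(t\) counts as new).

```python
def meets_requirements(snapshot_edges, relation_types):
    """snapshot_edges: list of edge sets E_1..E_T; relation_types: the set R."""
    heterogeneity = len(relation_types)                                    # must be >= 2
    temporality = len(snapshot_edges)                                      # must be >= 4
    evolutivity = sum(len(E) for E in snapshot_edges) / (temporality - 1)  # avg new links per snapshot
    return heterogeneity >= 2 and temporality >= 4 and evolutivity >= 0
```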
**Our datasets.** To cope with the above issue, we present four THN datasets that satisfy our requirements. The first two datasets are part of a well-established suite for benchmarking knowledge base completion tasks, while the remaining two are introduced in this work to extend the benchmark set for THNs.
* GDELT18, ICEWS18: the Global Database of Events, Language, and Tone and the Integrated Crisis Early Warning System used by [16]. The two datasets are event knowledge graphs in which nodes are actors and edges are verbs between actors. They are used to evaluate temporal knowledge base completion [3] tasks and are available in the most used graph representation learning libraries. We process the data according to [16] and then construct graph snapshots with time granularity equal to one week for GDELT and one month for ICEWS. Since most of the verbs have no instances in the original datasets, we select only the top 20 most frequent verbs. Actor and verb codenames follow the CAMEO ontology.1 Footnote 1: [https://parusanalytics.com/eventdata/data.dir/cameo.html](https://parusanalytics.com/eventdata/data.dir/cameo.html), September 2023
* TaobaoTH: a temporal heterogeneous network we derive from the user behaviour log of the Taobao e-commerce platform. Nodes represent users, items, and categories, while edges between users and items record user actions on items (e.g. viewing an item, adding it to the cart or to the favorites, buying it), each with a timestamp. Moreover, edges between items and categories assign each item to its set of categories. We construct heterogeneous graph snapshots with time granularity equal to five minutes. We consider a heterogeneous subgraph induced by 250k randomly sampled items for scalability issues.
* SteemitTH: a temporal heterogeneous network we collected from Steemit, an emerging blockchain-based (Web3) social platform. Nodes represent users, and edges capture different types of timestamped interactions between users, including the social "follow" relation and financial operations. The heterogeneous graph snapshots have a monthly time granularity. The starting date corresponds to when the "follow" operation was made available on the platform. We also collected the textual content produced by users, which is used to build a feature vector for each node (more details in the Appendix).
We report some dataset statistics in Table 1. The number of nodes and edges refers to the whole graph.
## 5 Experimental evaluation
**Tasks.** The future link prediction problem arises in many different applications and domains. When it comes down to heterogeneous graphs, link prediction can be performed on the whole set of relation types (e.g. Knowledge Base Completion [3], multirelational link prediction [28]) or on a specific relation. We conducted our evaluation considering both kinds of link prediction tasks. Specifically, given all the graph snapshots up to time \(t\) and a candidate pair of nodes \((u,v)\), the _monorelational future link prediction_ task consists of finding if \((u,v)\) are connected through a given relation \(r\) in a future snapshot \(t+1\); while the _multirelational future link prediction task_ involves any \(r\) in the set of all the possible relations. For the monorelational tasks, we focus on a specific relation type for each dataset to study how the models can learn from past information and current heterogeneous interactions between nodes to predict predefined future relations. This choice allows us to analyze the prediction performance in real-application scenarios on general heterogeneous graphs, i.e. graphs that are not KGs, as in the case of SteemitTH and TaobaoTH. Specifically, we perform the following future link prediction tasks: _i)_ "follow" link prediction between users for SteemitTH; _ii)_ "buy" link prediction between users and items for TaobaoTH; and _iii)_ public statements prediction from one actor to another (e.g. governments) for GDELT18 and ICEWS18, according to the CAMEO ontology. For the multirelational tasks, we focus on the event KGs as they represent two standard benchmark datasets for heterogeneous graph learning. Moreover, considering problems different from "user-follow-user" and "user-buy-item" prediction could be not so interesting and meaningful for SteemitTH and TaobaoTH.
**Experimental setup.** We evaluate the DURENDAL framework over the future link prediction task. At each time \(t\), the model utilizes information up to time \(t\) to predict edges in the snapshot \(t+1\). We use the area under the precision-recall curve (AUPRC) and the mean reciprocal rank (MRR) to evaluate the performance of the models. As a standard practice [12], we perform random negative sampling to obtain an equal number of positive and negative edges3. We consider the live-update setting [5] for the evaluation of the models by which we assess their performance over all the available snapshots. We randomly choose \(20\%\) of edges in each snapshot to determine the early-stopping condition. It is worth noting that in SteemitTH we also use the node features derived from the textual content, while in the other settings, node features are not available. We rely on HadamardMLPs [29] and ComplEx [30] as decoders for monorelational and multirelational link prediction as both demonstrated their effectiveness compared to other link prediction decoders [29; 31; 32]. For the multirelation link prediction experiments, we rely on the experimental setting presented by [28]. We
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Dataset & \(|V|\) & \(|E|\) & \(|R|\) & \(|T|\) & time-granularity & evolutivity \\ \hline GDELT18 & 4,931 & 2,026 & 20 & 4 & week & 0.263 \\ ICEWS18 & 11,775 & 7,333 & 20 & 7 & month & 0.139 \\ TaobaoTH & 359,997 & 210,448 & 5 & 288 & 5min & 0.003 \\ SteemitTH & 20,849 & 1,832,570 & 4 & 5 & month & 0.177 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics. Evolutivity is divided by \(|E|\).
compute the AUPRC and MRR score for each relation type, averaging the performance over all the relations to obtain the final evaluation scores. To extend this setting to THNs, we repeat the procedure for each snapshot using the live-update evaluation. Code, datasets, and all the information about the experiments are available in our repository4.
Footnote 4: [https://anonymous.4open.science/r/durendal-5154/](https://anonymous.4open.science/r/durendal-5154/)
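For the monorelational tasks, the decoder scores a candidate pair from the element-wise product of its endpoint embeddings. The sketch below reflects our reading of a Hadamard-product-plus-MLP link decoder in the spirit of [29]; hidden sizes and the exact architecture used in the repository may differ.

```python
import torch
import torch.nn as nn

class HadamardMLPDecoder(nn.Module):
    """Link decoder: MLP over the Hadamard (element-wise) product of the two node embeddings."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, h_src: torch.Tensor, h_dst: torch.Tensor) -> torch.Tensor:
        return self.mlp(h_src * h_dst).squeeze(-1)    # one logit per candidate edge

dec = HadamardMLPDecoder(dim=32)
scores = dec(torch.randn(100, 32), torch.randn(100, 32))
```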
**Baselines.** We compare DURENDAL to nine baseline models, considering at least one candidate for homogeneous, heterogeneous, static, and dynamic graph learning. Among the static graph learning models, we compare DURENDAL with solutions that utilize an attention mechanism, whose great potential has been well demonstrated in various applications [33, 34, 18]. For temporal graph learning models, we compare the performance with temporal GNNs [6] as well as walk-aggregating methods [35]. Specifically, we select the following candidates: _GAT_[36], _HAN_[22], _EvolveGCN_[12], _GCRN-GRU_, _TGN_[37], _CAW_[38], and HetEvolveGCN (a new baseline we developed for snapshot-based THNs, see Appendix). For multirelational link prediction, baselines need to handle heterogeneous graphs. Hence, we consider HAN, HetEvolveGCN, and two additional baselines based on tensor factorization, which have demonstrated strong prediction power on knowledge graph link prediction [31, 32, 3]: _ComplEx_[30] and _TNTComplEx_[39]. A brief description of the baselines is provided in the Appendix. All the candidate models have been trained using the incremental training procedure.
**Results for monorelational link prediction.** Table 2 shows the prediction performance of the candidate models on the monorelational future link prediction tasks 5. We report the average AUPRC and MRR over the different snapshots. DURENDAL achieves better performance than the baselines on three of the four datasets. On GDELT18 and ICEWS18, all dynamic models achieve performances around 90% because they leverage temporal information related to events, which is crucial for predicting future public statements. DURENDAL, which achieves the best performance overall, gains the greatest advantage from the semantics related to events, i.e. the different relation types. On SteemitTH, all the models obtain strong performance; DURENDAL, by exploiting the information derived from node attributes, timestamps on edges, and semantic relations, reaches an AUPRC of \(0.982\) and an MRR of \(0.891\). On TaobaoTH, we obtain surprising results. The best performance is achieved by HAN, which does not leverage temporal information apart from the incremental training. TGN and CAW achieve notably worse prediction performance than the heterogeneous GNNs, while EvolveGCN, GCRN-GRU, and HetEvolveGCN obtain poor performance. DURENDAL reaches good performance using an embedding update module that simply computes a convex combination between the past and the current representation of nodes, with a past coefficient no greater than \(0.1\). The same results are obtained using a time granularity of one or ten minutes. Hence, predicting future "buy" relations seems to depend only on the other actions performed by users on items (viewing an item, adding it to favorites or to the cart) in the previous snapshot, not on the order in which they are carried out, nor on their repetition over time. This result is surprising because the more sophisticated dynamic models seem to give too much importance to past information without learning this simple structural pattern. However, it is important to note that TaobaoTH has a very low evolutivity value, equal to \(0.003\). Finally, it is worth noticing that TGN and CAW reach worse performance than at least one snapshot-based baseline on three of the four datasets. In our intuition, their continuous-time representation of temporal networks is not beneficial in application scenarios where datasets are snapshot-based.
Footnote 5: The official implementations for TGN and CAW do not compute the MRR for evaluating their performance. On GDELT18, CAW obtains nan values as AUPRC score
**Results for multirelational link prediction.** Table 3 shows the prediction performance of the candidate models on the multirelational future link prediction tasks. We report the average AUPRC and MRR over the different snapshots. DURENDAL performs better than the baselines on both datasets with at least one of the two update schemes. The results highlight the importance of having two different update schemes for temporal knowledge graph forecasting [40]. On GDELT18, the best performance is achieved using the _Update-Then-Aggregate_ scheme, i.e. preserving partial node states to capture relational temporal dynamics. Indeed, due to the significant temporal variability of the Global Database of Events, datasets extracted from GDELT are considered more challenging than the ones collected from ICEWS [41, 42]. GDELT18 also exhibits the highest evolutivity rate in Table 1. Hence, using different embedding update modules for different relations is beneficial to predict its evolution. On ICEWS18, preserving partial node states leads to slightly worse results. In our intuition, as highlighted for other datasets collected from the ICEWS system, ICEWS18 requires more entity-driven predictions, as the relations in these datasets are sparse and they usually encode
one-time patterns with limited, if any, regularity, e.g., official visits, or negotiations [41]. Hence, by focusing on the evolution of the entity embeddings instead of modeling relational temporal dynamics, DURENDAL with the _Aggregate-Then-Update_ scheme achieves the best results. It is worth noting that factorization-based models, typically used for temporal knowledge graph completion [3] (i.e. missing temporal link prediction), achieve good performance on these temporal knowledge graph forecasting tasks, often outperforming other GNN baselines on both datasets.
**Effectiveness of model design.** DURENDAL can easily repurpose any heterogeneous GNN to a dynamic setting thanks to its model design. Here we study the prediction performance of different heterogeneous GNNs repurposed with DURENDAL. Specifically, we repurpose RGCN [43], HAN [22], and HGT [18]. Node embeddings are updated using ConcatMLP [5] or a GRU cell, following the _Aggregate-Then-Update_ scheme. Figure 2a shows the AUPRC distributions of the models on the "follow" link prediction task on SteemitTH. The results show that an attention-based aggregation scheme for heterogeneous graph learning is a valuable choice in GNN architectural design. Indeed, HAN and HGT achieve the best results and their AUPRC distributions exhibit low variance. Furthermore, ConcatMLP seems preferable to a GRU Cell because it obtains better results with negligible variation. Lastly, the DURENDAL model design helps HAN to reach better results: the worst result in its AUPRC distribution is \(0.979\), which is better than the average result of "vanilla" HAN, \(0.974\) (see Table 2).
**Effectiveness of update schemes.** We also study the effectiveness of the two different update schemes described in Section 3. Table 4 reports the prediction performance of DURENDAL models with _Update-Then-Aggregate_, _Aggregate-Then-Update_, and the ROLAND update, i.e. no heterogeneous update. The update schemes of DURENDAL perform better than the ROLAND update scheme. In particular, _Update-Then-Aggregate_ seems preferable to _Aggregate-Then-Update_ when the time granularity of the dataset is coarser, and vice-versa. Finally, we also show the prediction performance snapshot by snapshot for "follow" link prediction on SteemitTH in Figure 2b. In this context,
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline & \multicolumn{2}{c}{GDELT18} & \multicolumn{2}{c}{ICEWS18} & \multicolumn{2}{c}{TaobaoTH} & \multicolumn{2}{c}{SteemitTH} \\ & AUPRC & MRR & AUPRC & MRR & AUPRC & MRR & AUPRC & MRR \\ \hline GAT & 0.488 & 0.506 & 0.477 & 0.506 & 0.500 & 0.500 & 0.940 & 0.845 \\ HAN & 0.564 & 0.601 & 0.561 & 0.566 & **0.996** & **0.996** & 0.974 & 0.859 \\ EvolveGCN & 0.933 & 0.864 & 0.930 & 0.898 & 0.500 & 0.500 & 0.979 & **0.895** \\ GCRN-GRU & 0.935 & 0.806 & 0.873 & 0.816 & 0.500 & 0.500 & 0.950 & 0.855 \\ TGN & 0.908 & - & 0.916 & - & 0.710 & - & 0.889 & - \\ CAW & N/A & - & 0.893 & - & 0.518 & - & 0.907 & - \\ HetEvolveGCN & 0.877 & 0.855 & 0.934 & 0.922 & 0.5 & 0.5 & 0.977 & 0.879 \\ DURENDAL & **0.947** & **0.930** & **0.986** & **0.981** & 0.995 & 0.993 & **0.982** & 0.891 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on the monorelational future link predictions tasks in terms of AUPRC and MRR averaged over time. We run the experiments using 3 random seeds, reporting the average result for each model. Results for TGN and CAW are obtained using their official implementations in a live-update setting.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & \multicolumn{2}{c}{GDELT18} & \multicolumn{2}{c}{ICEWS18} \\ & AUPRC & MRR & AUPRC & MRR \\ \hline HAN & 0.608 & 0.704 & 0.618 & 0.710 \\ HetEvolveGCN & 0.628 & 0.664 & 0.611 & 0.653 \\ ComplEx & 0.527 & 0.705 & 0.505 & 0.699 \\ TNTComplEx & 0.540 & **0.744** & 0.525 & 0.743 \\ DURENDAL-UTA & **0.672** & 0.743 & 0.677 & 0.745 \\ DURENDAL-ATU & 0.660 & 0.730 & **0.693** & **0.749** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on the multirelational future link prediction tasks in terms of AUPRC and MRR averaged over time. We report the average result for each method over experiments with 3 different random seeds.
_Update-Then-Aggregate_ dominates the other update schemes, but _Aggregate-Then-Update_ is still a profitable choice for learning on multiple snapshots.
## 6 Conclusion
We propose DURENDAL, a snapshot-based graph deep learning framework for learning from temporal heterogeneous networks. Inspired by the ROLAND framework for dynamic homogeneous graphs, DURENDAL can help to easily repurpose any heterogeneous graph learning model to dynamic graphs, including training strategies and evaluation settings for evolving data. To ease repurposing, DURENDAL introduces heterogeneous hierarchical node states and customizable semantic aggregation schemes. We also introduce two different update schemes, highlighting the strengths and weaknesses of both in terms of scalability, memory footprint, and learning power. To evaluate our framework, we describe the minimum requirements a benchmark should satisfy to be a useful testing ground for temporal heterogeneous GNN models, and we extend the current set of benchmarks for THNs by introducing two novel high-resolution temporal heterogeneous graph datasets. We evaluate DURENDAL over the future link prediction task using incremental training and live-update evaluation over time. Experiments show the prediction power of DURENDAL over four THN datasets, which exhibit different time granularity, number of snapshots, and new incoming links. Moreover, we show the effectiveness of the DURENDAL model design by enhancing the prediction performance of heterogeneous GNN models by repurposing them in our framework.
|
2309.09271 | Homological Shift Ideals: Macaulay2 Package | We introduce the Macaulay2 package HomologicalShiftIdeals. It allows to
compute the homological shift ideals of a monomial ideal, and to check the
homological shift properties, including having linear resolution, having linear
quotients, or being polymatroidal. The theory behind these concepts is
explained and the main features of the package are presented. | Antonino Ficarra | 2023-09-17T13:35:12Z | http://arxiv.org/abs/2309.09271v1 | # Homological Shift Ideals: Macaulay2 Package
###### Abstract.
We introduce the _Macaulay2_ package HomologicalShiftIdeals. It allows one to compute the homological shift ideals of a monomial ideal, and to check the homological shift properties, including having linear resolution, having linear quotients, or being polymatroidal. The theory behind these concepts is explained and the main features of the package are presented.
Key words and phrases: monomial ideals, homological shift ideals, linear quotients.
2020 Mathematics Subject Classification: Primary 13F20; Secondary 13F55, 05C70, 05E40.
## 1. Introduction
Let \(S=K[x_{1},\ldots,x_{n}]\) be the standard graded polynomial ring with coefficients in a field \(K\). Let \(I\) be a monomial ideal of \(S\). For a vector \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{Z}_{\geq 0}^{n}\), we set \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\). Note that, as a \(S\)-module, \(I\) is multigraded. Hence, the minimal free resolution of \(I\), \(\mathbb{F}:\cdots\to F_{i}\to\cdots\to F_{0}\to I\to 0\), is naturally multigraded. Thus, \(F_{i}=\bigoplus_{\mathbf{a}}S(-\mathbf{a})^{\beta_{i,\mathbf{a}}(I)}\) for all \(i\), where \(\beta_{i,\mathbf{a}}(I)\) is a multigraded Betti number. The \(i\)th _homological shift ideal_ of \(I\) is the monomial ideal defined as
\[\operatorname{HS}_{i}(I)\ =\ (\mathbf{x^{a}}\ :\ \beta_{i,\mathbf{a}}(I)\neq 0).\]
Homological shift ideals have been introduced in [11], and attracted the interest of many researchers [1, 2, 3, 5, 6, 7, 8, 12]. The main purpose of this theory is to understand those properties shared by all homological shift ideals of a given monomial ideal. We call these properties the _homological shift properties_.
One of the driving motivations in this line of research is the _Bandari-Bayati-Herzog conjecture_ [11], which asserts that the homological shift ideals of a polymatroidal ideal are again polymatroidal. This conjecture is widely open. However, the conjecture was proved by Bayati for squarefree polymatroidal ideals [1], by Herzog, Moradi, Rahimbeigi and Zhu for polymatroidal ideals that satisfy the strong exchange property [11], and by the author and Herzog for polymatroidal ideals generated in degree 2 [8]. Furthermore, it was shown by the author in [7, Theorem 2.2] that \(\operatorname{HS}_{1}(I)\) is always polymatroidal if \(I\) is such, pointing towards the validity of the conjecture in general. This latter result was also recently recovered by Bayati in [2, Corollary 2.2].
Another interesting conjecture about the homological shifts of powers of the cover ideals of Cohen-Macaulay very well-covered graphs was recently formulated in [5, 6] and proved in some special cases, including bipartite and whisker graphs.
In the present paper, we illustrate and explain how to use the _Macaulay2_[9] package HomologicalShiftIdeals. In Section 2 the mathematical background needed to develop some of the algorithms of the package is explained. In Section 3, some examples are presented, illustrating how to use the functions of the package.
## 2. Mathematical background
Let \(S=K[x_{1},\ldots,x_{n}]\) be the standard graded polynomial ring over a field \(K\). We set \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\) for \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{Z}_{\geq 0}^{n}\). The vector \(\mathbf{a}\) is called the _multidegree_ of \(\mathbf{x^{a}}\), whereas \(\deg(\mathbf{x^{a}})=a_{1}+a_{2}+\cdots+a_{n}\) is its _degree_.
Let \(I\) be a monomial ideal of \(S\). Since \(I\) is multigraded, the minimal free resolution is multigraded as well, say
\[\mathbb{F}\ :\quad\cdots\to F_{i}\to\cdots\to F_{0}\to I\to 0,\]
where \(F_{i}=\bigoplus_{\mathbf{a}}S(-\mathbf{a})^{\beta_{i,\mathbf{a}}(I)}\) for all \(i\), and where \(\beta_{i,\mathbf{a}}(I)\) is the \((i,\mathbf{a})\)th multigraded Betti number of \(I\). The vectors \(\mathbf{a}\in\mathbb{Z}_{\geq 0}^{n}\) such that \(\beta_{i,\mathbf{a}}(I)\neq 0\) are called the \(i\)th _multigraded shifts_ of \(I\). The _projective dimension_ of \(I\) is defined as the integer \(\operatorname{pd}(I)=\max\{i:\beta_{i}(I)\neq 0\}\). Whereas, the (_Castelnuovo-Mumford_) _regularity_ of \(I\) is the integer \(\operatorname{reg}(I)=\max\{\deg(\mathbf{x^{a}})-i:\beta_{i,\mathbf{a}}(I)\neq 0\}\).
**Definition 2.1**.: The \(i\)th _homological shift ideal_ of \(I\) is the monomial ideal
\[\operatorname{HS}_{i}(I)\ =\ (\mathbf{x^{a}}\ :\ \beta_{i,\mathbf{a}}(I)\neq 0).\]
Note that \(\operatorname{HS}_{0}(I)=I\) and \(\operatorname{HS}_{i}(I)=(0)\) for \(i<0\) and \(i>\operatorname{pd}(I)\).
The main purpose of the theory is to determine those properties enjoyed by all \(\operatorname{HS}_{j}(I)\). We call these properties the _homological shift properties_ of \(I\).
Let \(I\subset S\) be a monomial ideal, and let \(G(I)\) be its unique minimal monomial generating set. The _initial degree_ of \(I\) is \(\operatorname{indeg}(I)=\min\{\deg(u):u\in G(I)\}\).
**Definition 2.2**.: Let \(I\subset S\) be a monomial ideal, and let \(G(I)=\{u_{1},\ldots,u_{m}\}\).
1. \(I\) has a _linear resolution_ if \(\operatorname{indeg}(I)=\operatorname{reg}(I)\).
2. \(I\) has _linear quotients_ if there exists an order \(u_{1},\ldots,u_{m}\) of \(G(I)\) such that \((u_{1},\ldots,u_{k-1}):u_{k}\) is generated by a subset of the variables for \(k=2,\ldots,m\). In this case, \(u_{1},\ldots,u_{m}\) is called an _admissible order_ of \(I\).
3. Let \(u=\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\). The \(x_{i}\)_-degree_ of \(u\) is the integer \(\deg_{x_{i}}(u)=a_{i}\). We say that \(I\) is _polymatroidal_ if \(I\) is equigenerated and the _exchange property_ holds: for all \(u,v\in G(I)\) and all \(i\) with \(\deg_{x_{i}}(u)>\deg_{x_{i}}(v)\) there exists \(j\) such that \(\deg_{x_{j}}(u)<\deg_{x_{j}}(v)\) and \(x_{j}(u/x_{i})\in G(I)\).
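For a quick illustration of (b) and (c) (a toy example of ours, not taken from the references), consider \(I=(x^{2},xy,y^{2})\subset K[x,y]\). The order \(x^{2},xy,y^{2}\) is admissible, since \((x^{2}):xy=(x)\) and \((x^{2},xy):y^{2}=(x)\) are generated by a variable, so \(I\) has linear quotients; being the square of the maximal ideal, \(I\) is in fact polymatroidal.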
If (a), (b), or (c) is a homological shift property, we say that \(I\) has a _homological linear resolution_, has _homological linear quotients_, or is _homological polymatroidal_, respectively.
For an equigenerated monomial ideal \(I\subset S\), the following hierarchy holds:
(c) \(\Rightarrow\) (b) \(\Rightarrow\) (a).
Before we proceed, we recall some other concepts. The _support_ of a monomial \(u\in S\) is the set \(\operatorname{supp}(u)=\{x_{i}:\deg_{x_{i}}(u)>0\}\). Let \(I\subset S\) be a monomial ideal. The _support_ of \(I\) is the set \(\operatorname{supp}(I)=\bigcup_{u\in G(I)}\operatorname{supp}(u)\). We say that \(I\) is _fully supported_ (in \(S\)) if \(\operatorname{supp}(I)=\{x_{1},\ldots,x_{n}\}\). The _bounding multidegree_ of \(I\) is the vector \(\mathbf{deg}(I)=(\deg_{x_{1}}(I),\ldots,\deg_{x_{n}}(I))\in\mathbb{Z}^{n}\), defined by
\[\deg_{x_{i}}(I)\ =\ \max_{u\in G(I)}\deg_{x_{i}}(u).\]
Furthermore, the _socle_\(\operatorname{soc}(I)\) of a monomial ideal \(I\subset S\) is the set of monomials of \((I:\mathfrak{m})\setminus I\), where \(\mathfrak{m}=(x_{1},\ldots,x_{n})\). In other words, \(\operatorname{soc}(I)\) is the set of all monomials \(v\) such that \(v\notin I\) and \(x_{i}v\in I\), for \(i=1,\ldots,n\).
The purpose of the package HomologicalShiftIdeals is to provide the tools to manipulate and calculate the homological shift ideals of a monomial ideal \(I\) and to determine the homological shift properties of \(I\). The next table collects the functions available in the package and their use. \(I\subset S\) denotes a monomial ideal, \(\mathbf{a}\) an integral vector, \(\mathbf{x^{a}}\) a monomial, \(i\) an integer and \(L\) a list of monomials.
In the remaining part of this section, we explain the theory behind some of the algorithms used in the package. Given \(n\in\mathbb{N}\), we set \([n]=\{1,\ldots,n\}\). For a nonempty subset \(A\) of \([n]\), we set \(\mathbf{x}_{A}=\prod_{i\in A}x_{i}\).
We start with the function socle.
**Proposition 2.3**.: _[_11_, Proposition 1.13]_ _Let \(I\subset S\) be a monomial ideal. Then_
\[\operatorname{soc}(I)\ =\ \{\mathbf{x^{a}}\ :\ \mathbf{x^{a}}\in(I:\mathfrak{m}) \setminus I\}.\]
_In particular, \(\beta_{n-1,\mathbf{a}}(I)\neq 0\) if and only if \(\mathbf{x^{a}}/\mathbf{x}_{[n]}\in\operatorname{soc}(I)\)._
The previous result justifies the following algorithm that calculates socle\((I)\).
Step 1: Compute \(M=\mathtt{multigradedShifts}(I,n-1)\).
Step 2: Compute \(\operatorname{soc}(I)=\{w/(x_{1}x_{2}\cdots x_{n}):w\in M\}\).
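As a quick sanity check (our own toy computation, not taken from [11]), let \(S=K[x,y]\) and \(I=(x^{2},xy,y^{2})\). Then \((I:\mathfrak{m})=(x,y)\), so \(\operatorname{soc}(I)=\{x,y\}\). The minimal free resolution of \(I\) is \(0\to S(-3)^{2}\to S(-2)^{3}\to I\to 0\), whose multigraded shifts in homological degree \(n-1=1\) are \(x^{2}y\) and \(xy^{2}\); dividing by \(\mathbf{x}_{[2]}=xy\) returns exactly \(x\) and \(y\), in agreement with Proposition 2.3, and \(\operatorname{HS}_{1}(I)=(x^{2}y,xy^{2})\).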
Next, we discuss the functions hasLinearQuotients and admissibleOrder. For their implementations, we have imported the packages SimplicialComplexes, and SimplicialDecomposability. These packages use the _Alexander duality_ theory in a fruitful way [4].
\begin{table}
\begin{tabular}{l l} \hline Functions & Description \\ \hline \hline supportIdeal\((I)\) & Computes \(\operatorname{supp}(I)\) \\ isFullySupported\((I)\) & Checks whether \(I\) is fully supported (in \(S\)) \\ toMonomial\((S,\mathbf{a})\) & Computes the monomial \(\mathbf{x^{a}}\) if \(\mathbf{a}\in\mathbb{Z}_{\geq 0}^{n}\) \\ toMultidegree\((\mathbf{x^{a}})\) & Computes the multidegree \(\mathbf{a}\) of \(\mathbf{x^{a}}\) \\ boundingMultidegree\((I)\) & Computes \(\mathbf{deg}(I)\) \\ multigradedShifts\((I,i)\) & Computes the \(i\)th multigraded shifts of \(I\) \\ HS\((I,i)\) & Computes HS\({}_{i}(I)\) \\ socle\((I)\) & Computes \(\operatorname{soc}(I)\) \\ hasLinearResolution\((I)\) & Checks if \(I\) has a linear resolution \\ hasHomologicalLinearResolution\((I)\) & Checks if \(I\) has homological linear resolution \\ hasLinearQuotients\((I)\) & Checks if \(I\) has linear quotients \\ hasHomologicalLinearQuotients\((I)\) & Checks if \(I\) has homological linear quotients \\ admissibleOrder\((I)\) & Determines an admissible order of \(I\) \\ isAdmissibleOrder\((I,L)\) & Checks if \(L\) is an admissible order of \(I\) \\ isPolymatroidal\((I)\) & Checks if \(I\) is polymatroidal \\ isHomologicalPolymatroidal\((I)\) & Checks if \(I\) is homological polymatroidal \\ \hline \end{tabular}
\end{table}
Table 1. List of the functions of HomologicalShiftIdeals.
A _simplicial complex_ \(\Delta\) on the _vertex set_ \([n]\) is a family of subsets of \([n]\) such that
* \(\{i\}\in\Delta\) for all \(i\in[n]\), and
* if \(F\in\Delta\) and \(G\subseteq F\), we have \(G\in\Delta\).
The dimension of \(\Delta\) is the number \(d=\max\{|F|-1:F\in\Delta\}\). Any \(F\in\Delta\) is called a _face_ and \(|F|-1\) is the _dimension_ of \(F\). A _facet_ of \(\Delta\) is a maximal face with respect to the inclusion. The set of facets of \(\Delta\) is denoted by \(\mathcal{F}(\Delta)=\{F_{1},\ldots,F_{m}\}\). In this case we write \(\Delta=\langle F_{1},\ldots,F_{m}\rangle\) and say that \(F_{1},\ldots,F_{m}\)_generates_\(\Delta\). We say that \(\Delta\) is _pure_ of dimension \(d\) if all facets of \(\Delta\) have dimension \(d\). The _Alexander dual_ of \(\Delta\) is the simplicial complex (see [10, Lemma 1.5.2]) defined by
\[\Delta^{\vee}\ =\ \{[n]\setminus F\ :\ F\notin\Delta\}.\]
A monomial \(u\in S\) is _squarefree_ if \(\deg_{x_{i}}(u)\leq 1\), for all \(i\in[n]\). A monomial ideal \(I\subset S\) is _squarefree_ if each \(u\in G(I)\) is squarefree. It is well known that for any squarefree ideal \(I\subset S\) there exists a unique simplicial complex \(\Delta\) on \([n]\) such that \(I=I_{\Delta}\), where \(I_{\Delta}=(\mathbf{x}_{F}:F\subseteq[n],F\notin\Delta)\) is the _Stanley-Reisner ideal_ of \(\Delta\)[10].
Now, we establish the connection between squarefree monomial ideals with linear quotients and the _shellability_ of simplicial complexes. Recall that \(\Delta\) is _shellable_ if there exists an order \(F_{1},F_{2},\ldots,F_{m}\) of its facets \(\mathcal{F}(\Delta)\) such that
\[\langle F_{1},\ldots,F_{k-1}\rangle\cap\langle F_{k}\rangle\]
is pure of dimension \(\dim(F_{k})-1\) for \(k=2,\ldots,m\). Any order of the facets of \(\Delta\) satisfying the conditions above is called a _shelling order_ of \(\Delta\). The following result shows that admissible orders and shelling orders are essentially the same thing.
**Theorem 2.4**.: _[_10_, Proposition 8.2.5]_ _The following conditions are equivalent._
* \(I_{\Delta}\) _has linear quotients._
* _The Alexander dual_ \(\Delta^{\vee}\) _of_ \(\Delta\) _is shellable._
_Furthermore, \(F_{1},\ldots,F_{m}\) is a shelling order of the Alexander dual \(\Delta^{\vee}\) of \(\Delta\), if and only if, \(\mathbf{x}_{[n]\setminus F_{1}},\ldots,\mathbf{x}_{[n]\setminus F_{m}}\) is an admissible order of \(I_{\Delta}\)._
The previous theorem provides an algorithm to determine an admissible order of a squarefree monomial ideal with linear quotients. In order to extend the above result to all (not necessarily squarefree) monomial ideals we use _polarization_.
Let \(u=\mathbf{x^{a}}\) be a monomial of \(S\). The _polarization_ of \(u\) is the monomial
\[u^{\wp}\ =\ \prod_{i=1}^{n}(\prod_{j=1}^{a_{i}}x_{i,j})\ =\ \prod_{\begin{subarray}{c}i=1, \ldots,n\\ a_{i}>0\end{subarray}}x_{i,1}x_{i,2}\cdots x_{i,a_{i}}.\]
Let \(S^{\wp}=K[x_{i,j}:i\in[n],j\in[\deg_{x_{i}}(I)]]\). The _polarization_ of \(I\) is the monomial ideal \(I^{\wp}\subset S^{\wp}\) with minimal generating set \(G(I^{\wp})=\{u^{\wp}:u\in G(I)\}\).
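For example (a small illustration of ours), if \(u=x_{1}^{2}x_{2}\in K[x_{1},x_{2}]\), then \(u^{\wp}=x_{1,1}x_{1,2}x_{2,1}\); and for \(I=(x_{1}^{2},x_{1}x_{2})\) we have \(\mathbf{deg}(I)=(2,1)\), hence \(S^{\wp}=K[x_{1,1},x_{1,2},x_{2,1}]\) and \(I^{\wp}=(x_{1,1}x_{1,2},x_{1,1}x_{2,1})\).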
The following lemma, taken from [5, Lemma 4.10], is pivotal.
**Lemma 2.5**.: _Let \(I\subset S\) be a monomial ideal with \(G(I)=\{u_{1},\ldots,u_{m}\}\) and having linear quotients. Then, \(u_{1},\ldots,u_{m}\) is an admissible order of \(I\) if and only if \(u_{1}^{\wp},\ldots,u_{m}^{\wp}\) is an admissible order of \(I^{\wp}\). In particular, \(I\) has linear quotients if and only if \(I^{\wp}\) has linear quotients._
Theorem 2.4 and Lemma 2.5 justify the next algorithm for hasLinearQuotients, that determines whether a monomial ideal has linear quotients or not.
Step 1: Compute \(I^{\wp}\).
Step 2: Using SimplicialComplexes compute the Alexander dual \(\Delta^{\vee}\), where \(I^{\wp}=I_{\Delta}\).
Step 3: Using SimplicialDecomposability determine if \(\Delta^{\vee}\) is shellable. If the answer is yes then hasLinearQuotients\((I)=\)true, otherwise \(=\)false.
For the function admissibleOrder as above we begin with Step 1, 2 and 3. If hasLinearQuotients\((I)=\)false, then \(I\) does not have linear quotients. Otherwise, we complete our algorithm with the next two steps.
Step 4: Using SimplicialDecomposability compute a shelling order \(F_{1},\ldots,F_{m}\) of the Alexander dual \(\Delta^{\vee}\), where \(I_{\Delta}=I^{\wp}\).
Step 5: Determine the associated admissible order \(u_{1}^{\wp},\ldots,u_{m}^{\wp}\) of \(I^{\wp}\) and _depolarize_ it (by the substitutions \(x_{i,j}\mapsto x_{i}\) for all \(i\) and \(j\)) to obtain an admissible order of \(I\).
## 3. Examples
In this final section, we present some examples to illustrate how to use the package.
Let \(I=(abd,abf,ace,adc,aef,bde,bcf,bce,cdf,def)\subset\mathbb{Q}[a,\ldots,f]\). Using the package we can check that \(I\) has linear resolution but not linear quotients as follows.
i1: S = QQ[a..f];
i2: I = ideal(a*b*d, a*b*f, a*c*e, a*d*c, a*e*f, b*d*e, b*c*f, b*c*e, c*d*f, d*e*f);
i3: loadPackage "HomologicalShiftIdeals"
i4: hasLinearResolution I
o4: true
i5: hasLinearQuotients I
o5: false
In [8, Theorem 1.4] we proved that for any monomial ideal generated in a single degree and having linear quotients, then HS\({}_{1}(I)\) has linear quotients as well. However, the higher homological shift ideals of \(I\) may fail to have linear quotients.
Consider the ideal \(J=(ab,ac,ad,de,df)\) of \(S=\mathbb{Q}[a,\ldots,f]\). Then \(J\) has linear resolution, indeed it has linear quotients, and HS\({}_{1}(J)\) has linear quotients as well. However, HS\({}_{2}(J)\) does not have linear quotients, not even linear resolution.
i6: J = ideal(a*b, a*c, a*d, d*e, d*f);
i7: HS(J,0)==J
o7: true
i8: HS(J,1)
o8: ideal(a*b*c, a*b*d, a*c*d, a*d*e, a*d*f, d*e*f)
i9: hasLinearQuotients HS(J,1)
o9: true
i10: HS(J,2)
o10: ideal(a*b*c*d, a*d*e*f)
i11: hasLinearResolution HS(J,2)
o11: false
Consider the principal Borel ideal \(I=B(x_{2}^{2}x_{3})\) of \(S=\mathbb{Q}[x_{1},x_{2},x_{3}]\). Then \(I=(x_{1}^{3},x_{1}^{2}x_{2},x_{1}^{2}x_{3},x_{1}x_{2}^{2},x_{1}x_{2}x_{3},x_{2}^{3},x_{2}^{2}x_{3})\) has homological linear quotients, indeed \(I\) is even homological polymatroidal, see [3, Theorem 3.4].
i12: S = QQ[x_1..x_3];
i13: I = ideal(x_1^3, x_1^2*x_2, x_1^2*x_3, x_1*x_2^2, x_1*x_2*x_3, x_2^3, x_2^2*x_3);
i14: hasHomologicalLinearQuotients I
o14: true
i15: admissibleOrder HS(I,2)
o15: {x_1^3*x_2*x_3, x_1^2*x_2^2*x_3, x_1*x_2^3*x_3}
i16: socle I
o16: {x_1^2, x_1*x_2, x_2^2}
i17: isHomologicalPolymatroidal I
o17: true
|
2306.17621 | Transient non-Fourier behavior of large surface bodies | The variety and complexity of heterogeneous materials in the engineering
practice are continuously increasing, open-cell metal foams filled with phase
change materials are typical examples. These are also having an impact on the
recent developments in the energy industry. Earlier room temperature heat pulse
experiments on macroscale foam samples showed non-Fourier over-diffusive
behavior on a particular time scale. Since there is a need to investigate such
complex structures on larger spatial scales and extend the one-dimensional
analysis on two-, and three-dimensional settings, here we develop a
two-dimensional analytical solution for the Guyer-Krumhansl and Jeffreys-type
heat equations in cylindrical coordinates to investigate the transient thermal
behavior of large bodies. We provide the steady-state and transient temperature
and heat flux distributions for a space-dependent heat source. The solutions
presented here will be helpful for the thermal characterization of complex
materials and for the validation of numerical methods. | Robert Kovacs | 2023-06-30T12:51:57Z | http://arxiv.org/abs/2306.17621v1 | # Transient non-Fourier behavior of large surface bodies
###### Abstract.
The variety and complexity of heterogeneous materials in the engineering practice are continuously increasing, open-cell metal foams filled with phase change materials are typical examples. These are also having an impact on the recent developments in the energy industry. Earlier room temperature heat pulse experiments on macroscale foam samples showed non-Fourier over-diffusive behavior on a particular time scale. Since there is a need to investigate such complex structures on larger spatial scales and extend the one-dimensional analysis on two-, and three-dimensional settings, here we develop a two-dimensional analytical solution for the Guyer-Krumhansl and Jeffreys-type heat equations in cylindrical coordinates to investigate the transient thermal behavior of large bodies. We provide the steady-state and transient temperature and heat flux distributions for a space-dependent heat source. The solutions presented here will be helpful for the thermal characterization of complex materials and for the validation of numerical methods.
## 1. Introduction
Numerous experimental and theoretical studies emerged on room-temperature heat conduction beyond Fourier in recent years. On the one hand, the nanoscale effects result in the deviation from Fourier's law, usually with the appearance of ballistic heat conduction [1, 2, 3]. Furthermore, the size dependence of thermal conductivity enjoys great interest as it significantly influences the effectiveness of any nanoscale device [4, 5]. On the other hand, room temperature non-Fourier heat conduction is not restricted to the nanoscale exclusively, and it is observable in macroscopic bodies under various conditions [6, 7, 8]. While the parallel diffusive and ballistic propagation modes are present on a nanoscale, the macroscopic deviation is due to the interaction of multiple parallel diffusive (and additional heat transfer) channels. Typical examples are rocks [9] and foams [10, 11]. Although each component behaves according to Fourier's law, the heterogeneous material structure overall (effectively) leads to a more complex, non-Fourier temperature history. That was the motivation for two-temperature models [12, 13, 14].
The presence of multiple time scales is most visible in experimental data obtained from a heat pulse experiment on an aluminum foam (Figure 1), a material possessing multiple heat transfer channels. The response to a short but finite single pulse (0.01 s), together with the best achievable Fourier fit, shows that at least two heat conduction time scales are present simultaneously. This is called over-diffusion [6, 15] and is so far best modeled with the continuum Guyer-Krumhansl (GK) heat equation [10]. Here, the word "continuum" refers to the continuum thermodynamic background of the GK equation [16]: the model is free from the usual kinetic theory and phonon hydrodynamic assumptions and is therefore valid on much larger temperature and spatial scales, independently of the Knudsen number. It is also worth noting that in Fig. 1 the time is re-scaled with respect to the pulse length, i.e., the dimensionless time is \(\hat{t}=t/0.01\). The transients become slow enough only after about \(7\) s, meaning that the heat transfer process needs much more time to cancel out the effect of the multiple time scales. In other words, if thermal transients occur continuously in a particular application, Fourier's law might not apply. Otherwise, such an experiment reveals the limitations on time scales, and the Guyer-Krumhansl equation provides a refinement of the thermal parameters in order to cover the faster processes as well. However, this property indeed scales with the size (different heat capacities, heat transfer surfaces, time scales) and the surface (boundary conditions), and thus it is necessary to extend the experimental and theoretical capabilities in this direction.
It is worth noting that recent heat exchanger applications exploit the advantageous properties of a metal foam structure: having large heat transfer surfaces, the matrix material is an excellent heat conductor, therefore such
solutions can essentially ease the realization of an effective thermal storage method. One outstanding example is when an open-cell foam structure is filled with phase change material [17, 18, 19, 20]. The phase change materials usually have low thermal conductivity, significantly restricting their melting or solidification properties. However, a surrounding foam structure can notably enhance the thermal behavior, thus both the heating and cooling processes can be much more efficient. This further motivates the present study as there are currently no reliable thermal models which enable the resource-friendly modeling of such complex structures. A non-Fourier model, however, can be exceptional when the role of the parallel heat transfer channels is understood correctly in such an approach. The present study aims to take a step forward in this direction, deepening our understanding and extending our modeling possibilities about the Guyer-Krumhansl heat equation. In the following, let us briefly summarize the heat conduction models we consider here.
The well-known Fourier law is
\[\mathbf{q}=-\lambda\nabla T,\quad\lambda\in\mathbb{R}^{+} \tag{1}\]
together with the balance of internal energy (\(e=c_{v}T\)),
\[\rho c_{v}\partial_{t}T+\nabla\cdot\mathbf{q}=q_{v}(\mathbf{x},t), \tag{2}\]
includes only one time scale, described by the thermal diffusivity \(\alpha=\lambda/(\rho c_{v})\), where \(\lambda\), \(\rho\) and \(c_{v}\) are the thermal conductivity, mass density, and isochoric specific heat, respectively. Furthermore, \(\mathbf{q}\) and \(T\) stand for the heat flux and temperature fields, and \(q_{v}\) is an internal heat generation, which could be time and space-dependent. We restrict ourselves to isotropic rigid materials.
For a non-Fourier heat conduction model, the constitutive equation, Eq. (1), is exchanged with a more general expression, usually consisting of additional time and space derivatives. In the present paper, we consider the following two constitutive equations among the various models. First, we study the Guyer-Krumhansl equation,
\[\tau\partial_{t}\mathbf{q}+\mathbf{q}=-\lambda\nabla T+\eta_{1}\Delta \mathbf{q}+\eta_{2}\nabla\nabla\cdot\mathbf{q},\quad\lambda,\tau,\eta_{1},\eta _{2}\in\mathbb{R}^{+}, \tag{3}\]
in which \(\tau\) is the relaxation time; \(\eta_{1}\) and \(\eta_{2}\) are independent intrinsic length scales, not associated with a propagation mechanism in a continuum model [16]. We note that in the conventional treatment of the GK equation, \(\eta_{1}=l^{2}\) with \(l\) being the mean free path of phonons, and \(\eta_{2}/\eta_{1}=2\) for the particular approximations performed by Guyer and Krumhansl [21]. We emphasize that for a macroscopic room temperature problem, the phonon approach is not valid anymore, while the continuum model, although possessing the same structure, is free from any prior specific assumptions on the propagation mechanism, hence extending the model's domain of validity.
Second, we will continue our analysis with the Jeffreys equation (JE), i.e.,
\[\tau_{q}\partial_{t}\mathbf{q}+\mathbf{q}=-\lambda\nabla T-\lambda\tau_{T} \partial_{t}\nabla T,\quad\lambda,\tau_{q},\tau_{T}\in\mathbb{R}^{+} \tag{4}\]
where, instead of introducing further spatial derivatives, two time lags appear, similarly to the popular dual-phase-lag (DPL) concept [15]. However, while no thermodynamic background is behind the DPL model, Eq. (4) can be derived on a thermodynamic basis [22]. Although we remain in the linear regime, it is worth noting that the
Figure 1. Typical appearance of over-diffusion [10].
coefficients are not completely independent of each other (also for Eq. (3)) in a sense that the \(T\)-dependence of \(\lambda\) would influence all the other parameters, too [23]. For the Jeffreys equation, \(\tau_{q}\) and \(\tau_{T}\) can be adjusted almost independently, and the only exception is that when \(\tau_{q}=0\), \(\tau_{T}=0\) follows immediately (but not vice versa). The GK and JE models consist of two time scales in different ways, and they share the same \(T\)-representation in a one-dimensional setting. However, their physical basis is quite different, and the GK equation fits much better into the systematic structure of non-Fourier models.
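For the reader's convenience, let us briefly sketch (our own short derivation, included here only as an illustration) why the two models share the same one-dimensional \(T\)-representation. In one spatial dimension, without source terms, eliminating the heat flux between the energy balance (2) and the constitutive equation (3), respectively (4), gives

\[\tau\partial_{tt}T+\partial_{t}T=\alpha\partial_{xx}T+(\eta_{1}+\eta_{2})\partial_{t}\partial_{xx}T,\qquad\tau_{q}\partial_{tt}T+\partial_{t}T=\alpha\partial_{xx}T+\alpha\tau_{T}\partial_{t}\partial_{xx}T,\]

so the two models produce identical one-dimensional temperature histories whenever \(\tau=\tau_{q}\) and \(\eta_{1}+\eta_{2}=\alpha\tau_{T}\).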
In this paper, we choose to study large surface bodies since the typical heterogeneous materials also show strong size-dependent behavior [9]. In other words, extending the existing flash experiment for much larger bodies will be necessary as the usual thickness limit is about \(3-6\) mm for standardized equipment. This can be much smaller than the representative sample size for a heterogeneous material, especially for foams with large (3-5 mm) open-cell structures. Furthermore, we aim to investigate both models using an analytical solution for a two-dimensional setting, which will emphasize the structural differences between these equations. We note that even the book of Carslaw and Jaeger [24] has limitations towards the problem setting we discuss in the following.
For simplicity and clarity, we start and present our method on the example of the Fourier heat equation. This will provide an insight into the problem setting and the solution method. We will continue with the GK and JE equations, demonstrating how the solution method is applied to more complicated models. Such analytical solutions, especially for the GK equation, cannot be found in the literature. Moreover, as there is also missing a reliable two or three-dimensional numerical method, we offer a good starting point for future studies in this direction. Finally, we will compare the temperature histories to the Fourier equation and investigate whether we find the transient behavior similar (or even the same) compared to the one-dimensional room temperature experiments on small samples.
## 2. Problem statement
Let us consider a plane wall constantly heated on the left side in a circular area (\(r<r_{h}\)) with \(q_{w}\), as Figure 2 (left side) prescribes. In fact, the book of Carslaw and Jaeger [24] offers a solution for the Fourier heat equation for constant heating in the domain \(r<r_{h}\); however, we are looking for the temperature history farther away from \(r_{h}\), at the locations indicated by the blue dots in Fig. 2. Additionally, since the boundary condition on the left side is space-dependent, it is difficult to apply these findings to non-Fourier heat equations. Therefore, we decided to reduce the original problem to a simpler one and substitute the surface heating with a space-dependent, surface-concentrated internal heat generation with the characteristics presented in Fig. 3. Hence the boundary conditions remain homogeneous but still applicable to the original problem. Furthermore, we assume that at a large enough distance from the heat source the temperature remains constant, thus we prescribe constant temperature boundary conditions on the right side and on the top.
Figure 2. Schematic problem setting, presenting the boundary conditions and the characteristics of internal heat generation.
It is more convenient to formulate our models in a cylindrical coordinate system using dimensionless quantities. For the length scale, we use \(R\), the radius of the whole domain, and we set the thickness to be the same, i.e., \(L=R\). The usual Fourier number is introduced using the thermal diffusivity \(\alpha=\lambda/(\rho c_{v})\) for the time scale. The temperature field is homogenized and normalized with the initial temperature \(T_{0}\). Overall, these lead to the following set of dimensionless quantities,
\[\hat{r}=\frac{r}{R},\quad\hat{z}=\frac{z}{R},\quad\hat{t}=\frac{\alpha t}{R^{2} },\quad\hat{T}=\frac{T-T_{0}}{T_{0}},\quad\hat{q}_{r,z}=q_{r,z}\frac{R}{\lambda T _{0}},\quad\hat{q}_{v}=q_{v}\frac{R^{2}}{\lambda T_{0}}, \tag{5}\]
and thus, the non-Fourier parameters read
\[\hat{\eta}_{1,2}=\eta_{1,2}\frac{1}{R^{2}},\quad\hat{\tau}=\frac{\alpha\tau}{R ^{2}},\quad\hat{\tau}_{T}=\frac{\alpha\tau_{T}}{R^{2}}. \tag{6}\]
In the following, we leave the hat notation for simplicity and show the units wherever necessary.
Taking account that it is a two-dimensional problem for \(r\) and \(z\) in a cylindrical coordinate system, the balance of internal energy
\[\partial_{t}T+\partial_{r}q_{r}+\frac{1}{r}q_{r}+\partial_{z}q_{z}=q_{v}(r,z),\quad t\in[0,\infty),\quad(r,z)\in[0,1]\times[0,1], \tag{7}\]
and the constitutive equations are
Fourier: \[q_{r}=-\partial_{r}T,\] (8) \[q_{z}=-\partial_{z}T,\] (9) \[\text{GK:}\quad\tau\partial_{t}q_{r}+q_{r}=-\partial_{r}T+(\eta_ {1}+\eta_{2})\left[\partial_{rr}-\frac{1}{r^{2}}+\frac{1}{r}\partial_{r} \right]q_{r}+\eta_{1}\partial_{zz}q_{r}+\eta_{2}\partial_{rz}q_{z},\] (10) \[\tau\partial_{t}q_{z}+q_{z}=-\partial_{z}T+(\eta_{1}+\eta_{2}) \partial_{zz}q_{z}+\eta_{1}\left[\partial_{rr}+\frac{1}{r}\partial_{r}\right] q_{z}+\eta_{2}\left[\frac{1}{r}\partial_{z}+\partial_{rz}\right]q_{r},\] (11) JE: \[\tau\partial_{t}q_{r}+q_{r}=-\partial_{r}T-\tau_{T}\partial_{tr }T,\] (12) \[\tau\partial_{t}q_{z}+q_{z}=-\partial_{z}T-\tau_{T}\partial_{tz}T,\] (13)
accompanying the \(T=0\) initial condition, and \(q=0\) and \(T=0\) boundary conditions with respect to Fig. 2. We are looking for the corresponding heat flux and temperature fields.
## 3. Solution method
Although the problem seems complicated, there is quite efficient method to handle such a complicated set of partial differential equations. Here is the strategy we follow. First, we exploit that the problem can be separated into two parts, viz., we can solve the homogeneous (\(q_{v}=0\)) transient case (\(T_{h}(r,z,t)\)) separately and the inhomogeneous (\(q_{v}\neq 0\)) steady-state case (\(T_{\rm st}(r,z)\)), and we find the solution as their superposition, \(T(r,z,t)=T_{\rm st}(r,z)+T_{h}(r,z,t)\). Second, we start with the Fourier heat equation, not merely because we want to compare the non-Fourier solutions to Fourier's, but because we can exploit Fourier's steady-state solution in solving the non-Fourier models. The
Figure 3. Internal heat generation characteristics (with dimensionless parameters), the heat source is concentrated on the surface with respect to the scaling.
steady temperature field remains the same for both Fourier and non-Fourier cases. However, the heat flux fields can differ, hence we also include this aspect. In the case of Fourier's heat equation, we can quickly determine the proper eigenfunctions and eigenvalues through the Sturm-Liouville problem. It is worth noting that even the non-Fourier models do not introduce higher-order spatial derivatives for the temperature field beyond the Laplacian. Therefore what eigenfunctions we find can also be applied to the GK and JE models. Third, we solve the non-Fourier models exploiting the ansatz that their solutions can be represented in the same set of eigenfunctions with time-dependent coefficients, this is a sort of Galerkin method. That approach simplifies the complicated system of partial differential equations to a set of ordinary differential equations for the time-dependent coefficients.
### Fourier heat equation
In the case of the Fourier equation, it is easier to start with its \(T\)-representation, i.e.,
\[\partial_{t}T=\partial_{r}^{2}T+\frac{1}{r}\partial_{r}T+\partial_{z}^{2}T+q_{ v}(r,z), \tag{14}\]
and applying the standard separation of variable technique for the homogeneous part (where \(q_{v}\) is absent), \(T_{h}(r,z,t)=\varphi(t)\xi(r,z)\), one obtains
\[\text{for time:}\quad\frac{\mathrm{d}\varphi}{\mathrm{d}t}+\beta^ {2}\varphi=0, \tag{15}\] \[\text{for space:}\quad\frac{1}{r}\partial_{r}\xi+\partial_{r}^{2 }\xi+\partial_{z}^{2}\xi+\beta^{2}\xi=0,\quad\Rightarrow\quad\xi(r,z)=\rho(r )\zeta(z),\quad\Rightarrow\] (16) \[\text{for }r\text{:}\quad\frac{1}{r}\frac{\mathrm{d}\rho}{ \mathrm{d}r}+\frac{\mathrm{d}^{2}\rho}{\mathrm{d}r^{2}}+\mu^{2}\rho=0,\] (17) \[\text{for }z\text{:}\quad\frac{\mathrm{d}^{2}\zeta}{\mathrm{d}z^{2 }}+\gamma^{2}\zeta=0, \tag{18}\]
thus \(\mu^{2}+\gamma^{2}=\beta^{2}\). Applying the boundary conditions
\[q_{r}(r=0,z,t)=\partial_{r}T|_{r=0}=0,\quad T(r=1,z,t)=0,\quad q_{z}(r,z=0,t)= \partial_{z}T|_{z=0}=0,\quad T(r,z=1,t)=0, \tag{19}\]
we find the following eigenfunctions and eigenvalues,
\[\rho(r)=J_{0}(\mu_{n}r),\quad\mu_{n}:\ J_{0}(\mu_{n})=0,\quad\zeta(z)=\cos( \gamma_{m}z),\quad\gamma_{m}=\frac{\pi}{2}+m\pi, \tag{20}\]
and hence the solution is
\[T_{h}(r,z,t)=\sum\limits_{n=1}^{\infty}\sum\limits_{m=0}^{\infty}K_{nm}e^{- \beta_{nm}^{2}t}J_{0}(\mu_{n}r)\cos(\gamma_{m}z),\quad\beta_{nm}^{2}=\mu_{n}^{ 2}+\gamma_{m}^{2}. \tag{21}\]
Consequently, we can use Eq. (20) to construct the steady-state solution \(T_{\mathrm{st}}\), including the heat generation as well, i.e.,
\[T_{\mathrm{st}}(r,z)=\sum\limits_{n=1}^{\infty}\sum\limits_{m=0}^{\infty}C_{ nm}J_{0}(\mu_{n}r)\cos(\gamma_{m}z),\quad q_{v}(r,z)=\sum\limits_{n=1}^{\infty} \sum\limits_{m=0}^{\infty}B_{nm}J_{0}(\mu_{n}r)\cos(\gamma_{m}z), \tag{22}\]
\[B_{nm}=\int\limits_{0}^{1}\int\limits_{0}^{1}rq_{vr}(r)q_{vz}(z)J_{0}(\mu_{n} r)\cos(\gamma_{m}z)\mathrm{d}r\mathrm{d}z. \tag{23}\]
Substituting Eq. (22) into Eq. (14) (with \(\partial_{t}T=0\)), we can find the relation between the known \(B_{nm}\) and the unknown \(C_{nm}\),
\[C_{nm}\left(-\frac{1}{r}\mu_{n}J_{1}(\mu_{n}r)-\frac{\mu_{n}^{2}}{2}\Big{(}J_{ 0}(\mu_{n}r)-J_{2}(\mu_{n}r)\Big{)}-J_{0}(\mu_{n}r)\gamma_{m}^{2}\right)\cos( \gamma_{m}z)=B_{nm}J_{0}(\mu_{n}r)\cos(\gamma_{m}z). \tag{24}\]
Then Eq. (24) is multiplied with \(rJ_{0}(\mu_{n}r)\cos(\gamma_{m}z)\) and integrated from \(0\) to \(1\) with respect to both \(r\) and \(z\), following the Galerkin procedure. The \(z\)-direction is straightforward as both sides are multiplied with \(\cos(\gamma_{m}z)\), the non-trivial part originates from the \(r\) direction, and results in
\[C_{nm}=B_{nm}\frac{1}{\beta_{nm}^{2}}, \tag{25}\]
which holds for any internal heat generation \(q_{v}(r,z)\). Due to the separation \(T(r,z,t)=T_{\mathrm{st}}(r,z)+T_{h}(r,z,t)\), the initial condition for the homogeneous part reads \(T_{h}(r,z,t=0)=-T_{\mathrm{st}}(r,z)\) as \(T(r,z,t=0)=0\), and thus \(K_{nm}=-C_{nm}\).
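As an illustration of how Eqs. (20)-(25) can be assembled numerically, the following short Python sketch (our own; the choice of `scipy` routines, the trapezoidal quadrature, the explicit division by the eigenfunction norms, and the particular Gaussian source are assumptions for demonstration, not part of the original derivation) evaluates the Fourier solution at a point.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import trapezoid

N, M = 40, 40                                  # series truncation
mu = jn_zeros(0, N)                            # mu_n: zeros of J_0, Eq. (20)
gamma = np.pi / 2 + np.pi * np.arange(M)       # gamma_m, Eq. (20)

r = np.linspace(0.0, 1.0, 201)
z = np.linspace(0.0, 1.0, 201)
RR, ZZ = np.meshgrid(r, z, indexing="ij")

def q_v(r, z):
    # illustrative surface-concentrated dimensionless source in the spirit of Fig. 3
    return 100.0 * np.exp(-(r / 0.05) ** 2 - (z / 0.02) ** 2)

# B_nm of Eq. (23), here divided explicitly by the eigenfunction norms
# (J_1(mu_n)^2 / 2 for J_0(mu_n r) with weight r, and 1/2 for cos(gamma_m z))
B = np.empty((N, M))
for n in range(N):
    for m in range(M):
        f = RR * q_v(RR, ZZ) * j0(mu[n] * RR) * np.cos(gamma[m] * ZZ)
        B[n, m] = trapezoid(trapezoid(f, z, axis=1), r)
B /= 0.5 * j1(mu)[:, None] ** 2 * 0.5

beta2 = mu[:, None] ** 2 + gamma[None, :] ** 2
C = B / beta2                                  # Eq. (25), steady-state coefficients
K = -C                                         # initial condition T(r, z, 0) = 0

def T(rq, zq, t):
    """Dimensionless temperature at (rq, zq, t), steady plus homogeneous part."""
    coeff = C + K * np.exp(-beta2 * t)
    return np.einsum("nm,n,m->", coeff, j0(mu * rq), np.cos(gamma * zq))
```

The truncation orders \(N\) and \(M\) play the same role as the number of terms quoted in the figure captions below.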
### Guyer-Krumhansl heat equation
Although we follow the same technique here, we also separate the homogeneous and inhomogeneous parts, but it is more advantageous to determine the steady-state heat flux field first. Since the steady temperature field \(T_{\rm st}(r,z)\) is inherited, we can substitute \(\partial_{r}T_{\rm st}(r,z)\) and \(\partial_{z}T_{\rm st}(r,z)\) into Eqs. (10)-(11). Furthermore, we suppose that each term inherits the corresponding set of eigenfunctions and eigenvalues as the boundary conditions remain, and thus
\[q_{r}=\sum_{n=1}^{\infty}\sum_{m=0}^{\infty}D_{nm}J_{1}(\mu_{n}r)\cos(\gamma_{ m}z),\quad q_{z}=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}E_{nm}J_{0}(\mu_{n}r) \sin(\gamma_{m}z), \tag{26}\]
respecting the corresponding derivatives, too. In that steady-state, we consider that \(\partial_{t}q_{r}=\partial_{t}q_{z}=0\) in Eqs. (10)-(11), and after substituting Eq. (26) into Eqs. (10)-(11) and integrating, we obtain a set of algebraic relations among the coefficients,
\[c_{1}D_{nm}=\mu_{n}C_{nm}-c_{2}E_{nm},\quad c_{3}E_{nm}=\gamma_{m}C_{nm}-c_{4} D_{nm}, \tag{27}\]
with
\[c_{1}=1+(\eta_{1}+\eta_{2})(2+\mu_{n}^{2})+\eta_{1}\gamma_{m}^{2},\quad c_{2}=\eta_{2}\mu_{n}\gamma_{m},\quad c_{3}=1+\gamma_{m}^{2}(\eta_{1}+\eta_{2})+\mu_{n}^{2}\eta_{1},\]
\[c_{4}=2\gamma_{m}\eta_{2}\left(\frac{1}{\mu_{n}J_{1}(\mu_{n})^{2}}+\mu_{n} \right). \tag{28}\]
Since \(C_{nm}\) is known from Eq. (25), Eq. (27) can be solved for \(D_{nm}\) and \(E_{nm}\), and it holds for any heat sources. Then the steady-state solution is given by Eqs. (22) and (26).
The transient (homogeneous) solution is constructed similarly, however, the coefficients are now time-dependent, viz., we assume that
\[T_{h}(r,z,t)=\sum_{n=1}^{\infty}\sum_{m=0}^{\infty}\tilde{C}_{nm}(t)J_{0}(\mu_ {n}r)\cos(\gamma_{m}z),\quad q_{r}(r,z,t)=\sum_{n=1}^{\infty}\sum_{m=0}^{ \infty}\tilde{D}_{nm}(t)J_{1}(\mu_{n}r)\cos(\gamma_{m}z), \tag{29}\]
\[q_{z}(r,z,t)=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\tilde{E}_{nm}(t)J_{0}(\mu _{n}r)\sin(\gamma_{m}z), \tag{30}\]
are still valid. Furthermore, now we need to exploit the energy balance as well to obtain \(\tilde{C}_{nm}(t)\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\tilde{C}_{nm}(t)+\mu_{n}\tilde{D}_{nm}(t)+ \gamma_{m}\tilde{E}_{nm}(t)=0. \tag{31}\]
After following the same procedure, we obtain almost the same set of equations except that the time derivative terms appear. Consequently, the set of PDEs is reduced to a set of ODE,
\[\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\tilde{C}_{nm}(t)\\ \tilde{D}_{nm}(t)\\ \tilde{E}_{nm}(t)\end{bmatrix}=\begin{bmatrix}0&-\mu_{n}&-\gamma_{m}\\ \frac{\mu_{n}}{\tau}&-\frac{c_{1}}{\tau}&-\frac{c_{2}}{\tau}\\ \frac{\gamma_{m}}{\tau}&-\frac{c_{4}}{\tau}&-\frac{c_{3}}{\tau}\end{bmatrix} \begin{bmatrix}\tilde{C}_{nm}(t)\\ \tilde{D}_{nm}(t)\\ \tilde{E}_{nm}(t)\end{bmatrix}, \tag{32}\]
in which the coefficients are inherited from Eq. (28), and its solution can easily be found in the form of a matrix exponential, \(\exp(\mathbf{M}_{nm}t)\). Let us recall that the initial condition \(T_{h}(r,z,t=0)=-T_{\rm st}(r,z)\) is the same, and therefore we can exploit the known coefficients \(\tilde{C}_{nm}(t=0)=-C_{nm}\), \(\tilde{D}_{nm}(t=0)=-D_{nm}\), and \(\tilde{E}_{nm}(t=0)=-E_{nm}\).
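The same machinery fits in a few lines of code; the sketch below (our own illustration, with `scipy.linalg.expm` and the coefficient names of Eq. (28) as implementation assumptions) returns the steady heat flux coefficients of Eq. (27) and the transient state of Eq. (32) for a single \((n,m)\) mode.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import j1

def gk_coeffs(mu_n, gam_m, eta1, eta2):
    """c1..c4 of Eq. (28) for one (n, m) mode."""
    c1 = 1.0 + (eta1 + eta2) * (2.0 + mu_n**2) + eta1 * gam_m**2
    c2 = eta2 * mu_n * gam_m
    c3 = 1.0 + (eta1 + eta2) * gam_m**2 + eta1 * mu_n**2
    c4 = 2.0 * gam_m * eta2 * (1.0 / (mu_n * j1(mu_n)**2) + mu_n)
    return c1, c2, c3, c4

def gk_mode_solution(C_nm, mu_n, gam_m, tau, eta1, eta2, t):
    """Steady (D, E) from Eq. (27) and transient (C~, D~, E~)(t) from Eq. (32)."""
    c1, c2, c3, c4 = gk_coeffs(mu_n, gam_m, eta1, eta2)
    # Eq. (27): c1 D + c2 E = mu C   and   c4 D + c3 E = gam C
    D_nm, E_nm = np.linalg.solve(np.array([[c1, c2], [c4, c3]]),
                                 np.array([mu_n * C_nm, gam_m * C_nm]))
    M = np.array([[0.0,         -mu_n,      -gam_m],
                  [mu_n / tau,  -c1 / tau,  -c2 / tau],
                  [gam_m / tau, -c4 / tau,  -c3 / tau]])
    # the transient part starts from minus the steady coefficients, so T(t=0) = 0
    transient = expm(M * t) @ np.array([-C_nm, -D_nm, -E_nm])
    return (D_nm, E_nm), transient
```

Summing the resulting modal contributions against \(J_{0}\), \(J_{1}\) and the trigonometric eigenfunctions, exactly as in the Fourier sketch above, reproduces the temperature and heat flux fields.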
### Jeffreys heat equation
Here, the solution is simpler as the steady-state heat flux field is identical to Fourier's case, hence we do not need to compute it separately. We repeat the ansatz of Eq. (29)-(30) to determine the coefficient matrix \(\mathbf{M}_{nm}\), and following the same steps, that procedure results in
\[\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\tilde{C}_{nm}(t)\\ \tilde{D}_{nm}(t)\\ \tilde{E}_{nm}(t)\end{bmatrix}=\begin{bmatrix}0&-\mu_{n}&-\gamma_{m}\\ \frac{\mu_{n}}{\tau}&-\frac{1+\tau_{T}\mu_{n}^{2}}{\tau}&-\frac{\tau_{T}\mu_{n}\gamma_{m}}{\tau}\\ \frac{\gamma_{m}}{\tau}&-\frac{\tau_{T}\mu_{n}\gamma_{m}}{\tau}&-\frac{1+\tau_{T}\gamma_{m}^{2}}{\tau}\end{bmatrix}\begin{bmatrix}\tilde{C}_{nm}(t)\\ \tilde{D}_{nm}(t)\\ \tilde{E}_{nm}(t)\end{bmatrix}, \tag{33}\]
together with the known coefficients from Fourier's solution, the time evolution for the Jeffreys case is obtained in the form of \(\exp(\mathbf{M}_{nm}t)\).
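For completeness, the corresponding one-mode coefficient matrix of Eq. (33) can be generated analogously (again a sketch of ours, reusing the imports and the matrix-exponential routine of the previous listing):

```python
def je_mode_matrix(mu_n, gam_m, tau, tau_T):
    """Coefficient matrix of Eq. (33) for one (n, m) mode of the Jeffreys model."""
    return np.array([
        [0.0,         -mu_n,                           -gam_m],
        [mu_n / tau,  -(1.0 + tau_T * mu_n**2) / tau,  -tau_T * mu_n * gam_m / tau],
        [gam_m / tau, -tau_T * mu_n * gam_m / tau,     -(1.0 + tau_T * gam_m**2) / tau],
    ])
```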
## 4. Steady-state distributions
First, let us begin with the steady-states, primarily focusing on the differences between the Fourier and GK equations. From a theoretical point of view, and according to Alvarez et al. [25], GK's steady heat flux field can differ from Fourier's. Here we have to discover in what sense and in what measure they can differ from each other. From a practical point of view, it is possible to measure the average local heat flux, usually on a minimal area of \(10\times 10\) mm\({}^{2}\) up to about \(80\times 80\) mm\({}^{2}\). Furthermore, it is known that such sensors can significantly distort the local heat flux field [26, 27]. Consequently, if one aims to observe the traces of non-Fourier heat conduction at this level, such spatial scales must be included in the preliminary analysis as smooth flux distributions (such as for the temperature field) cannot be measured. These analytical calculations can ease such analysis as well. Since the JE model has the same steady-state as Fourier's, we leave this analysis aside for that heat equation.
Fig. 4 presents the temperature distribution for Fourier's heat equation, which remains the same for the Guyer-Krumhansl and Jeffreys equations. Concerning the heat flux field, the situation becomes quite different. It is worth studying the outcome of the GK equation closer, see Fig. 5 for the details about \(q_{r}(r,z=0)\) and \(q_{z}(r,z=0.05)\). These characteristics are preserved for any \(q_{r}(r,z=\text{const.})\) distribution. The influence of \(\eta_{1}\) is clear, and it can significantly decrease the maximum. Similarly to \(\eta_{1}\), \(\eta_{2}\) has the same effect on the heat flux field, being more influential on \(q_{z}\), see Fig. 6 for a particular solution.
Although these effects are clear and strong for such a parameter interval, the prospects of observing them in a steady state are not that good. Let us consider the flash experiments on rocks and their GK-evaluation with a one-dimensional model [10]: we find that \(\eta_{1}+\eta_{2}\approx 10^{-7}\) m\({}^{2}\), in general. Consequently, the most substantial effects could be observed for a body with \(R=0.01\) m or less, which would probably violate our initial assumptions, and the boundary conditions would not be valid either. For larger bodies, e.g., with \(R=0.1\) m or even larger (\(R=1\) m), the effect becomes small and difficult to detect. This is one reason why this scaling property is most important for nanoscale objects.
## 5. Transient distributions
### Fourier's heat equation
Let us recall that we are using the conventional Fourier number for the time scale. The characteristic size could be \(1\) m as relatively large bodies are considered. Consequently, we must choose small Fourier numbers since those can express relatively large time instants. Figure 7 shows the temperature distribution for Fourier's heat equation, in which we can observe that the characteristics of the distribution are established quickly and do not change significantly for larger time intervals. The color scaling, however, changes, showing how the equilibrium is approached. Farther away from the heat source, the temperature changes more slowly since the gradients are much smaller. This is also presented in Fig. 8, showing the surface temperature history at different radii, as denoted previously in Fig. 2. We use Fig. 8 for comparative purposes in the case of non-Fourier models, as temperature maps (such as Fig. 7) would not highlight the characteristics of the non-Fourier behavior.
### Guyer-Krumhansl heat equation
Although the GK equation can reproduce the heat wave solutions following the Cattaneo equation with \(\eta_{1}=\eta_{2}=0\), called second sound, it is not meaningful in our situation. First,
Figure 4. Two-dimensional steady-state temperature distribution, using \(N=M=200\) terms, showing only partial spatial domain.
in order to generate such a heat wave, the time scale of the excitation (e.g., a heat pulse) should match the material's characteristic properties, most importantly, its relaxation time \(\tau\). Since its values range from \(10^{-10}--1\) s depending on the propagation phenomenon we investigate [28, 29, 30], such effects become irrelevant for large bodies on much larger time scales. Second, we use the GK equation as an effective approach to model the parallel diffusive mechanisms instead of modeling heat waves (or anything else related to phonon hydrodynamics). Fig. 9 presents two cases for the \(\eta_{1}=\eta_{2}=0\) setting, with \(\tau=10^{-3}\) (left) and \(\tau=10^{-2}\) (right). Similarly to the Fourier number, the relaxation time has the same scaling, therefore even \(\tau=10^{-3}\) is so large, it still does not show any difference from Fourier's solution, unlike the second case with unrealistically large relaxation time, but the effect is still weak. This setting will not be relevant for such continuous heating in a large macroscale body.
In fact, it is not easy to find parameters resulting in a remarkably different solution. Figure 10 presents an example in which the radius we use is much closer to the heat source, for farther away and longer time intervals, the differences vanish. It is worth studying how we can recover a similar behavior observed in heat pulse experiments (see Fig. 1 for the experimental characteristics). Interestingly, being closer to the heat source, GK's solution is not faster than Fourier's, however, it changes with the radius. Figure 11 presents these characteristics, and thus it is best to measure the temperature farther from the heat source to more reliably observe the non-Fourier behavior.
### Jeffreys heat equation
The \(\tau_{T}=0\) subcase coincides with the previous analysis using the GK equation (see the right side of Fig. 9), therefore we do not repeat it again. Instead, we focus on studying the effects induced by the extra time derivative term in the constitutive relation with \(\tau_{T}\neq 0\) and comparing our findings with Fig. 11. Although \(\tau_{T}\) acts analogously on the temperature field, their heat flux fields differ. That difference is essential for superfluids and further low-temperature modeling problems [31]. In engineering practice, however, that difference could be negligible as we seek only an effective model to provide a more accurate description of heterogeneous materials. An effective description cannot simultaneously model the temperature and heat flux fields. In that sense,
Figure 5. Steady-state \(q_{r}(r,z=0)\) (left) and \(q_{z}(r,z=0.05)\) (right) distributions with \(\eta_{2}=0\), using \(N=M=200\) terms, showing only partial spatial domain.
Figure 6. Steady-state \(q_{r}(r,z=0)\) (left) and \(q_{z}(r,z=0.05)\) (right) distributions with \(\eta_{2}=10^{-4}\), using \(N=M=200\) terms, showing only partial spatial domain.
Figure 8: Transient temperature history on the surface at different radii from Fourier’s equation, using \(N=M=100\) terms.
Figure 7: Transient temperature distribution following from Fourier’s heat equation, using \(N=M=40\) terms, showing only partial spatial domain.
Figure 11. Transient temperature history on the surface at different radii, comparing the Fourier and GK equations, using \(N=M=100\) terms.
Figure 10. Transient temperature history on the surface, comparing the Fourier and GK equations with \(\eta_{2}=0\), using \(N=M=100\) terms.
Figure 9. Transient temperature history on the surface, comparing the Fourier and GK equations with \(\eta_{1}=\eta_{2}=0\) and \(\tau=10^{-3}\) (left), \(\tau=10^{-2}\) (right), using \(N=M=100\) terms.
neither approach is more accurate than the other. Both can reproduce the same temperature history, however, the GK equation can be much more complicated. On the one hand, for the JE model, as simply the time derivative of Fourier's equation is added, it is easier to utilize the usual approaches for initial and boundary conditions. On the other hand, although thermodynamically compatible, the JE model does not fit into the systematic generalization of non-Fourier equations.
## 6. Discussion
There is a need for an advanced heat conduction model to overcome the difficulties emerging together with the use of complex heterogeneous materials. Their effective thermal behavior can significantly differ from Fourier's prediction, even though the classical heat equation governs each component. The interaction of parallel heat transfer channels results in a non-Fourier behavior. Two promising extensions of the Fourier equation are the Guyer-Krumhansl and the Jeffreys models. Both describe two heat conduction time scales, even in an isotropic setting. These models are analytically solved for a two-dimensional situation.
The analytical solution revealed that there is no need for additional boundary conditions for these GK and JE models, the same set of eigenfunctions can be used. That property notably eases the solution of these more complex models. The resulting ordinary differential equations can be solved easily for the linear situation; thus, unknown time-dependent coefficients are found. Furthermore, we also exploited that the steady-state temperature distribution given by Fourier's law remains the same in the non-Fourier case, therefore, it is advantageous to handle that heat conduction problem as a superposition of the transient and steady distributions. It also makes it more straightforward how to take into account the initial conditions.
Studying the time evolution of the surface temperature time histories \(T(t)\), we observed similarities compared to the one-dimensional heat pulse experiments, however, these situations are not directly comparable. First, as the surface temperature is the most straightforward to measure, it seems more advantageous if the thermometer is situated further from the heated region to observe the over-diffusive behavior possibly. This can change significantly in space. Second, with such constant heating, no steep transients occur in a large body due to its large heat capacity, thus, it is more challenging to observe the non-Fourier behavior. Based on the analytical solutions, it is clear that both models require unusually large parameters to obtain any observable deviation from Fourier's law. In the future, it would be worth investigating the outcome of periodic heating, especially the effects of the period time, and that could enhance the over-diffusive phenomenon. Third, for the GK equation, while the steady heat flux field differs from Fourier's, it is not practically measurable due to the steep change in the heated region, the available heat flux sensors are too large for such an application. Moreover, as the most significant difference occurs under the heater, it excludes this possibility. However, the analytical calculations also revealed that if such constant heating does not induce notable non-Fourier effects, one could design a measurement method to characterize such objects with Fourier's law remaining valid thermally.
## Funding
Project no. TKP-6-6/PALY-2021 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under
Figure 12. Transient temperature history on the surface at different radii, comparing the Fourier and JE equations, using \(N=M=100\) terms.
the TKP2021-NVA funding scheme. The research was funded by the Sustainable Development and Technologies National Programme of the Hungarian Academy of Sciences (FFT NP FTA), by the grant National Research, Development and Innovation Office-NKFIH FK 134277, and supported by the UNKP-22-5-BME-312 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund.
## Declarations
**Conflict of interest** The author declares no competing interests.
|
2303.12187 | Practice of the conformer enhanced AUDIO-VISUAL HUBERT on Mandarin and
English | Considering the bimodal nature of human speech perception, lips, and teeth
movement has a pivotal role in automatic speech recognition. Benefiting from
the correlated and noise-invariant visual information, audio-visual recognition
systems enhance robustness in multiple scenarios. In previous work,
audio-visual HuBERT appears to be the finest practice incorporating modality
knowledge. This paper outlines a mixed methodology, named conformer enhanced
AV-HuBERT, boosting the AV-HuBERT system's performance a step further. Compared
with baseline AV-HuBERT, our method in the one-phase evaluation of clean and
noisy conditions achieves 7% and 16% relative WER reduction on the English AVSR
benchmark dataset LRS3. Furthermore, we establish a novel 1000h Mandarin AVSR
dataset CSTS. On top of the baseline AV-HuBERT, we exceed the WeNet ASR system
by 14% and 18% relatively on MISP and CMLR by pre-training with this dataset.
The conformer-enhanced AV-HuBERT we proposed brings 7% on MISP and 6% CER
reduction on CMLR, compared with the baseline AV-HuBERT system. | Xiaoming Ren, Chao Li, Shenjian Wang, Biao Li | 2023-02-28T02:10:13Z | http://arxiv.org/abs/2303.12187v1 | # Practice of the conformer enhanced audio-visual hubert on mandarin and English
###### Abstract
Considering the bimodal nature of human speech perception, lips, and teeth movement has a pivotal role in automatic speech recognition. Benefiting from the correlated and noise-invariant visual information, audio-visual recognition systems enhance robustness in multiple scenarios. In previous work, audio-visual HuBERT appears to be the finest practice incorporating modality knowledge. This paper outlines a mixed methodology, named conformer enhanced AV-HuBERT, boosting the AV-HuBERT system's performance a step further. Compared with baseline AV-HuBERT, our method in the one-phase evaluation of clean and noisy conditions achieves 7% and 16% relative WER reduction on the English AVSR benchmark dataset LRS3. Furthermore, we establish a novel 1000h Mandarin AVSR dataset CSTS. On top of the baseline AV-HuBERT, we exceed the WeNet ASR system by 14% and 18% relatively on MISP and CMLR by pre-training with this dataset. The conformer-enhanced AV-HuBERT we proposed brings 7% on MISP and 6% CER reduction on CMLR, compared with the baseline AV-HuBERT system.
Xiaoming Ren, Chao Li, Shenjian Wang, Biao Li

Beijing OPPO Telecommunications Corp., Ltd., Beijing, China

Keywords: AV-HuBERT, AVSR, Conformer, Modality Fusion
## 1 Introduction
In recent years, automatic speech recognition (ASR) systems have seen considerable improvements with the help of innumerable neural networks and models [1], reaching or exceeding human performance in several scenarios, especially low-noise, near-field situations [2]. A considerable amount of literature has been published on end-to-end approaches [3][4][5]. Among those attempts, attention-based architectures have become prevailing and receive rave reviews, such as the conformer [6], which is able to learn both local context and long-range dependencies when modeling sequences.
Nevertheless, ASR performance degrades inevitably due to various external disturbances, considering the vast applicability range of speech recognition technology. However, lip movements, which are highly correlated with human speech, are not affected by acoustic noise. Benefiting from this correlated and noise-invariant visual information, audio-visual speech recognition (AVSR) systems enhance robustness in multiple scenarios. By capturing the delicate relationship between the audio and visual information, the treasure behind it can be exploited.
With the availability of well-designed end-to-end architectures, the help of a growing number of multimedia datasets [7][8] and the fusion of visual and audio modalities [9][10], AVSR has remarkably made significant progress in recent years. Audio-Visual Hidden Unit BERT (AV-HuBERT) [11], a self-supervised AVSR framework with a pre-training and fine-tuning stage, seems to bring AVSR performance to a new level.
In this paper, we present a mixed methodology to further boost the AV-HuBERT system's recognition performance. Firstly, by adopting the 80-dim filterbank feature, audio knowledge is more thoroughly and effectively captured. Secondly, a modified version of the ResNet encoder [12], which is deliberately pre-trained with an abundant number of multimedia resources, is employed to extract visual information. Then, we suggest a modality fusion method with a gating mechanism: audio and visual information are both adopted in the design of the fusion gate, balancing the audio knowledge going through the system. Finally, we investigate the usage of the conformer instead of the transformer, as mentioned above, which reduces the deletion error in speech recognition and thus helps our system evolve further. We also establish a Mandarin multimodal dataset containing 1000h of Mandarin audio and visual data. A Mandarin model based on AV-HuBERT is pre-trained using this dataset, which outperforms WeNet by 14% and 18% on MISP and CMLR after fine-tuning. Furthermore, we propose the enhanced AV-HuBERT, which boosts the baseline AV-HuBERT performance by a relative 7% and 6% on MISP and CMLR.
The rest of this paper is organized as follows. Section 2 reviews recent works in the AVSR domain. Section 3 profoundly presents our methodology. Section 4 introduces the dataset and how we conduct those experiments. The result and related analysis are presented in Section 5. Section 6 concludes and also discusses further work.
## 2 Related Work
**AV-HuBERT.** It has previously been observed that modern novel neural networks are craving hand-labeled data for training since they are fully supervised. Most AVSR systems were no exception. Encouragingly, [11][13] proposed a robust self-supervised AVSR framework AV-HuBERT, capturing modality information from human speech and lip movement at the same time and achieved state-of-the-art on AVSR benchmark dataset LRS3 [7]. Feature clustering and masked prediction are two impressive aspects of the pre-training stage. In this paper, our work is built on top of AV-HuBERT, continuing to evolve and grow to a more robust AVSR system.
**Conformer.** The attention-based transformer plays a critical role and has become prevailing. Nonetheless, its variant, the conformer, which can capture local context through convolution layers as well as long-term relationships, is even better. In this paper, we adopt a conformer encoder instead of the transformer in the pre-training stage, hoping to learn both the nuanced correlations and the local information.
Standard conformer block [6] is composed of four modules, including a feed-forward module, a self-attention module, a convolution module, and a second feed-forward module. The two feed-forward modules sandwich the multi-headed self-attention module and the convolution module.
Formally, for input \(\mathbf{x}_{i}\) to a conformer block \(i\), the output \(\mathbf{y}_{i}\) of the block is computed as below:
\[\widetilde{\mathbf{x}}_{i}=\text{LN}\left(\mathbf{x}_{i}+\frac{1}{2}\text{FFN}(\mathbf{x}_{i})\right)\tag{1}\]
\[\mathbf{x}_{i}^{\prime}=\text{LN}(\widetilde{\mathbf{x}}_{i}+\text{MHSA}(\widetilde{\mathbf{x}}_{i}))\tag{2}\]
\[\mathbf{x}_{i}^{\prime\prime}=\text{LN}(\mathbf{x}_{i}^{\prime}+\text{Conv}(\mathbf{x}_{i}^{\prime}))\tag{3}\]
\[\mathbf{y}_{i}=\text{LN}\left(\mathbf{x}_{i}^{\prime\prime}+\frac{1}{2}\text{FFN}(\mathbf{x}_{i}^{\prime\prime})\right)\tag{4}\]
where FFN refers to the feed-forward module, MHSA refers to the multi-head self-attention module, Conv refers to the convolution module and LN refers to the layer norm module.
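For concreteness, a minimal PyTorch-style sketch of one conformer block following equations (1)-(4) is given below; the hidden sizes, kernel size, and activation are illustrative assumptions and do not reproduce the exact configuration used in this work.

```python
import torch
import torch.nn as nn

class ConformerBlock(nn.Module):
    """Minimal sketch of eqs. (1)-(4): half-step FFN, MHSA, Conv, half-step FFN."""
    def __init__(self, d_model=768, n_heads=8, kernel_size=31, d_ff=3072):
        super().__init__()
        self.ffn1 = nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
        self.mhsa = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # depthwise convolution over the time axis stands in for the conformer conv module
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2, groups=d_model)
        self.ffn2 = nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
        self.ln = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, x):                                   # x: (batch, time, d_model)
        x = self.ln[0](x + 0.5 * self.ffn1(x))              # eq. (1)
        attn, _ = self.mhsa(x, x, x)
        x = self.ln[1](x + attn)                            # eq. (2)
        conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)
        x = self.ln[2](x + conv_out)                        # eq. (3)
        return self.ln[3](x + 0.5 * self.ffn2(x))           # eq. (4)
```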
## 3 The Proposed Method
### Audio Feature
As we repeat the AV-HuBERT research and reproduce the result on speech recognition, it is interesting to note that in the previous experiments, the audio feature employed in the pre-training process is 26-dim filterbanks. Empirically, a higher dimension gives a better representation of knowledge in several demanding and complex situations. Hence, we adopt 80-dim filterbanks as many end-to-end systems do. Concerning this adjustment, the performance sees a stable improvement. Additionally, when carrying out the research, features computed from a 15ms window outperform those from a 25ms window.
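As an illustration of this front-end choice, 80-dim filterbanks with a 15 ms window could be extracted, for instance, with the Kaldi-compatible routine shipped in torchaudio; the file name and frame shift below are assumptions, and the actual feature pipeline of the system is not detailed here.

```python
import torchaudio
from torchaudio.compliance import kaldi

waveform, sr = torchaudio.load("utterance.wav")   # hypothetical input file
fbank = kaldi.fbank(
    waveform,
    num_mel_bins=80,        # 80-dim filterbanks instead of 26
    frame_length=15.0,      # 15 ms analysis window instead of the usual 25 ms
    frame_shift=10.0,       # assumed frame shift
    sample_frequency=sr,
)
print(fbank.shape)          # (num_frames, 80)
```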
### Visual Feature Extractor
While the filterbank feature is popular for the audio stream, the visual information is extracted with a front-end 3D ResNet18, whose parameters are learned during the model training stage. This visual extractor seems to rely, more or less, on the quality and quantity of its dataset, which makes it vulnerable in certain cases. We employ the lightweight MobileNet-V2 [14] instead of ResNet.
### Gated Fusion
Inspired by [15], a mixed modality fusion methodology is introduced into our enhanced AV-HuBERT system. The paper above demonstrates a comparison between straight concatenation of modality features and fusion with a gating mechanism [16]. We believe simple concatenation may lead to unreliable results when the visual stream is of low quality or not synchronized with the audio. Hence, the audio and visual information are combined to control the audio information flow with the help of a GLU gate. The gated audio knowledge is then incorporated with its visual part as the input of the pre-trained encoder. We incorporate this scheme into the AV-HuBERT model. The structure of the fusion is shown in Fig. 1. The FusionNet output is \(\mathbf{m}_{t}\).
\[\mathbf{m}_{t} =(\text{concat}(\mathbf{v}_{t},\mathbf{a}_{t}))*\mathbf{U}+ \mathbf{a} \tag{5}\] \[\mathbf{h}_{t} =(\mathbf{a}_{t}*\mathbf{W}+\mathbf{b})\otimes\sigma(\mathbf{m}_{ t}*\mathbf{V}+\mathbf{c}) \tag{6}\]
where \(\mathbf{U}\in\mathbb{R}^{2D\times D}\),\(\mathbf{W}\in\mathbb{R}^{D\times D}\), \(\mathbf{V}\in\mathbb{R}^{D\times D}\), \(\{\mathbf{a},\mathbf{b},\mathbf{c}\}\in\mathbb{R}^{D}\) are the bias, \(\sigma\) refers to the sigmoid function, \(\otimes\) denotes the Hadamard product.
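A minimal sketch directly following equations (5)-(6) is shown below; the dimension D and the final concatenation of the gated audio with the visual stream are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of eqs. (5)-(6): a GLU-style gate modulating the audio stream."""
    def __init__(self, d=768):
        super().__init__()
        self.U = nn.Linear(2 * d, d)   # acts on concat(v_t, a_t); its bias plays the role of a
        self.W = nn.Linear(d, d)       # audio projection with bias b
        self.V = nn.Linear(d, d)       # gate projection with bias c

    def forward(self, a_t, v_t):       # a_t, v_t: (batch, time, d)
        m_t = self.U(torch.cat([v_t, a_t], dim=-1))       # eq. (5)
        h_t = self.W(a_t) * torch.sigmoid(self.V(m_t))    # eq. (6), Hadamard product
        # the gated audio is then combined with the visual features before the encoder
        return torch.cat([h_t, v_t], dim=-1)
```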
### Conformer Encoder
Focusing back on the encoder in the original AV-HuBERT architecture, one thing to note is that the transformer encoder employs convolution for its positional encoding. We find this incompatible with the conformer structure, and hence we employ the relative sinusoidal positional encoding scheme, which proved to be an important technique in Transformer-XL [17]. The relative positional encoding scheme supports the self-attention module on different input lengths, leading to robustness towards various utterance lengths. The effectiveness of this adjustment is demonstrated later. Moreover, blockformer [18] supplies the last output layer with the outputs of the previous blocks, which indicates a potential benefit. Surprisingly, we also find that the conformer system reduces the deletion error in speech recognition, compared with the transformer system.
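As a rough illustration of the relative sinusoidal positional encoding, the following sketch builds the embedding table for relative offsets from -(L-1) to L-1; how these embeddings enter the attention scores is omitted, and the implementation details are assumptions rather than the exact recipe used here.

```python
import torch

def relative_sinusoidal_encoding(length, d_model):
    """Sinusoidal embeddings for relative offsets -(length-1) .. (length-1)."""
    positions = torch.arange(-(length - 1), length, dtype=torch.float32)         # (2L-1,)
    inv_freq = 1.0 / (10000 ** (torch.arange(0, d_model, 2).float() / d_model))  # (d/2,)
    angles = positions[:, None] * inv_freq[None, :]                              # (2L-1, d/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)             # (2L-1, d)

pe = relative_sinusoidal_encoding(length=8, d_model=16)
print(pe.shape)  # torch.Size([15, 16])
```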
## 4 Experimental Setting
Our experiments are based on four datasets, including:
**LRS3**. The dataset consists of over 400 hours of video, extracted from 5594 TED and TEDx talks in English, downloaded from YouTube. The dataset is organized into three sets: pre-train, trainval and test. The first two overlap in terms of content but the last is completely independent. It is the largest publicly available labeled audio-visual speech recognition dataset.
**CSTS.** We collect a 1000h unsupervised Chinese audio-visual dataset containing 200k individuals, which will be released with the paper. Since it contains only one speaker at a time, we name this dataset Chinese Solo Talk Show (CSTS). The CSTS dataset is organized into three sets: pre-train, train-val, and test.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline source & raw & processed & \multirow{2}{*}{utt} & \multirow{2}{*}{spks} \\ & hours & hours & & \\ \hline CC Forum & 215 & 101 & 50670 & 791 \\ Topics in Focus & 243 & 69 & 33141 & 3601 \\ Yixi Lecture & 39 & 13 & 6043 & 91 \\ Du Talk-Show & 161 & 117 & 59017 & 694 \\ Legal Report & 1997 & 598 & 317154 & 14209 \\ Rock+Roast & 73 & 7 & 3451 & 69 \\ The News Broadcasting & 482 & 82 & 38023 & 672 \\ News Live Room & 298 & 69 & 33321 & 548 \\ \hline Total & 3508 & 1056 & 540820 & 20675 \\ \hline \end{tabular}
\end{table}
Table 1: Composition of the CSTS dataset.
Figure 1: Attention information visualization between different layers and heads, indicates the diversity of information, and complementarity.
The cropped face tracks are provided as mp4 files. The audio tracks are provided in single-channel 16-bit 16kHz format. Table 1 demonstrates the source, period, total sentence, and number of speakers. We collect speeches, interviews, and news programs from Chinese websites. After face detection, face clustering, face recognition, and VAD, the dataset is available and ready for use.
**CMLR.** It is a large Chinese Mandarin Lip Reading dataset (CMLR), designed to facilitate research on visual speech recognition, sometimes also referred to as automatic lip reading. More than 100,000 spoken sentences from 11 speakers of the national news program "News Broadcast" are included, up to an estimated total length of 61+h. More than 3,000 Chinese characters and 20,000 phrases appear in CMLR. This dataset includes train, dev, and test sets. Following regular data allocation, we use the train and dev set(up to 61h) for fine-tuning and the 17h test set for decoding.
**MISP.** This dataset is from the Multimodal Information Based Speech Processing Challenge 2021 (MISP), which contains 100+h of audio-visual data. The dataset is divided into near, middle, and far scenarios, recorded in 30 rooms with 248 participants. The near-field audio is recorded individually via high-fidelity hardware and the middle-scenario video is synchronized with the audio. Thus, we use near-scenario audio along with middle-range video, excluding far scenarios.
Generally speaking, our experiments are consistent with AV-HuBERT [11], including pre-training and fine-tuning stages. The pre-training stage learns feature representation through unsupervised audio and visual data, while fine-tuning uses labeled audio-visual pairs, according to the parameters of the labeled data.
### Pre-Training
**Pseudo labels.** We divide our experiments into the one-phase evaluation and the five-phase operation. We use 100 pseudo labels in one-phase evaluation, while the five-phase is the same as AV-HuBERT.
In the five-phase setting, we iteratively increase the number of pseudo labels across the phases: 100, 100, 500, 1000, and finally 2000. Note that in the first phase we only use MFCC-39 features from the speech frames, while in the other phases the transformer encoder's output from the previous phase is the input for feature clustering.
**Audio feature.** Filterbanks are used for feature extraction. We compare fbank26 with fbank80, and a 25 ms window length with 15 ms. Unless otherwise mentioned, we adopt fbank26+win25.
**Visual feature.** We experiment with two methods of extracting visual features: resnet refers to ResNet18; mobilenet refers to MobileNet-v2.
**Modality fusion.** In feature fusion experiments, we compare simple concatenation with the gated fusion method. In Table 3 the GLU refers to the gated way.
**Backbone.** Transformer and conformer backbones are compared in this paper. The transformer encoder shares the same structure as AV-HuBERT, while the conformer encoder uses relative position encoding and stacks 12 conformer blocks. In detail, the encoder embedding dimension is set to 768, the layer drop is 0.05, and the remaining settings are kept the same as the transformer. The total parameter count is 103M for the transformer and 183M for the conformer.
We conduct one-phase evaluations on LRS3 with 433h pre-training and 30h fine-tuning. The training stage is based on 8 A100 GPUs with max tokens of 1000. It takes 64h, containing 400k steps.
The comparison with the WeNet ASR on the Mandarin sets is based on the five-phase operation, while the enhanced AV-HuBERT ablation studies use just one phase. Our Mandarin pre-training models are all built upon the 1000h CSTS dataset and fine-tuned on 100h MISP and 61h CMLR afterward. Our model is trained on 4 A100 GPUs with max tokens 2500, i.e., one batch of data containing at most 100 seconds of audio per GPU. It takes around 32 hours to finish a phase, which corresponds to 400k steps. We do not use gradient accumulation; the update frequency is set to 1.
One thing to note is that, among all pre-training and fine-tuning stages, there is no extra noise added. Random noise is included in the decoding stage. The noise audio clips in the categories of "natural", "music" and "babble" are sampled from the MUSAN dataset [19].
### Supervised Fine-Tuning and Decoding
As for the fine-tuning stage, we use the attention-based sequence-to-sequence cross-entropy loss [20]. Two modalities are included in fine-tuning. The modeling unit here for LRS3 is unigram-based subword units [21]. The vocabulary size is 1000.
As for the Mandarin dataset, Chinese characters are employed as the modeling unit. Linguistic knowledge is learned by the seq2seq transformer decoder of its own accord. Parameters for Mandarin, as well as English datasets in the fine-tuning stage, are listed below.
No extra language model is added during inference. It takes 30k steps to fine-tune the six-layer transformer decoder where in the first 24k steps, parameters in the pre-trained model are frozen. In the remaining 6k steps, we unfreeze the parameters. Other settings are similar to AV-HuBERT.
Warm-up is set to 10k steps and the learning rate is 0.001, without gradient accumulation. The Adam [22] optimizer is used with \(\beta\) = (0.9, 0.98). We use 8 A100 GPUs in the fine-tuning process, with the max tokens set to 1000, which means a batch size of at most 40 seconds.
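For illustration only, the stated optimizer settings might be instantiated as follows; the placeholder module and the constant-after-warm-up schedule are assumptions, not the exact training recipe.

```python
import torch

model = torch.nn.Linear(768, 1000)        # placeholder module, for illustration only
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.98))

warmup_steps = 10_000
def lr_lambda(step):
    # linear warm-up to the base learning rate, then held constant (assumed schedule)
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```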
## 5 Experimental Results & Analysis
In Table 2, every boosting method is verified on LRS3, where 'audio-only' refers to the case when only the audio feature is employed for decoding, and 'audio-visual' means both audio and visual features are used at the same time. CWER refers to the WER in a clean environment and NWER is the WER with random noise added in the decoding stage. We find that, under the fbank80 and win15 conditions, WER reduces in both the clean and noisy cases. With GLU and mobilenet, the audio-only WER receives a reduction in both cases. However, the word error rate grows
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Audio-only**} & \multicolumn{2}{c|}{**Audio-visual**} \\ & **CWER** & **NWER** & **CWER** & **NWER** \\ \hline baseline & 23.8 & 87.52 & 15.88 & 46.97 \\ baseline+fbank80 & / & / & 15.24 & 46.26 \\ baseline+win15 & 28.65 & 82.74 & 15.66 & 44.43 \\ baseline+GLU & 21.07 & 71.42 & 16.31 & 49.59 \\ baseline+mobilenet & 21.84 & 69.25 & 16.36 & 48.15 \\ baseline+all & 19.81 & 68.70 & 14.34 & 43.95 \\ \hline \end{tabular}
\end{table}
Table 2: Ablation study of the enhanced AV-HuBERT on LRS3 with update freq 1 for first phase (WER:%). The baseline is AV-HuBERT. All is composition of fbank80, win15, GLU and mobilenet.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Audio-only**} & \multicolumn{2}{c|}{**Audio-visual**} \\ & **CWER** & **NWER** & **CWER** & **NWER** \\ \hline baseline & 17.59 & 82.04 & 12.56 & 44.39 \\ baseline+conformer & 15.08 & 59.54 & 13.09 & 39.25 \\ baseline+conformer+all & 14.93 & 82.18 & 11.66 & 37.09 \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study of the enhanced AV-HuBERT on LRS3 with update freq 4 for first phase (WER:%). The implication of baseline and all is the same as Table 2.
with audio-visual in clean or noisy experiments. We combine all the boosting methods and the result is shown in the last row. Clean WER drops up to 16.8% and 9.7% relatively with audio and audio-visual features respectively. Noisy WER drops relatively by 21.5% and 6.4% respectively.
In Table 3, we set the update frequency to 4, which leads to a 4-times growth in training epochs. Compared with Table 2, the baseline WER in the audio-only and audio-visual cases drops noticeably. At the same time, we find that by incorporating the conformer and all the other boosting methods, WER decreases by around 7-16%, except for NWER in the audio-only experiment.
In Table 4, we conduct a series of experiments on the transformer and conformer backbone groups, where 4 factors are examined for each backbone: filterbank dimension, frame length, visual extractor category, and audio-visual fusion type. Our baseline sets the filterbank to 26-dim and the window size to 25 ms. Only one factor is changed per line in each backbone experiment. For the reader's convenience, we present our results in order of increasing performance. The conclusions are:
1) A filterbank with a higher dimension is relatively better (80-dim vs 26-dim), with a 3.8% relative CER improvement. Increasing the number of filters when extracting audio features provides richer information and thus a better feature representation.

2) Decreasing the frame length boosts performance: the 15 ms window outperforms the 25 ms window by a 2.8% relative CER improvement. An unanticipated finding is that this outcome is contrary to the intuition that a longer frame length provides a better feature representation because more information lies in it.

3) Gated fusion purposefully controls the audio-visual information flow, and exceeds vanilla concatenation by 3%-7% in relative CER performance in both the transformer and conformer experiments.

4) MobileNet improves slightly, and we think the main reason is that MobileNet has 7M fewer parameters than ResNet18.
In Table 5, we compare the robustness of the conformer-enhanced AV-HuBERT and AV-HuBERT under various noise types with a 5dB signal-to-noise ratio (SNR) in the first phase. All means noise will be selected randomly from babble, music, and speech noise. The conformer-enhanced AV-HuBERT uses an 80-dim fbank, 15ms window size, gated fusion, and MobileNet. It can be seen that the conformer-enhanced AV-HuBERT outperforms AV-HuBERT no matter in clean or different noisy conditions. AV is better than A by adding visual modality. The experimental results show that our proposed conformer-enhanced AV-HuBERT does have a significant improvement over AV-HuBERT.
In Table 6, we compare the performance of the AV-HuBERT baseline model, the enhanced AV-HuBERT model, and the single audio modality-trained WeNet model. It can be seen that, due to extra modality in AV-HuBERT and 1000h data for pre-training, AV-HuBERT surpasses WeNet CER by relatively 14% and 18% on MISP and CMLR. Moreover, the proposed enhanced AV-HuBERT outperforms the baseline AV-HuBERT by 7% and 6%, which shows the effectiveness of our proposed boosting methods and the generalization of the CSTS dataset we established.
Looking into the CER results, performance on CMLR is admirably good, down to 2.82%, compared with that of MISP, which is 12.05%. The difference can be ascribed to the nature of the datasets. More specifically, CMLR contains clear news reports and high-quality audio, leading to better performance, while MISP is relatively more challenging, with Hefei-accented chatting data, occasional background noise, and blurry visual resources.
## 6 Conclusion & Future Work
This paper dives into the practice of multi-modality pre-training framework on Mandarin and English datasets. We establish a 1000h Mandarin multi-modality dataset, CSTS. With the help of CSTS, we verify that AV-HuBERT with one extra modality and pre-training stage can outperform WeNet with one modality. CER drops 14% and 18% relatively on the Mandarin dataset MISP and CMLR. Based on AV-HuBERT, we propose the conformer enhanced AV-HuBERT, which also surpasses the baseline by a relative 7% and 6% in CER. Further work will focus on:
**Attention-based AV alignment.** Attention-based approaches are used in [23] to automatically align audio with video. Referring to the methods in the paper above, self-attention will perform over audio and video respectively. Then, the matrix containing video information will play the role of key and value while the audio matrix will be treated as the query. Finally, they will learn the alignment through the cross-attention approach.
**Trainable audio convolution network.** A trainable audio convolution network has the potential to take the place of a filterbank to extract audio features. Experiments from google [24] also indicate a decent adjustment.
**AV-Confidence.**[25] suggests the usage of audio-visual confidence, besides current features from lip movement and speech. A decision fusion net combines all modality information, including confidence, which would help our AVSR system we believe.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Model** & **Backbone** & **Labeled** & **Unlabeled** & **CER** \\ \hline WeNet & conformer & 100hrs & - & 15.04 \\ AV-HuBERT & transformer & 100hrs & 1000hrs & 12.95 \\ enhanced AV-HuBERT & transformer & 100hrs & 10000hrs & 12.05 \\ \hline WeNet & conformer & 61hrs & - & 3.65 \\ AV-HuBERT & transformer & 61hrs & 1000hrs & 3 \\ enhanced AV-HuBERT & transformer & 61hrs & 1000hrs & 2.82 \\ \hline \end{tabular}
\end{table}
Table 6: Comparison between WeNet, AV-HuBERT, and enhanced AV-HuBERT. The last two models are pre-training for five phases and fine-tuning on MISP and CMLR (CER:%).
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Backbone** & **Method** & **CER** \\ \hline & fbank26+win25+concat+resnet & 18.93 \\ & **fbank80+win25+concat+resnet** & 18.20 \\ transformer & fbank80+win15+concat+resnet & 17.68 \\ & fbank80+win15+GLU+resnet & 17.12 \\ & fbank80+win15+GLU+**mobilenet** & 16.92 \\ \hline & fbank26+win25+concat+resnet & 16.12 \\ conformer & fbank80+win25+concat+resnet & 15.64 \\ & fbank80+win15+concat+resnet & 15.41 \\ & fbank80+win15+GLU+resnet & 14.25 \\ & fbank80+win15+GLU+**mobilenet** & 14.26 \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study of the enhanced AV-HuBERT pre-training on CSTS for the first phase and fine-tuning on MISP (CER:%).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Model** & **Mode** & \multicolumn{2}{c|}{**SNR-5**} & \multirow{2}{*}{**clean**} \\ \cline{2-4} & **All** & **Babble** & \\ \hline AV-HuBERT & A & 92.40 & 116.75 & 34.56 \\ AV-HuBERT & AV & 45.34 & 54.80 & 18.93 \\ conformer enhanced AV-HuBERT & A & 55.19 & 74.78 & 17.25 \\ conformer enhanced AV-HuBERT & AV & 30.48 & 38.40 & 14.26 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of models, which is pre-training on CSTS for the first phase and fine-tuning on MISP, with different noise type (CER:%). |
2307.16616 | Definition of the invariant and the relationship with the compounds
numbers. Generalisation of the Euler theorem | The purpose of this article is to introduce the concept of invariance and its
properties. These properties can be used to check the primality of a number.
Combining these properties with the Euler theorem, it is possible to generalize
this theorem for all the values of $a^{\varphi(m)}$ where $0 < a < m {\pmod
{m}}$ independently if a is co prime or not with m. As $a^{\varphi(m)+1} \equiv
a$ if $m = a \cdot b$ and $GCD(a, b) = 1$. As the following steps, a new
hypothesis is formulated regarding the substitution of the Totien function for
an equivalent function that explains the Carmichael numbers. Keywords: Prime
Numbers, Compound Numbers, Primality test, Euler theorem | Juan Hernandez-Toro | 2023-07-31T12:47:28Z | http://arxiv.org/abs/2307.16616v2 | Definition of the invariant and the relationship with the compounds numbers. Generalisation of the Euler theorem
###### Abstract
The purpose of this article is to introduce the concept of invariance and its properties. These properties can be used to check the primality of a number. Combining these properties with the Euler theorem, it is possible to generalize this theorem for all the values of \(a^{\varphi(m)}\) where \(0<a<m\pmod{m}\) independently if a is co-prime or not with m.
**Keywords:**Prime Numbers, Compound Numbers, Primality test, Euler theorem
## 1 Introduction
The invariant, denoted as I, is a number between 0 and m that satisfies the following condition: \(I^{2}\equiv I\pmod{m}\). This apparently simple property is deeply connected with compound numbers. The principal fact is that primes and prime powers do not have non-trivial invariant remainders, and for this reason they can be discriminated. Using this condition, the Euler theorem can be generalized. Applying the invariant properties to Euler's theorem allows its generalization in the following form: \(a^{\varphi(m)}\equiv I(m)\pmod{m}\), independently of whether a is co-prime with m or not.
## 2 Invariant and anti-invariant
### Definition. The invariant
For each combination of numbers \((a,b)\), \((a^{\prime},b^{\prime})\)... where \(m=a\cdot b\), \(m=a^{\prime}\cdot b^{\prime}\)... and \(GCD(a,b)=1\), \(GCD(a^{\prime},b^{\prime})=1\)... there are two numbers \(I_{1}\) and \(I_{2}\) between 0 and m such that \(I_{1}^{2}\equiv I_{1}\pmod{m}\) and \(I_{2}^{2}\equiv I_{2}\pmod{m}\). These numbers \(I_{1}\) and \(I_{2}\) are called invariants of \(m\).
### Proposition. Each invariants \(I_{1}\) and \(I_{2}\) are multiple of a or b
In order to accomplish \(I^{2}\equiv I\pmod{m}\), \(I_{1}\) must be multiple of a and \(I_{2}\) must be multiple of b.
#### 2.2.1 Demonstration:
In order to meet the previous criteria, if \(m=a\cdot b\), the I number can be expressed as \(I=\alpha a+x\) and \(I=\beta b+y\) Then:
\[I^{2}=\alpha a\beta b+\alpha ay+\beta bx+xy \tag{1}\]
and:
\[I^{2}\equiv\alpha ay+\beta bx+xy\pmod{m} \tag{2}\]
if \(\beta b=\alpha a+x-y\), \(\beta\cdot b\) can be substituted in (2) as:
\[I^{2}\equiv\alpha ay+\alpha ax+x^{2}\pmod{m} \tag{3}\]
There are two possible solutions to achieve \(I^{2}\equiv I\pmod{m}\) :
* \(x=0\) and \(y=1\) that is \(I_{1}=\alpha\cdot a\)
* \(x=1\) and \(y=0\) that is \(I_{2}=\beta\cdot b\)
#### 2.2.2 Example:
The number 15 has two combinations of multipliers: a = 1, b = 15, and (a'=3,b'=5). The invariants for (a=1,b=15) are (1,15), the invariants for (a'=3,b'=5) are (6,10) that is:
\[6^{2}\equiv 6\pmod{15}\text{ and }10^{2}\equiv 10\pmod{15} \tag{4}\]
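These invariants can also be enumerated by brute force; the following short script (not part of the original text, added only for illustration) reproduces this example and the larger one used later:

```python
def invariants(m):
    """All I with 1 <= I <= m and I*I congruent to I (mod m), following the paper's convention."""
    return [i for i in range(1, m + 1) if (i * i) % m == i % m]

print(invariants(15))   # [1, 6, 10, 15]
print(invariants(105))  # [1, 15, 21, 36, 70, 85, 91, 105]
```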
### Proposition. Invariant always exists
If \(m=a\cdot b\) and \(GCD(a,b)=1\), the invariants \(I_{1}\) and \(I_{2}\) always exist
#### 2.3.1 Demonstration:
If \(m=a\cdot b\) and \(GCD(a,b)=1\), there must exist two invariants \(I_{1}\) and \(I_{2}\). Due to 2.2, \(I_{1}=\alpha\cdot a\) with \(\alpha\cdot a\equiv 1\pmod{b}\). As a consequence of the Bezout theorem [1], if \(GCD(a,b)=1\) it is always possible to find a multiple of a that is congruent to 1 modulo b. For \(I_{2}\) it is possible to follow the same procedure.
### Definition. The anti-invariant
The Anti-invariant, denoted as A, is the number between 0 and m that accomplishes the following condition \(A^{2}\equiv m-A\pmod{m}\).
### Proposition. The existence of an Anti-invariant
If a number b accomplish \(b=m-I\pmod{m}\), b is an anti-invariant
#### 2.5.1 Demonstration:
If \(b=m-I\) then:
\[b^{2}=m^{2}-2mI+I^{2}\equiv I^{2}\equiv I=m-b\pmod{m} \tag{5}\]
#### 2.5.2 Conclusion:
If \(b=m-I\), then b is an anti-invariant A of m. If \(b=m-A\), then b is an invariant I of m.
#### 2.5.3 Example:
The number 15 has a total of 4 invariants: (1, 6, 10, 15). The anti-invariants corresponding to these invariants are (14, 9, 5, 0).
### Proposition. The existence of anti-invariant, invariant tuples
Each anti-invariant is followed by an invariant. This produces tuples of anti-invariant, invariant consecutive numbers.
#### 2.6.1 Demonstration:
In order to meet the previous criteria, the remainder of (I-1) is calculated.
\[(I-1)^{2}=I^{2}-2I+1 \tag{6}\]
but \(I^{2}\equiv I\pmod{m}\) for this reason:
\[(I-1)^{2}\equiv I-2I+1=-I+1\equiv m-(I-1)\pmod{m} \tag{7}\]
On the other hand:
\[(A+1)^{2}=A^{2}+2A+1 \tag{8}\]
but \(A^{2}\equiv m-A\pmod{m}\) then:
\[(A+1)^{2}\equiv m-A+2A+1=m+A+1\equiv A+1\pmod{m} \tag{9}\]
#### 2.6.2 Example:
The number 15 has four tuples of anti-invariant invariant numbers. (0,1), (5,6), (9,10) and (14,15)
### Proposition. The existence of trivial anti-invariant, invariant tuple
All the natural numbers m have two invariants and two anti-invariants. These two tuples are easily localized (0,1) and (m-1,m)
#### 2.7.1 Demonstration:
This is a particular case of 2.2. If \(a=1\) and \(b=m\), the two solutions are:

* \(I=1\)

* \(I=m\)
Likewise, using 2.6 the tuples are defined as (0,1) (m-1,m)
#### 2.7.2 Conclusion:
Concluding this section, all the numbers have 2 invariants, \(I=m\) and \(I=1\) and 2 anti-invariants \(A=m-1\) and \(A=0\) (mod m).
### Definition. The non-trivial anti-invariant, invariant tuple for odd numbers.
If \(m=a\cdot b\), m odd, \(a\neq 1\), \(b\neq 1\) and \(GCD(a,b)=1\), then these composite numbers have at least 2 additional anti-invariant, invariant tuples. Their localization varies with each number (see Footnotes 1 and 2).
Footnote 1: The even numbers have additional Trivial invariants
Footnote 2: The powers of prime numbers also have only 2 anti-invariant, invariant tuples, because any factorization \(m=a\cdot b\) with \(a,b\neq 1\) has \(GCD(a,b)\neq 1\).
### Proposition. The invariant in superior orders
If I is an invariant of m, then \(I^{s}\equiv I\pmod{m}\) for all values of s where s is positive and integer.
#### 2.9.1 Demonstration:
Suppose that is true for \(I^{s-1}\) that is \(I^{s-1}\equiv I\pmod{m}\) then the remainder of
\[I^{s}\equiv I^{s-1}\cdot I\equiv I\cdot I\equiv I\pmod{m} \tag{10}\]
## 3 Conclusions. The relationship between the compound numbers, the non-trivial invariant, the Euler theorem, and other hypotheses and considerations
### The relation between compound numbers and non-trivial invariants
The existence of a non-trivial invariant automatically indicates that the number is not prime.
#### 3.1.1 Demonstration:
The demonstration is almost trivial by 2.2 and 2.3. If it is possible to find a non-trivial invariant I of m, then by 2.2 one of the factors of the invariant, writing \(I=\alpha\cdot\beta\cdot\gamma\cdots\), is also a factor of m. By 2.3, if m is a compound number the invariant always exists.
### Using the invariant to test the primality
The search for invariants can be used to test primality and even to provide additional information related to the factorization. The main advantage is that it is not necessary to check whether the number is a power or triangular.

In addition to this, the algorithm can be distributed over different machines, can run in both directions, and multiple strategies can be used.
### Algorithm to test the primality using invariants remains
A very simple algorithm can be created to check primality, and even to identify whether the number is a power of a prime or a compound number, just by searching for invariants. This algorithm goes from bottom to top, but it could also be run in the opposite direction. The procedure is as follows3:

Footnote 3: it is not optimized
1. Create a counter C1. This counter goes from 2 to (m-1)/2 and is increased by 1 at each iteration.

2. Create a second counter C2. This counter starts at 4 and is increased by 2*C1-1 at each iteration.

3. If C2 becomes bigger than m in one iteration, then set C2=C2-m.

4. If C2=m, then the program stops and m is a power.

5. If C2=C1, then the program stops and m is a compound number.

6. Finally, if C1 reaches (m-1)/2 and the program has not stopped before, the program stops and m is a prime number.
#### 3.3.1 Factorization
The result provides an anti-invariant, invariant tuple (c,d). Each of them is a multiple of one factor, and for this reason factorizing c or d provides one of the factors. Another strategy is to multiply \(c\cdot d=f\); then \(f/m=g\), and the factorization of g gives as a result all the factors independently of c and d.
#### 3.3.2 Code
The following code implements the previous algorithm. The code is not optimized, but it gives an idea of how easy it is.
```python
string = input('please insert an odd number: ')
num = int(string)

# initialize control variable
control = int((num - 1) / 2)
prime = True
# initialize the other variables
C1 = 2
C2 = 4
# control loop
while control >= C1:
    if C2 > num:
        C2 = C2 - num
    elif C2 == num:
        print(f'the number is raised to a power: {C1}')
        prime = False
        break
    if C1 == C2:
        print(f'the number is not a prime: {C1}')
        prime = False
        break
    C1 = C1 + 1
    C2 = C2 + 2 * C1 - 1
if prime:
    print('the number is a prime')
```
### Generalization of the Euler theorem
The Euler theorem can be made more general. Euler's theorem states that, if m and a are co-prime positive integers and \(\varphi(m)\) is Euler's totient function, then a raised to the power \(\varphi(m)\) is congruent to 1 modulo m. That is:
\[a^{\varphi(m)}\equiv 1\pmod{m}. \tag{11}\]
For 2.2 the theorem can be generalized as:
If \(\varphi(m)\) is Euler's totient function, the number a raised to the power \(\varphi(m)\) is congruent to the trivial invariant modulo m if m and a are co-prime. Then:
\[a^{\varphi(m)}\equiv I_{trivial}=1\pmod{m}. \tag{12}\]
If m and a are not co-prime, then:
\[a^{\varphi(m)}\equiv I_{nottrivial}\pmod{m}. \tag{13}\]
#### 3.4.1 Demonstration
Suppose \(m=a\cdot b\), \(GCD(a,b)=1\) and the invariant is \(I=a^{s}\cdot\beta\) where s is an integer number and \(\beta\) is co prime with m and a. Then:
\[I^{\varphi(m)}=a^{s\cdot\varphi(m)}\cdot\beta^{\varphi(m)}\pmod{m}. \tag{14}\]
But \(\beta^{\varphi(m)}=1\) by the Euler theorem itself. Substituting:
\[I^{\varphi(m)}\equiv a^{s\cdot\varphi(m)}\pmod{m}. \tag{15}\]
For 2.9\(I^{s\cdot\varphi(m)}\equiv I^{\varphi(m)}\equiv I\pmod{m}\). Then \(a^{\varphi(m)}\equiv I\pmod{m}\).
It is trivial to demonstrate that all the multiples of a, \(\delta=a\cdot c\) where c and m are co-prime, satisfy
\[\delta^{\varphi(m)}\equiv I\pmod{m}. \tag{16}\]
#### 3.4.2 Example:
Euler's totient number of 105 is \(\varphi(105)=48\). It is easy to check (also numerically, as in the sketch after this list) that:
1. The multiples of 3, \(\beta=3\cdot c\) where c is co-prime with 105: \(\beta^{48}\equiv 36\pmod{105}\)

2. The multiples of 5, \(\beta=5\cdot c\) where c is co-prime with 105: \(\beta^{48}\equiv 85\pmod{105}\)

3. The multiples of 7, \(\beta=7\cdot c\) where c is co-prime with 105: \(\beta^{48}\equiv 91\pmod{105}\)

4. The multiples of 15, \(\beta=15\cdot c\) where c is co-prime with 105: \(\beta^{48}\equiv 15\equiv 36\cdot 85\pmod{105}\)

5. The multiples of 21, \(\beta=21\cdot c\) where c is co-prime with 105: \(\beta^{48}\equiv 21\equiv 36\cdot 91\pmod{105}\)

6. The multiples of 35, \(\beta=35\cdot c\) where c is co-prime with 105: \(\beta^{48}\equiv 70\equiv 85\cdot 91\pmod{105}\)
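A few lines of plain Python (illustrative only, not part of the original text) confirm these residues:

```python
from math import gcd

m, phi = 105, 48
for base in (3, 5, 7, 15, 21, 35):
    # all multiples base*c with c co-prime to 105 share a single residue
    residues = {pow(base * c, phi, m) for c in range(1, m) if gcd(c, m) == 1}
    print(base, residues)   # expected: {36}, {85}, {91}, {15}, {21}, {70}
```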
### The invariant, the anti-invariant and the \(a^{s}\) cyclic group
If \(m=a\cdot b\) the powers of a \(a^{s}\) form a cyclic group. I is equivalent to 1 in the group, and A is equivalent to -1.
#### 3.5.1 Demonstration:
If \(I_{1}=a\cdot\alpha\) then \(I_{1}\cdot a\equiv(A_{1}+1)\cdot a\equiv a\) because the anti invariant is a multiple of b.
If \(I_{2}=b\cdot\beta\) then \(A_{2}\cdot a\equiv(I_{2}-1)\cdot a\equiv-a\equiv m-a\) because the invariant is multiple of b.
Also, it is trivial to demonstrate that each \(a^{s}\equiv a\cdot\alpha\pmod{m}\) where s is a different rearrangement of the integer numbers between 0 and m-2 and \(\alpha\) is an integer number between 1 and m-1.
#### 3.5.2 Example
In the following table it is possible to find the multiplication table of the subset of \(\mathbb{Z}_{35}\) formed by the multiples of 5:
\begin{tabular}{|l|l|l|l|l|l|l|} \hline & **5** & **10** & **15** & **20** & **25** & **30** \\ \hline
**5** & 25 & 15 & 5 & 30 & 20 & 10 \\ \hline
**10** & 15 & 30 & 10 & 25 & 5 & 20 \\ \hline
**15** & 5 & 10 & 15 & 20 & 25 & 30 \\ \hline
**20** & 30 & 25 & 20 & 15 & 10 & 5 \\ \hline
**25** & 20 & 5 & 25 & 10 & 30 & 15 \\ \hline
**30** & 10 & 20 & 30 & 5 & 15 & 25 \\ \hline \end{tabular} Then the invariant I=15 acts as the identity, the anti-invariant is A=20, the inverse of 5 is 10, the inverse of 25 is 30, and 20 is its own inverse (a quick numerical check follows below).
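The roles of I and A in this table can be checked numerically (plain Python, illustrative only):

```python
m = 35
elems = [5, 10, 15, 20, 25, 30]
table = {(x, y): (x * y) % m for x in elems for y in elems}

identity = [e for e in elems if all(table[(e, x)] == x for x in elems)]
print(identity)               # [15]  -> the invariant I behaves as 1
print(table[(20, 5)], m - 5)  # 30 30 -> the anti-invariant A behaves as -1
```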
### The Carmichael number hypothesis
A hypothesis that could be interesting is the fact that the totient function \(\varphi(m)\) is not the minimum function that achieves \(a^{\varphi(m)}\equiv I_{trivial}=1\pmod{m}\), and can be substituted by \(\Omega(m)=lcm((a-1)a^{\alpha-1},(b-1)b^{\beta-1},(c-1)c^{\gamma-1},...)\) where \(m=a^{\alpha}b^{\beta}c^{\gamma}...\). If the hypothesis is true, then the Carmichael numbers are numbers where p-1 is a multiple of \(\Omega(p)\). As an example, the \(\varphi(m)\) of the first Carmichael number is \(\varphi(561)=320\) because \(561=3\cdot 11\cdot 17\). \(560/320\) is not an integer, and consequently the Euler theorem cannot explain the Carmichael number. On the other side, its \(\Omega\) value is lcm(2, 10, 16) = 80. It is easy to see that 560/80=7; then for all n relatively prime to 561, \(n^{560}\equiv 1\pmod{561}\). In general, all the Carmichael numbers checked satisfy \(\frac{p-1}{\Omega(p)}\equiv integer\) |
2309.04966 | The Quasi-Newton Method for the Composite Multiobjective Optimization
Problems | In this paper, we introduce several new quasi-Newton methods for the
composite multiobjective optimization problems (in short, CMOP) with Armijo
line search. These multiobjective versions of quasi-Newton methods include BFGS
quasi-Newnon method, self-scaling BFGS quasi-Newnon method, and Huang BFGS
quasi-Newnon method. Under some suitable conditions, we show that each
accumulation point of the sequence generated by these algorithms, if exists, is
both a Pareto stationary point and a Pareto optimal point of (CMOP). | Jian-Wen Peng, Jen-Chih Yao | 2023-09-10T08:56:54Z | http://arxiv.org/abs/2309.04966v1 | # The Quasi-Newton Method for the Composite Multiobjective Optimization Problems 1
###### Abstract
In this paper, we introduce several new quasi-Newton methods for the composite multiobjective optimization problems (in short, CMOP) with Armijo line search. These multiobjective versions of quasi-Newton methods include the BFGS quasi-Newton method, the self-scaling BFGS quasi-Newton method, and the Huang BFGS quasi-Newton method. Under some suitable conditions, we show that each accumulation point of the sequence generated by these algorithms, if it exists, is both a Pareto stationary point and a Pareto optimal point of (CMOP).
**Keywords** Composite multiobjective optimization problem; quasi-Newton method; Pareto stationarity; Convergence analysis
**Mathematical Subject Classification** 90C29, 90C53, 49M15
## 1 Introduction
In this work, let us consider the following composite multiobjective optimization problems (in short, CMOP):
\[\begin{array}{ll}\min&F(x)\\ s.t.&x\in\mathbb{R}^{n},\end{array} \tag{1}\]
where \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a vector-valued function with \(F:=(f_{1},..,f_{m})^{T}\) and \(T\) denotes transpose. We assume that each \(f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is defined by
\[f_{i}(x):=g_{i}(x)+h_{i}(x),\ \ i=1,...,m, \tag{2}\]
where \(g_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a twice continuously differentiable strongly convex function and \(h_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex but not necessarily differentiable. It is worth noting that if \(h_{i}(x)\equiv 0\) for all \(x\in\mathbb{R}^{n}\) and \(i=1,2,...,m\), then the above (CMOP) (i.e., (1) with (2)) reduces to the multiobjective optimization problems studied in [4, 6, 8, 11, 12, 13, 19].
The scalarization approach is one of the most popular strategies to solve the multiobjective optimization problem; it transforms the multiobjective optimization problem into one or several parameterized single-objective problems (see [9, 10, 17] and the references therein). In general, the converted problem and the primal multiobjective optimization problem enjoy the same optimal solutions under certain conditions. Nevertheless, Fliege et al. [11] pointed out that the parameters in the scalarization method are not known in advance and the selection of parameters may result in unbounded scalar problems even if the original multiobjective optimization problem has solutions. In order to cope with these limitations, descent methods for multiobjective optimization problems have attracted wide attention in the optimization field (see [13]). There are many kinds of descent methods for multiobjective optimization problems, for example, the steepest descent method [12], the projected gradient method [4], the proximal point method [6], Newton's method [11], the quasi-Newton method [15, 19, 20], and the subgradient method [8]. In recent years, some authors introduced and researched descent methods for the composite multiobjective optimization problem (1.1), for example, the proximal gradient method [22], Newton's method [1], proximal Newton methods [23], and proximal quasi-Newton methods [18]. Inspired by the above ideas, the main purpose of this paper is to introduce new quasi-Newton methods for (CMOP) in the case of an Armijo line search.
The main contents of this paper are as follows: In Section 2, we give some notation and some concepts about Pareto optimality and Pareto stationarity. In Section 3, we propose the BFGS quasi-Newton method, the self-scaling BFGS quasi-Newton method and the Huang BFGS quasi-Newton method for (CMOP) with the Armijo line search. We prove the global convergence of the proposed algorithms in Section 4. In Section 5, some numerical experiments are also carried out to verify the effectiveness of the proposed quasi-Newton methods and to show that the multiobjective version of the Huang BFGS proximal quasi-Newton method is the most effective among the gradient method, the Newton method and the proposed quasi-Newton methods for (CMOP) with an Armijo line search.
## 2 Preliminaries
Throughout this paper, \(\mathbb{N}\) denotes the set of positive integers and \(\mathbb{R}^{n}\) denotes the \(n\)-dimensional Euclidean space. The Euclidean norm in \(\mathbb{R}^{n}\) will be denoted by \(\left\|\cdot\right\|\). For two vectors \(u\) and \(v\) in \(\mathbb{R}^{m}\), by \(u\leq v\), we mean that \(u_{i}\leq v_{i}\) for all \(i=1,...,m\); by \(u<v\), we mean that \(u_{i}<v_{i}\) for all \(i=1,...,m\).
We recall that the function \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is strongly convex or uniformly convex if there exists \(\eta>0\) such that
\[g(\lambda x+(1-\lambda)y)\leq\lambda g(x)+(1-\lambda)g(y)-\frac{1}{2}\lambda(1 -\lambda)\eta\left\|x-y\right\|^{2},\]
for any \(x,y\in\mathbb{R}^{n}\) and \(\lambda\in[0,1]\) (see [16]). It is well known that if \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is twice continuously differentiable, then \(g\) is strongly convex if and only if there exists \(\eta>0\) such that \(\nabla^{2}g(x)\succeq\eta I\) for any \(x\in\mathbb{R}^{n}\), where \(\nabla^{2}g(x)\) denotes the Hessian matrix of \(g\) at \(x\). Hence, if \(g\) is strongly convex, then the Hessian matrix \(\nabla^{2}g(x)\) is positive definite for any \(x\in\mathbb{R}^{n}\). It is clear that the strong convexity of \(g\) implies its strict convexity and usual convexity.
Let \(f\):\(\mathbb{R}^{n}\rightarrow\mathbb{R}\cup\{+\infty\}\), and let \(x\in dom(f):=\{x\in\mathbb{R}^{n}:f(x)<+\infty\}\). Then the directional derivative of \(f\) at \(x\) in the direction \(d\in\mathbb{R}^{n}\) is defined to be the limit
\[f^{\prime}(x;d)=\underset{\alpha\to 0^{+}}{lim}\,\frac{f(x+\alpha d)-f(x) }{\alpha},\]
if it exists (see [21]). Clearly, if \(f\) is differentiable at \(x\), then \(f^{\prime}(x;d)=\nabla f(x)^{T}d\).
**Definition 2.1**[12, 13] Recall that \(x^{*}\in\mathbb{R}^{n}\) is an efficient point or a Pareto optimum of (CMOP), if there is no \(x\in\mathbb{R}^{n}\) such that \(F(x)\leq F(x^{*})\) and \(F(x)\neq F(x^{*})\). The set of all Pareto optimal values is called Pareto frontier. Likewise, \(x^{*}\in\mathbb{R}^{n}\) is a weakly efficient point or a weakly Pareto optimum of (CMOP), if there is no \(x\in\mathbb{R}^{n}\) such that \(F(x)<F(x^{*})\).
It's well known that the efficient point of (CMOP) is also a weakly efficient point of (CMOP), and the converse is not true in general.
**Definition 2.2**[22] We say that \(\bar{x}\in\mathbb{R}^{n}\) is Pareto stationary (or critical) of (CMOP), if and only if
\[\max_{i=1,...,m}f^{\prime}_{i}(\bar{x};d)\geq 0\;\;\mbox{for all}\;\;d\in\mathbb{R}^{n}.\]
It is worth noting that Definition 2.2 generalizes the corresponding ones in [12].
We recall the following important result about the relationship of the weakly efficient point (efficient point) and the Pareto stationary point of (CMOP):
**Lemma 2.1**[11] (1) If \(x\in R^{n}\) is a weakly efficient point of (CMOP), then \(x\) is Pareto stationary.
(2) Let every component \(f_{i}\) of \(F\) be convex. If \(x\in R^{n}\) is a Pareto stationary point of (CMOP), then \(x\) is a weakly efficient point of (CMOP).
(3) Let every component \(f_{i}\) of \(F\) be strictly convex. If \(x\in R^{n}\) is a Pareto stationary point of (CMOP), then \(x\) is an efficient point of (CMOP).
## 3 Quasi-Newton methods for (CMOP)
In this section, we consider a new quasi-Newton method for (CMOP) with an Armijo line search. Prior to that, we need the following result, which was shown in [1].
**Lemma 3.1** Suppose that \(0\in Co\underset{j\in\{1,...,m\}}{\cup}\partial f_{j}(x^{*})\) for some \(x^{*}\in\mathbb{R}^{n}\); then \(x^{*}\) is a critical point of (CMOP).
For \(i=1,2,...,m\), let \(\nabla g_{i}(x)\) denote the gradient of \(g_{i}\) at \(x\), and let \(B_{i}(x)\) be an approximation of \(\nabla^{2}g_{i}(x)\) which satisfies the following assumption:
**Condition 3.1** For any fixed \(x\in\mathbb{R}^{n}\) and for each \(i\in\{1,2,...,m\}\), there exists a constant \(\sigma_{i}>0\) such that

\[z^{T}B_{i}(x)z\geq\sigma_{i}\left\|z\right\|^{2},\forall z\in\mathbb{R}^{n}.\]
**Remark 3.1** (i) Condition 3.1 implies that for any \(x\in\mathbb{R}^{n}\) and for each \(i\in\{1,2,...,m\}\), there exists a constant \(\sigma\) such that
\[z^{T}B_{i}(x)z\geq\sigma\left\|z\right\|^{2},\forall z\in\mathbb{R}^{n},\]
where \(\sigma=\min\limits_{i=1,...,m}\sigma_{i}\).
* If \(B_{i}(x)=\nabla^{2}g_{i}(x)\), then Condition 3.1 holds true automatically because the strong convexity of \(g_{i}\).
Now, We define the function \(\theta(.,.):\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) by
\[\theta(x,d):=\max\limits_{i=1,...,m}\theta_{i}(x,d), \tag{3}\]
where \(\theta_{i}(x,d)=\nabla g_{i}(x)^{T}d+\frac{1}{2}d^{T}B_{i}(x)d+h_{i}(x+d)-h_{ i}(x)\).
It is easy to see that for any fixed \(x\), \(\theta(x,d)\) is continuous in \(d\), since \(\theta_{i}(x,d)\) is continuous in \(d\) for each \(i\in\{1,2,...,m\}\). From Condition 3.1 and the convexity of \(h_{i}\), we know that for any fixed \(x\) and for each \(i\in\{1,2,...,m\}\), \(\theta_{i}(x,d)\) is \(\sigma_{i}\)-strongly convex in \(d\). Therefore, for any fixed \(x\), \(\theta(x,d)\) is \(\sigma\)-strongly convex in \(d\) and \(\theta(x,\mathbf{0})=0\), where \(\sigma=\min\limits_{i=1,...,m}\sigma_{i}\). We define the quasi-Newton direction at iteration \(k\) as \(d^{k}_{QN}=d(x^{k})\), where
\[d(x):=\underset{d\in\mathbb{R}^{n}}{argmin}\,\theta(x,d). \tag{4}\]
For any fixed \(x\), we can rewrite problem (4) as the following subproblem \(\Phi(x)\), which is to find a suitable descent direction of the (CMOP):
\[\Phi(x):\min\limits_{d\in\mathbb{R}^{n}}\theta(x,d). \tag{5}\]
**Remark 3.2**: (i) For fixed \(x\in\mathbb{R}^{n}\), it follows from the strongly convexity of \(\theta(x,d)\) in \(d\) that (5) (i.e., \(\Phi(x)\)) has a unique solution \(d(x)\). Moreover, the optimal value of \(\Phi(x)\) is denoted by \(\alpha(x)\), i.e.,
\[\alpha(x)=\min_{d\in\mathbb{R}^{n}}\theta(x,d)=\theta(x,d(x)). \tag{6}\]
(ii) For fixed point \(x\in\mathbb{R}^{n}\), because of \(\theta(x,\mathbf{0})=0\), we have \(\theta(x,d(x))\leq 0\).
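For intuition, the subproblem \(\Phi(x)\) can be solved numerically on a small instance. The sketch below assumes the cvxpy package, \(m=2\) quadratic smooth parts \(g_i\) with \(B_i=\nabla^{2}g_i\), and \(\ell_1\) terms \(h_i\); it is purely illustrative and not part of the method's description.

```python
import numpy as np
import cvxpy as cp

n = 3
x = np.array([1.0, -2.0, 0.5])
A1 = np.diag([2.0, 1.0, 3.0])   # g_1(y) = 0.5*y^T A1 y, so grad g_1(x) = A1 x and B_1 = A1
A2 = np.diag([1.0, 4.0, 2.0])   # g_2 analogous
lam = 0.1                       # nonsmooth parts h_i(y) = lam * ||y||_1

d = cp.Variable(n)

def theta_i(A):
    grad = A @ x                # gradient of g_i at x
    return (cp.sum(cp.multiply(grad, d)) + 0.5 * cp.quad_form(d, A)
            + lam * cp.norm(x + d, 1) - lam * np.linalg.norm(x, 1))

prob = cp.Problem(cp.Minimize(cp.maximum(theta_i(A1), theta_i(A2))))
prob.solve()
print("d(x) =", d.value, " alpha(x) =", prob.value)   # alpha(x) <= 0 always
```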
**Remark 3.3** (i) \(\theta(x,d)\) is denoted by \(\theta_{x}(d)\) in [18] and \(\theta(x,d)\) is exactly the \(\varphi_{\omega,x}(d)\) in [18] with \(\omega=0\).
(ii) If for \(i=1,2,...,m\), \(B_{i}(x)\) are replaced by \(\nabla^{2}g_{i}(x)\), then \(\theta(x,d)\) reduces to \(Q(x,d)\) in [1] and \(d^{k}_{QN}=d(x^{k})\) defined by (5) reduces to exactly the Newton direction at an iteration \(k\) in [1] which is defined as \(d^{k}_{N}=:\underset{d\in\mathbb{R}^{n}}{argmin}\,Q(x^{k},d)\).
We recall an important property of \(\theta(x,.)\) as follows:
**Lemma 3.2 ( see [18])** For fixed point \(x\in\mathbb{R}^{n}\) and for all \(d\in\mathbb{R}^{n}\), the following equality holds:
\[\theta^{\prime}((x,\mathbf{0});d)=\max_{i=1,...,m}{f^{\prime}}_{i}(x;d).\]
Denote \(I(x,d)=\{j\in\{1,2,...,m\}:\theta(x,d)=\theta_{j}(x,d)\}\). Since \(d(x)=\underset{d\in\mathbb{R}^{n}}{argmin}\,\theta(x,d)\) and \(\alpha(x)=\theta(x,d(x))\leq\theta(x,\mathbf{0})=0\), for every \(x\in\mathbb{R}^{n}\), \(d(x)\) is a solution of \(\Phi(x)\). Therefore,
\[\mathbf{0}\in\partial_{d}\theta(x,d(x)).\]
It follows from Corollary 3.5 in [3] that there exist \(w\in\mathbb{R}^{|I(x,d(x))|}_{+}\), \(\xi_{j}\in\partial_{d}h_{j}(x+d(x)),j\in I(x,d(x))\) such that the following conditions hold:
\[\sum_{j\in I(x,d(x))}w_{j}=1. \tag{7}\]
\[\sum_{j\in I(x,d(x))}w_{j}(\nabla g_{j}(x)+B_{j}(x)d(x)+\xi_{j})=0. \tag{8}\]
Let \(\Xi_{m}:=\{1,2,...,m\}\). Substituting \(w_{j}=0\) and \(\xi_{j}\in\partial_{d}h_{j}(x+d(x))\) for all \(j\notin I(x,d(x))\), we can obtain
\[\sum_{j=1}^{m}w_{j}=1. \tag{9}\]
\[\sum_{j=1}^{m}w_{j}(\nabla g_{j}(x)+B_{j}(x)d(x)+\xi_{j})=0. \tag{10}\]
\[w_{j}\geq 0,\;w_{j}(\nabla g_{j}(x)^{T}d(x)+\frac{1}{2}d(x)^{T}B_{j}(x)d(x)+h_{j}(x+d (x))-h_{j}(x)-\alpha(x))=0,j\in\Xi_{m}. \tag{11}\]
\[\nabla g_{j}(x)^{T}d(x)+\frac{1}{2}d(x)^{T}B_{j}(x)d(x)+h_{j}(x+d(x))-h_{j}(x) \leq\alpha(x),j\in\Xi_{m}. \tag{12}\]
Therefore, we have the following result:
**Lemma 3.3** If \(d(x)\) is a solution of \(\Phi(x)\) and \(\alpha(x)=\theta(x,d(x))\), then
(i) there exist \(w\in\mathbb{R}_{+}^{|I(x,d(x))|}\), \(\xi_{j}\in\partial_{d}h_{j}(x+d(x)),j\in I(x,d(x))\) such that \((d(x),\alpha(x),w)\) satisfies (7)-(8).
(ii) there exists \(w\in\mathbb{R}_{+}^{m}\) such that \((d(x),\alpha(x),w)\) satisfies (9)-(12).
The following lemma characterizes the Pareto stationarity of (CMOP) in terms of \(d(\cdot)\).
**Lemma 3.4** Suppose that Condition 3.1 holds true, \(d(x)\) and \(\alpha(x)\) are defined in (4) and (6), respectively. Then, the following statements hold true:
(1) If \(x\) is a Pareto stationary point of (CMOP), then \(d(x)=0\) and \(\alpha(x)=0.\) Conversely, if \(d(x)=0\) and \(\alpha(x)=0\), then \(x\) is a Pareto stationary point of (CMOP).
(2) If \(x\) is not a Pareto stationary point of (CMOP), then \(d(x)\neq 0\) and \(\alpha(x)<0\). Conversely, if \(d(x)\neq 0\) and \(\alpha(x)<0\), then \(x\) is not a Pareto stationary point of (CMOP).
_Proof_. (1) Let \(x\) be Pareto stationary of (CMOP). We will prove that \(d(x)=0\) and \(\alpha(x)=0\). On the contrary, we assume, that \(d(x)\neq 0\) or \(\alpha(x)<0.\) It follows from Remark 3.2 that \(d(x)\neq 0\) if and only if \(\alpha(x)<0\), which means that \(d(x)\neq 0\) and \(\alpha(x)<0.\) It follows from (12) in Lemma 3.3(ii) and the positiveness of \(B_{j}\) that
\[\nabla g_{j}(x)^{T}d(x)+h_{j}(x+d(x))-h_{j}(x)\leq\alpha(x)-\frac{1}{2}d(x)^{ T}B_{j}(x)d(x)<0,j\in\Xi_{m}. \tag{13}\]
It follows from the convexity of \(h_{j}\) that for any \(\lambda\in(0,1)\) that
\[h_{j}(x+\lambda d(x))-h_{j}(x)\leq \lambda(h_{j}(x+d(x))+(1-\lambda)h_{j}(x)-h_{j}(x)\] \[= \lambda[h_{j}(x+d(x))-h_{j}(x)],j\in\Xi_{m}. \tag{14}\]
From (13) and (14), we obtain that
\[\lambda\nabla g_{j}(x)^{T}d(x)+h_{j}(x+\lambda d(x))-h_{j}(x)<0,j\in\Xi_{m},\]
which implies that
\[\frac{1}{\lambda}[\lambda\nabla g_{j}(x)^{T}d(x)+h_{j}(x+\lambda d(x))-h_{j}( x)]<0,j\in\Xi_{m}.\]
Taking limit \(\lambda\to 0^{+}\) in the above inequality we have \(f_{j}^{\prime}(x;d(x))<0\) for each \(j=1,2,...,m\). Hence, \(x\) is not a critical point of (CMOP), which is a contradiction.
We now prove the converse. We assume that \(d(x)={\bf 0}\) and \(\alpha(x)=0\) hold true. It follows from Lemma 3.3(i) and \(I(x,{\bf 0})=\{1,2,...,m\}=\Xi_{m}\) that there exist \(w\in\mathbb{R}_{+}^{m}\), \(\xi_{j}\in\partial_{d}h_{j}(x+d(x)),j\in\Xi_{m}\) such that \((d(x),\alpha(x),w)\) satisfies
\[\sum_{j=1}^{m}w_{j}=1,\ \ \mbox{and}\ \ \sum_{j=1}^{m}w_{j}(\nabla g_{j}(x)+\xi_{j })=0,\]
which implies that
\[{\bf 0}\in Co\mathop{\cup}\limits_{j\in\Xi_{m}}\partial f_{j}(x).\]
Therefore, it follows from Lemma 3.1 that \(x\) is a critical point of (CMOP).
(2) This statement is equivalent to statement (1).
**Lemma 3.5** Let \(d(x)\) and \(\alpha(x)\) be defined in (4) and (6), respectively. Then, the mappings \(d(\cdot)\) and \(\alpha(\cdot)\) are continuous.
_Proof._ Clearly, the following function
\[\theta(x,d):=\max_{i=1,...,m}\theta_{i}(x,d)=\max_{i=1,...,m}\{\nabla g_{i}(x )^{T}d+\frac{1}{2}d^{T}B_{i}(x)d+h_{i}(x+d)-h_{i}(x)\}\]
is continuous with respect to \(x\) and \(d\). Therefore, it follows from [2, Proposition 23] that the optimal value function \(\alpha(\cdot)\) is also continuous. Furthermore, since the optimal solution of \(\Phi(x)\) is unique for every \(x\), it follows from [14, Corollary 8.1] that the solution mapping \(d(\cdot)\) is continuous.
From Lemma 3.5, it is easy to see that the following result holds true.
**Corollary 3.1** Let \(x^{k}\in\mathbb{R}^{n}\) and let \(d(x^{k})\) be the solution of \(\Phi(x^{k})\).
(i) Suppose that \(\{x^{k}\}\) converges to \(x^{*}\) and \(d(x^{k})\) converges to \(d^{*}\); then \(d^{*}=d(x^{*})\).
(ii) Suppose that \(\{x^{k}\}\) converges to \(x^{*}\) and \(\alpha(x^{k})\) converges to \(\alpha^{*}\); then \(\alpha^{*}=\alpha(x^{*})\).
**Remark 3.4** (i) From Lemma 3.4, we know that \(x\) is a Pareto stationary point of (CMOP) if and only if \(d(x)=0\), which extends and improves Lemma 3.2 in [1], since the Hessian \(\nabla^{2}g_{j}(x)\) has been replaced by a positive definite matrix \(B_{j}(x)\) for any \(j\in\Xi_{m}\).
(ii) Corollary 3.1 extends and improves Theorem 3.1 in [1].
**Theorem 3.1** Suppose that Condition 3.1 holds true.
(i) If \(d(x)\) and \(\alpha(x)\) are the solution and the optimal value of \(\Phi(x)\), respectively, then
\[\alpha(x)\leq-\frac{\sigma}{2}\|d(x)\|^{2}, \tag{15}\]
where \(\sigma=\min_{i=1,...,m}\sigma_{i}\).
(ii) If \(\rho\in(0,1)\) and \(x\) is not a critical point of (CMOP), then for every sufficiently small \(\lambda>0\), the inequality
\[f_{j}(x+\lambda d(x))\leq f_{j}(x)+\rho\lambda\alpha(x) \tag{16}\]
holds for any \(j\in\Xi_{m}\).
**Proof.** (i) Suppose that \(d(x)\) is a solution of \(\Phi(x)\) and \(\alpha(x)=\theta(x,d(x))\). Then there exists \(w\in\mathbb{R}_{+}^{m}\) such that \((d(x),\alpha(x),w)\) satisfies (9)-(12). Since \(h_{j}\) is convex and \(\xi_{j}\in\partial_{d}h_{j}(x+d(x))\),
\[h_{j}(x+d(x))-h_{j}(x)\leq\xi_{j}^{T}d(x). \tag{17}\]
Multiplying both sides of (10) by \(d(x)\),
\[\sum_{j=1}^{m}w_{j}[\nabla g_{j}(x)^{T}d(x)+d(x)^{T}B_{j}(x)d(x)+\xi_{j}^{T}d( x)]=0.\]
It follows from the above equality and (17) that
\[\sum_{j=1}^{m}w_{j}[\nabla g_{j}(x)^{T}d(x)+d(x)^{T}B_{j}(x)d(x)+h_{j}(x+d(x)) -h_{j}(x)]\leq 0. \tag{18}\]
Taking sum over \(j\in\{1,2,...,m\}\) in (11) and using (9),
\[\sum_{j=1}^{m}w_{j}[\nabla g_{j}(x)^{T}d(x)+d(x)^{T}B_{j}(x)d(x)+h_{j}(x+d(x)) -h_{j}(x)]=\sum_{j=1}^{m}w_{j}\frac{1}{2}d(x)^{T}B_{j}(x)d(x)+\alpha(x).\]
It follows from (18) that
\[\alpha(x)\leq-\sum_{j=1}^{m}w_{j}\frac{1}{2}d(x)^{T}B_{j}(x)d(x). \tag{19}\]
From (19), (9), Condition 3.1 and the definition of \(\sigma\), we obtain
\[\alpha(x)\leq-\frac{\sigma}{2}\|d(x)\|^{2}.\]
(ii) Suppose that \(x\) is not a critical point. Then from Lemma 3.4, \(d(x)\neq 0\). It follows from (15) that \(\alpha(x)<0\).
From Condition 3.1 and the convexity of \(h_{i}\), we know that for \(i\in\Xi_{m}\) and for any \(\lambda\in[0,1]\),
\[f_{i}(x+\lambda d(x))-f_{i}(x) =g_{i}(x+\lambda d(x))-g_{i}(x)+h_{i}(x+\lambda d(x))-h_{i}(x)\] \[=\lambda\nabla g_{i}(x)^{T}d(x)+h_{i}(x+\lambda d(x))-h_{i}(x)+o(\lambda)\] \[\leq\lambda(\nabla g_{i}(x)^{T}d(x)+h_{i}(x+d(x))-h_{i}(x))+o(\lambda)\] \[<\lambda(\nabla g_{i}(x)^{T}d(x)+\frac{1}{2}d(x)^{T}B_{i}(x)d(x)+h_{i}(x+d(x))-h_{i}(x))+o(\lambda)\] \[\leq\lambda\alpha(x)+o(\lambda).\]
Then for \(i\in\Xi_{m}\),
\[f_{i}(x+\lambda d(x))-f_{i}(x)-\lambda\rho\alpha(x)\leq\lambda(1-\rho)\alpha( x)+o(\lambda).\]
Since \(\rho\in(0,1)\) and \(\alpha(x)<0\), the right hand side term in the above inequality becomes non positive for every \(\lambda>0\) sufficiently small, which implies that (16) holds true for every \(\lambda>0\) sufficiently small.
**Remark 3.5.** If \(B_{j}(x)\equiv\nabla^{2}g_{j}(x)\) for any \(j\in\Xi_{m},\) then by Theorem 3.1 we recover Theorem 3.2 in [1].
To solve (CMOP), we now present our new BFGS quasi-Newton method with line searches. The step length \(\lambda_{k}>0\) is computed by an Armijo rule. Let \(\rho\in(0,1)\) be a constant and start with \(\lambda_{k}=1\). If the inequality
\[f_{i}(x^{k}+\lambda_{k}d^{k})\leq f_{i}(x^{k})+\lambda_{k}\rho\alpha(x^{k}) \tag{20}\]
holds for each \(i\in\Xi_{m}\), then \(\lambda_{k}\) is accepted. Otherwise, i.e., if there exists \(i\in\{1,2,...,m\}\) such that the inequality (20) is not satisfied, we update
\[\lambda_{k}:=\zeta\lambda_{k},\]
where \(\zeta\in(0,1)\), and test (20) again.
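For illustration, this backtracking procedure can be sketched as follows (a minimal sketch with our own function and parameter names; the default values of \(\rho\) and \(\zeta\) and the cap on backtracking steps are illustrative choices, not prescribed by the method).

```python
# Armijo backtracking for the vector-valued objective F = (f_1, ..., f_m):
# start from lambda_k = 1 and shrink by zeta until (20) holds for every f_i.
def armijo_step(F, x, d, alpha_x, rho=1e-4, zeta=0.5, max_backtracks=50):
    """F: callable returning the vector (f_1(x), ..., f_m(x));
    d: direction d(x^k); alpha_x: optimal value alpha(x^k) < 0."""
    lam = 1.0
    fx = F(x)
    for _ in range(max_backtracks):
        f_trial = F(x + lam * d)
        # accept only if the sufficient-decrease condition holds for all objectives
        if all(f_trial[i] <= fx[i] + lam * rho * alpha_x for i in range(len(fx))):
            return lam
        lam *= zeta
    return lam  # fall back to the last trial step (safeguard)
```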
The following result illustrates that \(d^{k}\) is a descent direction of (CMOP) at a nonstationary point \(x^{k}\), so that the Armijo rule procedure above is well defined.
**Lemma 3.6** Suppose that Condition 3.1 holds true. Let \(\rho\in(0,1)\), and let \(d^{k}:=d(x^{k})\) and \(\alpha(x^{k})\) be the solution and the optimal value of \(\Phi(x^{k})\), respectively. If \(x^{k}\) is not Pareto stationary, then there exists some \(\bar{\lambda}_{k}>0\) such that for each \(i=1,...,m\) and for any \(\lambda\in(0,\bar{\lambda}_{k}]\), the following inequality holds true
\[f_{i}(x^{k}+\lambda d^{k})\leq f_{i}(x^{k})+\lambda\rho\alpha(x^{k}).\]
_Proof._ Let \(\lambda\in(0,1].\) It follows from the convexity of \(h_{i}\) that for each \(i=1,...,m,\) we have
\[\eqalign{h_{i}(x^{k}+\lambda d^{k})-h_{i}(x^{k})=&h_{i}((1-\lambda)x^{k}+ \lambda(x^{k}+d^{k}))-h_{i}(x^{k})\cr\leq&(1-\lambda)h_{i}(x^{k})+\lambda h_{i }(x^{k}+d^{k})-h_{i}(x^{k})\cr=&\lambda[h_{i}(x^{k}+d^{k})-h_{i}(x^{k})].\cr}\]
From Condition 3.1 and the first-order Taylor expansion of \(g_{i},\) we have that for each \(i\in\Xi_{m},\)
\[\eqalign{g_{i}(x^{k}+\lambda d^{k})+h_{i}(x^{k}+\lambda d^{k})\cr\leq&g_{i}(x^ {k})+\lambda\nabla g_{i}(x^{k})^{T}d^{k}+{1\over 2}(\lambda d^{k})^{T}B_{i}(x^{k})( \lambda d^{k})+h_{i}(x^{k})+\lambda(h_{i}(x^{k}+d^{k})-h_{i}(x^{k}))+o(\lambda) \cr=&g_{i}(x^{k})+h_{i}(x^{k})+\lambda[\nabla g_{i}(x^{k})^{T}d^{k}+{\lambda \over 2}(d^{k})^{T}B_{i}(x^{k})(d^{k})+h_{i}(x^{k}+d^{k})-h_{i}(x^{k})]+o(\lambda) \cr\leq&g_{i}(x^{k})+h_{i}(x^{k})+\lambda[\nabla g_{i}(x^{k})^{T}d^{k}+{1 \over 2}(d^{k})^{T}B_{i}(x^{k})(d^{k})+h_{i}(x^{k}+d^{k})-h_{i}(x^{k})]+o(\lambda) \cr\leq&g_{i}(x^{k})+h_{i}(x^{k})+\lambda\alpha(x^{k})+o(\lambda)\cr=&g_{i}( x^{k})+h_{i}(x^{k})+\lambda\rho\alpha(x^{k})+\lambda\left[(1-\rho)\alpha(x^{k})+{o( \lambda)\over\lambda}\right],\cr}\]
where \(B_{i}(x^{k})\) is some approximation of \(\nabla^{2}g_{i}(x^{k}),i\in\Xi_{m},\) the second inequality follows from the positive definiteness of \(B_{i}(x^{k})\) and \(\lambda\in(0,1],\) and the third one comes from the definition of \(\alpha(x)\)
in Remark 3.2. Since \(x^{k}\) is not Pareto stationary, we have \(\alpha(x^{k})<0\) from Lemma 3.4. It follows from \(\rho\in(0,1)\) that there exists some \(\bar{\lambda}_{k}>0\) such that for each \(i\in\Xi_{m}\),
\[g_{i}(x^{k}+\lambda d^{k})+h_{i}(x^{k}+\lambda d^{k})\leq g_{i}(x^{k})+h_{i}(x^ {k})+\lambda\rho\alpha(x^{k}),\]
for any \(\lambda\in(0,\bar{\lambda}_{k}]\).
To simplify the notation we will use \(B_{i}^{k}\) to denote \(B_{i}(x^{k})\) for all \(i=1,...,m\) and \(k=0,1,2,...\).
Now, we would like to state our new quasi-Newton methods with line searches for (CMOP) as follows:
**Algorithm 3.1**
Step 1 Choose \(\omega>0\), \(\rho\in(0,1)\), \(\zeta\in(0,1)\), \(x^{0}\in\mathbb{R}^{n}\), symmetric positive definite matrix \(B_{i}^{0}\in\mathbb{R}^{n\times n},i=1,...,m\) and set \(k:=0\);
Step 2 Compute \(d(x^{k})\) and \(\alpha(x^{k})\) by solving subproblem (4) with \(x=x^{k}\), let \(d^{k}:=d(x^{k})\);
Step 3 If \(d^{k}=0\), then stop. Otherwise, proceed to the next step;
Step 4 Compute the step length \(\lambda_{k}\in(0,1]\) as the maximum of the following set
\[\Lambda_{k}:=\{\lambda=\zeta^{j}|j\in\mathbb{N},f_{i}(x^{k}+\lambda d^{k})\leq f _{i}(x^{k})+\lambda\rho\alpha(x^{k}),i=1,...,m\};\]
Step 5 Set \(x^{k+1}=x^{k}+\lambda_{k}d^{k}\), update \(\{B_{i}^{k}\}\) by following BFGS update formula for each \(i=1,2,...,m\)
\[B_{i}^{k+1}=B_{i}^{k}-\frac{B_{i}^{k}s^{k}{(s^{k})}^{T}B_{i}^{k}}{{(s^{k})}^{ T}B_{i}^{k}s^{k}}+\frac{y_{i}^{k}{(y_{i}^{k})}^{T}}{{(s^{k})}^{T}y_{i}^{k}}, \tag{21}\]
where \(s^{k}=x^{k+1}-x^{k}=\lambda_{k}d^{k}\), \(y_{i}^{k}=\nabla g_{i}(x^{k+1})-\nabla g_{i}(x^{k})\).
Step 6 Set \(k:=k+1\), and go to Step 2.
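A high-level sketch of Algorithm 3.1 is given below. It is purely illustrative (not the authors' code) and reuses the `solve_subproblem` and `armijo_step` sketches from above; each objective maintains its own matrix \(B_{i}^{k}\), updated by the BFGS formula (21), and the update is skipped if the curvature condition \((s^{k})^{T}y_{i}^{k}>0\) fails numerically (it holds automatically when \(g_{i}\) is strongly convex).

```python
import numpy as np

def bfgs_update(B, s, y):
    # Standard BFGS update (21); skip if the curvature condition fails numerically.
    sy = s @ y
    if sy <= 1e-12:
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

def quasi_newton_mop(x0, grads_g, F, m, max_iter=200, tol=1e-8):
    """grads_g: list of callables nabla g_i; F: vector-valued objective; m: number of objectives."""
    n = x0.shape[0]
    x = x0.copy()
    B_list = [np.eye(n) for _ in range(m)]          # Step 1: initial B_i^0
    for _ in range(max_iter):
        G = [grads_g[i](x) for i in range(m)]
        d, alpha = solve_subproblem(x, G, B_list)    # Step 2
        if np.linalg.norm(d) <= tol:                 # Step 3: Pareto stationary
            break
        lam = armijo_step(F, x, d, alpha)            # Step 4: Armijo rule (20)
        x_new = x + lam * d                          # Step 5
        s = x_new - x
        for i in range(m):
            B_list[i] = bfgs_update(B_list[i], s, grads_g[i](x_new) - G[i])
        x = x_new                                    # Step 6
    return x
```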
**Remark 3.6**: (1) It follows from Lemma 3.4 that Algorithm 3.1 stops at Step 3 with a Pareto stationary point or produces an infinite sequence of nonstationary points \(\{x^{k}\}\). If Step 4 is reached in some iteration \(k\), it means that in Step 3, \(d^{k}\neq 0\), or equivalently, \(\alpha(x^{k})<0.\) It follows from the Armijo condition and Lemma 3.6 that the objective value sequence \(\{F(x^{k})\}\) is \(\mathbb{R}^{m}_{++}\)-decreasing, i.e.,
\[F(x^{k+1})<F(x^{k})\;\mbox{for all}\;k.\]
(2) It follows from [15] and the references therein that if \(g_{i}\) is a strongly convex function, then the matrix \(B_{i}^{k+1}\) obtained from the updating formula (21) for approximating the Hessian matrix \(\nabla^{2}g_{i}(x^{k+1})\) always preserves positive definiteness.
## 4 Convergence analysis
In this section, we prove that the sequence generated by Algorithm 3.1 converges to Pareto stationary points of (CMOP). Condition 3.1 will be replaced by the following form, which has been used in [24]:
**Condition 4.1** For all \(k\), all \(j=1,...,m\), and all \(z\in\mathbb{R}^{n}\), we have \(z^{T}B_{j}(x^{k})z\geq\sigma\|z\|^{2}\).
**Lemma 4.1** Suppose that Condition 4.1 holds true, that \(\{d^{k}\}\) is generated by Algorithm 3.1, and that \(\{f_{i}(x^{k})\}\) is bounded from below for all \(i=1,...,m\). Then, it follows that
\[\lim_{k\rightarrow\infty}\lambda_{k}\big{\|}d^{k}\big{\|}^{2}=0.\]
_Proof_. It follows from Lemma 3.6 and step 4 of Algorithm 3.1 that there exists \(\lambda_{k}>0\) such that for each \(i\in\Xi_{m}\),
\[f_{i}(x^{k}+\lambda_{k}d^{k})\leq f_{i}(x^{k})+\lambda_{k}\rho\alpha(x^{k}).\]
Adding up the above inequality from \(k=0\) to \(k=\hat{k}\), where \(\hat{k}\) is a positive integer, we get
\[f_{i}(x^{\hat{k}+1})\leq f_{i}(x^{0})+\rho\sum_{k=0}^{\hat{k}}\lambda_{k} \alpha(x^{k}). \tag{22}\]
By (22) and Theorem 3.1, we get
\[f_{i}(x^{\hat{k}+1})\leq f_{i}(x^{0})-\frac{\sigma}{2}\rho\sum_{k=0}^{\hat{k}} \lambda_{k}\big{\|}d^{k}\big{\|}^{2}. \tag{23}\]
Since \(\{f_{i}(x^{k})\}\) is bounded from below for all \(i=1,...,m\), there exists \(\hat{f_{i}}\in\mathbb{R}\) such that \(\hat{f_{i}}\leq f_{i}(x^{k})\) for all \(i\) and \(k\).
It follows from (23) that
\[\sum_{k=0}^{\hat{k}}\lambda_{k}\big{\|}d^{k}\big{\|}^{2}\leq \frac{2}{\rho\sigma}(f_{i}(x^{0})-f_{i}(x^{\hat{k}+1}))\] \[\leq \frac{2}{\rho\sigma}(f_{i}(x^{0})-\hat{f_{i}}).\]
Taking \(\hat{k}\rightarrow\infty\), we have \(\sum\limits_{k=0}^{\infty}\lambda_{k}\big{\|}d^{k}\big{\|}^{2}<\infty\)
and hence \(\lim\limits_{k\rightarrow\infty}\lambda_{k}\big{\|}d^{k}\big{\|}^{2}=0\).
**Theorem 4.1** (i) Assume that Condition 4.1 holds true and that \(\{f_{i}(x^{k})\}\) is bounded from below for all \(i=1,...,m\). Then every accumulation point of the sequence \(\{x^{k}\}\) generated by Algorithm 3.1, if it exists, is both a Pareto stationary point and a Pareto optimum of (CMOP).
(ii) Moreover, if the level set \(\{x\in\mathbb{R}^{n}\mid F(x)\leq F(x^{0})\}\) of \(F\) is bounded, then \(\{x^{k}\}\) has accumulation points, and they are all Pareto stationary points and Pareto optima of (CMOP).
_Proof_. (i) Let \(\bar{x}\) be an accumulation point of \(\{x^{k}\}\) and let \(\{x^{k_{j}}\}\) be a subsequence converging to \(\bar{x}\). We now prove that \(d(\bar{x})=0\). On the contrary, we assume that \(d(\bar{x})\neq 0\). Then, by Lemma
4.1 and the continuity of \(d(\cdot)\) (noting that \(d^{k_{j}}\to d(\bar{x})\neq 0\)), we get \(\lambda_{k_{j}}\to 0\). It follows from the definition of \(\lambda_{k_{j}}\) in Step 4 of Algorithm 3.1 that for sufficiently large \(j\) there exists some \(i_{k_{j}}\in\{1,...,m\}\) such that
\[f_{i_{k_{j}}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})>f_{i_{k_{j}}}(x^{k_{ j}})+\zeta^{-1}\lambda_{k_{j}}\rho\alpha(x^{k_{j}}).\]
Since \(i_{k_{j}}\) takes only a finite number of values in \(\{1,...,m\}\), by passing to a further subsequence if necessary, we can assume that \(i_{k_{j}}=\bar{i}\) without loss of generality. We thus obtain
\[\frac{f_{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})-f_{\bar{i}}(x^{ k_{j}})}{\zeta^{-1}\lambda_{k_{j}}}>\rho\alpha(x^{k_{j}}). \tag{24}\]
It follows from \(0<\zeta^{-1}\lambda_{k_{j}}<1\), the definition of \(\alpha(x)\) and Theorem 23.1 in [21] that
\[\alpha(x^{k_{j}})\geq \nabla g_{\bar{i}}(x^{k_{j}})^{T}d^{k_{j}}+\frac{1}{2}(d^{k_{j}}) ^{T}B_{\bar{i}}(x^{k_{j}})(d^{k_{j}})+h_{\bar{i}}(x^{k_{j}}+d^{k_{j}})-h_{\bar {i}}(x^{k_{j}})\] \[\geq \frac{\zeta^{-1}\lambda_{k_{j}}\nabla g_{\bar{i}}(x^{k_{j}})^{T}d ^{k_{j}}+\frac{1}{2}\zeta^{-1}\lambda_{k_{j}}(d^{k_{j}})^{T}B_{\bar{i}}(x^{k_{ j}})(d^{k_{j}})+h_{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})-h_{\bar {i}}(x^{k_{j}})}{\zeta^{-1}\lambda_{k_{j}}}\] \[= \frac{g_{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})+h _{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})-g_{\bar{i}}(x^{k_{j}} )-h_{\bar{i}}(x^{k_{j}})+o((\zeta^{-1}\lambda_{k_{j}}\left\|d^{k_{j}}\right\| \right)^{2})}{\zeta^{-1}\lambda_{k_{j}}}\] \[= \frac{f_{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})-f _{\bar{i}}(x^{k_{j}})}{\zeta^{-1}\lambda_{k_{j}}}+\frac{o((\zeta^{-1}\lambda_{ k_{j}}\left\|d^{k_{j}}\right\|)^{2})}{\zeta^{-1}\lambda_{k_{j}}},\]
where \(B_{\bar{i}}(x^{k_{j}})\) is an approximation of \(\nabla^{2}g_{\bar{i}}(x^{k_{j}})\) which updated by (21). Thus, we obtain
\[\alpha(x^{k_{j}})\geq\frac{f_{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{ k_{j}})-f_{\bar{i}}(x^{k_{j}})}{\zeta^{-1}\lambda_{k_{j}}}+\frac{o(\left(\zeta^{-1} \lambda_{k_{j}}\left\|d^{k_{j}}\right\|\right)^{2})}{\zeta^{-1}\lambda_{k_{j}}}. \tag{25}\]
It follows from (24) and (25) that
\[\frac{f_{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})-f_{\bar{i}}(x^{ k_{j}})}{\zeta^{-1}\lambda_{k_{j}}}>\rho\frac{f_{\bar{i}}(x^{k_{j}}+\zeta^{-1} \lambda_{k_{j}}d^{k_{j}})-f_{\bar{i}}(x^{k_{j}})}{\zeta^{-1}\lambda_{k_{j}}}+ \rho\frac{o(\left(\zeta^{-1}\lambda_{k_{j}}\left\|d^{k_{j}}\right\|\right)^{2} )}{\zeta^{-1}\lambda_{k_{j}}}.\]
Therefore, we have
\[\frac{f_{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})-f_{\bar{i}}(x^ {k_{j}})}{\zeta^{-1}\lambda_{k_{j}}}>(\frac{\rho}{1-\rho})\frac{o(\left(\zeta^ {-1}\lambda_{k_{j}}\left\|d^{k_{j}}\right\|\right)^{2})}{\zeta^{-1}\lambda_{k_{ j}}}. \tag{26}\]
It follows from Theorem 3.1 that
\[\alpha(x^{k_{j}})\leq-\frac{\sigma}{2}\big{\|}d^{k_{j}}\big{\|}^{2}.\]
Since \(d^{k_{j}}\to d(\bar{x})\neq 0\), by the above inequality, (25) and (26), we know that there exists \(\gamma=\frac{\sigma}{2}\|d(\bar{x})\|^{2}>0\) such that
\[-\gamma\geq\lim_{j\rightarrow\infty}\alpha(x^{k_{j}})\geq\lim_{j\rightarrow \infty}[\frac{f_{\bar{i}}(x^{k_{j}}+\zeta^{-1}\lambda_{k_{j}}d^{k_{j}})-f_{\bar{ i}}(x^{k_{j}})}{\zeta^{-1}\lambda_{k_{j}}}+\frac{o(\left(\zeta^{-1}\lambda_{k_{j}} \left\|d^{k_{j}}\right\|\right)^{2})}{\zeta^{-1}\lambda_{k_{j}}}]\]
\[\geq\lim_{j\rightarrow\infty}[(\frac{\rho}{1-\rho})\frac{o(\left(\zeta^{-1} \lambda_{k_{j}}\left\|d^{k_{j}}\right\|\right)^{2})}{\zeta^{-1}\lambda_{k_{j}}}+ \frac{o(\left(\zeta^{-1}\lambda_{k_{j}}\left\|d^{k_{j}}\right\|\right)^{2})}{ \zeta^{-1}\lambda_{k_{j}}}]=0,\]
which contradicts the fact that \(\gamma>0\). Therefore, we conclude that \(d(\bar{x})=0\).
It follows from Lemma 3.4 that \(\bar{x}\) is Pareto stationary point of (CMOP). By Lemma 2.1 and the strong convexity of \(f_{i}\), we know that \(\bar{x}\) is also a Pareto optimum of (CMOP).
(ii) It follows from Remark 3.6(1) that the objective value sequence \(\{F(x^{k})\}\) is \(\mathbb{R}^{m}_{++}\)-decreasing. Moreover, since the level set \(\{x\in\mathbb{R}^{n}\mid F(x)\leq F(x^{0})\}\) is bounded, the sequence \(\{x^{k}\}\) generated by Algorithm 3.1 is contained in this set, and so it is also bounded and has at least one accumulation point, which is a Pareto stationary point and a Pareto optimum of (CMOP) according to the first statement.
## 5 Conclusion
First, for the composite multiobjective optimization problem (in short, CMOP), where each objective function is the sum of a twice continuously differentiable strongly convex function and a proper convex and lower semicontinuous but not necessarily differentiable function, a BFGS quasi-Newton method with Armijo line search is introduced. Secondly, under appropriate conditions, we prove that each cluster point of the sequence generated by the BFGS quasi-Newton algorithm is both a Pareto stationary point and a Pareto optimum of (CMOP). Thirdly, numerical experiments are performed to verify the effectiveness of the proposed algorithms. In the future, an interesting topic that remains is the convergence rate analysis of the proposed algorithms.
|
2303.17941 | Comparing Adversarial and Supervised Learning for Organs at Risk
Segmentation in CT images | Organ at Risk (OAR) segmentation from CT scans is a key component of the
radiotherapy treatment workflow. In recent years, deep learning techniques have
shown remarkable potential in automating this process. In this paper, we
investigate the performance of Generative Adversarial Networks (GANs) compared
to supervised learning approaches for segmenting OARs from CT images. We
propose three GAN-based models with identical generator architectures but
different discriminator networks. These models are compared with
well-established CNN models, such as SE-ResUnet and DeepLabV3, using the
StructSeg dataset, which consists of 50 annotated CT scans containing contours
of six OARs. Our work aims to provide insight into the advantages and
disadvantages of adversarial training in the context of OAR segmentation. The
results are very promising and show that the proposed GAN-based approaches are
similar or superior to their CNN-based counterparts, particularly when
segmenting more challenging target organs. | Leonardo Crespi, Mattia Portanti, Daniele Loiacono | 2023-03-31T10:10:05Z | http://arxiv.org/abs/2303.17941v1 | # Comparing Adversarial and Supervised Learning for Organs at Risk Segmentation in CT images
###### Abstract
Organ at Risk (OAR) segmentation from CT scans is a key component of the radiotherapy treatment workflow. In recent years, deep learning techniques have shown remarkable potential in automating this process. In this paper, we investigate the performance of Generative Adversarial Networks (GANs) compared to supervised learning approaches for segmenting OARs from CT images. We propose three GAN-based models with identical generator architectures but different discriminator networks. These models are compared with well-established CNN models, such as SE-ResUnet and DeepLabV3, using the StructSeg dataset, which consists of 50 annotated CT scans containing contours of six OARs. Our work aims to provide insight into the advantages and disadvantages of adversarial training in the context of OAR segmentation. The results are very promising and show that the proposed GAN-based approaches are similar or superior to their CNN-based counterparts, particularly when segmenting more challenging target organs.
Medical Image Segmentation, Deep Learning, GAN, CNN
## I Introduction
In the medical field, segmentation of regions of interest (ROIs), such as organs, tissues, tumors, or other structures, from various imaging modalities like Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and X-ray scans, plays a crucial role in numerous diagnostic and treatment workflows, particularly in radiotherapy. Manual segmentation can be challenging, prone to error, and extremely time-consuming, especially when handling large or multiple targets. Consequently, automating this process has garnered significant interest within the scientific community and might lead to substantial improvements in clinical workflows.
A notable application in radiotherapy involves the segmentation of Organs at Risk (OAR/OARs), which necessitates comprehensive and precise delineation of organs and tissues, alongside the identification of target regions. This process is critical for treatments like Total Marrow and Lymphoid Irradiation [1] and can take more than ten hours to complete. OAR segmentation involves acquiring a patient's CT scan, which is then analyzed by a clinician to identify not only the target areas for irradiation but also the OARs to be spared. This is possible thanks to modern radiation therapy equipment that enables highly precise and customized treatment delivery, tailored to each patient's unique anatomy and needs.
In the past decade, the rapid rise of Deep Learning (DL) has revolutionized computer vision, with Convolutional Neural Networks (CNNs) becoming widely applied in image processing tasks [2]. State-of-the-art models now utilize CNNs, vision transformers [3], and other DL models for image semantic segmentation. The literature is abundant with diverse architectures, featuring various modifications, inputs, parameters, depth, and training paradigms. Although several models and algorithms have demonstrated remarkable performance [4, 5], the vast array of approaches makes it difficult to determine the best practices for creating a successful model [6]. In the medical imaging field, CNN models for segmentation typically employ U-Net style architectures, use 2D input, and are trained in a supervised manner [5]. This is partly due to the increasing availability of annotated public datasets and collaborations with medical centers that provide them.
While supervised training is currently the most popular choice for training CNNs, Generative Adversarial Networks (GANs) [7] have emerged as a highly promising method for generating high-quality images and performing tasks such as image-to-image translation [8], image super-resolution [9], and image restoration. Although adversarial learning has been primarily applied to image generation tasks, recent studies have shown its potential for semantic segmentation tasks [10, 11], even in the medical domain [12]. However, it remains unclear whether using an adversarial learning paradigm could lead to more reliable and better-performing models. In this work, we aim to investigate this aspect by comparing two solutions for OAR segmentation, trained using both supervised and adversarial approaches.
### _Objectives_
This work aims to explore the merits and drawbacks of using adversarial training, particularly with GANs, in contrast to a supervised learning approach, specifically in the context of segmenting Organs at Risk (OARs) from CT images.
To achieve this goal, we propose three GAN-based models for OARs segmentation in CT scans, drawing inspiration from the work of Tan et al. [13]. These models share the same generator architecture but employ different discriminator networks. We compare these models with established and well-tested CNN models, such as SE-ResUnet [7] and DeepLabV3 [8], using a dataset comprising 50 CT scans from various patients, totaling 3861 images. Furthermore, we explore an approach for multi-class segmentation. The comparison focuses on six OARs and employs two evaluation metrics: Dice Score Coefficient (DSC) and Hausdorff Distance (HD), which are |
2309.12862 | Associative Transformer | Emerging from the pairwise attention in conventional Transformers, there is a
growing interest in sparse attention mechanisms that align more closely with
localized, contextual learning in the biological brain. Existing studies such
as the Coordination method employ iterative cross-attention mechanisms with a
bottleneck to enable the sparse association of inputs. However, these methods
are parameter inefficient and fail in more complex relational reasoning tasks.
To this end, we propose Associative Transformer (AiT) to enhance the
association among sparsely attended input patches, improving parameter
efficiency and performance in relational reasoning tasks. AiT leverages a
learnable explicit memory, comprised of various specialized priors, with a
bottleneck attention to facilitate the extraction of diverse localized
features. Moreover, we propose a novel associative memory-enabled patch
reconstruction with a Hopfield energy function. The extensive experiments in
four image classification tasks with three different sizes of AiT demonstrate
that AiT requires significantly fewer parameters and attention layers while
outperforming Vision Transformers and a broad range of sparse Transformers.
Additionally, AiT establishes new SOTA performance in the Sort-of-CLEVR
dataset, outperforming the previous Coordination method. | Yuwei Sun, Hideya Ochiai, Zhirong Wu, Stephen Lin, Ryota Kanai | 2023-09-22T13:37:10Z | http://arxiv.org/abs/2309.12862v3 | # Associative Transformer is a Sparse Representation Learner
###### Abstract
Emerging from the monolithic pairwise attention mechanism in conventional Transformer models, there is a growing interest in leveraging sparse interactions that align more closely with biological principles. Approaches including the Set Transformer and the Perceiver employ cross-attention consolidated with a latent space that forms an attention bottleneck with limited capacity. Building upon recent neuroscience studies of Global Workspace Theory and associative memory, we propose the **A**ssociative **T**ransformer (AiT). AiT induces low-rank explicit memory that serves as both priors to guide bottleneck attention in the shared workspace and attractors within associative memory of a Hopfield network. Through joint end-to-end training, these priors naturally develop module specialization, each contributing a distinct inductive bias to form attention bottlenecks. A bottleneck can foster competition among inputs for writing information into the memory. We show that AiT is a sparse representation learner, learning distinct priors through the bottlenecks that are complexity-invariant to input quantities and dimensions. AiT demonstrates its superiority over methods such as the Set Transformer, Vision Transformer, and Coordination in various vision tasks.
## 1 Introduction
The predominant paradigm in conventional deep neural networks has been characterized by a monolithic architecture, wherein each input sample is subjected to uniform processing within a singular model framework. For instance, Transformer models use pairwise attention to establish correlations among disparate segments of input information Vaswani et al. (2017); Dosovitskiy et al. (2021). Emerging from the pair-wise attention mechanism, there is a growing interest in leveraging modular and sparse interactions that align more closely with biological principles. This sparsity attribute has demonstrated advantages in enhancing model performance and learning efficiency, making it a crucial element for intelligent entity learning Brooks (1991); Greff et al. (2020); Minsky (1986).
Modularization of knowledge can find resonance with the neuroscientific grounding of the Global Workspace Theory (GWT) Baars (1988); Dehaene S. (1998); VanRullen and Kanai (2020); Juliani et al. (2022). GWT explains a fundamental cognitive architecture for information processing within the brain, where diverse specialized modules compete to write information into a shared workspace through a communication bottleneck. The bottleneck facilitates the processing of content-addressable information through attention that is guided by working memory Awh et al. (2006); Gazzaley and Nobre (2011). The coordination method Goyal et al. (2022b) represents the initial attempt to assess the effectiveness of GWT in conventional neural network models. Unfortunately, this method relies on iterative cross-attention for both information writing and retrieval within the shared workspace. When examining information retrieval in the human brain, it is evident that memory typically encompasses both working memory and long-term memory in the hippocampus. Specifically, the hippocampus operates on Hebbian learning for retrieving information from working memory, akin to the associative memory found in Hopfield networks Hopfield (2007); Ramsauer et al. (2021). Our research has revealed that replacing such a repetitive attention-based mechanism
with a consolidated, more biologically-plausible associative memory can lead to improved model performance. Associative memory has the capability to directly store and retrieve patterns from the shared workspace without the need for additional parameters by relying on an energy function, which fundamentally differs from an attention mechanism. Our objective is to introduce a shared workspace augmented with associative memory into a Transformer model, thereby facilitating a more comprehensive and efficient association of information fragments.
To this end, we propose the **A**ssociative **T**ransformer (AiT) based on a novel global workspace layer augmented by associative memory. The global workspace layer entails three main components: 1) the squash layer: input data is transformed into a list of patches regardless of which samples they come from, 2) the bottleneck attention: patches are sparsely selected to learn a set of priors in low-rank memory based on a bottleneck attention mechanism, and 3) the Hopfield network: information is broadcast from the shared workspace to update the current input based on the associative memory of a Hopfield network. Moreover, the bottleneck attention and the low-rank memory contribute to reduced model complexity. However, cascading multiple such components may lead to difficulty in the emergence of specialized priors in explicit memory. As information flows through multiple layers, it becomes more challenging to maintain specialized priors from diluted representations. Consequently, learning specialized priors in layers cascaded in depth requires a mechanism that counteracts this inherent loss of input specificity. To overcome this challenge, we propose the bottleneck attention balance loss to encourage the diverse selection of inputs in the shared workspace. Through end-to-end training, we show the emerging specialization of low-rank priors, contributing to enhanced performance in vision tasks. This distinguishes our work from previous literature, which relied on latent memory comprising indistinct priors with the same dimension as the input, such as Set Transformer Lee et al. (2019), Perceiver Jaegle et al. (2021), and Luna Ma et al. (2021). The no-free-lunch theorem Baxter (2000); Goyal and Bengio (2020b) states that a set of inductive biases over the space of all functions is necessary to obtain generalization. We demonstrate that the specialization of priors serves as critical inductive biases, encouraging competition among input data and inducing sparsity in the attention mechanism of Transformer models.
Overall, the main contributions of this work are as follows. (1) This work proposes a more biologically plausible learning framework called Associative Transformer (AiT) based on the Global Workspace Theory and associative memory. (2) AiT is a sparse representation learner, leveraging sparse bottleneck attention enhanced by a novel attention balance loss to acquire naturally emerging specialized priors. (3) We devise low-rank priors that are adaptively encoded and decoded for increased memory capacity. AiT can learn a large set of specialized priors (up to 128) from a diverse pool of patches (up to 32.8k). (4) The learned priors serve as attractors within the associative memory of a Hopfield network, enabling information broadcast from the workspace. This is the first work to incorporate the Hopfield network as an integral element in a sparse attention mechanism.
## 2 Related work
This section provides a summary of relevant research concerning sparse attention architectures. We investigate and compare these studies based on their relatedness to the global workspace theory in terms of several key conditions (please see Appendix A.2 for a complete comparison).
Transformer models do not possess inductive biases that allow the model to attend to different segments of the input data Goyal and Bengio (2020a). To enhance Transformer models, studies of sparse attention architectures explored consolidating latent memory to extract contextual representations from input data Gupta and Berant (2020); Jaegle et al. (2021); Goyal et al. (2022b); Jaegle et al. (2022); Lee et al. (2019); Ma et al. (2021). For instance, Perceiver Jaegle et al. (2021) and Perceiver IO Jaegle et al. (2022) used iterative cross-attention with a latent array as priors and a latent transformation applied to the priors, to capture dependencies across input data. Set Transformer Lee et al. (2019) and Linear Unified Nested Attention (Luna) Ma et al. (2021) employed iterative cross-attention, but without using a latent transformation. Other attention mechanisms that rely on strong inductive biases with predefined network modularization are omitted Qiu et al. (2020). In our method, distinct priors naturally emerge through end-to-end training. Moreover, the previous methods using latent memory necessitated priors with the same dimension as the input. In contrast, we devise low-rank priors that can be encoded and decoded adaptively for increased memory capacity.
In the same vein of building sparse attention mechanisms through a shared workspace, Coordination Goyal et al. (2022b) used iterative cross-attentions via a bottleneck to encourage more effective module communication. They argued that more flexibility and generalization could emerge through the competition of specialized modules. However, the priors in the coordination method possess the same dimension as the input, and the number of priors is limited to fewer than 10. The evaluation was also restricted to simple tasks. Unlike the coordination method, we propose low-rank explicit memory to learn a larger set of specialized priors (up to 128) from a pool of patches (up to 32.8k). Moreover, the coordination method relies on iterative cross-attentions to learn such priors, while this work focuses on a novel learning method of associative memory-augmented attention.
Furthermore, external memory such as tape storage and associative memory has been successfully employed Graves et al. (2014); Gulcehre et al. (2018); Krotov and Hopfield (2016); Hoover et al. (2023). Recent studies explored the potential use of Hopfield networks Hopfield (2007) and their modern variants Demircigil et al. (2017); Ramsauer et al. (2021) in Transformers. In contrast to these investigations, we incorporate Hopfield networks as an integral element in constructing the global workspace layer, functioning as a mechanism for information broadcast in the shared workspace. This goal is fundamentally different from prior studies focused on using Hopfield networks independently of the attention mechanism.
## 3 Inspecting attention heads in Vision Transformers
Vision Transformers (ViT) tackle image classification tasks by processing sequences of image patches. The pre-processing layer partitions an image into non-overlapping patches, followed by a learnable linear projection layer. Let \(x\in\mathbb{R}^{H\times W\times C}\) be an input, where \((H,W)\) is the resolution of the image and \(C\) is the number of channels. \(x\) is separated into a sequence of patches \(x_{p}\in\mathbb{R}^{N\times(P^{2}\cdot C)}\), where \((P,P)\) is the resolution of each image patch and \(N=\frac{HW}{P^{2}}\) is the number of patches. These patches are mapped to embeddings \(v_{p}\in\mathbb{R}^{N\times E}\) with the linear projection. ViT leverages self-attention where each head maps a query and a set of key-value pairs to an output. The patch embeddings are used to obtain the query, key, and value based on linear transformations \(W^{Q}\in\mathbb{R}^{E\times D},\,W^{K}\in\mathbb{R}^{E\times D},\,\) and, \(W^{V}\in\mathbb{R}^{E\times D}\). The output is a weighted sum of the values:
\[\text{h}^{i}(v)=\text{softmax}(\frac{W_{i}^{Q}v(W_{i}^{K}v)^{T}}{ \sqrt{D}})\,W_{i}^{V}v, \tag{1}\]
\[\text{Multi-head}(v)=\text{Concat}(\text{h}^{1},\dots,\text{h}^{A})\,W^{O}, \tag{2}\]
where \(W^{O}\) is a linear transformation for outputs, and \(A\) is the number of attention heads.
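For reference, Eqs. (1)-(2) correspond to the standard multi-head self-attention; a compact PyTorch-style sketch (illustrative only, with \(E\), \(D\), and \(A\) as placeholders and the patch-embedding step omitted) is given below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, E, D, A):
        super().__init__()
        self.A, self.D = A, D
        self.Wq = nn.Linear(E, A * D, bias=False)
        self.Wk = nn.Linear(E, A * D, bias=False)
        self.Wv = nn.Linear(E, A * D, bias=False)
        self.Wo = nn.Linear(A * D, E, bias=False)   # output projection W^O

    def forward(self, v):                            # v: (B, N, E) patch embeddings
        B, N, _ = v.shape
        q = self.Wq(v).view(B, N, self.A, self.D).transpose(1, 2)   # (B, A, N, D)
        k = self.Wk(v).view(B, N, self.A, self.D).transpose(1, 2)
        val = self.Wv(v).view(B, N, self.A, self.D).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.D ** 0.5            # Eq. (1)
        heads = F.softmax(scores, dim=-1) @ val                     # (B, A, N, D)
        heads = heads.transpose(1, 2).reshape(B, N, self.A * self.D)
        return self.Wo(heads)                                       # Eq. (2)
```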
We assume that the competition within the pair-wise attention of different patches would be of importance for the model to learn meaningful representations. If such competition exists, a trained model will naturally result in sparser interactions in attention heads. Therefore, we first performed an analysis of the operating modes of different attention heads in a pretrained ViT model by measuring the number of patches each head is attending to. We refer to Appendix A.4 for the detailed experimental settings. The inspection revealed the existing competition among patches and a large redundancy in the pair-wise attention. Fewer than 80% of the interactions were activated in ViT, and several heads from the middle layers used only 50% or fewer of the interactions, showing higher sparsity compared to the other layers. Based on this observation, introducing a bottleneck that limits each attention head's focus to foster competition provides inductive biases for more efficient patch learning.
## 4 Associative Transformer
This section discusses the essential building blocks of the Associative Transformer (AiT), where patches compete to write into the shared workspace through bottleneck attention. The workspace enables an efficient information writing and reading mechanism by learning a set of priors in explicit memory. These priors are low-rank and learned progressively from the input through end-to-end training. The priors guide the bottleneck attention with an emerging specialization property. Moreover, we extend the learned priors to attractors within the associative memory of a Hopfield network, facilitating information retrieval from memory and efficient association of information fragments.
### Global workspace layer
We devise an associative memory-augmented attention layer called the _global workspace layer_, which comprises the squash layer, the bottleneck attention guided by low-rank memory, and the information retrieval within the associative memory of a Hopfield network (Figure 1). The global workspace layer can be seen as an add-on component on the monolithic Vision Transformer, where the feed-forward layers process patches before they enter the workspace, facilitating abstract relation learning, and the self-attention learns the contextual relations for a specific sample. The global workspace layer learns spatial relations across various samples and time steps.
**Squash layer** In self-attention, patches from the same sample are attended to. In our work, we improve the diversity in patch-wise correlation learning beyond one sample using a _squash layer_. The squash layer obtains patch representations from the entire training batch to enable competition among patches not only from the same sample but also from different samples. This differs from traditional approaches where the competition resides within specific samples. The squash layer concatenates patches within one batch \(V\in\mathbb{R}^{B\times N\times E}\) into vectors \(V\in\mathbb{R}^{(B\times N)\times E}\), which forms a list of patches regardless of the samples they are from. Though the number of patches changes in practice depending on the batch size, the communication bottleneck with a fixed capacity \(k\) limits the number of patches the workspace can attend to at any given time. Since the bottleneck decreases the complexity from \(O((B\times N)^{2})\) to \(O((B\times N)\times k)\), using the squash layer increases the diversity of input patches without adding to the complexity. With the greater diversity, a sample's classification task, for instance, can benefit from other patches belonging to the same class within the batch input.
**Low-rank explicit memory** An explicit memory bank with limited slots aims to learn \(M\) _priors_ \(\gamma\in\mathbb{R}^{M\times D}\), where \(D\) is the dimension of the prior. The priors in the memory bank are used as various keys to compute the bottleneck attentions that extract different sets of patches from the squashed input. Furthermore, using low-rank priors reduces memory consumption, as a lower dimension \(D<<E\) is obtained through a down-scale linear transformation.
### Bottleneck attention with a limited capacity
The objective of the bottleneck attention is to learn a set of priors that guide attention to various input patches. This is enabled by a cross-attention mechanism constrained by hard attention. We first consider a tailored cross-attention mechanism to update the memory bank based on the squashed input \(\Xi^{t}=V^{t}\in\mathbb{R}^{(B\times N)\times E}\), then we discuss the case of limiting the capacity via a top-\(k\) hard attention. Notably, in the cross-attention, the query is a function of the current memory content
Figure 1: The scheme of the Associative Transformer. (a) In a global workspace layer, the input \(\mathbb{R}^{B\times N\times E}\) is squashed into vectors \(\mathbb{R}^{(B\times N)\times E}\). The squashed representations are projected to a low-rank latent space of dimension \(D<<E\) and then are sparsely selected and stored in the explicit memory via a fixed bottleneck \(k<<(B\times N)\). The Hopfield network utilizes the memory to reconstruct the input, where a learnable linear transformation (LT) scales the memory contents to match the input dimension \(E\). (b) The Associative Transformer block consists of sequentially connected self attention, feed-forward layers, and the global workspace layer.
\(\gamma^{t}=\{\gamma^{t}_{i}\}_{i=1}^{M}\). The key and value are functions of the squashed input \(\Xi^{t}\). The attention scores for head \(i\) can be computed by \(\mathsf{A}^{t}_{i}(\gamma^{t},\Xi^{t})=\text{softmax}(\frac{\gamma^{t}W^{O}_{i, t}(\Xi^{t}W^{K}_{i,t})^{T}}{\sqrt{D}})\). This is the case of soft attention with limited constraints on the bottleneck capacity. Moreover, the hard attention allows patches to compete to enter the workspace through a \(k\)-size bottleneck, fostering the selection of essential patches. In particular, the top-\(k\) patches with the highest attention scores from \(A^{t}_{i}\) are selected to update the memory. To ensure a stable update across different time steps, we employ the layer normalization and the Exponentially Weighted Moving Average (EWMA) method as follows
\[\text{head}^{t}_{i}=\text{top-}k(\mathsf{A}^{t}_{i})\Xi^{t}W^{V}_{t},\ \hat{\gamma}^{t}=\text{LN}(\text{Concat}(\text{head}^{t}_{1},\dots,\text{head }^{t}_{A})W^{O}), \tag{3}\]
\[\gamma^{t+1}=\alpha\cdot\gamma^{t}+(1-\alpha)\cdot\hat{\gamma}^{t},\ \gamma^{t+1}=\frac{\gamma^{t+1}}{\sqrt{\sum_{j=1}^{M}(\gamma^{t+1}_{j})^{2}}}, \tag{4}\]
where top-\(k\) selects the \(k\) highest attention scores, LN is the layer normalization, and \(\alpha\) is a smoothing factor determining the decay rate of older observations. EWMA ensures the stable memory update with varying batch sizes by accumulating both old \(\gamma^{t}\) and new memories \(\hat{\gamma}^{t}\).
During the test time, the explicit memory is frozen, functioning as fixed priors, and any memory update from the bottleneck attention will not be retained (Figure 8). We only compute \(\gamma^{t+1}\) for the following pattern retrieval step in Hopfield networks for the current batch. To ensure a fair evaluation on the test dataset, the same explicit memory from the training time is utilized across all test batches.
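A simplified single-head sketch of the squash operation and the top-\(k\) bottleneck write of Eqs. (3)-(4) is given below. This is our own simplification, not the official implementation: the multi-head concatenation, the output projection \(W^{O}\), and the exact form of the normalization in Eq. (4) are abbreviated, and `Wq`, `Wk`, `Wv` denote the projection matrices.

```python
import torch
import torch.nn.functional as F

def squash(V):                        # (B, N, E) -> (B*N, E)
    return V.reshape(-1, V.shape[-1])

def bottleneck_write(memory, patches, Wq, Wk, Wv, k, alpha=0.9):
    """memory: (M, D) priors; patches: (B*N, E) squashed input;
    Wq: (D, D), Wk/Wv: (E, D) projections; k: bottleneck capacity."""
    D = memory.shape[-1]
    scores = (memory @ Wq) @ (patches @ Wk).T / D ** 0.5        # (M, B*N)
    attn = F.softmax(scores, dim=-1)
    # keep only the k highest scores per memory slot (top-k hard attention)
    topk = torch.topk(attn, k, dim=-1)
    mask = torch.zeros_like(attn).scatter_(-1, topk.indices, topk.values)
    new_mem = F.layer_norm(mask @ (patches @ Wv), (D,))          # write step, Eq. (3)
    mem = alpha * memory + (1 - alpha) * new_mem                 # EWMA, Eq. (4)
    # normalization of Eq. (4), read here as an l2 normalization over the M slots
    return mem / mem.pow(2).sum(dim=0, keepdim=True).sqrt().clamp_min(1e-8)
```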
**Bottleneck attention balance loss** The bottleneck attention and the low-rank memory contribute to reduced model complexity of the global workspace layer. Nevertheless, employing multiple components cascaded in depth might lead to difficulty in the emergence of specialized priors in the explicit memory (Figure 9). To overcome this challenge, we propose the bottleneck attention balance loss to encourage the selection of diverse patches from different input positions. The bottleneck attention balance loss \(\ell_{\text{bottleneck}}\) comprises two components, i.e., the accumulative attention scores and the chosen instances for each input position. Then, we derive the normalized variances of the two metrics across different positions as follows
\[\ell_{\text{loads}_{i,l}}=\sum_{j=1}^{M}(\mathsf{A}^{t}_{i,j,l}>0),\ \ell_{\text{importance}_{i,l}}=\sum_{j=1}^{M}\mathsf{A}^{t}_{i,j,l}, \tag{5}\]
\[\ell_{\text{bottleneck}_{i}}=\frac{\text{Var}(\{\ell_{\text{importance}_{i,l}}\}_{l=1}^{B\times N})}{(\frac{1}{B\times N}\sum_{l=1}^{B\times N}\ell_{\text{importance}_{i,l}})^{2}+\epsilon}+\frac{\text{Var}(\{\ell_{\text{loads}_{i,l}}\}_{l=1}^{B\times N})}{(\frac{1}{B\times N}\sum_{l=1}^{B\times N}\ell_{\text{loads}_{i,l}})^{2}+\epsilon}, \tag{6}\]
where \(\mathsf{A}^{t}_{i,j,l}\) denotes the attention score of the input position \(l\) for the \(j\)th memory slot of head \(i\), \(\ell_{\text{importance}}\) represents the accumulative attention scores for all \(M\) memory slots concerning each input position, \(\ell_{\text{loads}}\) represents the chosen instances for each input position in \(M\) memory slots, Var(\(\cdot\)) denotes the variance, and \(\epsilon\) is a small value to avoid division by zero. Finally, the loss scores for all the heads are summed up as follows: \(\ell_{\text{bottleneck}}=\sigma\cdot\sum_{i=1}^{A}\ell_{\text{bottleneck}_{i}}\) where \(\sigma\) is a coefficient.
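In other words, \(\ell_{\text{bottleneck}_{i}}\) is the squared coefficient of variation of the per-position importance and load. A minimal sketch for a single head (illustrative only, assuming `attn` is the masked score matrix \(\mathsf{A}^{t}_{i}\) of shape \(M\times(B\times N)\) as a torch tensor) is given below.

```python
def balance_loss(attn, sigma=1e-2, eps=1e-10):
    """attn: (M, L) masked attention scores of one head over L = B*N positions."""
    importance = attn.sum(dim=0)              # accumulated score per position, Eq. (5)
    load = (attn > 0).float().sum(dim=0)      # number of slots selecting each position
    def cv2(v):                               # squared coefficient of variation
        return v.var() / (v.mean() ** 2 + eps)
    return sigma * (cv2(importance) + cv2(load))   # Eq. (6), scaled by sigma
```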
### Information retrieval within associative memory
After writing information into the shared workspace, the learned priors can serve as attractors within associative memory. The objective is to reconstruct the current input patches towards more globally meaningful representations based on these attractors.
**Attractors** Priors learned in the memory bank act as attractors in associative memory. Attractors have basins of attraction defined by an energy function. Any input state that enters an attractor's basin of attraction will converge to that attractor. The attractors in associative memory usually have the same dimension as input states; however, the priors \(\gamma^{t+1}\) in the memory bank have a lower rank compared to the input. Therefore, we employ a learnable linear transformation \(f_{\text{LT}}(\cdot)\) to project the priors into a space of the same dimension, \(E\), as the input before using them as attractors.
**Retrieval using the energy function in Hopfield networks** Hopfield networks have demonstrated their potential as a promising approach to constructing associative memory. In particular, a continuous Hopfield network Demircigil et al. (2017); Ramsauer et al. (2021) operates with continuous
input and output values. The upscaled priors \(f_{\text{LT}}(\gamma^{t+1})\) are stored within the continuous Hopfield network and are subsequently retrieved to reconstruct the input state \(\Xi^{t}\). Depending on an inverse temperature variable \(\beta\), the reconstructed input \(\hat{\Xi}^{t}\) can be either a metastable state that represents a mixture of various attractors or a fixed state represented by one of the attractors. A large \(\beta\) makes it less likely for metastable states to appear, while a small \(\beta\) increases the likelihood. The continuous Hopfield network employs an energy function to enable the evolution of patches into more globally meaningful representations with respect to the learned attractors. We update each patch representation \(\xi^{t}\in\Xi^{t}\) by decreasing its energy \(E(\xi^{t})\) within associative memory as follows
\[E(\xi^{t})=-\text{lse}(\beta,f_{\text{LT}}(\gamma^{t+1})\xi^{t})+\frac{1}{2} \xi^{t}\xi^{t}T+\beta^{-1}\text{log}M+\frac{1}{2}\zeta^{2}, \tag{7}\]
\[\zeta=\max_{i}|f_{\text{LT}}(\gamma^{t+1}_{i})|,\ \hat{\xi}^{t}=\arg\min_{\xi^{t}}E( \xi^{t}), \tag{8}\]
where lse is the log-sum-exp function and \(\zeta\) denotes the largest norm of attractors. Equation 7 describes an iteration that can be applied several times. Usually, we apply just a single step for efficient forward and backward computation during end-to-end training. \(t\) is the batch time step, and the iteration time step is implicit. Additionally, a skip connection functioning as the information broadcast from the global workspace is employed to obtain the final output \(\Xi^{t+1}=\hat{\Xi}^{t}+\Xi^{t}\).
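For clarity, the single retrieval step corresponds to the standard update rule of modern continuous Hopfield networks, a softmax-weighted combination of the stored attractors that decreases the energy in Eq. (7); a minimal sketch (our own simplification) is given below.

```python
import torch.nn.functional as F

def hopfield_broadcast(patches, attractors, beta=1.0):
    """patches: (B*N, E) input states; attractors: (M, E) up-scaled priors f_LT(gamma)."""
    scores = beta * patches @ attractors.T             # (B*N, M)
    retrieved = F.softmax(scores, dim=-1) @ attractors  # one retrieval step
    return retrieved + patches                          # skip connection / broadcast
```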
## 5 Experiments
In this section, we discuss the settings and extensive empirical results for image classification and relational reasoning tasks. Our study demonstrates that AiT outperforms the coordination method and other sparse attention-based approaches in terms of both performance and model complexity.
### Setup
**Datasets** We evaluate model performance on two different scales of datasets: (1) small (Triangle Goyal et al. (2022b), CIFAR10 Krizhevsky & Hinton (2009), and CIFAR100 Krizhevsky & Hinton (2009)) and (2) middle (Oxford-IIIT Pet Parkhi et al. (2012) and Sort-of-CLEVR Santoro et al. (2017)). We train the model on these datasets from scratch using the training split and evaluate using the test split. A detailed description of the datasets can be found in Appendix A.1.
**Model variants** We investigate three different sizes of model configurations, i.e., Small, Medium, and Base. The Base variant setting is adapted from Vision Transformer (ViT) using 12 layers, 12 attention heads for each layer, a hidden dimension of 768, and an MLP dimension of 3072. The Medium variant with 6 layers and the Small variant with 2 layers are added for efficiency comparisons among approaches. The CLS token is removed while the pooled representations of the last dense network layer are used instead since using the CLS token leads to undermined learning results in vision tasks Wang et al. (2021); Graham et al. (2021).
**Hyperparameters** The hyperparameters were chosen based on a grid search. A batch size of 512 was employed for the CIFAR datasets and the Triangle dataset, 128 for the Pet dataset, and 64 for the Sort-of-CLEVR dataset. We utilized the AdamW optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and a weight decay of 0.01. A cosine learning rate scheduler was implemented with an initial learning rate of 1e-5, a warm-up phase of 5 (15) epochs within a total of 100 (300) epochs, and a minimum learning rate set to 1e-6. The smoothing factor of the exponentially weighted moving average, the coefficient \(\sigma\), and the small value \(\epsilon\) in the bottleneck balance loss were set to 0.9, 1e-2, and 1e-10, respectively. For AiT, we employed a memory slot size of 32 and a bottleneck attention head size of 8. We used a bottleneck size of 512 for CIFAR and Pet, 64 for Triangle, and 256 for Relational Reasoning. We used 32 memory slots for CIFAR, Triangle, and Relational Reasoning, and 128 slots for Pet (Appendix A.3). Unless otherwise noted, we trained the model for 100 epochs and reported the mean of three individual experiments. The code will be made publicly available.
### Classification tasks
The experiments on image classification tasks include comparisons to a wide range of methods (Table 1). We used the author-recommended hyperparameters to re-implement these methods.
Regarding the coordination method, we have examined the efficacy of its variants with different model configurations. The default coordination model consists of 4 layers, with parameter sharing among different attention layers. Coordination-D is a deeper model with 8 layers using the parameter sharing. Coordination-H is a high-capacity model with 4 layers that employ individual parameters. Coordination-DH is a high-capacity model with 8 layers. The results show that AiT achieved better performance compared to the coordination methods. The AiT performance also increased when scaling it from AiT-Small to AiT-Base, while the coordination methods appeared difficult to scale with the increasing number of layers and parameters, as seen in the case of Coordination-DH. Moreover, AiT outperformed the other baseline methods, demonstrating strong performance. For instance, compared to ViT-Base with 85.7M parameters, AiT-Medium is a shallower model with only 45.9M parameters. Nevertheless, AiT-Medium exhibited an average performance of 81.58%, surpassing the ViT-Base model's average of 80.46% and requiring much fewer parameters.
We extended the evaluation to a middle-sized dataset of Oxford Pet. We used a patch size of 16. A larger memory of 128 slots was employed due to the higher resolution and the increased data class complexity. For the Oxford Pet dataset, we trained the model for 300 epochs. Figure 3 reveals that ViT performance can be enhanced by including the global workspace layer. AiT-Medium with fewer parameters also outperforms ViT-Base in the Pet dataset. Though AiT-Medium converges at a later training stage, it is a smaller model with fewer layers to compute compared to ViT-Base.
**Prior specialization** Patches in one image can be attended sparsely by different priors. As shown in Section 3, a monolithic Transformer model needs to learn such specialization and relations without the inductive bias introduced by the global workspace layer. Notably, these priors learned to focus on independent spatial areas of an image to guide the attention. We visualized the activation maps for the specialized priors used in CIFAR-10 for AiT-Small (Figure 2). Each slot's activation maps highlight specific areas during the selection of relevant patches.
### Ablation study
We conducted a comprehensive ablation study to gain insights into the functionalities of the various components of AiT (Table 2). In AiT with reset memory, we initialized the explicit memory every epoch. The W/O Hopfield ablation replaces the Hopfield network with another multi-head attention (MHA) that shares the same architecture as the self attention in Figure 1.b. The rationale behind this ablation is grounded in the prior studies of Set Transformer and Perceiver models that relied on two MHA components cascaded in depth. For a fair comparison, instead of simply removing the Hopfield network, we replaced it with the MHA. The added MHA takes the input state \(\Xi^{t}\) as the query, and the upscaled priors \(f_{\text{LT}}(\gamma^{t+1})\) as the key and value, i.e., \(\hat{\Xi}^{t}=\text{MHA}(\Xi^{t},f_{\text{LT}}(\gamma^{t+1}))\).
Figure 3: Comparison on the Pet dataset, which shows enhanced accuracy for AiT.
Figure 2: Learned distinct memory slot attentions in AiT. Each slot’s activation maps highlight a specific area during the selection of relevant image patches.
Moreover, W/O memory evaluates performance when the global workspace layer is removed, the remaining components of which are equivalent to a simple Vision Transformer. W/O bottleneck shows performance using dense attention by removing the top-\(k\) bottleneck capacity constraint. W/O SA examines performance when the multi-head self attention component in Figure 1.b is excluded, and W/O FF evaluates performance when the feedforward component is removed. Lastly, the dense networks consist of repeated feedforward components with the other components removed in each AiT block. The analysis suggests that the complete model with all components can achieve the highest classification accuracy. The bottleneck appeared to play a significant role in improving performance, since its absence led to an evident decrease in accuracy. Making changes to other components such as Hopfield networks and the explicit memory, while not as impactful, still resulted in degraded accuracy. Despite the relatively good performance of dense networks, their performance in relational reasoning tasks is considerably inferior to that of the AiT model (Section 5.8). Additionally, we demonstrate the W/O memory forward ablation in Table 7 and Table 8.
### Comparison with the coordination method
We performed a detailed comparison with the coordination method in terms of test accuracy and model size. Figure 4 depicts the results for CIFAR-10 based on models with a single layer. Notably, using the low-rank memory (LM) that has a more diverse set of priors showed benefits in both improving the performance and decreasing the model size. For instance, the baseline coordination (C) method exhibited moderate accuracy of 60.41% with a model size of 2.2M. In contrast, consolidating the low-rank memory and the self-attention (C+LM+SA) exhibited the highest accuracy of 71.62%, while maintaining a relatively compact size of 1.2M. The Hopfield network (HN) maintained the model performance while reducing the model size by replacing the cross-attention
\begin{table}
\begin{tabular}{l|c c c c c} \hline Methods & CIFAR10 & CIFAR100 & Triangle & Average & Model Size \\ \hline AiT-Base & **85.44** & **60.78** & 99.59 & **81.94** & 91.0 \\ AiT-Medium & 84.59 & 60.85 & 99.57 & 81.58 & 45.9 \\ AiT-Small & 83.34 & 56.30 & 99.47 & 79.70 & 15.8 \\ \hline Coordination Goyal et al. (2022b) & 75.31 & 43.90 & 91.66 & 70.29 & 2.2 \\ Coordination-DH & 72.49 & 51.70 & 81.78 & 68.66 & 16.6 \\ Coordination-D & 74.50 & 40.69 & 86.26 & 87.16 & 2.2 \\ Coordination-H & 78.51 & 48.59 & 72.53 & 65.64 & 8.4 \\ \hline ViT-Base Dosovitskiy et al. (2021) & 83.82 & 57.92 & **99.63** & 80.46 & 85.7 \\ ViT-Small & 79.53 & 53.19 & 99.47 & 77.40 & 14.9 \\ Perceiver Jaegle et al. (2021) & 82.52 & 52.64 & 96.78 & 77.31 & 44.9 \\ Set Transformer Lee et al. (2019) & 73.42 & 40.19 & 60.31 & 57.97 & 2.2 \\ BRIMs Mittal et al. (2020) & 60.10 & 31.75 & - & 45.93 & 4.4 \\ Luna Ma et al. (2021) & 47.86 & 23.38 & - & 35.62 & 77.6 \\ \hline \end{tabular}
\end{table}
Table 1: Performance comparison in image classification tasks
\begin{table}
\begin{tabular}{l|c c c c} \hline Models & CIFAR10 & CIFAR100 & Triangle & Average \\ \hline AiT & **83.34** & **56.30** & **99.47** & **79.70** \\ \hline ResNet memory & 81.94 & 55.96 & 99.46 & 79.12 \\ W/O Hopfield & 81.03 & 54.96 & 99.44 & 78.48 \\ W/O memory (ViT) & 79.53 & 53.19 & 92.47 & 77.40 \\ Dense networks & 77.78 & 53.14 & 99.46 & 76.79 \\ W/O bottleneck & 75.40 & 46.53 & 93.33 & 73.75 \\ W/O SA & 72.72 & 47.75 & 99.46 & 73.31 \\ W/O FF & 69.51 & 40.89 & 97.61 & 69.34 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison based on an ablation study. The results indicate that combining all the components leads to the highest performance in all the tasks.
Figure 4: Model size vs. accuracy for configurations.
with more efficient information retrieval. However, HN was effective only when either the LM or SA component was applied. We assume that retrieval with the Hopfield associative memory relies on a diverse set of priors, which is enabled by the enhanced bottleneck attention using the low-rank memory and the attention balance loss, and the learning through self-attention. By contrast, the previous coordination method had a limited number of priors, e.g. 8, and did not employ self-attention to correlate among input patches. Moreover, integrating all three components (C+LM+HN+SA) resulted in a competitive accuracy of 71.49% with a compact model size of 1.0M.
### Memory initialization
To initialize the explicit memory, we set each slot with values drawn from a specific distribution. We investigated several memory initialization methods (Table 3). The Gaussian distribution generates random values with a mean of 0 and a variance of 1. The sinusoidal positional embedding Vaswani et al. (2017) uses sine and cosine functions to represent positions in a sequence. The uniform distribution Graves et al. (2014) uses an upper bound \(\frac{1}{\sqrt{M+D}}\), where \(M\) is the memory slot number and \(D\) is the slot size. The identity distribution Goyal et al. (2022a) uses ones on the diagonal and zeros elsewhere. We found that the Gaussian distribution resulted in the best performance, possibly by preventing specific priors from dominating the learning process in early training stages.
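For illustration, the four initialization schemes can be sketched as follows; this is a hypothetical helper in which the slot count \(M\) and slot size \(D\) are free parameters, \(D\) is assumed even for the sinusoidal table, and the uniform bound is applied symmetrically.

```python
import math
import torch

def init_memory(M: int, D: int, scheme: str = "gaussian") -> torch.Tensor:
    """Initialize an explicit memory of M slots of size D (sketch)."""
    if scheme == "gaussian":       # N(0, 1); best-performing in our comparison
        return torch.randn(M, D)
    if scheme == "uniform":        # bound 1/sqrt(M + D), applied symmetrically here
        b = 1.0 / math.sqrt(M + D)
        return torch.empty(M, D).uniform_(-b, b)
    if scheme == "identity":       # ones on the diagonal, zeros elsewhere
        return torch.eye(M, D)
    if scheme == "sinusoidal":     # fixed sine/cosine positional table (D even)
        pos = torch.arange(M, dtype=torch.float32).unsqueeze(1)
        div = torch.exp(torch.arange(0, D, 2, dtype=torch.float32)
                        * (-math.log(10000.0) / D))
        mem = torch.zeros(M, D)
        mem[:, 0::2] = torch.sin(pos * div)
        mem[:, 1::2] = torch.cos(pos * div)
        return mem
    raise ValueError(f"unknown scheme: {scheme}")
```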
### Efficacy of Bottleneck Attention Balance Loss
The Bottleneck Attention Balance Loss facilitates selection of diverse input patches for each prior. To quantitatively measure the efficacy, we computed sparsity scores that represent the ratio of distinct patches in all selected patches. In Figure 10, we observe an apparent increase in the patch diversity.
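A minimal sketch of how such a sparsity score could be computed from the top-\(k\) patch indices selected by each prior; the exact bookkeeping in our implementation may differ.

```python
import torch

def sparsity_score(topk_idx: torch.Tensor) -> float:
    """Ratio of distinct patches among all patches selected by the priors.

    topk_idx: (num_priors, k) indices of the top-k patches attended to by
    each prior for one sample.  A score close to 1 indicates that the priors
    spread over diverse patches; a low score indicates that they collapse
    onto the same few patches.
    """
    flat = topk_idx.reshape(-1)
    return flat.unique().numel() / flat.numel()
```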
### Varying the inverse temperature in Hopfield networks
We investigated the effect of the inverse temperature on information retrieval based on the Hopfield networks in Figure 5, which shows the reconstructed patches in the CIFAR-10 task for the AiT-Small model. We found that using an inverse temperature of 1.0 gave the best retrieval performance based on the Hopfield networks. The results suggest that the beta parameter requires tuning to reach optimal performance. We aim to study a mechanism to adjust the beta adaptively in the future, addressing this sensitivity and potentially further improving performance.
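For reference, the retrieval step whose sharpness beta controls can be written with the standard modern-Hopfield update rule; the sketch below is a simplification with batching kept but projections and normalization omitted, not the exact module used in AiT.

```python
import torch

def hopfield_retrieve(xi: torch.Tensor, memory: torch.Tensor,
                      beta: float = 1.0, steps: int = 1) -> torch.Tensor:
    """One (or a few) modern-Hopfield update steps (sketch).

    xi:     (B, N, D) query patterns (patch states)
    memory: (M, D)    stored patterns (memory priors)
    Small beta favors metastable mixtures of several patterns; large beta
    snaps each query onto a single stored pattern.
    """
    for _ in range(steps):
        attn = torch.softmax(beta * xi @ memory.T, dim=-1)  # (B, N, M)
        xi = attn @ memory                                   # (B, N, D)
    return xi
```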
### Relational reasoning
In relational reasoning tasks, we aim to train a model to answer questions concerning the properties and relations of various objects based on a given image. A performant model can attend to specific regions of images for the question-answering task. We employed the Sort-of-CLEVR dataset Santoro et al. (2017) and compared performance to both Transformer based models including Set Transformer and the coordination method, and other non-Transformer based models
Figure 5: Comparison with varying inverse temperature scores. The inverse temperature beta influences the formation of metastable states that concurrently represent multiple patch representations. A smaller beta is more likely to generate such metastable states, while a larger beta leads to a stronger separation of different patterns. However, a larger beta can also lead to local minima, where input patterns are reconstructed to the same pattern within associative memory.
including CNN+MLP and CNN+Relation Networks (CNN+RN) Santoro et al. (2017). The non-Transformer based models incorporated inductive biases into their architectures, such as convolutional layers focusing on different image areas. This often results in superior performance compared to the Transformer based methods that lack a built-in inductive bias. Moreover, two dense networks, the Dense-Small and Dense-Base, are included as additional non-Transformer based models. The Dense-Small (11.1M) and Dense-Base (62.7M) are derived from the AiT-Small and AiT-Base, respectively. Additionally, in relational reasoning tasks, a question was embedded with an embedding layer that consists of a learnable linear projection and layer normalization before and after the linear projection. The question embedding was then concatenated to image patch embeddings as the input of a model and the labels were a list of answer options with 10 classes.
Table 4 presents the results for relational and non-relational tasks. In the non-relational task, the question pertains to the attributes of a specific object, whereas in the relational task, the question focuses on the relations between different objects. A description of the dataset can be found in Appendix A.1. The results demonstrate a substantial improvement in AiT's performance when addressing the relational reasoning tasks. This indicates that the global workspace layer can learn spatial relations across different samples and time steps, contributing to task performance. Dense networks generally do not perform well in the more complex relational reasoning tasks.
## 6 Conclusions
We proposed the Associative Transformer (AiT), an architecture inspired by Global Workspace Theory and associative memory. AiT leverages a diverse set of priors with the emerging specialization property to enable enhanced association among representations via the Hopfield network. The comprehensive experiments demonstrate AiT's efficacy compared to conventional models, including the coordination method. In the future, we aim to investigate multi-modal competition within the shared workspace, enabling tasks to benefit from the cross-modal learning of distinct perceptual inputs.
|
2309.06817 | Detecting molecules in Ariel low resolution transmission spectra | The Ariel Space Mission aims to observe a diverse sample of exoplanet
atmospheres across a wide wavelength range of 0.5 to 7.8 microns. The
observations are organized into four Tiers, with Tier 1 being a reconnaissance
survey. This Tier is designed to achieve a sufficient signal-to-noise ratio
(S/N) at low spectral resolution in order to identify featureless spectra or
detect key molecular species without necessarily constraining their abundances
with high confidence. We introduce a P-statistic that uses the abundance
posteriors from a spectral retrieval to infer the probability of a molecule's
presence in a given planet's atmosphere in Tier 1. We find that this method
predicts probabilities that correlate well with the input abundances,
indicating considerable predictive power when retrieval models have comparable
or higher complexity compared to the data. However, we also demonstrate that
the P-statistic loses representativity when the retrieval model has lower
complexity, expressed as the inclusion of fewer than the expected molecules.
The reliability and predictive power of the P-statistic are assessed on a
simulated population of exoplanets with H2-He dominated atmospheres, and
forecasting biases are studied and found not to adversely affect the
classification of the survey. | Andrea Bocchieri, Lorenzo V. Mugnai, Enzo Pascale, Quentin Changeat, Giovanna Tinetti | 2023-09-13T09:07:40Z | http://arxiv.org/abs/2309.06817v1 | # Detecting molecules in _Ariel_ low resolution transmission spectra
###### Abstract
The _Ariel_ Space Mission aims to observe a diverse sample of exoplanet atmospheres across a wide wavelength range of 0.5 to 7.8 microns. The observations are organized into four Tiers, with _Tier 1_ being a reconnaissance survey. This Tier is designed to achieve a sufficient signal-to-noise ratio (S/N) at low spectral resolution in order to identify featureless spectra or detect key molecular species without necessarily constraining their abundances with high confidence. We introduce a _P_-statistic that uses the abundance posteriors from a spectral retrieval to infer the probability of a molecule's presence in a given planet's atmosphere in Tier 1. We find that this method predicts probabilities that correlate well with the input abundances, indicating considerable predictive power when retrieval models have comparable or higher complexity compared to the data. However, we also demonstrate that the _P_-statistic loses representativity when the retrieval model has lower complexity, expressed as the inclusion of fewer than the expected molecules. The reliability and predictive power of the _P_-statistic are assessed on a simulated population of exoplanets with H\({}_{2}\)-He dominated atmospheres, and forecasting biases are studied and found not to adversely affect the classification of the survey.
**Keywords: methods: data analysis, planets, and satellites: atmospheres, surveys, techniques: spectroscopic**
## 1 Introduction
During the past decade, the number of exoplanet discoveries has increased exponentially, bringing the total number of confirmed exoplanets to more than 5000 by mid-2022. Numerous space missions are contributing to the effort of detecting new exoplanets, such as Kepler [1; 2], TESS [3], CHEOPS [4], PLATO [5], GAIA [6], together with ground instrumentation such as HARPS [7], WASP [8], KELT [9], and OGLE [10]. Over time, the field emphasis has gradually expanded from the determination of bulk planetary parameters to the search for a deeper understanding of the true nature of exoplanets and their formation-evolution histories.
Multiband photometry and spectroscopy of transiting exoplanets are currently the most promising techniques for characterizing the composition and thermodynamics of exoplanet atmospheres [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30], as they allow us to effectively separate the signal of the planet from that of its host star. Observations in the near- to mid-infrared can probe the neutral atmospheres of exoplanets to study the signal from the rovibrational transitions of molecules [15; 31].
Current instrumentation has enabled this kind of atmospheric characterization only for a few tens of planets orbiting close to their host stars over a limited wavelength range [e.g. 17; 19; 32; 33]. A considerable contribution to exoplanetary science will come from the James Webb Space Telescope (_JWST_), launched in December 2021 [34], and _Ariel_. _JWST_ provides broadband spectroscopy in the range of 0.6 to 28.5 micron of the electromagnetic spectrum, sufficient to detect all molecular species [31; 35; 36; 37; 38; 39].
### Ariel and its Tiers
The Atmospheric Remote-Sensing Infrared Exoplanet Large-survey, _Ariel_, will launch in 2029 as the M4 ESA mission of the Cosmic Vision program [40; Ariel Definition Study Report1]. _Ariel_ will conduct the first unbiased survey of a statistically significant sample of approximately 1000 transiting exoplanet atmospheres in the 0.5-7.8 \(\mu m\) wavelength range. Three photometers (VISPhot, 0.5-0.6 \(\mu m\); FGS1, 0.6-0.80 \(\mu m\); FGS2, 0.80-1.1 \(\mu m\)) and three spectrometers (NIRSpec, 1.1-1.95 \(\mu m\) and R \(\geq\) 15; AIRS-CH0, 1.95-3.9 \(\mu m\) and R \(\geq\) 100; AIRS-CH1, 3.9-7.8 \(\mu m\) and R \(\geq\) 30), provide simultaneous coverage of the whole spectral band. This broad spectral range encompasses the emission peak of hot and warm exoplanets and the spectral signatures of the main expected atmospheric gases such as H\({}_{2}\)O, CO\({}_{2}\), CH\({}_{4}\), NH\({}_{3}\), HCN, H\({}_{2}\)S, TiO, VO [e.g. 15; 31]. _Ariel_ will allow us to comprehensively understand the
formation-evolution histories of exoplanets as well as to extend comparative planetology beyond the boundary of the Solar System.
After each observation, the resulting spectrum from each spectrometer is binned during data analysis to optimize the signal-to-noise ratio (S/N). Therefore, by implementing different binning options, the mission will adopt a four-Tier observation strategy, expected to produce spectra with different S/N to optimize the science return. Tier 1 is a shallow reconnaissance survey created to perform transit and eclipse spectroscopy on all targets to address questions for which a large population of objects needs to be observed. Tier 1 spectra have S/N \(\geq\) 7 when raw spectra are binned into a single spectral point in NIRSpec, two in AIRS-CH0, and one in AIRS-CH1, for a total of seven effective photometric data points. A subset of Tier 1 planets will be further observed to reach S/N \(\geq\) 7 at higher spectral resolution in Tier 2 and Tier 3 for detailed chemical and thermodynamic characterization of the atmosphere. Tier 4 is designed for bespoke or phase-curve observations [41].
### Detecting molecules in Tier 1 spectra
Among the main goals of Tier 1 observations is to identify planetary spectra that show no molecular absorption features (because of clouds or compact atmospheres) and to select those to be reobserved in higher Tiers for a detailed characterization of their atmospheric composition and thermodynamics. Tier 1 observations, however, have a much richer information content even though the combination of S/N and spectral resolution might not be adequate to constrain chemical abundances with high confidence using retrieval techniques.
Adapting existing data analysis techniques or developing new methodologies can be essential to extract all relevant information from the Tier 1 data set. In a previous study, [42] were successful in demonstrating, using color-color diagrams, that Tier 1 observations can be used to infer the presence of molecules in the atmospheres of gaseous exoplanets, independently from planet parameters such as mass, size, and temperature. However, their method has an estimator bias that depends on the magnitude of the instrumental noise; a detailed characterization of instrumental uncertainties is required to remove the estimator bias before it can be used for quantitative predictions. In this follow-up paper, we develop a new method that is both reliable and unbiased to address the following question: _can we use Tier 1 transmission spectra to identify the presence of a molecule, with an associated calibrated probability?_. Hence, these calibrated probabilities can also be used to inform the decision-making process to select Tier 1 targets for re-observation in _Ariel_'s higher Tiers for detailed characterization.
Section 2 outlines the methodology used in this analysis. Section 2.1 describes our data analysis strategy for detecting a molecule in these spectra. Section 2.2 details our experimental data set, including the planetary population, forward model parameters, atmosphere randomization, and noise estimation. Section 2.3 summarizes the spectral retrievals performed, discussing the optimization algorithm and the priors used. Section 2.5 describes
the data analysis tools used to evaluate the probability forecasts of the method. Section 3 details the results obtained in terms of forecast reliability (Section 3.1), predictive power (Section 3.2), and bias of the abundance estimator utilized (Section 3.3). Finally, Section 4 discusses all the results, and Section 5 summarizes the main conclusions of this analysis.
## 2 Methods
Tier 1 transmission spectra contain sufficient information to infer the presence of several atmospheric molecules [42], but Tier 1 observations are in general non-ideal for quantitative spectral retrievals in terms of molecular abundances, as they are required to achieve a S/N \(\geq\) 7 when binned in only seven effective photometric data points in the 0.5-7.8 \(\mu m\) wavelength range [41]. Abundance posterior probabilities from retrievals can however still be informative and here we develop a new method to identify the presence of molecules in Tier 1 transmission spectra starting from these posteriors.
### Analysis strategy
Given a marginalized posterior distribution of a molecular abundance, we compute an empirical probability, \(P\), that the molecule is present in the atmosphere of a planet, with an abundance above some threshold, \(\mathbb{T}_{Ab}\), as:
\[P\simeq\int_{\mathbb{T}_{Ab}}^{\infty}\mathcal{P}(x)dx \tag{1}\]
where \(\mathcal{P}\) is the marginalized posterior distribution and \(x\) represents the abundance values. Thus, the predicted \(P\) depends on the assumed atmospheric model and the selected abundance threshold \(\mathbb{T}_{Ab}\). If the assumed atmospheric model is representative of the observed atmosphere, then a clear correlation (above noise) between \(P\) and the true abundance in Tier 1 data implies that \(P\) can be used to identify the most likely spectra that contain a molecule, providing a preliminary classification of planets by their molecular content. Thus, this \(P\)-statistic can be considered robust [43], even when \(\mathcal{P}(x)\) is too broad to constrain the abundance.
To test whether this method is sensitive enough, we need to simulate transmission spectra as observed in Tier 1, using an atmospheric model that includes a certain number of molecules. Then, we need to perform a spectral retrieval with the same atmospheric model and compare each input molecular abundance with the predicted \(P\) corresponding to that molecule. The test is successful if, for an agreed \(\mathbb{T}_{Ab}\), we recover a high \(P\) for each large input abundance and a low \(P\) for each small input abundance. To understand how well the method behaves under conditions similar to the _Ariel_ reconnaissance survey, we repeat this test on a large and diverse planetary population.
In this study, we employ a simulated population of approximately 300 transmission spectra of H\({}_{2}\)-He gaseous planets, which contain CH\({}_{4}\), H\({}_{2}\)O, and
CO\({}_{2}\) trace gases with randomized input abundances. Additionally, we introduce NH\({}_{3}\) with randomized abundances as a nuisance parameter since its spectral features overlap with those of water and other molecules. We utilize NH\({}_{3}\) to test the \(P\)-statistic's efficacy and investigate the robustness of its predictions under various assumptions, such as the exclusion of NH\({}_{3}\) from retrievals or the inclusion of additional molecules not present in the population.
Therefore, we can study whether this method provides reliable predictions under less favorable conditions when the assumed model is not fully representative of the observed atmosphere. This might provide some insight into how robustly the method can reveal the presence of a molecule in a real observation when the atmosphere is unknown. For this, we add or remove molecules from the retrieval model (hereafter, "fit-composition") with respect to the simulated composition. Then, we perform different spectral retrievals, that use different fit-compositions, and compare the predictions obtained from the \(P\)-statistic with the input abundances.
#### Model exploration
We consider three cases in our analysis. In the first case (referred to as R\({}_{0}\)), we use an atmospheric model that includes CH\({}_{4}\), H\({}_{2}\)O, CO\({}_{2}\), and NH\({}_{3}\) as trace gases, which matches the composition used in the forward model generation of the population.
In the second case (referred to as R\({}_{1}\)), we consider a fit-composition that includes only CH\(4\), CO\(2\), and H\(2\)O, omitting NH\(3\). In this case, there is a possibility of inadequate representation of the data because NH\({}_{3}\)'s molecular features could overlap with the observed features of other molecules (hence its adoption as a nuisance), particularly H\({}_{2}\)O [31]. As a result, the retrieved values of \(P\) may not accurately reflect the input abundances of H\({}_{2}\)O, leading to decreased reliability of the predictions.
In the third case (referred to as R\({}_{2}\)), we expand the fit-composition beyond the input composition by including also CO, HCN, and H\({}_{2}\)S. It should be noted that the spectral features of these additional molecules could also overlap with the observed features of the other molecules. For instance, CO and CO\({}_{2}\) exhibit a spectral overlap around \(4.5\,\mu m\). Hence, even in this case, obtaining reliable predictions of the input composition may not be obvious.
Table 1 provides a summary of the molecules included in the fit-composition for each retrieval. For more detailed information on the retrievals performed, please refer to Section 2.3.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Retrieval & CH\({}_{4}\) & CO\({}_{2}\) & H\({}_{2}\)O & NH\({}_{3}\) & CO & HCN & H\({}_{2}\)S \\ \hline R\({}_{0}\) & ✓ & ✓ & ✓ & ✓ & & & \\ R\({}_{1}\) & ✓ & ✓ & ✓ & & & & \\ R\({}_{2}\) & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Molecules included in the fit-composition for each retrieval.
### Experimental data set
As a simulated population, we use a planetary population generated using the Alfnoor software [42; 44]. Alfnoor is a wrapper of TauREx 3 [45] and Ariel-Rad [46]. Given a list of candidate targets and a model of the _Ariel_ payload, it automatically computes simulated exoplanet spectra as observed in each _Ariel_ Tier.
Specifically, we use a subset of the POP-I planetary population of [42]. POP-I consists of 1000 planets from a possible realization of the _Ariel_ Mission Reference Sample (MRS) of [41]. That MRS (hereafter, MRS19) comprises known planets in 2019 from NASA's Exoplanet Archive and TESS forecast discoveries. Here we ignore the TESS forecasts, thus obtaining a sub-population of around 300 planets, that we label POP-Is. Using POP-Is planets ensures that, in principle, we can compare our results with those of [42].
Figure 1 shows that POP-Is comprises a diverse sample of planets mostly with large radii (\(\gtrsim\) 5 R\({}_{\oplus}\)), short orbital periods (\(\leq\) 4/5 days), warm to hot equilibrium temperatures (500 - 2500 \({}^{\circ}K\)) and stellar hosts with different magnitudes in the K band of the infrared spectrum (8 - 12 \(m_{K}\)). Compared to the parameter space sampled by the entire POP-I, this data set has more occasional statistics on smaller and longer-period planets around brighter stars.
The detailed properties of POP-I (and therefore POP-Is) are discussed in [42] and briefly summarized here. The forward model parameters are randomized to test diverse planetary atmospheres. The baseline atmosphere is a primordial atmosphere filled with H\({}_{2}\) and He with a solar mixing ratio of He/H\({}_{2}\) = 0.17. The vertical structure of the atmosphere comprises 100 pressure layers, uniformly distributed in log space from \(10^{-4}\) to \(10^{6}\) Pa, using the plane-parallel approximation. The equilibrium temperature of each planet is randomized between \(0.7\times T_{p}\) and \(1.05\times T_{p}\), where \(T_{p}\) is the equilibrium temperature of the planet listed in MRS19; the atmospheric temperature-pressure profile is isothermal. Constant vertical chemical profiles are added for H\({}_{2}\)O, CO\({}_{2}\), CH\({}_{4}\), and NH\({}_{3}\), with abundances randomized according to a logarithmic uniform distribution spanning \(10^{-7}\) to \(10^{-2}\) in Vertical Mixing Ratios (VMR). Randomly generated opaque gray clouds are also added with a surface pressure varying from 5\(\times 10^{2}\) to \(10^{6}\) Pa to simulate cloudless to overcast atmospheres. Table 2 summarizes the randomized parameters of the POP-I forward models. For each planet, POP-I contains the raw spectrum binned at each _Ariel_ Tier resolution ("noiseless spectra"), the associated noise predicted by the _Ariel_ radiometric simulator, ArielRad, for each spectral bin, and the number of transit observations expected to reach the Tier-required S/N. To simulate an observation, we scatter the noiseless spectra according to a normal distribution with a standard deviation equal to the noise at each spectral bin. The "observed spectra" data set is built by repeating this process for each planet in POP-Is. As in [42], the Tier 1 data used in this work are binned on the higher resolution Tier 3 spectral grid: R = 20, 100, and 30, in NIRSpec, AIRS-CH0, and AIRS-CH1, respectively. The noise is that of Tier 1, which
yields a S/N \(>\) 7 if data were binned on the Tier 1 spectral grid. This is to prevent the loss of spectral information that may occur in binning.
### Retrievals summary
To perform the retrievals, we use the TauREx 3 retrieval framework [45], the same used to generate the raw POP-Is spectra. In the retrieval model, we
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Parameter** & **Unit** & **Range** & **Scale** \\ \hline T\({}_{\text{P}}\) / \(T_{\text{P; MRS19}}\) & \({}^{\circ}K\) & 0.7; 1.05 & linear \\ CH\({}_{4}\) & VMR & \(10^{-7}\); \(10^{-2}\) & log \\ CO\({}_{2}\) & VMR & \(10^{-7}\); \(10^{-2}\) & log \\ H\({}_{2}\)O & VMR & \(10^{-7}\); \(10^{-2}\) & log \\ NH\({}_{3}\) & VMR & \(10^{-7}\); \(10^{-2}\) & log \\ P\({}_{clouds}\) & Pa & 5\(\times 10^{2}\); \(10^{6}\) & log \\ \hline \hline \end{tabular}
\end{table}
Table 2: Forward model randomized parameters in POP-I.
Figure 1: Parameter space distribution of the POP-Is planetary population used in this work, which comprises about 300 selected planets from MRS19. The horizontal axis reports the planetary orbital period in days; the vertical axis reports the stellar magnitude in the K band. Each data point represents a planet; the symbol size is proportional to the planetary radius in Earth’s radii; the symbol color shows the expected planetary equilibrium temperature. Light blue data points in the background show the entire MRS19/POP-I parameter space for reference.
include opaque gray clouds, pressure-dependent molecular opacities of various trace gases, Rayleigh scattering, and Collision-Induced Absorption (CIA) of H\({}_{2}\)-H\({}_{2}\) and H\({}_{2}\)-He. Table 3 reports a referenced list of CIA and all molecular opacities used in this study.
The free parameters of the retrievals are the radius and mass of the planet, as well as the molecular mixing ratios, as listed in Table 4. We use broad logarithmic uniform priors for the molecular abundances, ranging from \(10^{-12}\) to \(10^{-1}\) in VMR. For the mass and radius of the planet, we select uniform priors of 20% and 10% around the respective values listed in MRS19. The gray cloud pressure levels are not included as free parameters in the retrieval because of their degeneracy with other parameters such as the radius [60].
We set the evidence tolerance to 0.5 and sample the parameter space through 1500 live points using the Multinest algorithm2[61; 62]. We disable the search for multiple modes to obtain a single marginalized posterior distribution of each molecular abundance to insert in Equation 1.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Parameters** & **Units** & **Priors** & **Scale** \\ \hline M\({}_{\mathrm{P}}\) & \(M_{J}\) & \(\pm 20\%\) & linear \\ R\({}_{\mathrm{P}}\) & \(R_{J}\) & \(\pm 10\%\) & linear \\ CH\({}_{4}\) & VMR & \(10^{-12}\); \(10^{-1}\) & log \\ CO\({}_{2}\) & VMR & \(10^{-12}\); \(10^{-1}\) & log \\ H\({}_{2}\)O & VMR & \(10^{-12}\); \(10^{-1}\) & log \\ NH\({}_{3}\) & VMR & \(10^{-12}\); \(10^{-1}\) & log \\ CO & VMR & \(10^{-12}\); \(10^{-1}\) & log \\ HCN & VMR & \(10^{-12}\); \(10^{-1}\) & log \\ H\({}_{2}\)S & VMR & \(10^{-12}\); \(10^{-1}\) & log \\ \hline \hline \end{tabular} We take a conservative approach by choosing larger bounds for the priors than those used for the random forward spectra generation, reported in Table 2.
\end{table}
Table 4: Fit parameters and their priors for the retrievals.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Opacity** & **Reference(s)** \\ \hline H\({}_{2}\)-H\({}_{2}\) & [47; 48] \\ H\({}_{2}\)-He & [49] \\ H\({}_{2}\)O & [50; 51] \\ CH\({}_{4}\) & [52; 53] \\ CO\({}_{2}\) & [54] \\ NH\({}_{3}\) & [55; 56] \\ CO & [57] \\ H\({}_{2}\)S & [58] \\ HCN & [59] \\ \hline \hline \end{tabular}
\end{table}
Table 3: List of opacities used in this work and their references.
We then perform the three different retrievals (respectively R\({}_{0}\), R\({}_{1}\), and R\({}_{2}\)) described in Section 2.1 on each POP-Is planet. We use the Atmospheric Detectability Index (ADI) [19] to assign statistical significance to the results of these retrievals. Given the Bayesian evidence of a nominal retrieval model, \(E_{N}\), and of a pure-cloud/no-atmosphere model, \(E_{F}\), the ADI is:
\[\text{ADI}=\begin{cases}\log(E_{N})-\log(E_{F}),&\text{if }\log(E_{N})>\log(E_{F})\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
ADI is a positively defined metric, equivalent to the log-Bayesian factor [63; 64] where \(\log(E_{N})>\log(E_{F})\). To compute \(E_{F}\), we perform an additional retrieval for each planet with a flat-line model with the planet radius being the only free parameter.
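A one-line sketch of Equation 2, taking as inputs the log-evidences returned by the two nested-sampling runs:

```python
def adi(log_evidence_nominal: float, log_evidence_flat: float) -> float:
    """Atmospheric Detectability Index of Eq. (2): the log-Bayes factor of the
    nominal atmospheric model against the flat-line model, floored at zero."""
    return max(log_evidence_nominal - log_evidence_flat, 0.0)
```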
### Abundance threshold
We utilized the marginalized posteriors to estimate the \(P\)-statistic using an abundance threshold of \(\mathbb{T}_{Ab}=10^{-5}\), which is considered "molecular-poor" according to the definition by [42]. This threshold is higher by 1-2 orders of magnitude compared to the Tier-2 detection limits reported by [44]. The "molecular-poor" condition is met for approximately 40% of the atmospheres due to the randomization boundaries set for each molecule (see Table 2). The ability to detect a molecule depends on factors such as opacities, correlations among molecules, and noise in the measured spectrum. Therefore, \(\mathbb{T}_{Ab}\) can be optimized for each molecule in future work, although we applied the same abundance threshold for all in this pilot study.
### Data analysis tools
The \(P\)-statistic can be used to reliably classify planets for the presence of a molecule with an abundance above \(\mathbb{T}_{Ab}\) when \(P\) correlates with the \(Ab\) true value. The stronger the correlation above noise fluctuations, the larger the predictive power. Because this classification is binary and \(P\) is defined in the range \(0\to 1\), we can use standard statistical tools such as calibration curves and ROC curves [65; 66] to evaluate the performance of this method in revealing the presence of molecules and in selecting Tier 1 targets for higher Tiers. These curves are routinely utilized by the Machine Learning community3, as they present the forecast quality of a binary classifier in a well-designed graphical format.
Footnote 3: In Python, the package scikit-learn [67] (v1.0) provides the method calibration_curve in sklearn.calibration and the method roc_curve in sklearn.metrics.
#### 2.5.1 Calibration curves
A calibration curve [e.g. 66] plots the forecast probability averaged in different bins on the horizontal axis and the fraction of positives, in each bin, on the vertical axis (see Figure 2 for a generic example). In this work, the fraction
of positives is the fraction of POP-Is planets with true abundance larger than \(\mathbb{T}_{Ab}\), and the forecast probability is the corresponding \(P\)-statistic. Calibration curves provide an immediate visual diagnosis of the quality of binary classifier forecasts and the biases that the forecasts may exhibit.
For well-calibrated predictions, the forecast probability is equal to the fraction of positives, except for deviations consistent with sampling variability. Therefore, the ideal calibration curve follows the 1:1 line. Miscalibrated forecasts can be biased differently depending on whether the calibration curve lies on the left or on the right of the 1:1 line. A curve entirely to the right of the 1:1 line indicates an over-forecasting bias, as the forecasts are consistently too large relative to the fraction of positives, as seen in the calibration curve of Classifier 1 in Figure 2. On the contrary, the calibration curve of Classifier 2 shows the characteristic signature of under-forecasting, being entirely on the left of the 1:1 line, indicating that the forecasts are consistently too small relative to the fraction of positives. There may also be more subtle deficiencies in forecast performance, such as an under-confident forecast, with over-forecasting biases
Figure 2: Calibration curves of three mock classifiers, exhibiting different forecast quality and biases. The legend reports the B-S of the forecasts of each classifier. The calibration curve for perfectly calibrated forecasts is reported for reference.
associated with lower probabilities and under-forecasting biases associated with higher probabilities, as seen in the calibration curve of Classifier 3.
Calibration curves paint a detailed picture of forecast performance, often summarized in a scalar metric known as the Brier Score [B-S, 68], which is defined as the mean square difference between probability forecasts and true class labels (positive or negative); the lower the B-S, the better the predictions are calibrated. From Figure 2, we see that Classifier 3 achieves the best B-S, although the forecasts are not well calibrated. In general, uncalibrated forecasts can be calibrated using calibration methods such as Platt scaling and Isotonic regression [69; 70; 71].
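As a sketch, both diagnostics can be obtained directly with the scikit-learn methods cited above. Here `input_abundance` and `p_values` are placeholder arrays holding the true VMRs of one molecule and the corresponding \(P\)-statistic forecasts, and the number of bins is an illustrative choice.

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

# input_abundance: true VMRs of one molecule for every POP-Is planet
# p_values:        the corresponding P-statistic forecasts
y_true = (np.asarray(input_abundance) > 1e-5).astype(int)  # positives: Ab > T_Ab
p_forecast = np.asarray(p_values)

frac_positives, mean_forecast = calibration_curve(y_true, p_forecast, n_bins=10)
bs = brier_score_loss(y_true, p_forecast)
```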
#### 2.5.2 ROC curves
Given the predicted probabilities of a classifier, and a selected probability threshold \(\mathbb{P}\), the number of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN), are defined in Table 5.
A binary classifier with high predictive power assigns larger \(P\) to positive observations (true label "Yes") and smaller \(P\) to negative (true label "No"). This maximizes TP and TN, and minimizes FP and FN.
A ROC curve [e.g. 66] is a square diagram that illustrates the predictive power at different values of the probability threshold \(\mathbb{P}\). It plots the False Positive Rate (FPR) on the horizontal axis and the True Positive Rate (TPR) on the vertical axis (see Figure 3 for a generic example), defined as:
\[\text{FPR}=\frac{\text{FP}}{\text{Negatives}}=\frac{\text{FP}}{ \text{FP}+\text{TN}} \tag{3a}\] \[\text{TPR}=\frac{\text{TP}}{\text{Positives}}=\frac{\text{TP}}{ \text{TP}+\text{FN}} \tag{3b}\]
FPR and TPR are commonly known as "false alarm" and "hit" rates. ROC curves are constructed by calculating the TPR and FPR from the number of TP, TN, FP, and FN as \(\mathbb{P}\) decreases from 1 to 0. The ideal classifier minimizes the FPR while maximizing the TPR; thus, its ROC curve is the unit step function. On the other hand, the worst possible classifier is a random classifier
\begin{table}
\begin{tabular}{l c c c} \hline \hline & & \multicolumn{2}{c}{True label} \\ Forecast & Forecast label & Yes & No \\ \hline \(\text{P}\geq\mathbb{P}\) & Yes & TP & FP \\ \(\text{P}<\mathbb{P}\) & No & FN & TN \\ \hline \hline \end{tabular}
\end{table}
Table 5: Contingency table formulating all four possible outcomes of a binary classification problem.
with a ROC curve along the 1:1 line. Real-world classifiers have intermediate ROC curves ranked by how close they are to the unit step function. As seen in Figure 3, Classifier 3 exhibits the highest predictive power, as the corresponding ROC curve arcs everywhere above the ROC curves for Classifiers 1 and 2.
ROC curves portray a detailed picture of predictive power, often summarized in a scalar metric known as the Area Under the Curve (AUC), the fraction of the unit square area subtended by a ROC curve. The higher the AUC, the higher the predictive power. The ideal classifier has \(\text{AUC}=1.0\); the random one has \(\text{AUC}=0.5\). From Figure 3, we see that, as expected, Classifier 3 also achieves the largest AUC.
ROC curves can also be used to select the optimal classification threshold \(\mathbb{P}\), which roughly corresponds to the position on the curve where the TPR cannot be raised without significantly increasing the FPR. For example, as seen in Figure 3, the optimal \(\mathbb{P}\) for Classifier 3 is around 0.5, where it achieves a TPR of nearly 0.9 at a low FPR of approximately 0.1. Reducing \(\mathbb{P}\) to 0.4 is
Figure 3: ROC curves of the same mock classifiers shown in Figure 2, exhibiting different predictive powers. The legend reports the AUC associated with each ROC curve. The ideal and worst possible classifier ROC curves are reported for reference. Several probability thresholds \(\mathbb{P}\) at regularly spaced intervals are also displayed on each curve.
not advantageous, as it only increases the TPR to approximately 0.95, at the expense of increasing the FPR to almost 0.3.
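A companion sketch for the ROC diagnostics, including one possible rule for the optimal threshold \(\mathbb{P}_{*}\) (balancing the FP and FN counts, as done in Section 3.2); variable names follow the previous snippet.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

fpr, tpr, thresholds = roc_curve(y_true, p_forecast)
auc = roc_auc_score(y_true, p_forecast)

# One possible choice of P*: the threshold at which the FP and FN counts
# (as defined in Table 5) are closest to each other.
n_pos = y_true.sum()
n_neg = len(y_true) - n_pos
fn = (1.0 - tpr) * n_pos
fp = fpr * n_neg
p_star = thresholds[np.argmin(np.abs(fp - fn))]
```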
### Using calibration and ROC curves
Using calibration curves and the B-S metric, we can immediately diagnose the forecast quality of the \(P\)-statistic and its potential biases. Suppose that the forecast probability \(P\) matches the fraction of planets with input abundances greater than \(\mathbb{T}_{Ab}\) (fraction of positives) in each probability bin. In that case, the prediction of the method is well-calibrated. Moreover, we can compare the forecast quality achieved for different molecules using the B-S metric. If the forecasts are not well calibrated, we can infer which kind of bias affects the predictions of the method by inspecting the shape of the calibration curve. If the forecasts show an over-forecasting bias (as in the example of Classifier 1, Fig. 2) and therefore incorrectly classify a fraction of planets as bearing a molecule, too many Tier 1 planets may be selected for re-observation in higher Tiers, resulting in less optimal scheduling of observations. On the contrary, an under-forecasting bias (as in the example of Classifier 2, Fig. 2) may imply that fewer Tier 1 planets than possible would be scheduled for re-observing in higher Tiers.
Using ROC curves and the AUC metric, the power of the \(P\)-statistic to predict the presence of molecules can be assessed. The closer the ROC curve approaches the unit step function (AUC \(\simeq\) 1, Fig. 3), the higher the predictive power. Moreover, we can directly compare the predictive power achieved for different molecules by analyzing the shape of the corresponding ROC curves and the AUC values.
The shape of the ROC curve provides a way to select the optimal classification threshold, \(\mathbb{P}_{*}\), for the problem under study. For instance, \(\mathbb{P}_{*}\) can be chosen in a trade-off process that maximizes the TPR while keeping the FPR at an acceptable low value.
This choice can aid the selection of Tier 1 targets for re-observation in a higher Tier: a large FPR would result in a poor allocation of observing time while a low TPR would result in a reduction of observational opportunities. It can also benefit population studies where one might need to track the presence of certain molecules across families of planets and extrasolar systems. These types of studies are outside the scope of this work, but can profit from the methodology developed here.
## 3 Results
As detailed in Section 2.1, we designed a method based on the \(P\)-statistic to reveal the presence of a molecule in Tier 1 spectra. In the following sections, we use the statistical tools described in Section 2.5 to show the performance of the \(P\)-statistic in predicting the presence of several molecules in our simulated planetary population. In particular, in Section 3.1, we use calibration curves to assess the reliability of the predictions of the method and related biases, while
in Section 3.2, we use ROC curves to assess the predictive power of the method and discuss the optimal classification threshold, \(\mathbb{P}_{*}\). In Section 3.3, we use the median abundance as an estimator of the true abundance and investigate its biases in the low S/N regime to explain the biases observed in the calibration curves.
### Detection reliability
#### 3.1.1 Retrieval \(\mathbf{R_{0}}\)
Figure 4 shows the analysis performed to evaluate the reliability of the method when using the abundance posteriors of the retrieval \(\mathrm{R_{0}}\), which uses the same atmospheric composition as the one used in the generation of the simulated atmospheres (see Table 1). The subplots in each column share the same horizontal axis with the predicted probability \(P\) that a molecule is present with an input abundance, \(Ab_{mol}\), above the selected abundance threshold \(\mathbb{T}_{Ab}=10^{-5}\) (see Section 2.4). The figure reports the results for \(\mathrm{CH_{4}}\), \(\mathrm{H_{2}O}\), and \(\mathrm{CO_{2}}\), shown from left to right, respectively.
The top row displays histograms of the \(P\)-statistic realizations, which exhibit a bimodal distribution. Two peaks are observed in the distribution, with one located at \(P\approx 0.2\) and the other at \(P\approx 0.8\), with the former being more prominent. Additionally, a valley is observed at intermediate values, with \(P\approx 0.5\).
The middle row shows the correlation between the predicted probabilities on the horizontal axis and the input abundances of each molecule on the vertical axis. We take a rough measure of the correlation by calculating the angular coefficient of the data points from a linear fit. These coefficients are listed in Table 6. The lower right quadrant of these diagrams (\(P\gtrsim 0.5\) and \(Ab_{mol}<10^{-5}\)) is almost empty of data points, indicating that whenever the method predicts a high \(P\), the corresponding input abundance is likely higher than \(\mathbb{T}_{Ab}\). However, not all planets with an input abundance greater than \(\mathbb{T}_{Ab}\) are associated with a high \(P\), as the upper left quadrants of these diagrams (\(P\lesssim 0.5\) and \(Ab_{mol}>10^{-5}\)) are not empty of data points.
The bottom row shows the calibration curves computed for each molecule; each curve is shown with a bootstrap confidence interval calculated using 1000 bootstrap samples. That is, following [72], we randomly remove \(\sim 1/e\approx 36\%\) of the data from each of these samples and replace them by repeating some randomly chosen instances of the ones kept. For each molecule, we calculate the B-S using the brier_score_loss method of sklearn.metrics[67], with the associated uncertainty estimated from the same bootstrap samples. Table 6 lists the B-S values obtained.
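A sketch of the bootstrap estimate described above, implemented as a plain resample-with-replacement (which on average leaves out \(\sim 1/e\) of the planets and repeats others):

```python
import numpy as np
from sklearn.metrics import brier_score_loss

def bootstrap_brier(y_true, p_forecast, n_boot=1000, seed=0):
    """Bootstrap mean and scatter of the Brier score: resample planets with
    replacement and recompute the score on each resample."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    p_forecast = np.asarray(p_forecast)
    n = len(y_true)
    scores = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        scores[i] = brier_score_loss(y_true[idx], p_forecast[idx])
    return scores.mean(), scores.std()
```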
The calibration curves show an under-forecasting bias (curve to the left of the 1:1 line; see Section 2.5.1) especially associated with larger forecast probabilities, giving a fraction of positives \(\approx 1.0\) for \(P\gtrsim 0.6\). On the contrary, the
probabilities are better calibrated for \(P\lesssim 0.4\). From the B-S values (less accurate forecasts receive higher B-S), we see that CH\({}_{4}\) is the best-scoring molecule, probably due to its strong absorption spectral features.
It is possible that the observed under-forecasting of the calibration curves and the bimodality of the \(P\)-statistic distribution are both related to the sampling of the parameter space. This is briefly discussed further in Section 4.2.
#### 3.1.2 Retrieval R\({}_{1}\)
Figure 5 shows the same analysis for the retrieval R\({}_{1}\), which includes only CH\({}_{4}\), CO\({}_{2}\), and H\({}_{2}\)O in the fit-composition and excludes NH\({}_{3}\), although this molecule is present in the data set (see Table 1). Comparing the histograms from the
Figure 4: Detection reliability analysis for CH\({}_{4}\), H\({}_{2}\)O, and CO\({}_{2}\) from the R\({}_{0}\) retrievals, that implement a model that is fully representative of the simulated atmospheres. All plots in the same column share the same horizontal axis with the predicted probabilities, \(P(Ab_{mol}>10^{-5})\), that a molecule is present in the atmosphere of a planet, with an abundance above the selected abundance threshold, \(\mathbb{T}_{Ab}=10^{-5}\). Top row: histogram with the frequency of the \(P\) forecasts. Middle row: diagrams showing the correlation between \(P\) values on the horizontal axis and input abundances on the vertical axis. The linear fit parameters of the data points are reported on each legend. For visual reference, the dotted horizontal lines show the position of \(\mathbb{T}_{Ab}\) and the dotted vertical lines the value 0.5 on the x-axis. Bottom row: calibration curves with associated bootstrap confidence intervals; each legend shows the B-S of the forecasts.
top row of this figure with those obtained for the retrieval \(\mathrm{R_{0}}\) (Figure 4), we notice a decrease in the forecast frequency at low \(P\), especially for \(\mathrm{CH_{4}}\) and \(\mathrm{H_{2}O}\), with a reduced peak at \(P\) around 0.2. On the contrary, high values of \(P\) are more frequent, enhancing the peak at \(P\) around 0.8: for \(\mathrm{CH_{4}}\), more than
30% of the data set receives \(P\) between 0.8 and 0.9. These are samples with high input abundance.
The plots in the middle row show an increase in the scatter in the data points compared to R\({}_{0}\). In this case, we find a decrease in the correlation between \(P\) and the input abundances, and the angular coefficients of the linear fit are reported in Table 6. Planets that receive \(P\gtrsim 0.8\) have high input abundance, \(Ab_{mol}>10^{-5}\).
The calibration curves for H\({}_{2}\)O and CH\({}_{4}\) in the bottom row are, within the uncertainties, closer to the 1:1 line than for R\({}_{0}\), both for high and low forecast probabilities. Although this might appear closer to the ideal behavior, it could be misleading. The B-S is higher than for R\({}_{0}\), because the mean squared difference between the forecasts and true class labels is larger. This is visualized in the middle plots: for \(Ab_{mol}<10^{-5}\) (negative true class label), there are many forecast values with \(P>0.5\). In other words, the correlation between the \(P\)-statistic and the true input abundances is weaker. In contrast, the entire CO\({}_{2}\) calibration curve shows the signature of under-forecasting. The curve for CO\({}_{2}\) is almost the same as for R\({}_{0}\), likely because the missing NH\({}_{3}\) affects less the CO\({}_{2}\) abundance posteriors. On the other hand, the overlap of NH\({}_{3}\) with H\({}_{2}\)O but also CH\({}_{4}\) makes the model used in the retrieval less suitable to describe the data.
The reduced correlation between probability forecasts and input abundances, as well as the higher B-S values, suggest that excluding NH\({}_{3}\), despite its presence in the data set, leads to less representative abundance posteriors. However, predictions for CO\({}_{2}\) are less affected, possibly because this trace gas has less spectral overlap with NH\({}_{3}\) compared to H\({}_{2}\)O or CH\({}_{4}\).
#### 3.1.3 Retrieval R\({}_{2}\)
The results of the same analysis for the retrieval R\({}_{2}\), which includes CO, HCN, and H\({}_{2}\)S as additional molecules to the fit-composition (see Table 1) are very similar to those of R\({}_{0}\) (see Section 3.1.1). Therefore, we refer the reader to Table 6 that summarizes the results for the correlation between predicted probabilities and input abundances, along with the B-S values, and to Figure 1 in Section A of the Appendix.
### Predictor assessment
#### 3.2.1 Retrieval R\({}_{0}\)
Figure 6 shows the analysis performed to assess the predictive power of the \(P\)-statistic (ability to maximize TP and TN while minimizing FP and FN) when using the abundance posteriors from the retrieval R\({}_{0}\). The figure reports the results for CH\({}_{4}\), H\({}_{2}\)O, and CO\({}_{2}\), shown in different columns from left to right, respectively.
The upper row shows the calculated ROC curves for each molecule. Each curve is reported with a bootstrap confidence interval calculated using 1000 bootstrap samples, with the same random removal and replacement of the
data as discussed in Section 3.1, involving \(1/e\approx 36\%\) of the data. For each molecule, we calculate the AUC using the roc_auc_score method of sklearn.metrics[67], with the associated uncertainty estimated from the same bootstrap samples. The AUC values thus obtained are collected in Table 7. For all molecules, the ROC curves are close to ideal behavior (curve near the unit step function, see Section 2.5.2), showcasing that the \(P\)-statistic has significant predictive power. Consequently, the corresponding AUC values are \(>0.9\), with no considerable variation between molecules, implying similar predictive power.
For each molecule, the bottom row shows the number of TP, TN, FP, and FN (see Table 5), used to construct the ROC, versus the probability threshold \(\mathbb{P}\). Also shown are the associated confidence intervals estimated from the same bootstrap samples. These diagrams provide information on how the predictive power of the method changes as \(\mathbb{P}\) varies from 1 to 0 and aid in the selection of the optimal classification threshold \(\mathbb{P}_{*}\) (see Section 2.6).
Figure 6: Predictor assessment analysis for CH\({}_{4}\), H\({}_{2}\)O, and CO\({}_{2}\) from the R\({}_{0}\) retrievals, that implement a model that is fully representative of the simulated atmospheres. Top row: ROC curves with associated bootstrap confidence intervals. The ideal and worst possible classifier ROC curves are reported for reference. The legends report the AUC associated with each ROC curve. Several probability thresholds \(\mathbb{P}\) at regularly spaced intervals are also displayed on each curve. Bottom row: TP, TN, FP, and FN curves plotted as a function of the probability threshold \(\mathbb{P}\), with confidence intervals from the same bootstrap estimation.
Given the randomization of trace gas abundances in the forward model (\(10^{-7}\) to \(10^{-2}\) on a uniform logarithmic scale, see Table 2), and the selected abundance threshold (\(\mathbb{T}_{Ab}=10^{-5}\)), the data set contains \(\sim 60\%\) positive observations and \(\sim 40\%\) negative observations. By definition, for \(\mathbb{P}=1\), the number of positive forecasts, \(\mathrm{N_{P}}=\mathrm{TP}+\mathrm{FP}\), is zero, and the number of negative forecasts, \(\mathrm{N_{N}}=\mathrm{TN}+\mathrm{FN}\), is equal to the size of the data set. Therefore, at this probability threshold, \(\mathrm{TN}\simeq 40\%\) and \(\mathrm{FN}\simeq 60\%\). As \(\mathbb{P}\) decreases, \(\mathrm{N_{P}}\) increases (TP and FP increase), while \(\mathrm{N_{N}}\) decreases (TN and FN decrease). For \(\mathbb{P}=0\), \(\mathrm{N_{N}}\) is zero and \(\mathrm{N_{P}}\) is equal to the data set size; at this classification threshold, \(\mathrm{TP}\simeq 60\%\) and \(\mathrm{FP}\simeq 40\%\).
In those cases where there are no external constraints on which misclassification is more bearable (FP or FN), the intersection of their curves gives an optimized classification threshold \(\mathbb{P}_{*}\).
From this intersection, we obtain \(\mathbb{P}_{*}\approx 0.3\) for all molecules. For confirmation, we can trace this \(\mathbb{P}_{*}\) on the ROC curves. As expected, it roughly corresponds to the point where we cannot significantly increase TPR without increasing FPR, which is at TPR \(\approx 0.8\). If, instead, we need a more conservative number of FP, we can choose a higher \(\mathbb{P}_{*}\), for example \(\mathbb{P}_{*}=0.5\), the default classification threshold for a binary classifier.
A concise way to demonstrate the effectiveness of the \(P\)-statistic in rejecting misclassifications is by computing the odds TP:FP and TN:FN, estimated from the curves in the bottom row of Figure 6. Odds relate to the probability that a molecule is correctly identified at the selected \(\mathbb{P}\), with an example shown in Table 7, estimated at \(\mathbb{P}_{*}=0.5\). The table shows that the \(P\)-statistic is quite effective in rejecting FP, as they are negligible for all molecules at this threshold. Moreover, TPR at \(\mathbb{P}_{*}=0.5\) indicates that more than 60% of the positives in the dataset is correctly identified, with TP values of approximately 45%, 35%, and 45% for CH\({}_{4}\), H\({}_{2}\)O, and CO\({}_{2}\), respectively (rounded to the nearest 5% from the odds values listed in the table). However, at this \(\mathbb{P}\), FN
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Retrieval** & **Molecule** & **AUC [\%]** & **TP [\%]: FP [\%]** & **TN [\%]: FN [\%]** \\ \hline \multirow{3}{*}{\(\mathrm{R_{0}}\)} & CH\({}_{4}\) & 93 \(\pm\) 1 & 43 \(\pm\) 3 : \(<\) 1 & 42 \(\pm\) 2 : 15 \(\pm\) 3 \\ & H\({}_{2}\)O & 92 \(\pm\) 1 & 37 \(\pm\) 3 : \(<\) 1 & 41 \(\pm\) 3 : 23 \(\pm\) 3 \\ & CO\({}_{2}\) & 91 \(\pm\) 1 & 45 \(\pm\) 4 : 1.7 \(\pm\) 0.3 & 37 \(\pm\) 2 : 17 \(\pm\) 3 \\ \hline \multirow{3}{*}{\(\mathrm{R_{1}}\)} & CH\({}_{4}\) & 86 \(\pm\) 2 & 51 \(\pm\) 3 : 16 \(\pm\) 1 & 27 \(\pm\) 2 : 7 \(\pm\) 2 \\ & H\({}_{2}\)O & 82 \(\pm\) 2 & 47 \(\pm\) 3 : 15 \(\pm\) 1 & 26 \(\pm\) 2 : 13 \(\pm\) 3 \\ & CO\({}_{2}\) & 90 \(\pm\) 1 & 48 \(\pm\) 3 : 5.6 \(\pm\) 0.5 & 33 \(\pm\) 2 : 14 \(\pm\) 2 \\ \hline \multirow{3}{*}{\(\mathrm{R_{2}}\)} & CH\({}_{4}\) & 93 \(\pm\) 1 & 41 \(\pm\) 3 : \(<\) 1 & 42 \(\pm\) 2 : 17 \(\pm\) 3 \\ & H\({}_{2}\)O & 92 \(\pm\) 1 & 37 \(\pm\) 4 : \(<\) 1 & 41 \(\pm\) 2 : 23 \(\pm\) 3 \\ & CO\({}_{2}\) & 91 \(\pm\) 1 & 45 \(\pm\) 3 : 1.7 \(\pm\) 0.3 & 37 \(\pm\) 2 : 17 \(\pm\) 3 \\ \hline \hline \end{tabular}
\end{table}
Table 7: AUC of the ROC curves and probability odds at the probability threshold \(P=0.5\) for all possible combinations of retrievals and molecules.
increases to approximately 15-25% of the dataset (as seen in the bottom row of Figure 6 at \(\mathbb{P}_{*}=0.5\)), resulting in TN:FN odds of less than 3:1.
#### 3.2.2 Retrieval \(\mathbf{R_{1}}\)
Figure 7 shows the same analysis for the retrieval \(\mathrm{R}_{1}\).
Comparing the ROC curves in the top row with those obtained for the retrieval \(\mathrm{R}_{0}\) (see Section 3.2.1), we notice a decrease in the predictive power of the method, measured by a reduction in AUC for \(\mathrm{CH}_{4}\) and \(\mathrm{H}_{2}\mathrm{O}\), as reported in Table 7. On the contrary, the \(\mathrm{CO}_{2}\) ROC achieves the highest AUC, similar to that of \(\mathrm{R}_{0}\), possibly caused by the limited overlap between \(\mathrm{NH}_{3}\) and \(\mathrm{CO}_{2}\), when compared to the case of \(\mathrm{CH}_{4}\) and \(\mathrm{H}_{2}\mathrm{O}\).
The plots in the bottom row show a significant reduction in the performance of the FP curve compared to that achieved for \(\mathrm{R}_{0}\): for \(\mathrm{CH}_{4}\) and \(\mathrm{H}_{2}\mathrm{O}\), it is above 10% up to \(\mathbb{P}\simeq 0.6\), instead of \(<1\%\) at \(\mathbb{P}\simeq 0.5\). The TN curve also shows a decrease in performance: it remains below 30% to \(\mathbb{P}\simeq 0.6\), instead of reaching 40% at \(\mathbb{P}\simeq 0.4\) in \(\mathrm{R}_{0}\). Although the TP and FN curves demonstrate relatively better performance, the optimal classification threshold denoted as \(\mathbb{P}_{*}\), determined at the intersection of the FP and FN curves, increases to
Figure 7: Same as Figure 6. Predictor assessment for the \(\mathrm{R}_{1}\) retrievals, implementing a model that excludes \(\mathrm{NH}_{3}\) from the fit-composition.
approximately \(\mathbb{P}_{*}\sim 0.65,0.5,0.4\) for CH\({}_{4}\), H\({}_{2}\)O, and CO\({}_{2}\), respectively. Tracing these \(\mathbb{P}_{*}\) values on the ROC curves reveals that they correspond to a TPR of approximately 0.8 for all molecules, similar to R\({}_{0}\), but with a significantly worse FPR, as a consequence of the reduced predictive power.
Table 7 reflects this, showing the odds of TP:FP and TN:FN at the same probability threshold \(\mathbb{P}_{*}=0.5\), which was used for R\({}_{0}\). In this case, the method is less efficient in rejecting FP, despite having TP of approximately 50% and 45% for CH\({}_{4}\) and H\({}_{2}\)O, respectively, resulting in only about 3:1 odds for TP:FP. However, the method is still effective in correctly identifying planets with CO\({}_{2}\), with TP:FP odds of about 9:1. As for TN:FN, the results are similar to R\({}_{0}\), with a slightly better rejection of FN in the case of CH\({}_{4}\) (4:1 instead of 3:1).
#### 3.2.3 Retrieval R\({}_{2}\)
The results from the same analysis for the retrieval R\({}_{2}\) are very similar to R\({}_{0}\)'s (see Section 3.2.1). Therefore, we refer the reader to Table 7 that summarizes the AUC values obtained and the odds TP:FP and TN:FN at the probability threshold \(\mathbb{P}_{*}=0.5\), and to Figure 10 in Section A of the Appendix.
### 3.3 Abundance estimates
Tier 1 might not be adequate for reliable abundance retrieval, for which higher _Ariel_ Tiers are better suited. Therefore, we study the retrieved Tier 1 abundances to investigate trends in their distribution that may clarify some of the behavior observed in the calibration and ROC curves seen in the previous sections. The abundance estimator used is obtained from the median of the marginalized posterior distribution of the \(\log Ab_{mol}\) with asymmetric error bars estimated from the 68.3% confidence level around the median. In particular, we are interested in investigating the regime of input abundances under which this median-based estimator is unbiased.
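As an illustration of this estimator, the following sketch (our own, with a hypothetical `posterior_samples` array standing in for the marginalized posterior samples of \(\log Ab_{mol}\) produced by a retrieval) computes the median and the asymmetric 68.3% error bars:

```python
import numpy as np

def abundance_estimate(posterior_samples):
    """Median estimator of log-abundance with asymmetric 68.3% error bars.

    posterior_samples : 1-D array of marginalized posterior samples of log(Ab_mol).
    Returns (median, err_minus, err_plus).
    """
    # The 68.3% confidence interval around the median corresponds to the
    # 15.85th and 84.15th percentiles of the marginalized posterior.
    lo, med, hi = np.percentile(posterior_samples, [15.85, 50.0, 84.15])
    return med, med - lo, hi - med

# Example usage with a hypothetical posterior:
samples = np.random.normal(loc=-5.0, scale=0.4, size=10_000)
med, em, ep = abundance_estimate(samples)
print(f"log Ab = {med:.2f} -{em:.2f}/+{ep:.2f}")
```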
#### 3.3.1 Retrieval R\({}_{0}\)
Figure 8 reports the analysis performed to investigate potential biases affecting the median of the marginalized posteriors when used as an estimator of the log-abundances. The figure reports the results for CH\({}_{4}\), H\({}_{2}\)O, and CO\({}_{2}\), shown in different columns from left to right, respectively. NH\({}_{3}\) exhibits similar behavior to the other three molecules, but it is not included in the figure in line with the decision to treat it as a nuisance in this study.
Panels in the top row show the molecular log-abundance input vs. the retrieved with the error bar. A solid black line serves as the ideal trend (1:1 line) for visual reference. The color bar indicates the distances between the input and retrieved log-abundance, expressed in units of the uncertainty \(\sigma\) on \(\log Ab_{mol}\), estimated by averaging the asymmetric error bars. Blue colors denote distances up to \(1\sigma\); red colors represent distances in the range of \(1\to 2\sigma\). Larger distances are marked with black circles, which serve to diagnose potential trends and biases that may affect the retrieval results. In addition,
the symbol size reflects the signal-to-noise ratio (S/N) of each observation as estimated in the AIRS-CH0 spectroscopic channel, providing insight into possible trends between the distance to the input abundance and the S/N condition.
The retrieved abundances exhibit good agreement with the input abundances in the large abundance regime, characterized by limited scatter around the ideal trend and by low retrieved uncertainties. This regime is generally observed for \(Ab_{mol}\gtrsim 10^{-4}\), but starts to break down at \(10^{-5}\lesssim Ab_{mol}\lesssim 10^{-4}\).
Figure 8: Comparison between the retrieved molecular abundances and their true values is shown from the R\({}_{0}\) retrievals. The estimator for the retrieved log-abundances is the median of the posterior distributions from the retrievals. Top row: retrieved vs. input molecular abundances. The solid black line represents the ideal trend, and the color bar visualizes the distance between input and retrieved abundances in units of uncertainty \(\sigma\). The symbol size is proportional to the S/N in the AIRS-CH0 spectroscopic channel. Middle row: log-abundance S/N vs. the difference between the retrieved and input log-abundances. A black dashed line is drawn at a value of 5 on the vertical axis for visual reference. Bottom row: true abundances vs. the difference between the retrieved and true log-abundances, in units of \(\sigma\). Dashed vertical lines are drawn at 3 and 5-\(\sigma\). Text boxes show the number of 2-, 3-, and 5-\(\sigma\) outliers.
For \(Ab_{mol}\lesssim 10^{-5}\), the input abundances are rarely retrieved accurately. This analysis can provide insights into the detection limits of CH\({}_{4}\), H\({}_{2}\)O, and CO\({}_{2}\) in _Ariel_ Tier 1, which are estimated to be around \(10^{-4}\). These values can be compared with the expected detection limits of the same molecules in _Ariel_ Tier 2, which are anticipated to be significantly lower, with previous studies [44] reporting limits between \(10^{-7}\) and \(10^{-6.5}\).
Let the log-abundance S/N be defined as \(\frac{1}{\sigma}\mid\log Ab_{mol}\mid\), where \(Ab_{mol}\) is the true value of the molecular abundance. The middle row panels in Figure 8 show the plot of log-abundance S/N vs. the difference between the retrieved and input log abundances. It can be observed that the distribution of data points is broadly separated into two sub-populations at a S/N of about 5. Data points with high S/N correspond to cases where the input is confidently retrieved and aligned along the 1:1 line in the upper row diagrams, indicating unbiased estimation. On the other hand, data points with low S/N cluster in the bottom left portion of the diagram. In these cases, the median is no longer an unbiased estimator of the true value, as the corresponding data points lie to the left of the 1:1 line in the upper row diagrams. As discussed further in Section 4.2, these cases have posteriors dominated by the prior imposed in the retrieval and are best treated as upper limits.
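A minimal sketch of this quantity and of the resulting split of the sample into the two sub-populations (hypothetical arrays standing in for the retrieval outputs) is given below:

```python
import numpy as np

def log_abundance_snr(log_ab_true, sigma):
    """Log-abundance S/N as defined above: |log10(Ab_mol)| / sigma."""
    return np.abs(log_ab_true) / sigma

# Hypothetical population arrays: true log-abundances and the averaged
# asymmetric error bar sigma for each target (illustrative values only).
rng = np.random.default_rng(0)
log_ab_true = rng.uniform(-7.0, -2.0, size=300)
sigma = rng.uniform(0.1, 2.0, size=300)

snr = log_abundance_snr(log_ab_true, sigma)
high = snr > 5.0        # likelihood-dominated: median is ~unbiased
low = ~high             # prior-dominated: best treated as upper limits
print(f"high-S/N targets: {high.sum()}, low-S/N targets: {low.sum()}")
```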
In the bottom row of Figure 8, the true abundances are shown vs. the difference between the retrieved and true abundances, in units of \(\sigma\). The diagrams provide a visualization of how many samples are 2-, 3-, and 5-\(\sigma\) outliers, allowing verification that the distribution is compatible with the tail of the abundance posteriors. The number of outliers is shown in the text box inserted in the diagrams and (converted into percentages) in Table 8. Assuming that the abundance posteriors are representative of the data, the expected fraction of outliers is 5%, 0.3%, and \(\ll\) 1% at 2-, 3-, and 5-\(\sigma\), respectively. We find good agreement between the percentages reported in Table 8 and these values, with minor deviations compatible with the statistical fluctuations of a random variable.
| **Retrieval** | **molecule** | \(>2\sigma\) [%] | \(>3\sigma\) [%] | \(>5\sigma\) [%] |
| --- | --- | --- | --- | --- |
| R\({}_{0}\) | CH\({}_{4}\) | 5.6 | 0.7 | \(\ll 1\) |
| R\({}_{0}\) | H\({}_{2}\)O | 1.3 | 0.3 | \(\ll 1\) |
| R\({}_{0}\) | CO\({}_{2}\) | 5.0 | 1.3 | 0.7 |
| R\({}_{1}\) | CH\({}_{4}\) | 32.9 | 19.6 | 11.6 |
| R\({}_{1}\) | H\({}_{2}\)O | 17.9 | 13.6 | 9.6 |
| R\({}_{1}\) | CO\({}_{2}\) | 16.6 | 10.3 | 6.6 |
| R\({}_{2}\) | CH\({}_{4}\) | 6.0 | 0.7 | \(\ll 1\) |
| R\({}_{2}\) | H\({}_{2}\)O | 1.3 | 0.3 | \(\ll 1\) |
| R\({}_{2}\) | CO\({}_{2}\) | 5.3 | 1.7 | 1.3 |

Table 8: Percentage of data points counted outside three confidence intervals for all possible combinations of retrievals and molecules.
#### 3.3.2 Retrieval \(\mathbf{R_{1}}\)
Figure 9 shows the same analysis for the retrieval \(\mathrm{R_{1}}\).
The top row shows that, although there is still a correlation between the retrieved and input abundances, it is less significant than for \(\mathrm{R_{0}}\). Furthermore, comparing the retrieved and input abundances yields different regimes for each molecule. However, the main difference from \(\mathrm{R_{0}}\) is the significant number of data points at distances greater than \(2\sigma\) (marked by black circles), corresponding to 2-\(\sigma\) outliers. In particular, for all molecules, most of these points are located to the right of the ideal trend, indicating the presence of an overestimation bias for the retrieved abundances. These data points are located in the region \(\mathrm{y}\gtrsim 5\) and \(\mathrm{x}>0\) in the plots in the middle row. Therefore, in addition to the overestimation bias for the abundances, their retrieved uncertainties are underestimated. Furthermore, the bottom-row diagrams show a larger number of outliers compared to the \(\mathrm{R_{0}}\) case: too many for the posterior to be considered representative. This is
Figure 9: Same as Figure 8 for the \(\mathrm{R_{1}}\) retrievals, implementing a model that excludes \(\mathrm{NH_{3}}\) from the fit-composition.
a consequence of an atmospheric model which is not representative of the data, biasing the likelihood, the abundance posteriors, and the median estimator of the abundances.
#### 3.3.3 Retrieval \(\mathbf{R_{2}}\)
The results of the same analysis for the retrieval \(\mathrm{R_{2}}\) are very similar to those of \(\mathrm{R_{0}}\), including the number of outliers that are compatible with the expectations for a model that is representative of the data. Therefore, we refer the reader to Table 8, and to Figure A3 in Section A of the Appendix. Here, we only stress that adding molecules to the fit-composition that are not present in the data set does not appear to significantly bias the abundance posteriors, compared to \(\mathrm{R_{0}}\). This is further discussed in Section 4.2.
## 4 Discussion
In this section, we first discuss the similarities between the results from the retrievals \(\mathrm{R_{0}}\) and \(\mathrm{R_{2}}\), shown in Sections 3.1 and 3.2. Then we apply the ADI metric to compare all retrievals from the point of view of the Bayesian evidence (Section 4.1). Finally, we expand the discussion to the role of the priors in the retrieved abundance posteriors (Section 4.2).
The results of Sections 3.1 and 3.2 show that the predictions of the \(P\)-statistic for the retrievals \(\mathrm{R_{0}}\) and \(\mathrm{R_{2}}\) are comparable, despite the quite different fit-compositions, while the reliability of the \(P\)-statistic is lower in the \(\mathrm{R_{1}}\) case. The \(\mathrm{R_{0}}\) model and its parameters are identical to those used to generate the POP-Is population, and \(\mathrm{R_{2}}\) extends the parameter space with new molecules. In \(\mathrm{R_{2}}\), the abundance posteriors for \(\mathrm{CH_{4}}\), \(\mathrm{H_{2}O}\), and \(\mathrm{CO_{2}}\) do not appear to be significantly affected by the addition of CO, HCN, and \(\mathrm{H_{2}S}\), even though the spectral signatures of the latter three partially overlap with those of \(\mathrm{CH_{4}}\), \(\mathrm{H_{2}O}\), and \(\mathrm{CO_{2}}\) [31]. It should be noted that the absence of these three molecules from the simulated atmospheres is correctly revealed in \(\mathrm{R_{2}}\) by their low \(P\)-statistic, shown in Figure 10, which takes values smaller than 40% for CO, HCN, and \(\mathrm{H_{2}S}\). The extension of the calibration and ROC curve analysis to these molecules is left to future work.
The analysis, therefore, suggests that the \(P\)-statistic is robust (that is, it provides reliable results) against retrieval models that are over-representative of the observed atmosphere. However, the \(P\)-statistic can no longer be considered robust when the retrieval models are under-representative of the observed atmosphere.
In the current study, the threshold abundance used to estimate the \(P\)-statistic remains constant for all molecules. While it is possible to optimize this threshold for individual molecules, we leave this aspect for future research as discussed in Section 2.4. Lowering the threshold reduces the information provided by the ROC curves. To achieve the optimal point of operation, one must balance the True and False Positive Rates, which is necessary to promote a Tier-1 target to higher Tiers. It is important to note that ROC curves
calculated at different threshold levels provide a statistical estimation of the sample's completeness, enabling the inference of population-wide properties such as the fraction of planets containing certain molecules. While this aspect requires further investigation in future research, it should be noted that the fraction of positives, \(\Sigma\) (planets with true abundance in excess of \(\mathbb{T}_{Ab}\)), is related to the fraction of Tier-1 targets, \(\tilde{\Sigma}\), selected with \(P(>\mathbb{T}_{Ab})>\mathbb{P}\) by
\[\Sigma=\frac{\tilde{\Sigma}-FPR}{TPR-FPR}.\]
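A minimal sketch of this estimate is given below; the numbers are illustrative only and are not values from the survey.

```python
def positive_fraction(selected_fraction, tpr, fpr):
    """Invert the relation above: infer the fraction of true positives, Sigma,
    from the fraction of selected Tier-1 targets (Sigma_tilde), given the
    TPR and FPR of the classifier at the chosen probability threshold."""
    return (selected_fraction - fpr) / (tpr - fpr)

# Illustrative numbers: 45% of targets selected, classifier at TPR=0.8, FPR=0.1.
print(positive_fraction(0.45, tpr=0.8, fpr=0.1))  # -> 0.5
```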
The similarities between the R\({}_{0}\) and R\({}_{2}\) models are further discussed in the next section.
### 4.1 ADI comparison
The ADI metric, described in Section 2.3, is used to assess the statistical significance of a model atmosphere with respect to a featureless spectrum using the log-Bayesian factor. A large ADI suggests that a featureless spectrum is less favored by the data. From the ADI definition, the log-Bayesian factor of two competing models is the difference between their respective ADI.
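Assuming the usual definition of the ADI as the log-Bayes factor of the atmospheric model against the flat-line model, floored at zero (Section 2.3 gives the exact definition used here), a minimal sketch of the comparison is:

```python
def adi(log_evidence_model, log_evidence_flat):
    """Atmospheric Detectability Index: log-Bayes factor of the atmospheric
    model against a featureless (flat-line) model, floored at zero."""
    return max(log_evidence_model - log_evidence_flat, 0.0)

def log_bayes_factor(adi_a, adi_b):
    # The same flat-line reference evidence cancels in the difference,
    # provided both ADIs are above zero.
    return adi_a - adi_b

# Hypothetical nested-sampling log-evidences (illustrative values only):
print(log_bayes_factor(adi(150.0, 60.0), adi(144.0, 60.0)))  # -> 6.0
```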
Figure 11 shows the ADI differences between the R\({}_{0}\) model and the two competing models, R\({}_{1}\) and R\({}_{2}\), plotted against NH\({}_{3}\) abundances. A large, positive difference indicates that the competing models are less representative of the data compared to R\({}_{0}\). The median ADI values for all retrievals are approximately 91, 86, and 92 for R\({}_{0}\), R\({}_{1}\), and R\({}_{2}\), respectively, as shown in the text box within Figure 11. This suggests that a featureless atmospheric model is not favored by the data, and R\({}_{1}\) is the least representative, as expected. This
Figure 10: Histogram of the frequency of use of each possible \(P\) forecast for CO, HCN, and H\({}_{2}\)S, using the abundance posteriors from the retrieval R\({}_{2}\). The dotted vertical line marks the default binary classification threshold \(P=0.5\) for reference.
is further supported by the fact that the ADI difference between R\({}_{0}\) and R\({}_{1}\) increases with increasing NH\({}_{3}\) abundance, indicating that higher NH\({}_{3}\) abundances make R\({}_{1}\) less representative compared to R\({}_{0}\), in agreement with the analysis of Section 3. In contrast, the ADI difference between R\({}_{0}\) and R\({}_{2}\) is close to zero, with a scatter described by a standard deviation of approximately 0.5, which is independent of NH\({}_{3}\) abundance. This confirms that R\({}_{2}\) is similarly representative of the data compared to R\({}_{0}\), despite describing a wider parameter space.
### 4.2 Priors
In this section, we discuss the impact of the log-uniform priors adopted in the analysis on the results presented. A consequence of these priors is a non-Gaussian posterior distribution, for which the mean, mode, and median are not equivalent. In particular, the median is not an unbiased estimator of the true abundance, as shown in Figure 8 for low log-abundance S/N (hereafter, "abundance S/N"). This can be explained in terms of the Bayesian formulation of the posterior, \(\mathcal{P}\), which is proportional to the product of the likelihood, \(\mathcal{L}\), and the prior, \(\Pi\).
\[\mathcal{P}\propto\mathcal{L}\times\Pi \tag{4}\]
Figure 11: Bayesian evidence comparison of the retrievals R\({}_{0}\), R\({}_{1}\), and R\({}_{2}\), measured in ADI. The horizontal axis plots the input abundances of NH\({}_{3}\); the vertical axis reports the ADI difference between R\({}_{0}\) and the other two retrievals, R\({}_{1}\) and R\({}_{2}\). The y-axis uses a matplotlib “symlog” scale with the linear threshold set at 1 for better visualization. The text box on the bottom shows the median ADI reported by each retrieval.
Because \(\Pi(\log x)\) is uniform, \(\Pi(x)\sim 1/x\). For large abundance S/N the likelihood dominates, the posterior is approximately Gaussian (by the central limit theorem), and the median estimator is unbiased. For low abundances the prior dominates, \(\mathcal{P}(x)\propto 1/x\), and the median becomes an estimator of the molecular abundance that is biased towards low abundances. This is shown in Figure 12. Each panel shows the probability density function (PDF) of the likelihood, prior and posterior normalized to 1 at the peak, for three cases where the abundance S/N is 4.0, 5.5, and 7.0, respectively, from the top to the bottom panel, assuming an input abundance of \(10^{-5}\). The posterior is likelihood-dominated when the abundance S/N is 7 and is prior-dominated when the abundance S/N is 4.
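The following toy sketch (assumed numbers, not the actual TauREx likelihood) reproduces the prior-dominated regime in log-abundance, where the log-uniform prior is flat: once the likelihood flattens below a hypothetical sensitivity limit, the bounded prior dominates the posterior and the median is dragged well below the true value.

```python
import numpy as np

# Toy model: the prior is flat in log10(Ab) between the prior bounds; below
# an assumed sensitivity limit the data no longer constrain the abundance,
# so the likelihood flattens and the bounded prior dominates the posterior.
logx = np.linspace(-12, -1, 4000)                 # prior bounds in log10(Ab)
true, limit = -5.0, -4.0                          # true value below the limit

like = np.exp(-0.5 * ((np.maximum(logx, limit) - max(true, limit)) / 0.3) ** 2)
prior = np.ones_like(logx)                        # uniform in log10(Ab)
post = like * prior
post /= np.trapz(post, logx)

cdf = np.cumsum(post) * (logx[1] - logx[0])
print("posterior median of log10(Ab):", np.interp(0.5, cdf, logx))  # well below -5
```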
Although logarithmic uniform priors are often assumed in spectral retrieval studies, they are certainly not "uninformative priors" [73; 74]. Clearly, using these priors biases the median estimator of the molecular abundance in the low S/N regime, explaining the trends seen in Figure 8. As a side note, log-priors on molecular abundances could also introduce biases in the derived elemental abundances; therefore, the issue has to be investigated carefully in future studies.
The low abundance S/N targets are those that contribute to the leftmost peak in the bimodal distribution of the \(P\)-statistic (Figure 4). Further investigation is however needed to fully understand the origin of the \(P\)-statistic bimodality and its under-forecasting properties.
Figure 12: The probability density functions (PDF) of the likelihood, prior and posterior are shown by the red, blue, and black lines, respectively. The PDFs are normalized to 1 at their peak. The assumed abundance S/N is 4.0, 5.5, and 7.0, respectively, from the top to the bottom panel. An input abundance of \(10^{-5}\) is assumed.
## 5 Conclusion
The _Ariel_ Tier 1 is a shallow reconnaissance survey of a large and diverse sample of approximately 1000 exoplanet atmospheres. It is designed to achieve a signal-to-noise ratio (S/N) greater than 7 when the target exoplanet atmospheric spectra are binned into 7 photometric bands. Tier 1 enables rapid and broad characterization of planets to prioritize re-observations in higher Tiers for detailed chemical and physical characterization. However, Tier 1 may not have sufficient S/N at the spectral resolution required for high-confidence abundance retrieval of chemical species. Nonetheless, it contains a wealth of spectral information that can be extracted to address questions requiring population studies.
In this study, we have introduced a \(P\)-statistic, which is a function of the data that is sensitive enough to reveal the presence of molecules from transit spectroscopy observations of exoplanet atmospheres and can be used as a binary classifier. The \(P\)-statistic is estimated from the marginalized retrieval posterior distribution and provides an estimate of the probability that a molecule is present with an abundance exceeding a threshold, fixed at \(\mathbb{T}_{Ab}\sim 10^{-5}\) in this study, but can be optimized in future analyses.
We have tested the performance of the \(P\)-statistic on a simulated population of gaseous exoplanets, POP-Is, with traces of H\({}_{2}\)O, CH\({}_{4}\), and CO\({}_{2}\) of randomized abundances, in a H\({}_{2}\)-He dominated atmosphere. NH\({}_{3}\) is also included as a disturbance parameter to test the robustness of the \(P\)-statistic. For this, three models are used in the retrievals: R\({}_{0}\), which is representative of the data; R\({}_{1}\), which is under-representative as it excludes NH\({}_{3}\); and R\({}_{2}\), which is over-representative as it includes additional molecules not considered in the simulated POP-Is.
We find that the \(P\)-statistic estimated from R\({}_{0}\) posteriors shows a clear, above-noise correlation with the input abundances, allowing us to infer the presence of molecules. The \(P\)-statistic appears to follow a bimodal distribution, where targets with low abundance S/N are likely contributors to the peak at low \(P\) values. This is supported by the distribution of the median of the abundance posterior, which is an unbiased estimator of the true value only when the abundance S/N is sufficiently large (typically above 5). The \(P\)-statistic is affected by an under-forecasting bias, but this is not expected to adversely affect the classification of the planets in the survey as it can be calibrated in principle. This is further evidenced by ROC curves with large AUC, indicating that the \(P\)-statistic can be used to implement a reliable classifier for the presence of molecules. However, further investigation is needed to fully understand the origin of the \(P\)-statistic bimodality and its under-forecasting properties.
The results obtained appear not to be affected by the increase in complexity of the assumed atmospheric model, implemented in this study with the R\({}_{2}\) retrieval model, as indicated by similar calibration and ROC curves. We find that the predictive power of the \(P\)-statistic is adversely affected by an under-representative model, as implemented in the R\({}_{1}\) retrieval model, which is
evident from a weaker correlation between the \(P\)-statistic and the input abundances, and the median of the posterior abundance no longer being a reliable unbiased estimator of the true value, even in the high abundance S/N regime.
Based on our findings, we conclude that the \(P\)-statistic is a reliable predictor of the presence of molecules within the parameter space explored, as long as the retrieval model matches the complexity of the data. Models that are under-representative can result in poor predictive power, while the investigated over-representative model does not seem to adversely affect classification. Further investigations are needed to test the robustness of the \(P\)-statistic over a wider parameter space, particularly including a wider set of molecules in both the simulated population and retrievals.
Acknowledgments. This version of the article has been accepted for publication, after peer review, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: [https://doi.org/10.1007/s10686-023-09911-x](https://doi.org/10.1007/s10686-023-09911-x)
**Software.** ArielRad [46], TauREx 3 [45], Alfnoor [42; 44], Astropy [75], h5py [76], Matplotlib [77], Numpy [78].
## Declarations
Funding. The authors acknowledge that this work has been supported by the ASI grant n. 2021.5.HH.0.
Conflict of interest. The authors declare they have no conflict of interest.
Authors' contributions. Andrea Bocchieri wrote the main manuscript text and prepared all the figures. Lorenzo V. Mugnai provided the forward models for the analysis. All authors provided comments on the analysis. Andrea Bocchieri and Enzo Pascale edited the final manuscript. All authors read and approved the final manuscript.
Figure A1: Same as Figure 4. Detection reliability for the R\({}_{2}\) retrievals, which implement a model that is over-representative of the simulated atmospheres by including CO, HCN, and H\({}_{2}\)S as additional trace gases.
Figure A2: Same as Figure 6. Predictor assessment for the R\({}_{2}\) retrievals, which implement a model that is over-representative of the simulated atmospheres by including CO, HCN, and H\({}_{2}\)S as additional trace gases.
Figure A3: Same as Figure 8 for the R\({}_{2}\) retrievals, which implement a model that is over-representative of the simulated atmospheres by including CO, HCN, and H\({}_{2}\)S as additional trace gases. |
2308.00197 | Performance Evaluation of Swin Vision Transformer Model using Gradient
Accumulation Optimization Technique | Vision Transformers (ViTs) have emerged as a promising approach for visual
recognition tasks, revolutionizing the field by leveraging the power of
transformer-based architectures. Among the various ViT models, Swin
Transformers have gained considerable attention due to their hierarchical
design and ability to capture both local and global visual features
effectively. This paper evaluates the performance of Swin ViT model using
gradient accumulation optimization (GAO) technique. We investigate the impact
of gradient accumulation optimization technique on the model's accuracy and
training time. Our experiments show that applying the GAO technique leads to a
significant decrease in the accuracy of the Swin ViT model, compared to the
standard Swin Transformer model. Moreover, we detect a significant increase in
the training time of the Swin ViT model when GAO model is applied. These
findings suggest that applying the GAO technique may not be suitable for the
Swin ViT model, and concern should be undertaken when using GAO technique for
other transformer-based models. | Sanad Aburass, Osama Dorgham | 2023-07-31T23:30:16Z | http://arxiv.org/abs/2308.00197v1 | Performance Evaluation of Swin Vision Transformer Model using Gradient Accumulation Optimization Technique
###### Abstract
Vision Transformers (ViTs) have emerged as a promising approach for visual recognition tasks, revolutionizing the field by leveraging the power of transformer-based architectures. Among the various ViT models, Swin Transformers have gained considerable attention due to their hierarchical design and ability to capture both local and global visual features effectively. This paper evaluates the performance of Swin ViT model using gradient accumulation optimization (GAO) technique. We investigate the impact of gradient accumulation optimization technique on the model's accuracy and training time. Our experiments show that applying the GAO technique leads to a significant decrease in the accuracy of the Swin ViT model, compared to the standard Swin Transformer model. Moreover, we detect a significant increase in the training time of the Swin ViT model when GAO model is applied. These findings suggest that applying the GAO technique may not be suitable for the Swin ViT model, and concern should be undertaken when using GAO technique for other transformer-based models.
Keywords: Image Classification; Optimization; Swin ViT; Transformers; Vision Transformers.
## 1 Introduction
Image classification is a fundamental task in computer vision, which involves assigning a label to an image based on its content. This task has many practical applications, such as object recognition, facial recognition, and medical imaging [1], [2]. In recent years, deep learning methods, especially CNNs, have achieved remarkable success in image classification [3, 4, 5]. CNNs are neural networks specifically designed for processing and analyzing images, and they can capture complex patterns and features from images. CNNs consist of multiple layers, including convolutional, pooling, and fully connected layers. The convolutional layer is the core component of the CNN, which extracts features from the input image by applying a set of filters to the image. The pooling layer
is used to down-sample the feature maps, reducing the spatial size of the output. The fully connected layer is used to classify the image based on the extracted features [6, 7]. Despite the success of CNNs in image classification, they have some limitations. CNNs are imperfect for modeling long-range dependencies in images [8, 9], which are crucial for understanding the context and relationships between different objects in an image. Transformers, on the other hand, are attention-based models that excel at capturing long-range dependencies in sequences, such as natural language processing [10]. Transformers have also shown promising results in image classification, especially for large datasets such as ImageNet [11]. The recent ViT model applies the Transformer architecture to image classification. ViT replaces the CNN's convolutional layers with a set of self-attention layers, which allow the model to attend to all the image pixels simultaneously, capturing the global context of the image. The Swin ViT is a recent improvement to the ViT model that addresses the limitation of long-range dependencies by using a hierarchical architecture. Swin divides the image into non-overlapping patches, which are processed by a series of self-attention layers. The resulting features are then aggregated using a Swin block, which captures both local and global dependencies [12, 13, 14]. Gradient Accumulation Optimization (GAO) is a technique that can be used to improve training efficiency in deep learning models. It involves accumulating the gradients over multiple mini-batches before updating the weights. This technique helps to reduce memory usage and allows for larger batch sizes, leading to faster convergence. However, its effectiveness depends on several factors like the number of the mini-batches and the learning rate, and it may not always lead to better results [15]. We have implemented the gradient accumulation optimization on the Swin ViT model and conducted experiments to measure its performance on image classification tasks using the CIFAR10 [16] and MNIST [17] datasets. Experiments involve training the Swin ViT model with and without gradient accumulation and comparing their accuracy and training time performance.
Our contributions are summarized as follows: we demonstrate the applicability of the GAO technique to a classification model, namely the Swin ViT. We evaluate the performance (i.e., accuracy and training time) of the Swin Vision Transformer (ViT) model using the gradient accumulation optimization (GAO) technique. To the best of our knowledge, this paper is the first to provide a realistic performance evaluation of the Swin ViT model using such an optimization technique. The content of the paper can be summarized as follows. Section 2 presents the methodology and describes the implementation of our work. Section 3 presents the results and model evaluation. Finally, Section 4 presents the conclusions of this paper.
## 2 Methodology
### _Data Acquisition_
The CIFAR10 and MNIST datasets are two popular datasets used in the field of machine learning and computer vision for classification tasks. The CIFAR10 dataset consists of 60,000 32x32 color images, with 10 different classes, each containing 6,000 samples. These classes include objects such as airplanes, automobiles, birds, cats, dogs, and more. On the other hand, the MNIST dataset consists of 70,000 28x28 grayscale images of handwritten digits, with 10 different classes, each containing 7,000 samples. The classes in this dataset represent digits ranging from 0 to 9. Both datasets have been widely used in research for classification tasks, with many models achieving high levels of accuracy on these datasets.
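For reference, both datasets can be obtained directly from the standard Keras dataset loaders; the snippet below is a minimal sketch, and the exact preprocessing used in our experiments may differ.

```python
import tensorflow as tf

# CIFAR10: 50k/10k 32x32x3 color images; MNIST: 60k/10k 28x28 grayscale digits.
(x_c_train, y_c_train), (x_c_test, y_c_test) = tf.keras.datasets.cifar10.load_data()
(x_m_train, y_m_train), (x_m_test, y_m_test) = tf.keras.datasets.mnist.load_data()

# Scale pixel values to [0, 1] before feeding them to the models.
x_c_train, x_c_test = x_c_train / 255.0, x_c_test / 255.0
x_m_train, x_m_test = x_m_train / 255.0, x_m_test / 255.0
print(x_c_train.shape, x_m_train.shape)  # (50000, 32, 32, 3) (60000, 28, 28)
```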
### _Vision Transformers_
ViT and Swin Transformer are two popular models for image classification tasks. Both models consist of two main components: the Transformer encoder and the MLP head [18]. ViT Model: The ViT model takes an image as input and transforms it into a sequence of fixed-length vectors. The Transformer encoder is composed of L layers and consists of four main steps. Firstly, the image is split into a sequence of non-overlapping patches:
\[x_{i}=w_{patch}\ \times patch_{i} \tag{1}\]
where \(x_{i}\) is the \(i^{th}\) patch, \(patch_{i}\) is the representation of the \(i^{th}\) patch, and \(w_{patch}\) is a learnable weight matrix.
Secondly, learnable position embeddings are added to each patch to encode the spatial information of the image:
\[x_{i}=x_{i}+pos_{i} \tag{2}\]
where \(pos_{i}\) is the learnable position embedding for patch \(i\).
Thirdly, multi-head self-attention mechanism and feedforward neural networks are applied to the input embeddings:
\[x_{i^{\prime}}=MultiHeadAtt(x_{i}),\quad x_{i^{\prime\prime}}=FFN(x_{i^{\prime}}),\quad x_{i^{\prime\prime\prime}}=LayerNorm(x_{i^{\prime}}+x_{i^{\prime\prime}}) \tag{3}\]
where \(MultiHeadAtt\) is the multi-head self-attention mechanism, \(FFN\) is the feedforward neural network, and \(LayerNorm\) is the layer normalization function.
Lastly, the output embeddings are aggregated by taking the mean or max pooling over the sequence dimension:
\[Z=Pooling(x_{1^{\prime\prime\prime}},x_{2^{\prime\prime\prime}},x_{3^{\prime\prime\prime}},\ldots,x_{N^{\prime\prime\prime}}) \tag{4}\]
where \(N\) is the number of patches and \(Pooling\) is the mean or max pooling operation. The MLP head takes the output of the Transformer encoder as input and performs linear projection, activation, dropout, and linear projection to obtain the final classification result. The Swin Transformer model takes an image as input and transforms it into a sequence of fixed-length vectors. The Swin Transformer encoder is composed of K groups, and each group contains a set of non-overlapping patches. The patches in each group are processed by multi-layer Shifted Windows to generate a set of Swin Transformer blocks. Each Swin Transformer block consists of a Shifted Window Attention (SWA) layer, a local window-based feedforward network (LWFFN) layer, and a residual connection [19]. The output of each block is passed as input to the next block within the same group, and the output of the last block in each group is passed as input to the first block in the next group. The output embeddings are aggregated by taking the mean or max pooling over the sequence dimension:
\[Z=Pooling(x_{1^{L}},x_{2^{L}},x_{3^{L}},\ldots,x_{N^{L}}) \tag{5}\]
where \(L\) is the number of Swin Transformer blocks in each group, and \(x_{i^{L}}\) is the output of the \(L^{th}\) block in group \(K\). Then, the Swin Transformer head performs linear projection, activation, dropout, and linear projection to obtain the final classification result.
In summary, both ViT and Swin ViT models use a Transformer encoder to transform images into fixed-length vectors and an MLP head for classification. The ViT model uses a multi-head self-attention mechanism and feedforward neural networks, while the Swin ViT model uses multi-layer Shifted Windows to generate a set of Swin Transformer blocks. Both models can be trained using backpropagation with stochastic gradient descent (SGD) or other optimization methods. Figures 1 and 2 show the architectures of ViT and Swin-ViT respectively.
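To make the pipeline concrete, the following is a minimal TensorFlow/Keras sketch of a ViT-style classifier for 32x32 inputs. The patch size, embedding dimension, depth, and the use of the standard pre-norm encoder block (a common variant of Eq. (3)) are illustrative choices and do not correspond to the exact Swin ViT configuration evaluated in this paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def transformer_encoder_block(x, num_heads=4, key_dim=64, mlp_dim=128):
    """Standard pre-norm encoder block: self-attention + FFN with residuals."""
    h = layers.LayerNormalization()(x)
    h = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(h, h)
    x = layers.Add()([x, h])                      # residual connection
    h = layers.LayerNormalization()(x)
    h = layers.Dense(mlp_dim, activation="gelu")(h)
    h = layers.Dense(x.shape[-1])(h)
    return layers.Add()([x, h])

class AddPositionEmbedding(layers.Layer):
    """Learnable position embeddings added to the patch sequence (Eq. (2))."""
    def build(self, input_shape):
        self.pos = self.add_weight(name="pos",
                                   shape=(1, input_shape[1], input_shape[2]),
                                   initializer="random_normal", trainable=True)
    def call(self, x):
        return x + self.pos

inputs = layers.Input((32, 32, 3))
x = layers.Conv2D(64, kernel_size=4, strides=4)(inputs)   # Eq. (1): linear patch projection
x = layers.Reshape((64, 64))(x)                            # 8x8 = 64 patches of dim 64
x = AddPositionEmbedding()(x)
for _ in range(4):                                         # Eq. (3), repeated L times
    x = transformer_encoder_block(x)
x = layers.GlobalAveragePooling1D()(x)                     # Eq. (4): mean pooling
outputs = layers.Dense(10, activation="softmax")(x)        # MLP head
model = tf.keras.Model(inputs, outputs)
```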
### _Gradient Accumulation Optimization_
Gradient accumulation optimization, also known as gradient accumulation over multiple small batches, is a technique used in deep learning to overcome the limitations of GPU memory while training deep neural networks. This technique allows the model to accumulate gradients over multiple small batches before updating the model's parameters. In this way, memory usage during training is reduced, while the model's accuracy is improved [15].
The basic idea behind gradient accumulation optimization is to perform multiple forward and backward passes on small batches of data before updating the model's parameters. Suppose we have a batch size of B, and we want to accumulate gradients over N batches. In that case, we split the original batch into N smaller batches of size B/N and perform forward and backward passes on each of these smaller batches. The gradients obtained from each backward pass are then accumulated over the N batches before updating the model's parameters.
Mathematically, the gradient accumulation optimization can be expressed as follows:
1. For each training step t, split the batch into N smaller batches of size B/N, and perform forward and backward passes on each of these smaller batches.
2. Accumulate the gradients obtained from each of the N backward passes:
\[\Delta\theta^{(t)}=\sum_{i=1}^{N}\Delta\theta_{i}^{(t)} \tag{9}\]
where \(\Delta\theta_{i}^{(t)}\) is the gradient obtained from the backward pass on the \(i^{th}\) smaller batch.
3. After accumulating the gradients over N batches, update the model's parameters using the accumulated gradient:
\[\theta^{(t+1)}=\theta^{(t)}-\eta\Delta\theta^{(t)} \tag{10}\]
where \(\eta\) is the learning rate, and \(\theta^{(t)}\) and \(\theta^{(t+1)}\)are the model parameters before and after the update, respectively. The above equations illustrate the process of gradient accumulation optimization. This technique is especially useful for training large models with limited GPU memory.
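For concreteness, a minimal TensorFlow sketch of one accumulated update step is given below; `model`, `optimizer`, `loss_fn`, and the iterable `mini_batches` of N smaller batches are assumed to be defined elsewhere, and this illustrates Eqs. (9)-(10) rather than the exact training loop used in our experiments.

```python
import tensorflow as tf

def accumulated_step(model, optimizer, loss_fn, mini_batches):
    """One parameter update computed from N smaller batches (Eqs. (9)-(10))."""
    accum = [tf.zeros_like(v) for v in model.trainable_variables]
    for x, y in mini_batches:                       # the N batches of size B/N
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        accum = [a + g for a, g in zip(accum, grads)]   # Eq. (9): accumulate
    # Eq. (10): a single update with the accumulated gradient.  Some
    # implementations divide `accum` by N to keep the effective learning
    # rate comparable to the unaccumulated case.
    optimizer.apply_gradients(zip(accum, model.trainable_variables))
```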
Fig. 2: The architecture of Swin Transformers [19]
### _Experiment setup_
In our experimental setup, we utilized the resources provided by Google Colab, which offers a cloud-based environment for machine learning development. We leveraged the computational power of a GPU and 25 GB of RAM to train and evaluate our models efficiently. To build our models, we utilized Python, a popular programming language in the machine learning community, and TensorFlow, a widely used deep learning framework that provides high-level APIs for building and training deep neural networks. We chose TensorFlow because of its ease of use, its extensive documentation, and its ability to run on both CPUs and GPUs. Our experimental setup allowed us to run our experiments smoothly and efficiently, enabling us to focus on model development and analysis.
## 3 Results and Discussion
In this study, we applied gradient accumulation optimization to Swin ViT and compared its performance with the standard Swin ViT on the CIFAR10 and MNIST datasets. The results showed that using the optimization led to a decrease in accuracy and a significant increase in training time, as shown in Figures 3 and 4, in contrast to the performance improvements reported for GAO in [15]. We believe that the reason behind these results is overfitting, as the training accuracies were much higher than the testing accuracies, as shown in Figures 5 and 6. Overfitting occurs when a model learns to fit the training data too closely, leading to poor generalization performance on new, unseen data. It can be caused by a variety of factors, such as model complexity, insufficient data, or inappropriate optimization strategies. In our case, we suspect that the gradient accumulation optimization led to overfitting because it allowed the model to learn from the same data multiple times before updating the weights, which may have caused the model to become too specialized to the training set. One possible solution to this problem is to use regularization techniques to prevent overfitting. Regularization refers to a set of techniques that aim to reduce the model's variance by adding constraints or penalties to the optimization objective. For example, we could use L2 regularization to penalize large weights, or dropout to randomly remove units during training to prevent co-adaptation. Another approach is early stopping, where training is halted when the validation performance starts to deteriorate.
Figure 4: Training Time of Swin ViT before and after applying GAO on MNIST
Figure 3: Training Time of Swin ViT before and after applying GAO on CIFAR10
Figure 5: Training accuracy and Testing accuracy of Swin ViT after applying GAO on CIFAR10
Figure 6: Training accuracy and testing accuracy of Swin ViT after applying GAO on MNIST.
## 4 Conclusion
Our study evaluated the effectiveness of gradient accumulation optimization on the Swin ViT model. Our results indicate that the application of this optimization technique resulted in a considerable reduction in accuracy and significantly increased the training time compared to the standard Swin Transformer. Thus, caution should be exercised when using gradient accumulation optimization for the Swin ViT model and other transformer-based models. Overall, our findings provide insights into the performance of gradient accumulation optimization and its potential impact on transformer-based models. Also, our study suggests that gradient accumulation optimization may not be an effective strategy for improving the performance of Swin ViT on the CIFAR10 and MNIST datasets. The observed decrease in accuracy and increase in training time may be due to overfitting caused by the optimization. Future research could explore alternative optimization strategies or regularization techniques to improve the performance of Swin ViT on these datasets.
|
2301.13386 | Emergence of extreme events in a quasi-periodic oscillator | Extreme events are unusual and rare large-amplitude fluctuations that occur
unexpectedly in nonlinear dynamical systems. Events above the extreme event
threshold of the probability distribution of a nonlinear process characterize
extreme events. Different mechanisms for the generation of extreme events and
their prediction measures have been reported in the literature. Based on the
properties of extreme events, such as rare in frequency of occurrence and
extreme in amplitude, various studies have shown that extreme events are both
linear and nonlinear in nature. Interestingly, in this work, we report on a
special class of extreme events which are nonchaotic and nonperiodic. These
nonchaotic extreme events appear in between the quasi-periodic and chaotic
dynamics of the system. We report the existence of such extreme events with
various statistical measures and characterization techniques. | Premraj Durairaj, Sathiyadevi Kanagaraj, Suresh Kumarasamy, Karthikeyan Rajagopal | 2023-01-31T03:33:45Z | http://arxiv.org/abs/2301.13386v1 | # Emergence of extreme events in a quasi-periodic oscillator
###### Abstract
Extreme events are unusual and rare large-amplitude fluctuations that can occur unexpectedly in nonlinear dynamical systems. Events above the extreme event threshold of the probability distribution of a nonlinear process characterize extreme events. Different mechanisms for the generation of extreme events and their prediction measures have been reported in the literature. Based on the properties of extreme events, such as rarity in the frequency of occurrence and extreme amplitude, various studies have shown that extreme events are both linear and nonlinear in nature. Interestingly, in this work, we report on a special class of extreme events which are nonchaotic and nonperiodic. These nonchaotic extreme events appear in between the quasi-periodic and chaotic dynamics of the system. We report the existence of such extreme events with various statistical measures and characterization techniques.
pacs: 05.45.-a

Extreme events are unanticipated, rare events that occur in many natural and engineering systems. Extreme events (EE) can exist in various forms, including floods, cyclones, droughts, pandemics, power outages, material ruptures, explosions, chemical contamination, and stock market crashes, among others [1]. Such events have a severe impact on real-world situations. Thus, it is necessary to understand the relevant mechanism and its generic characteristics for the occurrence of EE in order to prevent such events. As a result, researchers have focused on exploring EE in diverse nonlinear oscillators [2; 3; 4; 5], maps [6], and neural networks [7]. Further, extreme events have also been identified in superfluid helium [8], plasma [9], optical fibers [10], lasers [11], and capillary waves [12], etc.
However, depending on the characteristics of a dynamical system, the occurrence of EE has been discovered under a variety of mechanisms, including internal crises, on-off intermittency, blowout bifurcations, stick-slip bifurcations, and so on [6; 11; 13; 14; 15]. For instance, prior studies reveal that EE can arise as a result of the abrupt expansion and destruction of chaotic attractors produced by internal or external crises [11; 14]. Further, interior crises are found to be a critical mechanism for the occurrence of EE, when the trajectory of chaotic attractors reaches the stable manifold of a saddle or unstable periodic orbit, which increases the size of the chaotic attractors. Such a sudden expansion of the chaotic attractor may result in EE. In addition, Pomeau-Manneville intermittency is identified as another mechanism for the existence of EE. Such intermittency can occur when the periodic oscillations are interspersed by chaotic bursts, which further results in very large amplitude events. EEs can also exist through the following other mechanisms. The sliding bifurcation near the discontinuous boundary can cause EE. The trajectory of the attractors might hop between coexisting attractors due to noise in multi-stable systems, which can cause unusually large events. This is referred to as noise-induced intermittency. The trajectory of the attractors in coupled systems departs from the synchronization manifold to the transverse direction of the manifold. During such a transition, a synchronization error of dynamics can show zero or nonzero and is referred to as on-off intermittency [16].
Moreover, previous studies discovered that extreme or rare events can occur as a result of chaotic or stochastic processes [16]. In particular, the appearance of EE has been reported in micro-electromechanical cantilevers with discontinuous boundaries and in diode lasers with phase-conjugate feedback [17; 18]. By applying harmonic pump modulation to a fiber laser, the emergence of rogue waves has been identified [2; 19; 20; 21]. EE in stochastic transport on networks have been demonstrated using multiple random walks on complex networks [23; 24]. The interesting question is now whether extreme events can be induced by nonchaotic signals. In the literature, nonchaotic and nonperiodic dynamics have been well studied under the name of strange nonchaotic dynamics, which arise during the attractor transition from quasi-periodicity to chaos [31]. The generation mechanisms of these strange nonchaotic attractors can be found in the literature [25; 26; 31]. The results of the present work show that, similar to strange nonchaotic dynamics, nonperiodic and nonchaotic dynamics can exhibit large-amplitude extreme events. The present study thus opens a new direction in which a nonchaotic nonlinear process can also lead to extreme events, which, to the best of our knowledge, has not been reported before.
To show the nonchaotic extreme events, we consider the Morse oscillator (MO), which is used to describe the motion of diatomic molecules. Importantly, the MO has made substantial contributions in the fields of classical, semi-classical, and quantum mechanics [27; 28]. The MO was used for the photo-dissociation of molecules without any damping. In the presence of driving and damping, the MO was exploited for multi-photon excitation
of molecules, pumping the local mode of polyatomic molecules [29]. We consider the quasi-periodically forced MO and its dynamical equation can be written as
\[\dot{x} = y\] \[\dot{y} = f\sin(\omega_{1}t)+g\sin(\omega_{2}t)+e^{-2x}-e^{-x}-\gamma y \tag{1}\]
where \(x\), and \(y\) are the state variables of the system and \(\gamma\) is a damping parameter. The amplitudes of the first and second force are represented by \(f\) and \(g\) and the corresponding frequencies are denoted by \(\omega_{1}\) and \(\omega_{2}\), respectively.
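For reproducibility, Eq. (1) can be integrated numerically; the sketch below uses SciPy's `solve_ivp` with the parameter values quoted in the text, while the integrator, integration time, and tolerances are our illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, w1, w2 = 0.35, 0.3, (np.sqrt(5.0) - 1.0) / 2.0   # parameters from the text
f = g = 0.278

def morse(t, s):
    """Right-hand side of Eq. (1): quasi-periodically forced Morse oscillator."""
    x, y = s
    dydt = (f * np.sin(w1 * t) + g * np.sin(w2 * t)
            + np.exp(-2.0 * x) - np.exp(-x) - gamma * y)
    return [y, dydt]

# Initial condition (x0, y0) = (0.3, 0.2) as in the caption of Fig. 4.
sol = solve_ivp(morse, (0.0, 20000.0), [0.3, 0.2],
                max_step=0.05, rtol=1e-9, atol=1e-9)
x_series = sol.y[0]          # trajectory used for the peak and threshold analysis
```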
To demonstrate the existence of extreme events, we first depict the time evolution of the \(x\)-variable in Fig. 1(a) and Fig. 1(b), fixing the amplitudes of the first and second forcing as \(f=g=0.255\) and \(f=g=0.278\), respectively. We observe from Fig. 1(a) that some of the oscillations (events) have larger amplitudes, while the rest take lower amplitudes. To check whether the larger-amplitude oscillations satisfy the extreme events criteria defined in the literature, we use the following relation:
\[x_{{}_{EE}}=<x_{n}>+N\sigma_{x_{n}}, \tag{2}\]
where \(x_{{}_{EE}}\) is the critical amplitude threshold and \(N\) is a multiplication factor. The mean and standard deviation of the variable \(x\) is represented by \(<x_{n}>\) and \(\sigma_{x_{n}}\), respectively. Here, the \(x_{n}\) (an event) are the local peaks of the variable \(x\). An event or a local peak can satisfy extreme event criteria if it has a value higher than the critical threshold defined by Eq. (2) with \(N\geq 4\). To confirm the presence of EE, we plotted the critical threshold on the time series for \(N=5\) and \(N=4\) in Figs. 1(a) and 1(b). We used two different \(N\) values depending on the time series. Though the choice of \(N\) is arbitrary, we set the minimum \(N\) value as 4 in the present study. We also find the critical value of \(N_{max}\) for a range of each \(f\) value - the details will be discussed below. In both cases, we can see that some of the large amplitude events cross the threshold line, confirming the presence of EE. Since the choice of \(N\) is arbitrary in the previous criterion, we use another criterion defined by the abnormality index; \(A_{n}=\frac{Hf_{n}}{H_{1/3}}\)[17], where \(Hf_{n}\) is the difference between the maximum height of the event \(n\) and the mean height of its population, \(Hf_{n}=x_{n}-\langle x_{n}\rangle_{n}\) and \(H_{1/3}\) is the average value among the highest one-third values of \(Hf_{n}\). If an event \(x_{n}\) has abnormality \(A_{n}\) greater than 2 then the event is termed an extreme event. We find that both cases in Figs. 1(a) and 1(b) satisfy the above criterion with abnormality index \(A=3\) denoted by a dashed horizontal line in the plots. It is evident that a few rare large amplitude events cross the abnormality index line. We computed the probability distribution function (PDF) in Fig. 1(c) for the time series shown in Fig. 1(b). The EE critical threshold at \(N=4\) is plotted as a vertical dashed line on the PDF diagram. In the plot, the events with a finite probability above the critical threshold line characterize the extreme events. We can plot similar probability distribution for Fig. 1(a), however, for simplicity, we have plotted the PDF corresponding to Fig. 1(b).
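Both criteria can be evaluated directly from the local peaks \(x_{n}\) of the time series; the following sketch (our own, using SciPy's peak finder) illustrates the computation of the threshold of Eq. (2), the abnormality index \(A_{n}\), and the factor \(N_{max}\) used later in Eq. (3).

```python
import numpy as np
from scipy.signal import find_peaks

def extreme_event_analysis(x, N=4.0):
    """Apply both extreme-event criteria to the local peaks x_n of a series."""
    peaks, _ = find_peaks(x)
    xn = x[peaks]                               # the events x_n
    x_ee = xn.mean() + N * xn.std()             # threshold of Eq. (2)
    hf = xn - xn.mean()                         # Hf_n
    h13 = np.sort(hf)[-max(1, len(hf) // 3):].mean()   # H_{1/3}
    an = hf / h13                               # abnormality index A_n
    n_max = (xn.max() - xn.mean()) / xn.std()   # factor used in Eq. (3)
    return (xn > x_ee), (an > 2.0), n_max

# ee_by_threshold, ee_by_abnormality, n_max = extreme_event_analysis(x_series)
```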
The above analysis shows that the observed behavior satisfies the extreme events criterion in the amplitudes. Another important characteristic of extreme events is the inter-event interval. The inter-event interval defines the frequency of occurrence of the events and should not take discrete values (discrete values would mean a periodic occurrence of events); rather, it should have a distribution over a range. In order to examine the distribution of events in the observed time series, we find the inter-event intervals (R) between successive extreme events. Subsequently, we find the probability of such inter-event intervals (PR), as shown in Fig. 1(d). The inter-event interval and its probability obey a power-law relation given by \(\log_{10}(PR)=a~{}\log_{10}(R)^{b}\), where \(a\) and \(b\) are constants with values \(a=-0.006\) and \(b=2.96\), respectively. The obtained numerical values are depicted as filled circles,
Figure 1: (a) Time evolution of \(x_{n}\) for the nonchaotic dynamics with forcing amplitudes as (a) \(f=g=0.255\), and (b) \(f=g=0.278\). The \(x_{n}\) is the \(n^{th}\) local peaks of the variable \(x\). The horizontal black dot-dashed and red dashed lines are the critical threshold lines defining the extreme events (refer to the text for the meaning of \(N\) and \(A\)). (c) The probability distribution function corresponds to the extreme events and (d) return interval (R) (inter-event interval) with respect to the probability of recurrence times (PR) of the EE for (b). The filled circles and solid lines in (d) represent the numerical data and the corresponding power-law fit. We fixed the other parameter values as \(\gamma=0.35\), \(\omega_{1}=0.3\), and \(\omega_{2}=(\frac{\sqrt{5}-1}{2})\).
and a continuous line shows the corresponding power-law fit. The route for the emergence of EE and its transitions is further estimated below using Lyapunov exponents (LE), amplitude maxima \(X_{max}\), critical factor \(N_{max}\), and two-parameter analysis.
To illustrate the global dynamical transition of the attractors and the route to EE, a two-parameter diagram is drawn in \((f,g)\) space using the maximum LE, as shown in Fig. 2(a). The range of LE (shown in the color bar) denotes the emergence of quasi-periodic, nonchaotic, and chaotic attractors for the respective values of \(f\) and \(g\). If the forcing amplitudes \(f\) and \(g\) are small, the attractors have a negative maximum LE, indicating the presence of a quasi-periodic (QP) attractor region. To better comprehend the QP attractors, we plotted their time-evolution and phase portrait trajectories in Supplementary Material Fig. S1 a(i, ii) for \(f=g=0.23\), which show their bounded nature. Thus, the EE critical threshold for this attractor is greater than the amplitude of the QP attractors. On increasing the values of \(f\) and \(g\), the QP attractor transits to a chaotic (CH) attractor via strange nonchaotic dynamics, in which the LE goes from negative (near zero) to positive values. To distinguish between the strange nonchaotic and chaotic attractors, the time-evolution and phase portrait trajectories are shown in Figs. S1 b(i,ii) and S1 c(i,ii) of the Supplementary Material for \(f=g=0.278\) and \(f=g=0.33\), respectively. The frequency spectra can also be used to distinguish quasi-periodic, SNA, and chaotic dynamics; the frequency spectrum analysis is given in the Supplementary Material, Fig. S4(a-c). When compared to the chaotic attractor (which has a greater number of large-amplitude oscillations), we found that the SNA shows fewer large-amplitude oscillations; Fig. S1 of the Supplementary Material can be consulted for more information. Furthermore, to show the dynamical transitions clearly, we display the maximum Lyapunov exponent in Fig. 2(b), keeping \(f=g\) and varying the parameter along the diagonal dashed line shown in Fig. 2(a). In Fig. 2(b), the maximum LE is illustrated as a function of the forcing amplitudes \(f\) and \(g\) (\(f=g\)) in the range \(0.23<f(=g)<0.32\). We observe that when the forcing amplitudes are at the lower end of this range, the LE takes negative values, indicating quasi-periodic dynamics. On increasing the parameter, the transition of the LE from negative to positive values indicates the dynamical transition from quasi-periodic to chaotic behavior. Furthermore, we found that negative values of the LE near zero correspond to strange nonchaotic behavior; extreme events are seen in this region. The literature has shown that EEs occur under chaotic dynamics [16] through distinct routes, and under stochastic processes, such as stochastic transport on networks demonstrated using multiple random walks on complex networks [23; 24]. Among the various routes, the occurrence of EEs in nonchaotic dynamics is new and, to the best of our knowledge, has not been reported.
To validate the occurrence of EEs in the SNA region, we find the maximum amplitude \(x_{max}\), extreme event threshold \(x_{EE}\), and maximum value of \(N\) (\(N_{max}\)) of a given time series. In Fig. 2(c), we have plotted the above quantities by varying the magnitude of \(f=g\). The plot explains the regime of extreme events in the following way. During the non-extreme regime, the critical threshold \(x_{EE}\) is larger than the \(x_{max}\). It means that the threshold is larger than the large amplitude oscillations and does not satisfy the extreme events criterion. While in the EE regime, the \(x_{max}\) is larger than the EE critical threshold \(x_{EE}\) (shaded EE region). This explains that extreme events have a larger amplitude than the extreme event criterion. Note that the SNA regime in the
Figure 2: (a) The two-parameter bifurcation diagram in \((f,g)\) space. Using the range of Lyapunov exponents (\(\lambda\)) (denoted by the color bar) the dynamical regions are marked. (b) The maximum Lyapunov exponents as a function of forcing amplitude \(f(=g)\), (c) maximum amplitude of the events \(x_{max}\) (red) and the corresponding \(N_{max}\) (Eq. 3) of the event (blue) by varying the magnitude of \(f(=g)\). The black line represents the extreme events critical threshold (\(x_{EE}\)) drawn from Eq. (2) for N=4. The other parameter values are fixed as the same as in Fig. 1.
parameter range \(f\in(0.28,0.2912)\) shows no extreme events. As discussed above, we fixed \(N=4\) as an arbitrary constant from the literature [30]. However, the maximum value of \(N\) can be determined by rewriting Eq. (2) as
\[N_{max}=\frac{x_{max}-\left\langle x_{n}\right\rangle}{\sigma_{x_{n}}}. \tag{3}\]
In the SNA region shown in Fig. 2(c), we found that the multiplication factor takes values \(4\leq N_{max}\leq 5.611\) when the forcing amplitudes are in the range from \(0.256\) to \(0.28\), denoted by the shaded transparent pattern. The plot of \(N_{max}\) shows that, depending on the parameter choice, the arbitrary value can be chosen as \(N\in[4,5.611]\). Thus, the above results satisfy all the criteria proposed for extreme events and justify the existence of EEs in the SNA regime.
As discussed earlier, the observed EEs are nonchaotic and nonperiodic. At the same time, the parameters corresponding to the strange nonchaotic EEs show multistable behavior. This multistability can be seen from the basins of attraction drawn for a range of initial conditions. Figure 3 is drawn by varying the initial states \(x_{0}\) and \(y_{0}\) of the system for the parameters given in the caption of Fig. 1. We can see that the basin of nonchaotic and nonperiodic behavior, or SNA, is embedded within the basin of quasi-periodic dynamics. Outside the SNA basin, we have found three different basins containing quasi-periodic attractors. The three quasi-periodic attractor basins and the SNA basin are denoted by QP1, QP2, QP3, and SNA, respectively, in Fig. 3. Each of the quasi-periodic attractors is depicted in Supplementary Material Fig. S2. Figure 3 shows that extreme events occur for specific values of initial conditions. The size of these basins changes as we vary the parameter within the EE regime marked in Fig. 2.
Similarly, to determine the regime of extreme events in the parametric space between \(f\) and \(g\), a two-parameter diagram is drawn as shown in Fig. 4. The white regime in the plot shows the extreme events for the combinations of the parameters \((f,g)\), separated with the help of Eq. (2) from the non-extreme events (NEE, denoted by blue color). By comparing Fig. 2(a) with Fig. 4, we can say that EEs occur in the SNA region (however, some parts of the SNA parameter regime may not contain EEs).
To show the generality of the existence of EEs in the SNA regime, we present the regime of EEs for \(\gamma=0.4\) in the supplementary material Figs. S3 (a),(b). This result validates the presence of strange nonchaotic extreme events in the selected parameter regime. In the following section, we characterize the observed behavior as strange and nonchaotic in nature. For this purpose, we perform
Figure 4: Two parameter phase diagram in \((f,g)\) space (plotted using Eq. 2 for fixed initial condition \((x_{0},y_{0})=(0.3,0.2)\)), to distinguish the existence of extreme events (EE) and non-extreme events (NEE), respectively. We fixed the other parameter values as the same as in Fig. 1.
Figure 5: Singular continuous spectrum for fixing the forcing amplitudes \(f,g=0.278\). (a) The logarithmic plot of \(|x(\alpha,N)|^{2}\) against \(N\). The red and black lines denote the numerical values and the corresponding power-law fit. (b) Fractal path in the complex plane of \(x\). The other parameter values are defined as \(\gamma=0.35\), \(\omega_{1}=0.3\), \(\omega_{2}=(\frac{\sqrt{5}-1}{2})\).
Figure 3: Basin of attraction for \(f=g=0.278\). \(QP1\), \(QP2\), and \(QP3\) are the quasi-periodic attractor-1, quasi-periodic attractor-2, and quasi-periodic attractor-3, respectively. SNA represents the strange nonchaotic attractor. We fixed the other parameter values the same as in Fig. 1.
a singular continuous spectrum analysis and examine the distribution of finite-time Lyapunov exponents.
To validate the strange nonchaotic dynamics, we plot the singular continuous spectrum [31] in Fig. 5 using the partial Fourier sum of the signal \(x\), given by \(X(\alpha,N)=\sum_{m=1}^{N}x_{m}e^{2\pi im\alpha}\), where \(\alpha\) is proportional to the external frequency (\(\omega_{1}\)) and \(N\) is the length of the time series. The red and black lines show the singular continuous spectrum and the corresponding power-law fit. When \(N\) is regarded as time, \(|X(\alpha,N)|^{2}\) grows with \(N\) as \(|X(\alpha,N)|^{2}\sim N^{\beta}\), where \(\beta\) is the scaling exponent. When the signal possesses the properties of strange nonchaotic dynamics, the slope satisfies \(1<\beta<2\). In the present case, the value \(\beta=1.576\) confirms the existence of strange nonchaotic dynamics, as shown in Fig. 5(a). The corresponding Brownian-motion-like path with fractal structure in the complex \([Re(X),Im(X)]\) plane, shown in Fig. 5(b), also confirms the strange nonchaotic dynamics.
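For readers who wish to reproduce this diagnostic, the following sketch computes the partial Fourier sums and the scaling exponent \(\beta\); the input signal below is a synthetic placeholder, so it will not reproduce the value \(\beta=1.576\) reported above.

```python
import numpy as np

def partial_fourier_sums(x, alpha):
    """X(alpha, N) = sum_{m=1..N} x_m exp(2*pi*i*m*alpha) for N = 1..len(x)."""
    m = np.arange(1, len(x) + 1)
    return np.cumsum(x * np.exp(2j * np.pi * m * alpha))

# Synthetic stand-in for the stroboscopic signal x; alpha would be taken
# proportional to omega_1 in the actual analysis.
rng = np.random.default_rng(1)
n = 20000
x = np.cos(2 * np.pi * 0.3 * np.arange(1, n + 1)) + 0.3 * rng.standard_normal(n)
X = partial_fourier_sums(x, alpha=0.15)

N = np.arange(1, n + 1)
beta, _ = np.polyfit(np.log(N[100:]), np.log(np.abs(X[100:]) ** 2), 1)
print(f"scaling exponent beta = {beta:.3f}")  # 1 < beta < 2 would indicate SNA
# The path of Fig. 5(b) is the curve (X.real, X.imag) in the complex plane.
```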
The strange nonchaotic dynamics are also validated using another statistical characterization, the distribution of finite-time Lyapunov exponents. The distribution takes both positive and negative values, but for strange nonchaotic dynamics the area under the curve is concentrated in the negative region. In Figure 6, plotted for three different finite-time intervals \(T=500,~{}1000\), and \(1500\), the distribution has a much larger negative region than positive region, indicating nonchaotic dynamics. From these analyses, the observed dynamics are strange (nonperiodic) as well as nonchaotic, while also showing large-amplitude and rare events.
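A hedged sketch of this diagnostic is given below; it assumes that a long record of one-step expansion rates from the linearised dynamics is already available (here replaced by synthetic values) and only performs the windowing and binning.

```python
import numpy as np

def ftle_histogram(local_rates, window, bins=60):
    """Distribution of finite-time Lyapunov exponents.

    `local_rates` is assumed to hold one-step expansion rates obtained from
    the linearised (variational) dynamics; the finite-time exponent over a
    window of length T is their average over that window.
    """
    local_rates = np.asarray(local_rates, dtype=float)
    n_win = len(local_rates) // window
    ftle = local_rates[: n_win * window].reshape(n_win, window).mean(axis=1)
    hist, edges = np.histogram(ftle, bins=bins, density=True)
    return ftle, hist, edges

# Placeholder rates, slightly negative on average as expected for an SNA; the
# real input would come from integrating the forced Morse oscillator together
# with its tangent dynamics.
rng = np.random.default_rng(2)
rates = rng.normal(-0.02, 0.15, 1_500_000)
for T in (500, 1000, 1500):
    ftle, _, _ = ftle_histogram(rates, T)
    print(f"T={T}: positive-exponent fraction = {np.mean(ftle > 0):.3f}")
```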
The present letter shows a mechanism for the emergence of extreme events in a quasi-periodically forced Morse oscillator. As a function of the forcing amplitude, we found a transition from a quasi-periodic (QP) to a chaotic (CH) attractor via strange nonchaotic extreme events. During such extreme-event dynamics, we found long excursions of trajectories away from the bounded attractor, while the chaotic attractors show many higher-amplitude peaks. To confirm the existence of EEs, we estimated the critical threshold and observed that the higher-amplitude peaks in the EE regime cross the critical threshold, while the peaks of the CH and QP attractors do not. The dynamical transitions of the attractors and the occurrence of nonchaotic EE dynamics are manifested in the maximum Lyapunov exponents. The observed extreme events are further validated using the probability distribution of event amplitudes and the return intervals (inter-event intervals) with respect to the probability of recurrence times of the EEs. Extreme events are abnormal and unexpected events that occur in many natural and man-made systems. Understanding the mechanism or route can help to anticipate the onset of EEs. Early works emphasized the chaotic nature of extreme events because of their rare and extreme amplitude properties. The present study reveals a previously unreported emergence of extreme events that are nonchaotic and nonperiodic. This finding sheds light on a new direction in which extreme events can arise as a nonchaotic process.
We gratefully acknowledge that this work is funded by the Center for Nonlinear Systems, Chennai Institute of Technology (CIT), India, vide funding number CIT/CNS/2022/RP-016.
|
2301.13392 | Combinatorial Causal Bandits without Graph Skeleton | In combinatorial causal bandits (CCB), the learning agent chooses a subset of
variables in each round to intervene and collects feedback from the observed
variables to minimize expected regret or sample complexity. Previous works
study this problem in both general causal models and binary generalized linear
models (BGLMs). However, all of them require prior knowledge of causal graph
structure or unrealistic assumptions. This paper studies the CCB problem
without the graph structure on binary general causal models and BGLMs. We first
provide an exponential lower bound of cumulative regrets for the CCB problem on
general causal models. To overcome the exponentially large space of parameters,
we then consider the CCB problem on BGLMs. We design a regret minimization
algorithm for BGLMs even without the graph skeleton and show that it still
achieves $O(\sqrt{T}\ln T)$ expected regret, as long as the causal graph
satisfies a weight gap assumption. This asymptotic regret is the same as the
state-of-art algorithms relying on the graph structure. Moreover, we propose
another algorithm with $O(T^{\frac{2}{3}}\ln T)$ regret to remove the weight
gap assumption. | Shi Feng, Nuoya Xiong, Wei Chen | 2023-01-31T03:45:17Z | http://arxiv.org/abs/2301.13392v4 | # Combinatorial Causal Bandits without Graph Skeleton
###### Abstract
In combinatorial causal bandits (CCB), the learning agent chooses a subset of variables in each round to intervene and collects feedback from the observed variables to minimize expected regret or sample complexity. Previous works study this problem in both general causal models and binary generalized linear models (BGLMs). However, all of them require prior knowledge of the causal graph structure. This paper studies the CCB problem without the graph structure on binary general causal models and BGLMs. We first provide an exponential lower bound of cumulative regrets for the CCB problem on general causal models. To overcome the exponentially large space of parameters, we then consider the CCB problem on BGLMs. We design a regret minimization algorithm for BGLMs even without the graph skeleton and show that it still achieves \(O(\sqrt{T}\ln T)\) expected regret. This asymptotic regret is the same as the state-of-the-art algorithms relying on the graph structure. Moreover, we relax the regret to \(O(T^{\frac{2}{3}}\ln T)\) to remove the weight gap assumption hidden by the asymptotic notation. Finally, we give some discussions and algorithms for pure exploration of the CCB problem without the graph structure.
Machine Learning, Causal Bandits, Graph Skeleton
## 1 Introduction
The multi-armed bandits (MAB) problem is a classical model in sequential decision-making (Robbins, 1952; Auer et al., 2002; Bubeck et al., 2012). In each round, the learning agent chooses an arm and observes the feedback reward corresponding to that arm, with the goal of either maximizing the cumulative reward over \(T\) rounds (regret minimization), or minimizing the sample complexity to find an arm closest to the optimal one (pure exploration). MAB can be extended to have more structure among arms and reward functions, which leads to more advanced learning techniques. Such structured bandit problems include combinatorial bandits (Chen et al., 2013, 2016), linear bandits (Abbasi-Yadkori et al., 2011; Agrawal and Goyal, 2013; Li et al., 2017), and sparse linear bandits (Abbasi-Yadkori et al., 2012).
In this paper, we study another structured bandit problem called causal bandits, which is first proposed by (Lattimore et al., 2016). It consists of a causal graph \(G=(\mathbf{X}\cup\{Y\},E)\) indicating the causal relationship among the observed variables. In each round, the learning agent selects one of a few variables in \(\mathbf{X}\) to intervene, gains the reward as the output of \(Y\), and observes the values of all variables in \(\mathbf{X}\cup\{Y\}\). The use of causal bandits is possible in a variety of contexts that involve causal relationships, including medical drug testing, performance tuning, policy making, scientific experimental process, etc.
In all previous literature except (Lu et al., 2021), the structure of the causal graph is known, but the underlying probability distributions governing the causal model are unknown. Lu et al. (2021) further assumes that the graph structure is unknown and the learning agent can only see the graph skeleton. Here, graph skeleton is also called essential graph (Gamez et al., 2013) and means all the edges in \(G\) without direction information. In our paper, we further consider that the graph skeleton is unknown and remove the unrealistic assumption \(|\mathbf{P}\mathbf{a}(Y)|=1\) in (Lu et al., 2021). In many scenarios, the learning agent needs to learn the causal relationships between variables and thus needs to learn the graph without any prior information. For example, in policymaking for combating COVID-19, many possible factors like food supply, medical resources, vaccine research, public security, and public opinion may consequently impact the mortality rate. However, the causal relationships among these factors are not readily known and need to be clarified during the sequential decision-making process. Learning the causal graph from scratch while identifying the optimal intervention raises a new challenge to the learning problem.
For regret minimization, we study combinatorial causal bandits (CCB) under the binary generalized linear models (BGLMs) as (Feng and Chen, 2023; Xiong and Chen, 2023). Using a novel initialization phase, we could determine the ancestor structure of the causal graph for the
BGLM when the minimum edge weight in the model satisfies a weight gap assumption. This is enough to perform a CCB algorithm based on maximum likelihood estimation on it (Feng and Chen, 2023). The resulting algorithm BGLM-OFU-Unknown achieves \(O(\sqrt{T}\log T)\) regret, where \(T\) is the time horizon. The big \(O\) notation only holds for \(T\) larger than a threshold, so the weight gap assumption is hidden by the asymptotic notation. For binary linear models (BLMs), one could sacrifice a factor of \(O(T^{\frac{1}{6}})\) in the regret to remove the weight gap assumption. The algorithms we design for BLMs allow hidden variables and use linear regression instead of MLE to remove an assumption on the parameters.
For pure exploration, we give some discussions on general causal models. If we allow the weight gap, a trivial solution exists. Without the weight gap, we give an adaptive algorithm for general causal model in the atomic setting.
In summary, our contribution includes: (a) providing an exponential lower bound on the cumulative regret for CCB on general causal models, (b) proposing an \(O(\sqrt{T}\ln T)\) cumulative regret CCB algorithm BGLM-OFU-Unknown for BGLMs without the graph skeleton, (c) proposing an \(O(T^{\frac{2}{3}}\ln T)\) cumulative regret CCB algorithm for BLMs without the graph skeleton and the weight gap assumption, (d) giving the first discussion, including algorithms and lower bounds, on pure exploration of causal bandits on general causal models with atomic interventions without knowing the graph structure.
## 2 Related Works
**Causal Bandits.** The causal bandits problem is first proposed by Lattimore et al. (2016). They discuss the simple regret for parallel graphs and general graphs with known probability distributions \(P(\mathbf{Pa}(Y)|a)\) for any action \(a\). Sen et al. (2017); Nair et al. (2021); Maiti et al. (2021) generalize the simple regret study for causal bandits to more general causal graphs and soft interventions. Lu et al. (2020); Nair et al. (2021); Maiti et al. (2021) consider cumulative regret for causal bandits problem. However, all of these studies are not designed for combinatorial action set and has exponentially large regret or sample complexity with respect to the graph size if the actions are combinatorial. Yabe et al. (2018); Feng and Chen (2023); Xiong and Chen (2023); Varici et al. (2022) consider combinatorial action set for causal bandits problem. Among them, Feng and Chen (2023) are the first to remove the requirement of \(T>\sum_{X\in\mathbf{X}}2^{|\mathbf{Pa}(X)|}\) and proposes practical CCB algorithms on BGLMs with \(O(\sqrt{T}\ln T)\) regret. Xiong and Chen (2023) simultaneously propose CCB algorithms on BGLMs as well as general causal models with polynomial sample complexity with respect to the graph size. Varici et al. (2022) further include soft interventions in the CCB problem, but their work is on Linear Structural Equation Models. Lee and Bareinboim (2018, 2019, 2020) propose several CCB algorithms on general causal bandits problem, but they focus on empirical studies while we provide theoretical regret analysis. All of the above works require the learning agent to know the graph structure in advance. Lu et al. (2021) is the first and only work on causal bandits without graph structure. However, their algorithm is limited to the case of \(|\mathbf{Pa}(Y)|=1\) for the atomic setting, and thus the main technical issue degenerates to finding the particular parent of \(Y\) so that one could intervene on this node for the optimal reward.
**Social Network and Causality.** Causal models have intrinsic connections with influence propagation in social networks. Feng and Chen (2021) study the identifiability in the Independent Cascade (IC) propagation model as a causal model. The BGLM studied in this paper contains the IC model and linear threshold (LT) model in a DAG as special cases, and is also related to the general threshold model (Kempe et al., 2003). Moreover, Feng and Chen (2023); Xiong and Chen (2023) also study causal bandits on BGLMs to avoid the exponentially large parameter space of general causal models. These papers borrow some techniques and ideas from influence maximization literature, including (Li et al., 2020) and (Zhang et al., 2022). However, in our BGLM CCB problem, the graph skeleton is unknown, and we need adaptation and integration of previous techniques together with some new ingredients.
## 3 Model
We utilize capital letters (\(U,X,Y\ldots\)) to represent variables and their corresponding lower-case letters to indicate their values, as was frequently done in earlier causal inference literatures (see, for example, (Pearl, 2009; Pearl and Mackenzie, 2018)). To express a group or a vector of variables or values, we use boldface characters like \(\mathbf{X}\) and \(\mathbf{x}\).
**Causal Models.** A _causal graph_\(G=(\mathbf{X}\cup\{Y\},E)\) is a directed acyclic graph consisting of intervenable variables \(\mathbf{X}\), a special target node \(Y\) without outgoing edges, and the set of directed edges \(E\) connecting nodes in \(\mathbf{X}\cup\{Y\}\). Denote \(n=|\mathbf{X}|\) as the number of nodes in \(\mathbf{X}\). For simplicity, in this paper we consider all variables in \(\mathbf{X}\cup\{Y\}\) are \((0,1)\)-binary random variables. In our main text, all the variables in \(\mathbf{X}\cup\{Y\}\) are known and their values can be observed but the edges in \(E\) are unknown and cannot be directly observed. We refer to the in-neighbor nodes of a node \(X\) in \(G\) as the _parents_ of \(X\), denoted by \(\mathbf{Pa}(X)\), and the values of these parent random variables as \(\mathbf{pa}(X)\). According to the definition of causal Bayesian model (Pearl, 2009), the probability distribution \(P(X|\mathbf{Pa}(X))\) is used to represent the causal relationship between \(X\) and its parents for every conceivable value combination of \(\mathbf{Pa}(X)\). Moreover, we define the ancestors of a node \(X\in\mathbf{X}\cup\{Y\}\) by \(\mathbf{Anc}(X)\).
We mainly study the _Markovian_ causal graph \(G\) in this paper, which means that there are no hidden variables in \(G\) and every observed variable \(X\) has some randomness that is not brought on by any other variables. In this study, we dedicate random variable \(X_{1}\) to be a special variable that always takes the value \(1\) and is a parent of all other observed random variables in order to model this effect of the Markovian model.
In this paper, we study a special causal model called binary generalized linear model (BGLM). Specifically, in BGLM, we have \(P(X=1|\mathbf{Pa}(X)=\mathbf{pa}(X))=f_{X}(\mathbf{\theta}_{X}^{*}\cdot\mathbf{pa}(X))+\varepsilon _{X}\), where \(f_{X}\) is a monotone increasing function, \(\mathbf{\theta}_{X}^{*}\) is an unknown weight vector in \([0,1]^{|\mathbf{Pa}(X)|}\), and \(\varepsilon_{X}\) is a zero-mean sub-Gaussian noise that ensures that the probability does not exceed \(1\). We use the notation \(\theta_{X^{\prime},X}^{*}\) to denote the entry in vector \(\mathbf{\theta}_{X}^{*}\) that corresponds to node \(X^{\prime}\in\mathbf{Pa}(X)\), \(\mathbf{\theta}^{*}\) to denote the vector of all the weights, and \(\Theta\) to denote the feasible domain for the weights. We also use notation \(\mathbf{\varepsilon}\) to represent all noise random variables \((\varepsilon_{X})_{X\in\mathbf{X}\cup Y}\).
We also study the binary linear model (BLM) and the linear model in this paper. In BLMs, all \(f_{X}\)'s are identity functions, so \(P(X=1|\mathbf{Pa}(X)=\mathbf{pa}(X))=\mathbf{\theta}_{X}^{*}\cdot\mathbf{pa}(X)+\varepsilon_{X}\). When we remove the noise variable \(\varepsilon_{X}\), the BLM coincides with the _linear threshold (LT)_ model for influence cascades (Kempe et al., 2003) in a DAG. In linear models, we remove the randomness of the conditional probabilities, so \(X=\mathbf{\theta}_{X}^{*}\cdot\mathbf{pa}(X)+\varepsilon_{X}\). For a node \(X\) and one of its parents \(X^{\prime}\), the corresponding weight is denoted as \(\theta_{X^{\prime},X}^{*}\).
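For concreteness, the following Python sketch draws one sample from such a model by traversing the nodes in topological order; it uses identity links (the BLM case), omits the noise terms \(\varepsilon_{X}\), and all node names and weights in the example are illustrative only.

```python
import numpy as np

def sample_bglm(parents, theta, f, intervention=None, rng=None):
    """Draw one sample of all nodes of a binary GLM causal model.

    parents:      dict node -> list of its parents, given in topological order;
                  node 1 is the constant X_1 = 1.
    theta:        dict (parent, node) -> weight theta*_{parent,node}.
    f:            dict node -> link function f_X (identity for a BLM).
    intervention: optional dict node -> forced value, i.e. a do() operation.
    For simplicity the zero-mean noise eps_X is omitted, so each node is
    Bernoulli(f_X(theta_X . pa(X))).
    """
    rng = rng or np.random.default_rng()
    values = {1: 1}  # X_1 always takes the value 1
    for node, pa in parents.items():
        if intervention and node in intervention:
            values[node] = intervention[node]
            continue
        s = sum(theta[(p, node)] * values[p] for p in pa)
        values[node] = int(rng.random() < f[node](s))
    return values

# Toy BLM with nodes X_1 -> X_2 -> Y and X_1 -> Y (node 3 plays the role of Y).
parents = {2: [1], 3: [1, 2]}
theta = {(1, 2): 0.4, (1, 3): 0.2, (2, 3): 0.5}
f = {2: lambda s: s, 3: lambda s: s}
print([sample_bglm(parents, theta, f, intervention={2: 1}) for _ in range(3)])
```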
For the unknown causal graph, there is an important parameter \(\theta_{\min}^{*}=\min_{(X^{\prime},X)\in E}\theta_{X^{\prime},X}^{*}\), which represents the minimum weight gap for all edges. Intuitively, this minimum gap measures the difficulty for the algorithm to discover the edge and its correct direction. When the gap is relatively large, we can expect to discover the whole graph accurately during the learning process; When the gap is very small, we cannot guarantee to discover the graph directly and we must come up with another way to solve the causal bandit problem on an inaccurate model.
**Combinatorial Causal Bandits.** The problem of combinatorial causal bandits (CCB) is first introduced in (Feng and Chen, 2023) and describes the following setting and online learning task. Interventions can be performed on any variables except \(X_{1}\) and \(Y\). The action set is defined by \(\mathcal{A}\subseteq\{\mathit{do}(\mathbf{S})=\mathbf{s}\}_{\mathbf{S}\subseteq\mathbf{X} \backslash\{X_{1}\},\mathbf{s}\in\{0,1\}^{|\mathbf{S}|}}\). The expected reward under intervention on \(\mathbf{S}\subseteq\mathbf{X}\backslash\{X_{1}\}\) is denoted as \(\mathbb{E}[Y|\mathit{do}(\mathbf{S}=\mathbf{s})]\). A learning agent runs an algorithm \(\pi\) for \(T\) rounds. In particular, an _atomic intervention_ intervenes on only one node, i.e. \(|\mathbf{S}|=1\). In our paper, we assume the observation \(\mathit{do}()\) and the atomic interventions \(\mathit{do}(X=x)\) are always in the action set, because they are needed to discover the graph structure.
The performance of the agent could be measured by the _regret_ of the algorithm \(\pi\). The regret \(R^{\pi}(T)\) in our context is the difference between the expected cumulative reward of always choosing the best action \(\mathbf{S}^{*}\) and the expected cumulative reward obtained by algorithm \(\pi\). Here, \(\mathbf{S}^{*}\in\operatorname*{argmax}_{\mathit{do}(\mathbf{S}=\mathbf{s})\in\mathcal{A} }\mathbb{E}[Y|\mathit{do}(\mathbf{S})]\). Formally, we have
\[R^{\pi}(T)=\mathbb{E}\left[\sum_{t=1}^{T}(\mathbb{E}[Y|\mathit{do}(\mathbf{S}^{*}= \mathbf{s}^{*})]-\mathbb{E}[Y|\mathit{do}(\mathbf{S}^{\pi}_{t}=\mathbf{s}^{\pi}_{t})]) \right], \tag{1}\]
where \(\mathbf{S}^{\pi}_{t}\) is the intervention set selected by algorithm \(\pi\) in round \(t\). The expectation is from the randomness of the causal model and the algorithm \(\pi\).
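The regret of Eq. (1) can be tallied directly once the means of the chosen actions are known; the following minimal sketch (with illustrative numbers only) does exactly that.

```python
import numpy as np

def cumulative_regret(chosen_means, opt_mean):
    """Empirical counterpart of Eq. (1): sum over rounds of mu* - mu_{S_t}."""
    return float(np.sum(opt_mean - np.asarray(chosen_means, dtype=float)))

# Four rounds whose played actions have means 0.5, 0.7, 0.9, 0.9 while the
# best action has mean 0.9: the regret is 0.4 + 0.2 + 0 + 0 = 0.6.
print(cumulative_regret([0.5, 0.7, 0.9, 0.9], opt_mean=0.9))
```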
In this paper, we mainly focus on the regret minimization problem, and we will discuss the pure exploration problem and its sample complexity in Section 7. We defer the definition of sample complexity to that section.
## 4 Lower Bound on General Binary Causal Model
In this section, we explain why we only consider BGLM and BLM instead of the general binary causal model in the combinatorial causal bandit setting. Note that in the general case both the number of actions and the number of parameters of the causal model are exponentially large to the size of the graph. The following theorem shows that in the general binary causal model, the regret bound must be exponential to the size of the graph when \(T\) is sufficiently large, or simply linear to \(T\) when \(T\) is not large enough. This means that we cannot avoid the exponential factor for the general case, and thus justify our consideration of the BGLM and BLM settings with only a linear number of parameters.
**Theorem 1** (Binary Model Lower Bound).: _Recall that \(n=|\mathbf{X}|\). For any algorithm, when \(T\geq\frac{16(2^{n}-1)}{3}\), there exists a bandit instance \(\mathcal{T}\) such that_
\[\mathbb{E}_{\mathcal{T}}[R(T)]\geq\frac{\sqrt{2^{n}T}}{8e}.\]
_Moreover, when \(T\leq\frac{16(2^{n}-1)}{3}\), there exists a bandit instance \(\mathcal{T}\) that_
\[\mathbb{E}_{\mathcal{T}}[R(T)]\geq\frac{T}{16e}.\]
The lower bound contains two parts. The first part shows that the asymptotic regret cannot avoid an exponential term \(2^{n}\) when \(T\) is large. The second part states that if \(T\) is not exponentially large, the regret will be linear in the worst case. The proof technique for this lower bound is similar to, but not the same as, that for classical bandits, because the availability of the observation \(\mathit{do}()\) and the atomic interventions
\(do(X_{i}=1)\) may provide more information. To our best knowledge, this result is the first regret lower bound on the general causal model considering the potential role of observation and atomic intervention. The result shows that in the general binary causal model setting, it is impossible to avoid the exponential term in the cumulative regret even with the observations on null and atomic interventions. The proof of lower bound is provided in Appendix C.5.
The main idea is to consider the action set \(\mathbf{A}=\{do(),do(X=x),do(\mathbf{X}=\mathbf{x})\}\) for all nodes \(X\), values \(x\in\{0,1\}\), and vectors \(\mathbf{x}\in\{0,1\}^{n}\), i.e., the null intervention, all atomic interventions, and all actions that intervene on every node. The causal graph we use is a parallel graph in which every node in \(\mathbf{X}\) points directly to \(Y\) with no other edges, and each node \(X_{i}\in\mathbf{X}\) has probability \(P(X_{i}=1)=P(X_{i}=0)=0.5\). Intuitively, under this construction the null intervention and the atomic interventions provide only limited information to the agent. This shows that observations and atomic interventions may not be conducive to the learning process in the worst case on the general binary causal model.
## 5 BGLM CCB without Graph Skeleton but with Minimum Weight Gap
In this section, we propose an algorithm for causal bandits on Markovian BGLMs based on maximum likelihood estimation (MLE) without any prior knowledge of the graph skeleton.
Our idea is to try to discover the causal graph structure and then apply the recent CCB algorithm with known graph structure (Feng and Chen, 2023). We discover the graph structure by using atomic interventions on individual variables. However, graph discovery poses a few challenges. First, it could be very difficult to exactly identify all parent-child relationships, since a grandparent node may also have a strong causal influence on its grandchild nodes. Fortunately, we find that it is enough to identify ancestor-descendant relationships instead of parent-child relationships, since we can artificially add an edge with weight \(0\) between each ancestor-descendant pair without affecting the causal propagation results. Another challenge is the minimum weight gap: when the weight of an edge is very small, we need more atomic interventions to identify its existence and its direction. Hence, we design an initialization phase whose number of rounds grows with the total round number \(T\), and ensure that the ancestor-descendant relationships are identified correctly with high probability when \(T\) is sufficiently large.
Following (Li et al., 2017; Feng and Chen, 2023; Xiong and Chen, 2023), we have three assumptions:
**Assumption 1**.: For every \(X\in\mathbf{X}\cup\{Y\}\), \(f_{X}\) is twice differentiable. Its first and second order derivatives are upper-bounded by \(L_{f_{X}}^{(1)}>0\) and \(L_{f_{X}}^{(2)}>0\).
Let \(\kappa=\inf_{X\in\mathbf{X}\cup\{Y\},\mathbf{v}\in[0,1]^{|\mathbf{Pa}(X)|},||\mathbf{\theta}- \mathbf{\theta}_{X}^{*}||\leq 1}\dot{f}_{X}(\mathbf{v}\cdot\mathbf{\theta})\).
**Assumption 2**.: We have \(\kappa>0\).
**Assumption 3**.: There exists a constant \(\zeta>0\) such that for any \(X\in\mathbf{X}\cup\{Y\}\) and \(X^{\prime}\in\mathbf{Anc}(X)\), for any value vector \(\mathbf{v}\in\{0,1\}^{|\mathbf{Anc}(X)\setminus\{X^{\prime},X_{1}\}|}\), the following inequalities hold:
\[\Pr_{\mathbf{\varepsilon},\mathbf{X},Y}\{X^{\prime}=1|\mathbf{Anc}(X)\setminus\{X^{ \prime},X_{1}\}=\mathbf{v}\}\geq\zeta, \tag{2}\] \[\Pr_{\mathbf{\varepsilon},\mathbf{X},Y}\{X^{\prime}=0|\mathbf{Anc}(X)\setminus \{X^{\prime},X_{1}\}=\mathbf{v}\}\geq\zeta. \tag{3}\]
Assumptions 1 and 2 are the classical assumptions in generalized linear model (Li et al., 2017). Assumption 3 makes sure that each ancestor node of \(X\) has some freedom to become \(0\) and \(1\) with a non-zero probability, even when the values of all other ancestors of \(X\) are fixed, and it is originally given in (Feng and Chen, 2023) with additional justifications. For BLMs and continuous linear models, we propose an algorithm based on linear regression without the need of this assumption in Appendix B.
To discover the ancestors of all variables, we need to perform an extra initialization phase (see Algorithm 1). We denote the total number of rounds by \(T\), and fix constants \(c_{0},c_{1}>0\) such that \(c_{0}T^{1/2}\in\mathbb{N}^{+}\). In the initialization phase, for each of \(X_{2},\ldots,X_{n}\) in order, we intervene to set it to \(1\) and to \(0\) for \(c_{0}T^{1/2}\) rounds each. We denote the value of \(X\) in the \(t^{th}\) round by \(X^{(t)}\). For every two variables \(X_{i},X_{j}\in\mathbf{X}\backslash\{X_{1}\}\), if
\[\frac{1}{c_{0}\sqrt{T}}\sum_{k=1}^{c_{0}\sqrt{T}}\left(X_{j}^{ \left(2ic_{0}\sqrt{T}+k\right)}-X_{j}^{\left((2i+1)c_{0}\sqrt{T}+k\right)} \right)>c_{1}T^{-\frac{1}{5}}, \tag{4}\]
we set \(X_{i}\) as an ancestor of \(X_{j}\). Here, \(X_{j}^{\left(2ic_{0}\sqrt{T}+k\right)}\) with \(k\in[c_{0}\sqrt{T}]\) are the values of \(X_{j}\) in the rounds in which \(do(X_{i}=1)\) is chosen, and \(X_{j}^{\left((2i+1)c_{0}\sqrt{T}+k\right)}\), \(k\in[c_{0}\sqrt{T}]\), are the values of \(X_{j}\) in the rounds in which \(do(X_{i}=0)\) is chosen. If \(X_{i}\) is not an ancestor of \(X_{j}\), the value of \(X_{j}\) is not affected by an intervention on \(X_{i}\). Conversely, if \(X_{i}\in\mathbf{Pa}(X_{j})\), the value of \(X_{j}\) is notably affected by \(do(X_{i})\), so the difference between \(X_{j}\) under \(do(X_{i}=1)\) and under \(do(X_{i}=0)\) can be used as a discriminator for the ancestor-descendant relationship between \(X_{i}\) and \(X_{j}\). This is formally shown by Lemma 1.
**Lemma 1**.: _Let \(G\) be a BGLM with parameter \(\mathbf{\theta}^{*}\) that satisfies Assumption 2. Recall that \(\theta^{*}_{\min}=\min_{(X^{\prime},X)\in E}\theta^{*}_{X^{\prime},X^{\prime}}\). If \(X_{i}\in\mathbf{Pa}(X_{j})\), we have \(\mathbb{E}[X_{j}|do(X_{i}=1)]-\mathbb{E}[X_{j}|do(X_{i}=0)]\geq\kappa\theta^{*} _{X_{i},X_{j}}\geq\kappa\theta^{*}_{\min}\); if \(X_{i}\) is not an ancestor of \(X_{j}\), we have \(\mathbb{E}[X_{j}|do(X_{i}=1)]=\mathbb{E}[X_{j}|do(X_{i}=0)]\)._
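The empirical test behind Eq. (4) can be sketched in a few lines; the code below assumes the samples of \(X_{j}\) under the two atomic interventions have already been collected, and uses the per-sample threshold \(c_{1}T^{-1/5}\) discussed above (the variable names are ours).

```python
import numpy as np

def ancestor_test(xj_do1, xj_do0, T, c1):
    """Empirical discriminator of Eq. (4) / Algorithm 2.

    xj_do1, xj_do0: observed values of X_j in the rounds where do(X_i = 1)
    and do(X_i = 0) were played.  X_i is declared an ancestor of X_j when
    the empirical mean difference exceeds c1 * T**(-1/5), the per-sample
    threshold matching the sum threshold c0*c1*T^{3/10} of Algorithm 2.
    """
    diff = np.mean(xj_do1) - np.mean(xj_do0)
    return diff > c1 * T ** (-1 / 5)

# Toy check with E[X_j | do(X_i=1)] = 0.6 and E[X_j | do(X_i=0)] = 0.3, i.e.
# a causal effect of 0.3, using c0*sqrt(T) samples per intervention.
rng = np.random.default_rng(0)
T, c0, c1 = 10_000, 1.0, 0.5
m = int(c0 * np.sqrt(T))
print(ancestor_test(rng.random(m) < 0.6, rng.random(m) < 0.3, T, c1))  # True
```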
We use the above idea to implement the procedure in Algorithm 2, and then put this procedure in the initial phase and integrate this step into BGLM-OFU proposed by (Feng and Chen, 2023), to obtain our main algorithm, BGLM-OFU-Unknown (Algorithm 1).
```
1:Input: Graph \(G=(\mathbf{X}\cup\{Y\},E)\), action set \(\mathcal{A}\), parameters \(L_{f_{X}}^{(1)},L_{f_{X}}^{(2)},\kappa,\zeta\) in Assumption 1, 2 and 3, positive constants \(c_{0}\) and \(c_{1}\) for initialization phase such that \(c_{0}\sqrt{T}\in\mathbb{N}^{+}\).
2:/* Initialization Phase: */
3: Do each intervention among \(do(X_{2}=1),do(X_{2}=0),\cdots,do(X_{n}=1),do(X_{n}=0)\) for \(c_{0}T^{1/2}\) times in order and observe the feedback \((\mathbf{X}_{t},Y_{t})\), \(1\leq t\leq T_{0}\).
4: Compute the ancestors \(\widehat{\mathbf{Anc}}(X)\), \(X\in\mathbf{X}\cup\{Y\}\) by BGLM-Ancestors(\((\mathbf{X}_{1},Y_{1}),\cdots,(\mathbf{X}_{T_{0}},Y_{T_{0}}),c_{0},c_{1}\)) (see Algorithm 2).
5: Initialize \(M_{0,X}\leftarrow\mathbf{0}\in\mathbb{R}^{|\widehat{\mathbf{Anc}}(X)|\times|\widehat{\mathbf{Anc}}(X)|}\) for all \(X\in\mathbf{X}\cup\{Y\}\), \(\delta\leftarrow\frac{1}{3n\sqrt{T}}\), \(R\leftarrow\lceil\frac{512n(L_{f_{X}}^{(2)})^{2}}{\kappa^{4}}(n^{2}+\ln \frac{1}{\delta})\rceil\), \(T_{0}\gets 2(n-1)c_{0}T^{1/2}\), \(T_{1}\gets T_{0}+\max\left\{\frac{c}{\zeta^{2}}\ln\frac{1}{\delta},\frac{(8 n^{2}-6)R}{\zeta}\right\}\) and \(\rho\leftarrow\frac{3}{\kappa}\sqrt{\log(1/\delta)}\).
6: Do no intervention on BGLM \(G\) for \(T_{1}-T_{0}\) rounds and observe feedback \((\mathbf{X}_{t},Y_{t}),T_{0}+1\leq t\leq T_{1}\).
7:/* Iterative Phase: */
8:for\(t=T_{1}+1,T_{1}+2,\cdots,T\)do
9:\(\{\hat{\mathbf{\theta}}_{t-1,X},M_{t-1,X}\}_{X\in\mathbf{X}\cup\{Y\}}\) = BGLM-Estimate(\((\mathbf{X}_{1},Y_{1}),\cdots,(\mathbf{X}_{t-1},Y_{t-1})\)) (see Algorithm 3).
10: Compute the confidence ellipsoid \(\mathcal{C}_{t,X}=\{\mathbf{\theta}_{X}^{\prime}\in[0,1]^{|\widehat{\mathbf{Anc}}(X)| }\ :\ \left\lVert\mathbf{\theta}_{X}^{\prime}-\hat{\mathbf{\theta}}_{t-1,X}\right\rVert_{M_{ t-1,X}}\leq\rho\}\) for any node \(X\in\mathbf{X}\cup\{Y\}\).
11: Adopt \(\operatorname*{argmax}_{do(\mathbf{S}=\mathbf{s})\in\mathcal{A},\mathbf{\theta}_{t,X}^{ \prime}\in\mathcal{C}_{t,X}}\mathbb{E}[Y|do(\mathbf{S}=\mathbf{s})]\) as \((\mathbf{S}_{t},\mathbf{s}_{t},\tilde{\mathbf{\theta}}_{t})\).
12: Intervene all the nodes in \(\mathbf{S}_{t}\) to \(\mathbf{s}_{t}\) and observe the feedback \((\mathbf{X}_{t},Y_{t})\).
13:endfor
```
**Algorithm 1** BGLM-OFU-Unknown for BGLM CCB Problem
Notice that each term in Eq. (4) is a random sample of \(\mathbb{E}[X_{j}|do(X_{i}=1)]-\mathbb{E}[X_{j}|do(X_{i}=0)]\), which means that the left-hand side of Eq. (4) is just an estimation of \(\mathbb{E}[X_{j}|do(X_{i}=1)]-\mathbb{E}[X_{j}|do(X_{i}=0)]\). Such expression can be bounded by concentration inequalities. Hence we can prove that Algorithm 2 identifies \(X_{i}\in\mathbf{Anc}(X_{j})\) with false positive rate and false negative rate both no more than \(\exp\left(-\frac{c_{0}c_{1}^{2}T^{1/10}}{2}\right)\) when \(\theta_{\min}^{*}\geq 2c_{1}\kappa^{-1}T^{-1/5}\). Formally, we have the following lemma that shows the probability of correctness for Algorithm 2. For completeness, the proof of Lemma 2 is put in appendix.
```
1:Input: Observations \(((\mathbf{X}_{1},Y_{1}),\cdots,(\mathbf{X}_{T_{0}},Y_{T_{0}}))\), positive constants \(c_{0}\) and \(c_{1}\).
2:Output:\(\widehat{\mathbf{Anc}}(X)\), ancestors of \(X,X\in\mathbf{X}\cup\{Y\}\).
3: For all \(X\in\mathbf{X},\widehat{\mathbf{Anc}}(X)=\emptyset\), \(\widehat{\mathbf{Anc}}(Y)=\mathbf{X}\).
4:for\(i\in\{2,3,\cdots,n\}\)do
5:for\(j\in\{2,3,\cdots,n\}\backslash\{i\}\)do
6:if\(\sum_{k=1}^{c_{0}\sqrt{T}}\left(X_{j}^{\left(2ic_{0}\sqrt{T}+k\right)}-X_{j}^{ \left((2i+1)c_{0}\sqrt{T}+k\right)}\right)>c_{0}c_{1}T^{3/10}\)then
7: Add \(X_{i}\) into \(\widehat{\mathbf{Anc}}(X_{j})\).
8:endif
9:endfor
10:endfor
11: Recompute the transitive closure of \(\widehat{\mathbf{Anc}}(\cdot)\), i.e., if \(X_{i}\in\widehat{\mathbf{Anc}}(X_{j})\) and \(X_{j}\in\widehat{\mathbf{Anc}}(X_{\ell})\), then add \(X_{i}\) to \(\widehat{\mathbf{Anc}}(X_{\ell})\).
```
**Algorithm 2** BGLM-Ancestors
```
1:Input: All observations \(((\mathbf{X}_{1},Y_{1}),\cdots,(\mathbf{X}_{t},Y_{t}))\) until round \(t\).
2:Output:\(\{\hat{\mathbf{\theta}}_{t,X},M_{t,X}\}_{X\in\mathbf{X}\cup\{Y\}}\)
3: For each \(X\in\mathbf{X}\cup\{Y\}\), \(i\in[t]\), construct data pair \((\mathbf{V}_{i,X},X^{(i)})\) with \(\mathbf{V}_{i,X}\) the vector of ancestors of \(X\) in round \(i\), and \(X^{(i)}\) the value of \(X\) in round \(i\) if \(X\not\in S_{i}\).
4:for\(X\in\mathbf{X}\cup\{Y\}\)do
5: Calculate the maximum-likelihood estimator \(\hat{\mathbf{\theta}}_{t,X}\) by solving the equation \(\sum_{i=1}^{t}(X^{(i)}-f_{X}(\mathbf{V}_{i,X}^{\intercal}\mathbf{\theta}_{X}))\mathbf{V}_{i,X}=0\).
6:\(M_{t,X}=\sum_{i=1}^{t}\mathbf{V}_{i,X}\mathbf{V}_{i,X}^{\intercal}\).
7:endfor
```
**Algorithm 3** BGLM-Estimate
**Lemma 2** (Positive Rate of BGLM-Order).: _Suppose Assumption 2 holds for the BGLM \(G\). In the initialization phase of Algorithm 1, Algorithm 2 finds a consistent ancestor-descendant relationship for the BGLM \(G\) with probability no less than \(1-2\binom{n-1}{2}\exp\left(-\frac{c_{0}c_{1}^{2}T^{1/10}}{2}\right)\) when \(\theta_{\min}^{*}\geq 2c_{1}\kappa^{-1}T^{-1/5}\)._
We refer to the condition \(\theta_{\min}^{*}\geq 2c_{1}\kappa^{-1}T^{-1/5}\) in this lemma as the _weight gap assumption_. The number of initialization rounds in Algorithm 1 is \(O(\sqrt{T})\). According to Lemma 2, the expected regret contributed by an incorrect ancestor-descendant relationship does not exceed \(O\left(T\exp\left(-\frac{c_{0}c_{1}^{2}T^{1/10}}{2}\right)\right)=o(\sqrt{T})\). Therefore, after adding the initialization, the expected regret of BGLM-OFU-Unknown increases by no more than \(o(\sqrt{T})\) over BGLM-OFU (Algorithm 1 in (Feng and Chen, 2023)). Thus we have the following theorem showing the regret of Algorithm 1, which is formally proved in the appendix.
**Theorem 2** (Regret Bound of BGLM-OFU-Unknown).: _Denote \(L^{(1)}_{\max}=\max_{X\in\mathbf{X}\cup\{Y\}}L^{(1)}_{f_{X}}\). Under Assumptions 1, 2 and 3, the regret of BGLM-OFU-Unknown (Algorithms 1, 2 and 3) is bounded as_
\[R(T)=O\left(\frac{1}{\kappa}n^{\frac{3}{2}}L^{(1)}_{\max}\sqrt{T}\log T\right), \tag{5}\]
_where the terms of \(o(\sqrt{T}\ln T)\) are omitted, and the big \(O\) notation holds for \(T\geq 32\left(\frac{c_{1}}{\kappa\theta^{*}_{\min}}\right)^{5}\)._
Compared to (Feng and Chen, 2023), Theorem 2 has the same asymptotic regret. The only additional assumption is \(T\geq 32\left(c_{1}/(\kappa\theta^{*}_{\min})\right)^{5}\). Intuitively, this extra assumption guarantees that we can discover an ancestor-descendant relationship consistent with the true graph. Our result indicates that, under the weight gap assumption, not knowing the causal graph does not add substantial difficulty.
_Remark 1_.: Because Lemma 2 requires the weight gap assumption, in the proof of this regret bound we only consider the case \(T\geq 32\left(c_{1}/(\kappa\theta^{*}_{\min})\right)^{5}\). This limitation does not impact the asymptotic big \(O\) notation in our regret bound. However, when the round number \(T\) is not that large, the regret can be linear in \(T\). We remove this weight gap assumption in Section 6 for the linear model setting. The constants \(c_{0}\) and \(c_{1}\) are adjustable in practice. When \(T\) is small, one could try a small \(c_{0}\) to shorten the initialization phase, i.e., to make sure that \(T_{0}\ll T\), and a small \(c_{1}\) to satisfy the weight gap assumption. When \(T\) is large, one could consider larger \(c_{0}\) and \(c_{1}\) for a more accurate ancestor-descendant relationship. However, because \(\theta^{*}_{\min}\) is unknown, one cannot guarantee that the weight gap assumption holds by manipulating \(c_{1}\), i.e., \(\theta^{*}_{\min}\) may be too small for any practical \(T\) given \(c_{1}\).
## 6 BLM CCB without Graph Skeleton and Weight Gap Assumption
In the previous section, we find that if \(T>O((\theta^{*}_{\min})^{-5})\), we can get a valid upper bound. However, in reality, we have two challenges: 1) We do not know the real value of \(\theta^{*}_{\min}\), and this makes it hard to know when an edge's direction is identified. 2) When \(\theta^{*}_{\min}\to 0\), it makes it very difficult to estimate the graph accurately. To solve these challenges, we must both eliminate the dependence of \(\theta^{*}_{\min}\) in our analysis, and think about how the result will be influenced by an inaccurate model. In this section, we give a causal bandit algorithm and show that the algorithm can always give \(\tilde{O}(T^{2/3})\) regret. This sub-linear regret result shows that the challenge can be solved by some additional techniques.
In this section, we consider a special case of BGLM called Binary Linear Model (BLM), where \(f_{X}\) becomes identity function. The linear structure allows us to release the Assumption 1-3 (Feng and Chen, 2023) and analyze the influence of an inaccurate model.
The main algorithm follows the BLM-LR algorithm in (Feng and Chen, 2023), which uses linear regression to estimate the weights \(\mathbf{\theta}^{*}\), and the pseudocode is provided in Algorithm 4. We add a graph discovery process (Algorithm 5) to the initialization phase, using \(O(nT^{2/3}\log T)\) rounds rather than the \(O(nT^{1/2})\) rounds of the previous section. For any edge \(X^{\prime}\to X\) with weight \(\theta^{*}_{X^{\prime},X}\geq T^{-1/3}\), with probability at least \(1-1/T^{2}\) we identify the edge's direction within \(O(nT^{2/3}\log(T))\) samples of \(do(X^{\prime}=1)\) and \(do(X^{\prime}=0)\), by checking whether the difference \(P(X\mid do(X^{\prime}=1))-P(X\mid do(X^{\prime}=0))\) is larger than \(T^{-1/3}\). Since
the above difference is always larger than \(\theta^{*}_{X^{\prime},X}\), after the initialization phase, the edge \(X^{\prime}\to X\) will be added to the graph if \(\theta^{*}_{X^{\prime},X}\geq T^{-1/3}\).
Moreover, if \(X^{\prime}\) is not an ancestor of \(X\), we claim that it cannot be estimated as an ancestor after the initialization phase. This is because in this case \(P(X\mid do(X^{\prime}=1))=P(X)=P(X\mid do(X^{\prime}=0))\). Denote by \(G^{\prime}\) the estimated graph with an edge \(X^{\prime}\to X\) for every \(X^{\prime}\in\widehat{\mathbf{Anc}}(X)\). We then have the following lemma.
**Lemma 3**.: _In Algorithm 4, if the constants \(c_{0}\) and \(c_{1}\) satisfy that \(c_{0}\geq\max\{\frac{1}{c_{1}^{2}},\frac{1}{(1-c_{1})^{2}}\}\), with probability at least \(1-(n-1)(n-2)\frac{1}{T^{1/3}}\), after the initialization phase we have 1). If \(X^{\prime}\) is a true parent of \(X\) in \(G\) with weight \(\theta^{*}_{X^{\prime},X}\geq T^{-1/3}\), the edge \(X^{\prime}\to X\) will be identified and added to the estimated graph \(G^{\prime}\)._
_2). If \(X^{\prime}\) is not an ancestor of \(X\) in \(G\), \(X^{\prime}\to X\) will not be added into \(G^{\prime}\)._
The properties above together provide the analytic basis for the following observation, which plays a key role in our further analysis. Denote the estimation accuracy by \(r=T^{-1/3}\). The linear regression for \(X\) will be performed on \(X\) and all of its estimated ancestors \(\widehat{\mathbf{Anc}}(X)\). For any true parent node \(X^{\prime}\) in \(G\) that is not contained in \(\widehat{\mathbf{Anc}}(X)\), we have \(\theta^{*}_{X^{\prime},X}\leq r\). Suppose \(\widehat{\mathbf{Anc}}(X)=\{X_{1},X_{2},\cdots,X_{m}\}\), and the true parents not contained in \(\widehat{\mathbf{Anc}}(X)\) are \(X_{m+1},\cdots,X_{m+k}\). Thus \(\theta^{*}_{X_{m+i},X}\leq r\) for all \(1\leq i\leq k\).
Also, assume \(X_{1},\cdots,X_{t}\) (\(t<m\)) are the true parents of \(X\) in \(G\) among the estimated ancestors. Marginalizing over \(X_{m+1},\cdots,X_{m+k}\), by the law of total expectation the expectation of \(X\) can be rewritten as
\[\mathbb{E}[X\mid X_{1},\cdots,X_{t}]\] \[=\mathbb{E}_{X_{m+1},\cdots,X_{m+k}}[\mathbb{E}[X\mid X_{1}, \cdots,X_{t},X_{m+1},\cdots,X_{m+k}]]\] \[=\mathbb{E}_{X_{m+1},\cdots,X_{m+k}}\left[\sum_{i=1}^{t}\theta^{ *}_{X_{i},X}X_{i}+\sum_{i=m+1}^{m+k}\theta^{*}_{X_{i},X}X_{i}\right]\] \[=\sum_{i=1}^{t}\theta^{*}_{X_{i},X}X_{i}+\sum_{i=m+1}^{m+k}\theta^ {*}_{X_{i},X}\mathbb{E}[X_{i}]=\sum_{i=1}^{t}\theta^{*^{\prime}}_{X_{i},X}X_{ i},\]
where
\[\theta^{*^{\prime}}_{X_{i},X} =\theta^{*}_{X_{i},X},\ \ i\geq 2, \tag{6}\] \[\theta^{*^{\prime}}_{X_{1},X} =\theta^{*}_{X_{1},X}+\sum_{i=m+1}^{m+k}\theta^{*}_{X_{i},X} \mathbb{E}[X_{i}]. \tag{7}\]
Eq.(7) is because \(X_{1}=1\) always holds. Then we have \(|\theta^{*^{\prime}}_{X_{i},X}-\theta^{*}_{X_{i},X}|\leq\sum_{i=m+1}^{m+k} \theta^{*}_{X_{i},X}\leq kr\leq nr\), which shows that the difference between \(\mathbf{\theta}^{\prime}\) and \(\mathbf{\theta}\) is small if accuracy \(r\) is small. Let model \(M^{\prime}\) represent the model with graph \(G^{\prime}\) with weights \(\mathbf{\theta}^{*^{\prime}}\) defined above. The following lemma shows the key observation:
**Lemma 4**.: _The linear regression performed on graph \(G^{\prime}\) in Algorithm 4 (lines 12-15) gives the estimation \(\hat{\mathbf{\theta}}^{\prime}\) such that_
\[\|(\hat{\mathbf{\theta}}^{\prime}_{t,X}-\mathbf{\theta}^{*^{\prime}}_{X})\|_{M_{t,X}} \leq\sqrt{n\log(1+tn)+2\log(1/\delta)}+\sqrt{n},\]
_where \(M_{t,X}\) is defined in Algorithm 4._
This lemma shows that the linear regression performed on the inaccurately estimated linear model \(M^{\prime}\) is equivalent to a regression towards \(\mathbf{\theta}^{*^{\prime}}\). Note that this regression only gives an approximation guarantee with respect to the elliptical norm \(\|\cdot\|_{M_{t,X}}\), which allows the covariates to be dependent.
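For the BLM case, the regression in question is ordinary least squares against the estimated ancestor vectors, with the Gram matrix playing the role of \(M_{t,X}\); a minimal sketch (our own naming, with a toy data-generating step) is given below.

```python
import numpy as np

def blm_regression(V, x):
    """Least-squares estimate of theta for one node of a BLM.

    V: (t, d) matrix whose rows are the ancestor vectors V_{i,X};
    x: length-t vector of the observed values of X in those rounds.
    Returns (theta_hat, M) with M = V^T V, the Gram matrix playing the role
    of M_{t,X}; a small ridge term can be added if M happens to be singular.
    """
    V = np.asarray(V, dtype=float)
    M = V.T @ V
    theta_hat = np.linalg.solve(M, V.T @ np.asarray(x, dtype=float))
    return theta_hat, M

# Toy node with estimated ancestors (X_1, X_2) and true weights (0.1, 0.6).
rng = np.random.default_rng(3)
x2 = (rng.random(5000) < 0.5).astype(float)
V = np.column_stack([np.ones(5000), x2])
x = (rng.random(5000) < 0.1 + 0.6 * x2).astype(float)
theta_hat, M = blm_regression(V, x)
print(np.round(theta_hat, 3))  # approximately [0.1, 0.6]
```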
Based on claim above, we only need to measure the difference for \(\mathbb{E}[Y\mid do(\mathbf{S}=1)]\) on model \(M\) and \(M^{\prime}\). The following lemma shows that the difference between two models can be bounded by our estimated accuracy \(r\):
**Lemma 5**.: \(|\mathbb{E}_{M}[Y\mid do(\mathbf{S}=\mathbf{1})]-\mathbb{E}_{M^{\prime}}[Y\mid do(\mathbf{S}=\mathbf{1})]|\leq n^{2}(n+1)r\)_, where \(r\) is the estimated accuracy defined in the start of this section._
Lemma 5 gives us a way to bound the performance of our linear regression on the estimated model \(M^{\prime}\). Suppose our linear regression achieves \(O(\sqrt{T})\) regret compared to \(\max_{\mathbf{S}}\mathbb{E}_{M^{\prime}}[Y\mid do(\mathbf{S}=\mathbf{1})]\); then, based on our estimation accuracy \(r=O(T^{-1/3})\), the regret from the optimization error is \(O(T^{2/3})\), which is of the same order as the initialization phase. Moreover, this indicates that we cannot demand a much finer accuracy, such as \(r=O(T^{-1/2})\): the number of initialization rounds needed to certify such a small gap would already be of order \(T\) (up to logarithmic factors), making the regret linear in \(T\).
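As an informal bookkeeping of where the \(T^{2/3}\) rate comes from (constants and logarithmic factors suppressed, \(r=T^{-1/3}\)), the three contributions discussed above add up as
\[\underbrace{O\big(nT^{2/3}\log T\big)}_{\text{initialization rounds}}\;+\;\underbrace{\tilde{O}\big(\sqrt{T}\big)}_{\text{regression on }M^{\prime}}\;+\;\underbrace{T\cdot n^{2}(n+1)r}_{\text{model mismatch, Lemma 5}}\;=\;\tilde{O}\big(n^{3}T^{2/3}\big),\]
which matches the order of Theorem 3 below.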
From these two lemmas, we can measure the error introduced by the initialization phase. Motivated by the Explore-then-Commit
framework, we can achieve sublinear regret without the weight gap assumption. The detailed proof is provided in Appendix C.2 and Appendix C.3.
**Theorem 3**.: _If \(c_{0}\geq\max\{\frac{1}{c_{1}^{2}},\frac{1}{(1-c_{1})^{2}}\}\), the regret of Algorithm 4 running on BLM is upper bounded as_
\[R(T)=O((n^{3}T^{2/3})\log T).\]
Theorem 3 states the regret of our algorithm without the weight gap assumption. The leading term of the result is \(O(T^{2/3}\log T)\), which is of higher order than \(O(\sqrt{T}\log T)\), the regret of Algorithm 1 of the previous section and of the BLM-LR algorithm in (Feng and Chen, 2023). This degradation in the regret bound can be viewed as the cost of removing the weight gap assumption, which makes accurate discovery of the causal graph extremely difficult. How to devise an \(O(\sqrt{T}\log T)\) algorithm without the weight gap assumption is still an open problem.
Using the transformation in Section 5.1 in (Feng and Chen, 2023), this algorithm can also work with hidden variables.
## 7 Pure Exploration of Causal Bandits without Graph Structure
Another performance measure for bandit algorithms is called sample complexity. In this setting, the agent aims to find an action with the maximum expected reward using as few rounds as possible. This setting is also called pure exploration. To be more specific, the agent aims to find an \(\varepsilon\)-optimal arm with probability at least \(1-\delta\) by sampling as few rounds as possible, for fixed parameters \(\varepsilon\) and \(\delta\). For pure exploration, we consider the general binary causal model with only null and atomic interventions, and study gap-dependent bounds, meaning that the sample complexity depends on the reward gap between the optimal and suboptimal actions. Moreover, let \(a^{*}\) be one of the optimal actions. For each action \(a=do(X_{i}=x)\), define \(\mu_{a}=\mathbb{E}[Y\mid a]\) and the gap for action \(a\) to be
\[\Delta_{a}=\left\{\begin{array}{ll}\mu_{a^{*}}-\max_{a\in\mathbf{A}\setminus\{a ^{*}\}}\{\mu_{a}\},&\quad a=a^{*};\\ \mu_{a^{*}}-\mu_{a},&\quad a\neq a^{*}.\end{array}\right. \tag{8}\]
According to the causal discovery literature (Pearl, 2009), by passive observations alone one can obtain an essential graph of the causal graph, with some edge directions unidentified. We assume that the essential graph is known but the exact graph structure is unknown, which is also considered by (Lu et al., 2021), with additional assumptions on the graph.
One naive solution for this problem is to first identify the graph structure and then run the pure exploration algorithm of causal bandits with a known graph (Xiong and Chen, 2023). Define \(c_{e}=|P(X^{\prime}\mid do(X=1))-P(X^{\prime}\mid do(X=0))|\) for each edge \(e=(X,X^{\prime})\) and \(c_{X}=\min_{e:X\to X^{\prime}}c_{e}\). Then this naive solution admits a sample complexity of about
\[\tilde{O}\left(\sum_{a\in S}\frac{1}{\max\{\Delta_{a},\varepsilon/2\}^{2}}+ \sum_{X\in\mathbf{X}}\frac{1}{c_{X}^{2}}\right), \tag{9}\]
where \(S\) is a particular set defined following the previous work (Xiong and Chen, 2023) and the definition is provided in Appendix D. The first term is the sample complexity in (Xiong and Chen, 2023), while the second term is the cost for identifying the directions of all edges in the essential graph.
This naive solution separates the causal discovery phase and learning phase, so it cannot discover the directions adaptively. In the Appendix D, we propose an adaptive algorithm to discover the edges' directions and learn the reward distribution in parallel, which can provide a lower sample complexity for some cases.
However, when \(\Delta_{a}\) and \(c_{X}\) are small, both the naive algorithm and the algorithms provided in Appendix D suffer \(\Omega(\frac{n}{\varepsilon^{2}}\log(1/\delta))\) sample complexity. We claim that pure exploration for the general binary causal model is intrinsically hard due to the unknown graph structure. To show this, we state a negative result for pure exploration of causal bandits with unknown graph structure and atomic interventions. It states that even if we have the full observational distribution \(P(\mathbf{X},Y)\) as prior knowledge, we still cannot achieve a better sample complexity than the \(O(\frac{n}{\varepsilon^{2}}\log(1/\delta))\) of the classical pure exploration problem for multi-armed bandits.
**Theorem 4** (Lower bound).: _Consider causal bandits with only essential graph and atomic intervention, for any algorithm which can output \(\varepsilon\)-optimal action with probability at least \(1-\delta\), there is a bandit instance with expected sample complexity \(\Omega(\frac{n}{\varepsilon^{2}}\log(1/\delta))\) even if we have all observational distribution \(P(\mathbf{X},Y)\)._
Note that if we know the distribution \(P(\mathbf{X},Y)\) and the exact graph structure, we can compute each interventional distribution \(P(Y\mid do(X=x))\) by do-calculus because of the absence of hidden variables. So Theorem 4 shows the intrinsic hardness caused by the unknown graph structure. The detailed proof can be found in Appendix D.
## 8 Future Work
This paper is the first theoretical study on causal bandits without the graph skeleton. There are many future directions to extend this work. We believe that similar initialization methods and proof techniques can be used to design causal bandit algorithms for other parametric models without the skeleton, like linear structural equation models (SEMs). Moreover, how to obtain an algorithm with \(\tilde{O}(\sqrt{T})\) regret without the weight gap assumption is an interesting open problem. For pure exploration, the combinatorial setting needs more research. |
2303.17955 | Critical curves of rotations | In rotations with a binary symbolic dynamics, a critical curve is the locus
of parameters for which the boundaries of the partition that defines the
symbolic dynamics are connected via a prescribed number of iterations and
symbolic itinerary. We study the arithmetical and geometrical properties of
these curves in parameter space. | John A G Roberts, Asaki Saito, Franco Vivaldi | 2023-03-31T10:34:14Z | http://arxiv.org/abs/2303.17955v1 | # Critical curves of rotations
###### Abstract.
In rotations with a binary symbolic dynamics, a critical curve is the locus of parameters for which the boundaries of the partition that defines the symbolic dynamics are connected via a prescribed number of iterations and symbolic itinerary. We study the arithmetical and geometrical properties of these curves in parameter space.
_Dedicated to the memory of Uwe Grimm._
## 1. Introduction
We consider a rotation \(\mathrm{g}\) on the circle (unit interval):
\[\mathrm{g}:[0,1)\to[0,1)\qquad x\mapsto\{x+\theta\}, \tag{1}\]
where \(\theta\) is the angle of rotation and \(\{\cdot\}\) denotes the fractional part. The partition of the circle
\[I_{a}=[0,\rho)\qquad I_{b}=[\rho,1), \tag{2}\]
defines a symbolic dynamics in two letters \(a\) and \(b\). (For background on symbolic dynamics, see [1, 5, 12]).
The parameter space of this system is the closed unit square \([0,1]^{2}\) of all pairs \(\zeta=(\theta,\rho)\), and a **critical point** is a pair \(\zeta\) for which the equation
\[\mathrm{i}\theta=\mathrm{j}+\rho \tag{3}\]
has a solution \(\mathrm{z}=(\mathrm{i},\mathrm{j})\in\mathbb{Z}^{2}\). This means that at a critical point there is a **critical orbit** of \(\mathrm{g}\) containing both boundary points \(0\) and \(\rho\) of the partition (2). The _shortest_ portion of orbit connecting such boundary points, in some order, will be called the **centre** of the orbit, with the convention that it includes the initial boundary point but not the final one. We then consider the symbol sequence \(w\) of the centre. If \(\theta\) is irrational, then the boundary points are visited only once and in a defined order, and \(w\) is determined by \(\zeta\). If \(\theta\) is rational, then the orbit is periodic, and we may choose either \(0\) or \(\rho\) as the initial point of the centre. Hence when \(\theta\) is rational, \(\zeta\) determines two centres and their corresponding words \(w\), typically of different length. A **critical word**\(w\) is a word constructed in this fashion, and the **critical curve**\(\mathcal{C}_{w}\) is the set of critical points which share the same critical word \(w\).
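As a quick illustration (not part of the original text), the following Python sketch computes the critical word at a rational critical point by exact iteration with fractions; it assumes \(0<\rho<1\) and follows the convention for the centre given above.

```python
from fractions import Fraction as Fr

def critical_word(theta, rho, max_iter=10_000):
    """Critical word at a rational critical point (theta, rho), 0 < rho < 1.

    The rotation x -> {x + theta} is iterated exactly from each boundary
    point until the other boundary point is reached; the shorter of the two
    centres is returned, with symbol a on [0, rho) and b on [rho, 1).
    """
    def word_from(start, target):
        x, w = start, ""
        for _ in range(max_iter):
            if x == target and w:
                return w                       # other boundary point reached
            w += "a" if x < rho else "b"
            x = (x + theta) % 1
        return None                            # not a critical point
    centres = [word_from(Fr(0), rho), word_from(rho, Fr(0))]
    centres = [w for w in centres if w is not None]
    return min(centres, key=len) if centres else None

# theta = 2/5, rho = 1/5 satisfies 3*theta = 1 + rho as well as -2*theta = -1 + rho,
# and the shorter (negative) centre gives the critical word "bb".
print(critical_word(Fr(2, 5), Fr(1, 5)))
```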
Even though critical curves of rotations are fairly basic objects, to the best of our knowledge they have not been considered explicitly. It is known that critical points determine the complexity of rotational words --the symbol sequences generated by the rotation (1) with partition (2). At a critical point with irrational \(\theta\), a critical word of length \(n\) is a
by degenerate curves called **Farey points**, and that along a chain the code changes from \(a^{n}\) to \(b^{n}\), or vice-versa (theorem 1). Furthermore, the identification of a Farey point on a chain is a relative concept: every such point belongs to a critical curve transversal to the given chain (corollary 4).
In section 4 we describe all curves through a critical rational point, for which equation (3) has infinitely many solutions. We show that in general every such point \(\zeta\) is an interior point of exactly two curves --the **dominant curves** of \(\zeta\)-- as well as the common Farey point of four infinite pencils of curves (theorem 5). The symbolic dynamics of these curves is determined in theorem 6.
In section 5 we consider the geometric figure determined by the six dominant lines of a rational critical point and two points closest to \(\zeta\) which share the value of \(\theta\). We show that these lines form two triples concurrent to the two **triple points** of \(\zeta\), whose rotation numbers are convergents of the continued fraction expansion of \(\theta\) (theorem 7).
Acknowledgements. This research was supported by the Australian Research Council grant DP180100201 and by JSPS KAKENHI Grant Numbers JP16KK0005 and JP22K12197.
## 2. Basic properties
The parameter space for the symbolic dynamics of the map g of (1) is the set of pairs \(\zeta=(\theta,\rho)\in[0,1]^{2}\). From (2) we find that the values \(\rho=0\) and \(\rho=1\) correspond to the trivial symbolic dynamics built from the single letter \(b\) and \(a\) respectively. For \(\theta=0\) we have the identity map, while \(\theta=1\) is included for consistency with Farey sequences1.
Footnote 1: In phase space, the values \(\theta,\rho=1\) are represented by their fractional part \(0\), according to (1).
We begin by taking a closer look at equation (3). For given \(\zeta\), every solution \((\mathrm{i},\mathrm{j})\) corresponds bi-uniquely to an orbit segment of length \(|\mathrm{i}|\) having \(0\) and \(\rho\) as end-points, in the order prescribed by the sign of \(\mathrm{i}\). We first deal with the associated symbolic dynamics.
**Def**. A **critical orbit** of (1) is an orbit containing both boundary points \(0\) and \(\rho\), and a **boundary word** is the symbolic dynamics of a finite section of a critical orbit connecting such boundary points, in some order, in such a way that the initial point is included and the final one is not. A **critical word** is a boundary word of minimal length, that is, the symbolic dynamics of the **centre** of the orbit. (Thus every boundary word contains a critical word as a prefix.)
If \(\rho=0\), then equation (3) has the trivial solution \(\mathrm{z}=(0,0)\) for every \(\theta\), the centre of the critical orbit is empty, and the critical word is the empty word \(w=\varepsilon\). There are also non-trivial solutions for rational \(\theta\) --with associated boundary words-- which we shall consider below. Likewise, for \(\rho=1\) we get the trivial solution \(\mathrm{z}=(0,-1)\) as well as nontrivial solutions.
Assume for the moment that \(\rho\neq 0,1\). Then at a critical point, \(\theta\) and \(\rho\) are both rational or both irrational --indeed they belong to the same number field. Suppose that \(\theta\) is irrational. Then the points \(x=0\) and \(x=\rho\) appear only once in the doubly infinite
orbit through \(0\), and hence (3) has only one solution \(z=(i,j)\). We now keep this solution fixed and regard (3) as an equation for \((\theta,\rho)\), subject to the constraint \((\theta,\rho)\in[0,1]^{2}\) (the condition \(\rho\neq 0,1\) was needed to determine \((i,j)\) from \(\zeta\), and is no longer required --see below). We obtain a line segment of critical points --called a **chain**-- given by
\[\mathcal{L}_{i,j}=\{(\theta,\rho)\,:\,\rho=i\theta-j\,,\,\theta^{-}\leqslant \theta\leqslant\theta^{+}\} \tag{6}\]
where
\[\theta^{-}=\begin{cases}j/i&i>0\\ (j+1)/i&i<0\end{cases}\qquad\text{and}\qquad\theta^{+}=\theta^{-}+\frac{1}{|i|}. \tag{7}\]
Note that \(i\neq 0\) by assumption and that \((i,j)\) is also a solution of equation (3) when \(\rho=0,1\), namely at \((\theta^{-s},0)\) and \((\theta^{s},1)\), where \(s=\operatorname{sign}(i)\).
The above association between a solution \((i,j)\) of (3) and the chain \(\mathcal{L}_{i,j}\) is extended to the cases \(\rho=0,1\), by defining
\[\mathcal{L}_{0,0}=\{(\theta,0)\,:\,0\leqslant\theta\leqslant 1\}\qquad \mathcal{L}_{0,-1}=\{(\theta,1)\,:\,0\leqslant\theta\leqslant 1\}. \tag{8}\]
From the above and (3) one verifies that for any integer \(i\), a pair \((i,j)\in\mathbb{Z}^{2}\) corresponds to a chain if \(j\) is subject to the bounds
\[J(i)=\begin{cases}0\leqslant j\leqslant i-1&i>0\\ -1\leqslant j\leqslant 0&i=0\\ i\leqslant j\leqslant-1&i<0.\end{cases} \tag{9}\]
A pair of integers \((i,j)\) satisfying the above conditions will be called the **affine parameters** of the chain \(\mathcal{L}_{i,j}\). They are the solution of (3) shared by all points on the chain.
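As a direct illustration of (6)-(9), the following sketch (illustrative names) enumerates the admissible affine parameters with \(|\mathrm{i}|\leqslant n\) together with the end-points \(\theta^{\mp}\) of the corresponding chains.

```python
from fractions import Fraction

def chain_endpoints(i, j):
    # theta^- and theta^+ of (7); the chains (8) with i = 0 span the whole interval
    if i == 0:
        return Fraction(0), Fraction(1)
    lo = Fraction(j, i) if i > 0 else Fraction(j + 1, i)
    return lo, lo + Fraction(1, abs(i))

def chains_up_to(n):
    # pairs (i, j) subject to the bounds J(i) of (9), for |i| <= n
    for i in range(-n, n + 1):
        js = range(0, i) if i > 0 else (range(i, 0) if i < 0 else range(-1, 1))
        for j in js:
            yield (i, j), chain_endpoints(i, j)

for (i, j), (lo, hi) in chains_up_to(2):      # 8 chains for n = 2
    print((i, j), str(lo), str(hi))
```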
We are interested in the critical words \(w=w(\theta,\rho(\theta))\) of the points of \(\mathcal{L}_{i,j}\) of (6) and (8). The following definition relates points and words.
**Def.** Let \(\mathcal{L}_{i,j}\) be a chain, let \(\zeta\in\mathcal{L}_{i,j}\), and let \(w\) be the critical word at \(\zeta\) (not necessarily of length \(|i|\)). The **critical curve \(\mathcal{C}_{w}\) on \(\mathcal{L}_{i,j}\) containing \(\zeta\)** is the set of points of \(\mathcal{L}_{i,j}\) sharing the same critical word. If such a set reduces to a point, we shall speak of a **Farey point**\(\mathcal{B}_{w}\).
Thus a chain is partitioned into critical curves and Farey points. In particular, each of the chains (8) has the empty word as its critical word, and therefore consists of a single critical curve \(\mathcal{C}_{\varepsilon}\), and no Farey points.
**Def.** A non-empty boundary word \(w\) is **positive** (**negative**, respectively) if the first symbol of \(w\) is \(a\) (\(b\), respectively). The empty word is both positive and negative. Likewise, the sign of the affine parameters \(z=(i,j)\) is that of \(i\) if \(i\neq 0\), and if \(i=0\) then \(z\) is both positive and negative.
We see that a non-empty word is positive (negative) if the corresponding centre starts at \(0\) (\(\rho\)). Let \(\rho\neq 0,1\). Because --as noted above-- to an irrational critical point \(\zeta\) there correspond unique affine parameters, the point \(\zeta\) has a well-defined sign and so does the unique critical curve that contains it. By the same criterion, rational critical points are both positive and negative.
Let \(n=|{\rm i}|\). At all irrational points on \({\mathcal{L}}_{{\rm i},{\rm j}}\) the critical word \(w=w_{0}\cdots w_{n-1}\) has length \(n\), but this is not necessarily the case if \(\zeta\) is rational. Indeed in this case the orbit is periodic and, being also critical, both points \(0\) and \(\rho\) are visited infinitely often. As a result, equation (3) has a doubly-infinite set of solutions \(({\rm i}_{t},{\rm j}_{t})\), which include \(({\rm i},{\rm j})\), and to each solution there is an associated boundary word. If there is a \(t\) such that \({\rm i}_{t}\) and \({\rm i}\) have the same sign and \(|{\rm i}_{t}|<n\), then the centre of the critical orbit is shorter than \(n\), and therefore \(w\) is not a critical word. In what follows, when we speak of the boundary words of a chain \({\mathcal{L}}_{{\rm i},{\rm j}}\), we will always refer to words of length \(|{\rm i}|\).
From the above discussion we conclude that a rational critical point should be regarded as being both positive and negative, and this duplicity is reflected in the sign of the boundary words at that point. Such a correspondence however fails at the rational points of the chains (8), where the (non-empty) boundary words assume only one sign, since one element of the partition (2) is empty. Specifically, at the rational points \((p/q,0)\), equation (3) has the solutions \(({\rm i},{\rm j})=(tq,tp),t\in{\mathbb{Z}}\), assuming both signs. However, for \(t\neq 0\) the corresponding boundary words \(b^{|tq|}\) are negative. There is an analogous discrepancy at \((p/q,1)\), where all boundary words are positive.
## 3. Boundary and critical words on a chain
In this section we consider the decomposition of a chain into critical curves and Farey points, as defined in section 2, as well as the associated symbolic dynamics. Figure 1 serves as an illustration of the items we shall be dealing with.
Our first result describes all the boundary words on a chain.
**Theorem 1**.: _Let \({\mathcal{L}}_{{\rm i},{\rm j}}\) be a chain, let \(n=|{\rm i}|\geqslant 1\) and let \({\mathcal{F}}_{n}\) be the \(n\)th Farey sequence. Then_
1. _The set of rotation numbers_ (10) \[{\mathcal{F}}={\mathcal{F}}_{n}\cap[\theta^{-},\theta^{+}]\]
_partitions_ \({\mathcal{L}}_{{\rm i},{\rm j}}\) _into_ \(|{\mathcal{F}}|-1\) _critical curves, themselves segments, separated by_ \(|{\mathcal{F}}|\) _Farey points._
2. _The boundary words on_ \({\mathcal{L}}_{{\rm i},{\rm j}}\) _are computed recursively as follows. Assume first that_ \({\mathcal{L}}_{{\rm i},{\rm j}}\) _is positive, and let_ \(w=w_{0}\cdots w_{n-1}\) _be the positive word of length_ \(n\) _at_ \(\theta=p/q\in{\mathcal{F}}\)_. Finally, let_ \(w^{\pm}\) _be the words of the adjacent curves to the right (_\(+\)_) and left (_\(-\)_) (_\(w^{\pm}\) _is missing at_ \(\theta=\theta^{\pm}\)_). Then, for_ \(k=1,\ldots,n-1\) _the following holds:_
   1. _At_ \(\theta=\theta^{-}\) _we have_ \(w=b^{n}\)_, and_ \(w^{+}_{k}=a\) _iff_ \(k\equiv 0\,({\rm mod}\ q)\)_. This holds also for_ \(k=0\)_._
   2. _At_ \(\theta=\theta^{+}\) _we have_ \(w=a^{n}\)_, and_ \(w^{-}_{k}=b\) _iff_ \(k\equiv 0\,({\rm mod}\ q)\)_._
   3. _For_ \(\theta\in{\mathcal{F}}\setminus\{\theta^{-},\theta^{+}\}\) _we have:_ \(\alpha)\) \(w_{k}=b\) _and_ \(w^{-}_{k}=w_{k}\neq w^{+}_{k}\) _iff_ \(k\equiv n\,({\rm mod}\ q)\)_;_ \(\beta)\) \(w_{k}=a\) _and_ \(w^{-}_{k}\neq w_{k}=w^{+}_{k}\) _iff_ \(k\equiv 0\,({\rm mod}\ q)\)_._
   4. _The corresponding statements for negative chains are obtained from the above by exchanging all_ \(a\)_s and_ \(b\)_s._
The statement of the theorem excludes the case \(\mathrm{i}=0\). We remark that the critical word for the chains (8) is \(\varepsilon\) by definition, while the boundary words at the rational points are given in parts ii) 1,2).
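Under the assumptions spelled out in the sketch of section 2 (rotation plus two-interval partition), the content of theorem 1 i) can be checked numerically: list the Farey fractions (10) of a positive chain and code one sample point inside each sub-interval. For the chain of figure 1 this reproduces the partition into four critical curves; names are illustrative.

```python
from fractions import Fraction

def farey(n, lo, hi):
    # elements of the n-th Farey sequence lying in [lo, hi]
    fracs = {Fraction(p, q) for q in range(1, n + 1) for p in range(q + 1)}
    return sorted(f for f in fracs if lo <= f <= hi)

def word_on_chain(i, j, theta):
    # positive boundary word of length i at (theta, i*theta - j), assumed coding of (2)
    rho, x, w = i * theta - j, Fraction(0), []
    for _ in range(i):
        w.append('a' if x < rho else 'b')
        x = (x + theta) % 1
    return ''.join(w)

i, j = 7, 5                                   # the chain of figure 1
lo, hi = Fraction(j, i), Fraction(j + 1, i)
pts = farey(i, lo, hi)                        # Farey points of the chain
print([str(f) for f in pts])                  # ['5/7', '3/4', '4/5', '5/6', '6/7']
for a, b in zip(pts, pts[1:]):                # one sample point per critical curve
    print(word_on_chain(i, j, (a + b) / 2))   # abbbbbb, abbaabb, abaaaab, aaaaaaa
```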
Proof. i) Along the chain (6), the points \(x_{t}(\theta)\), \(t=0,\ldots,n\), of the centre are affine functions of \(\theta\) with slope \(t\) if \(w\) is positive, and slope \(-n+t\) if \(w\) is negative. If for some \(\theta\) in the range (7) two such functions coincide, then the map \(\mathrm{g}\) is periodic with period not exceeding \(n\), that is, \(\theta\in\mathcal{F}\). Conversely, any \(\theta\in\mathcal{F}\) corresponds to a periodic orbit of \(\mathrm{g}\) of period not exceeding \(n\). Note that \(\theta^{\pm}\) are the only elements of \(\mathcal{F}\) whose denominator divides \(n\), because the numerators of \(\theta^{\pm}\) are consecutive integers.
Assume first that \(\theta=p/q\in\mathcal{F}\setminus\{\theta^{-},\theta^{+}\}\). Then \(1<q<n\), because \(q\) does not divide \(n\). Periodicity implies that \(x_{0}(p/q)=x_{q}(p/q)\), and the intersection of \(x_{0}\) and \(x_{q}\) is transversal because the slopes of the two functions are different. It follows that the \(q\)th symbol of the critical word \(w\) changes at \(p/q\), which is therefore the common end-point of two adjacent segments, hence a Farey point. Since \(\theta^{\pm}\) are necessarily Farey points, and all rationals with denominator not exceeding \(n\) have been accounted for, we have shown that the elements of \(\mathcal{F}\) partition the chain \(\mathcal{L}_{\mathrm{i,j}}\) into \(|\mathcal{F}|-1\) curves, as desired.
ii) Let \(w\) be positive.
1) At \(\theta=\theta^{-}\) we have \(\rho=0\); then, according to (2), the partition element \(I_{a}\) is empty, so the code is \(b^{n}\). We have \(x_{k}=0\) iff \(k\equiv 0\,(\mathrm{mod}\ q)\), and these are precisely the values of \(k\) (which include \(k=0\)) for which \(x_{k}\in I_{a}\) for \(\theta>\theta^{-}\), that is, \(w_{k}^{+}=a\).
2) At \(\theta=\theta^{+}\) we have \(\rho=1\); then \(I_{b}\) is empty, and the code at \(\theta^{+}\) is \(a^{n}\). Again we have \(x_{k}=0\) (on the circle) iff \(k\equiv 0\,(\mathrm{mod}\ q)\), and if \(k\neq 0\) we have \(x_{k}\in I_{b}\) for \(\theta<\theta^{+}\), that is, \(w_{k}^{-}=b\). (The value \(k=0\) must be excluded here, because \(w_{0}=a\) for all \(\theta\neq\theta^{-}\).)
3) Let now \(\theta=p/q\in\mathcal{F}\setminus\{\theta^{-},\theta^{+}\}\). Then \(n\not\equiv 0\,(\mathrm{mod}\ q)\), as noted above.
\(\alpha)\) The critical curve property together with \(q\)-periodicity implies that \(x_{k}=x_{n}=\rho\) iff \(k\equiv n\,(\mathrm{mod}\ q)\). The proper symbol \(w_{k}\) at \(\theta\) is \(b\), and since \(k<n\), the slope of \(x_{k}\) is smaller than that of \(x_{n}\), so that \(x_{k}\in I_{b}\) (\(x_{k}\in I_{a}\)) in a left (right) neighbourhood of \(\theta\), that is, \(w_{k}^{-}=w_{k}\neq w_{k}^{+}\).
\(\beta)\)\(q\)-periodicity implies that \(x_{k}=x_{0}=0\) iff \(k\equiv 0\,(\mathrm{mod}\ q)\). The proper symbol \(w_{k}\) at \(\theta\) is \(a\), and since \(k>0\), the slope of \(x_{k}\) is greater than that of \(x_{0}\), so that \(x_{k}\in I_{b}\) (\(x_{k}\in I_{a}\)) in a left (right) neighbourhood of \(\theta\), that is, \(w_{k}^{-}\neq w_{k}=w_{k}^{+}\).
The proof of ii) is complete.
If \(w\) is negative, the argument develops in a symmetrical manner. At \(\theta^{+}\) we have \(\rho=0\) whence \(w=b^{n}\). As \(\theta\) decreases, all collisions of orbit points involve \(b\)s turning into \(a\)s, until we reach \(\theta^{-}\) with code \(a^{n}\). We omit the details.
The arithmetical and combinatorial aspects of a chain are illustrated in figure 1. Applying theorem 1 ii) recursively along a chain, from \(\theta^{-}\) to \(\theta^{+}\), we deduce that every letter \(b\) of the initial word changes to an \(a\) without omissions or repetitions. This translates into the following arithmetical statement.
**Corollary 2**.: _For any integers \(n>m\) and \(m\geqslant 0\), let \(\mathcal{F}\) be the subset of \(\mathcal{F}_{n}\) lying between \(m/n\) and \((m+1)/n\), and for each \(p/q\in\mathcal{F}\) consider the congruences_
\[x\equiv n\,(\mathrm{mod}\ q),\qquad x\equiv 0\,(\mathrm{mod}\ q)\]
_(which coincide if \(q\) divides \(n\)). Then, as \(p/q\) ranges in \(\mathcal{F}\), the solutions of this family of congruences form a complete set of residues modulo \(n\), and each non-zero residue is a solution of exactly one congruence._
In the above statement there is no restriction on the numerator \(m\), because the restriction that appears in theorem 1 plays no role in the proof of part ii).
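Corollary 2 is a purely arithmetical statement and can be verified directly; the sketch below (illustrative names) checks it for the chain of figure 1.

```python
from fractions import Fraction

def check_corollary2(n, m):
    lo, hi = Fraction(m, n), Fraction(m + 1, n)
    fracs = sorted({Fraction(p, q) for q in range(1, n + 1) for p in range(q + 1)
                    if lo <= Fraction(p, q) <= hi})
    hits = {k: 0 for k in range(n)}
    for f in fracs:
        q = f.denominator
        classes = {n % q, 0}                    # the two congruences; one class if q divides n
        for k in range(n):
            if k % q in classes:
                hits[k] += 1
    assert all(hits[k] >= 1 for k in range(n))      # complete set of residues mod n
    assert all(hits[k] == 1 for k in range(1, n))   # each non-zero residue: exactly once
    return hits

print(check_corollary2(7, 5))    # the chain of figure 1: residues 1..6 are each hit once
```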
The number \(M=|\mathcal{F}|-1\) of curves in a chain depends on its affine parameters \((\mathrm{i},\mathrm{j})\). Let \(n=|\mathrm{i}|\). Such a number is independent of \(n\) in only three cases, namely \(\mathrm{j}=0\), \(\mathrm{j}=n-1\) (\(M=1\)), and \(n\geqslant 3\) odd and \(2\mathrm{j}+1=n\) (\(M=2\)), plus the corresponding values for negative \(\mathrm{i}\). In all other cases, for fixed \(\mathrm{j}\), we have \(M(\mathrm{i},\mathrm{j})\to\infty\) as \(|\mathrm{i}|\to\infty\). In this case the average order of \(M(n)\) is \(3n/\pi^{2}\), which may be deduced from that of the Farey series [8, theorem 331].
Our next result provides alternative characterisations of the Farey points of a chain.
**Lemma 3**.: _Let \(\mathcal{L}_{\mathrm{i},\mathrm{j}}\) be a chain and let \(\zeta=(\theta,\rho)\in\mathcal{L}_{\mathrm{i},\mathrm{j}}\). The following statements are equivalent:_
* \(\zeta\) _is a Farey point of_ \(\mathcal{L}_{\mathrm{i},\mathrm{j}}\)_;_
* _the critical word_ \(w\) _at_ \(\zeta\) _is such that_ \(|w|<|\mathrm{i}|\)_;_
* _there exists_ \((\mathrm{i}^{\prime},\mathrm{j}^{\prime})\) _with_ \(|\mathrm{i}^{\prime}|<|\mathrm{i}|\) _and_ \(\mathrm{sign}(\mathrm{i}^{\prime})=\mathrm{sign}(\mathrm{i})\) _such that_ \(\zeta=\mathcal{L}_{\mathrm{i},\mathrm{j}}\cap\mathcal{L}_{\mathrm{i}^{\prime},\mathrm{j}^{\prime}}\) _and_ \(\zeta\) _is not a Farey point of_ \(\mathcal{L}_{\mathrm{i}^{\prime},\mathrm{j}^{\prime}}\)_._
Figure 1. The partition of the positive chain \(\mathcal{L}_{7,5}\) with affine parameters \((\mathrm{i},\mathrm{j})=(7,5)\), into four critical curves and five Farey points (solid circles), determined according to theorem 1 i). Along the chain, all boundary words have length \(\mathrm{i}=7\). Above the line we have the critical words of the curves, and below the line the Farey fractions with corresponding boundary words [see theorems 1 ii) and 6]. The bottom row displays the critical words at the Farey points, all of length smaller than \(7\), including the empty word \(\varepsilon\) of zero length at \(\theta^{\pm}\). The boundary word at a Farey point is the concatenation of a critical word and a periodic word, whose period is given by the denominator of the fraction.
Proof. For \(i=0\) all statements above are false, hence equivalent. If \(i\neq 0\) and \(\zeta\) also belongs to \(\mathcal{L}_{0,0}\) or \(\mathcal{L}_{0,-1}\), then all statements are true since the critical word of these chains is the empty word, which is both positive and negative by definition.
We now assume that \(i\neq 0\), \(\rho\neq 0,1\), and we let \(w\) be the critical word at \(\zeta\). We shall prove that \(i)\Rightarrow iii)\Rightarrow ii)\Rightarrow i).
\(i)\Rightarrow iii)\). If \(\zeta\) is a Farey point of \(\mathcal{L}_{i,j}\), then from theorem 1 i) and the fact that \(\rho\neq 0,1\) we have that \(\theta=p/q\) with \(q<|i|\) and \(q\nmid|i|\). We define \(c:=\lfloor|i|/q\rfloor\geqslant 1\) and \(i^{\prime}\) and \(j^{\prime}\) by \(i^{\prime}=i-sign(i)\,c\,q\) and \(j^{\prime}=j-sign(i)\,c\,p\). Hence \(1\leqslant|i^{\prime}|<q<|i|\), \(sign(i^{\prime})=sign(i)\) and one checks that \(i^{\prime}\,\theta-j^{\prime}=i\,\theta-j=\rho\). So \(\zeta\) lies at the intersection of \(\mathcal{L}_{i,j}\) and \(\mathcal{L}_{i^{\prime},j^{\prime}}\). Furthermore, if \(w\) is the critical word at \(\zeta\) with the same sign as \(i\), it has the minimal length \(|i^{\prime}|\) by construction. Since \(q>|w|=|i^{\prime}|\), theorem 1 i) shows that \(\zeta\) cannot be a Farey point on \(\mathcal{L}_{i^{\prime},j^{\prime}}\).
\(iii)\Rightarrow ii)\). If iii) holds, then \(\theta=(j-j^{\prime})/(i-i^{\prime})\), and hence at \(\zeta\) the critical orbit is periodic with period \(|i|-|i^{\prime}|\). Since the length of the critical word is necessarily smaller than the period, we have \(|w|<|i|-|i^{\prime}|<|i|\).
\(ii)\Rightarrow i)\). If ii) holds, then \(\zeta\) is necessarily a rational point, and the denominator of \(\theta\) is less than \(|i|\), being a divisor of \(|i|-|w|\). Thus \(\theta\in\mathcal{F}_{|i|}\), and \(\zeta\) is a Farey point from theorem 1 i).
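The reduction used in the step \(i)\Rightarrow iii)\) is immediate to carry out; the sketch below (illustrative names) applies it to the Farey point \(\theta=3/4\) of the chain \(\mathcal{L}_{7,5}\) of figure 1, producing the shorter positive chain \(\mathcal{L}_{3,2}\) through the same point.

```python
from fractions import Fraction

def shorter_chain(i, j, p, q):
    """Same-sign chain (i', j') of lemma 3 iii) through the Farey point theta = p/q
    of the chain (i, j), with rho distinct from 0 and 1."""
    s = 1 if i > 0 else -1
    c = abs(i) // q
    return i - s * c * q, j - s * c * p

i, j, p, q = 7, 5, 3, 4                      # Farey point 3/4 of the chain of figure 1
i2, j2 = shorter_chain(i, j, p, q)
theta = Fraction(p, q)
assert i * theta - j == i2 * theta - j2      # both chains pass through (3/4, 1/4)
assert 1 <= abs(i2) < q < abs(i)             # and the new chain is strictly shorter
print((i2, j2))                              # -> (3, 2)
```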
We infer from part iii) of the lemma, which itself relies on theorem 1, that Farey points exist only in the context of a given chain. More precisely, at a Farey point \(\mathcal{B}_{w}\) on a chain there is always a direction in parameter space along another chain for which the critical word \(w\) does not change for sufficiently small displacements (and so \(\mathcal{B}_{w}\) is not a Farey point on the second chain). This fact is expressed concisely by the following statement:
**Corollary 4**.: _A Farey point of a chain belongs to a critical curve transversal to the chain._
## 4. Curves through rational points
We now characterise all critical curves through a rational critical point \(\zeta\). As discussed in the previous section, equation (3) for rational \(\zeta\) has a doubly-infinite family of solutions corresponding to as many chains passing through \(\zeta\). We shall identify the chains for which \(\zeta\) belongs to a critical curve, and those for which \(\zeta\) is a Farey point; in the latter case we also determine the adjacent curves on the chain. In what follows, by the **Farey points of a critical curve** we shall mean those belonging to the closure of the curve.
If \(\theta=p/q\), then the orbit of \(0\) consists of \(q\) equally spaced points on the unit interval, and therefore \(\zeta\) must be of the form \((p/q,r/s)\), with \(s\) dividing \(q\). We call \(q\) the **denominator** of \(\zeta\). We develop \(p/q\) in continued fractions, and of the two possible continued fraction representations we choose the one whose last coefficient is unity (see [8, theorem 162]). Then the index \(n\) of the last convergent \(p_{n}/q_{n}=p/q\) is determined unambiguously. At various junctures we shall discuss the implications of this choice. We let
\[\tau=q^{\prime}\frac{r}{s},\qquad\tau^{\pm}=q^{\prime}\bigg{(}\frac{r}{s}\pm \frac{1}{q}\bigg{)}\qquad\text{where}\qquad q^{\prime}=(-1)^{n-1}q_{n-1}. \tag{11}\]
Given a rational point \(\zeta\) with \(\rho\neq 0,1\), we consider, among the positive chains containing \(\zeta\) that with minimal value of \(\mathrm{i}\), and denote its affine parameters by \((\mathbf{i}^{+},\mathbf{j}^{+})\). In a similar manner, the negative chain with minimal value of \(-\mathrm{i}\) will be denoted by \((\mathbf{i}^{-},\mathbf{j}^{-})\). These pairs will be called the **dominant affine parameters** of the point \(\zeta\). Likewise, we shall speak of the **dominant chains** (or **lines**), and the **dominant curves** of \(\zeta\). We will show that these objects exist and are unique. For uniformity of exposition, we treat the cases \(\rho=0,1\) as follows. We shall regard the segment \(\rho=0\) as the positive dominant chain of any point \(\zeta=(\theta,0)\), and the segment \(\rho=1\) as the negative dominant chain of any point \(\zeta=(\theta,1)\). In both cases, the chain consists of a single curve with the empty word. Accordingly, if \(\rho=0\), we let \((\mathbf{i}^{+},\mathbf{j}^{+})=(0,0)\) and \((\mathbf{i}^{-},\mathbf{j}^{-})=(-q,-p)\). If \(\rho=1\) we let \((\mathbf{i}^{+},\mathbf{j}^{+})=(q,p-1)\) and \((\mathbf{i}^{-},\mathbf{j}^{-})=(0,-1)\). Finally, we define the **upper** and **lower neighbours** of a rational point \(\zeta=(p/q,r/s)\) to be the points
\[\zeta\!\!\uparrow=\zeta+\Big{(}0,\frac{1}{q}\Big{)}=\Big{(}\frac{p}{q},\frac{ r}{s}+\frac{1}{q}\Big{)}\qquad\zeta\!\!\downarrow=\zeta-\Big{(}0,\frac{1}{q} \Big{)}=\Big{(}\frac{p}{q},\frac{r}{s}-\frac{1}{q}\Big{)}, \tag{12}\]
respectively. If \(\zeta\) is a critical point, then so are \(\zeta\!\!\uparrow\) and \(\zeta\!\!\downarrow\), with one of them missing if \(\rho=0,1\).
We shall consider the four quadrants with origin at \(\zeta\) and label all objects that are pertinent to such quadrants --points and curves-- with the superscripts \(\mathrm{I},\mathrm{II},\mathrm{III},\mathrm{IV}\). The superscript \(\pm\) will refer to the sign of curves, so that the positive sign refers to \(\mathrm{I},\mathrm{III}\) and the negative sign to \(\mathrm{IV},\mathrm{II}\).
**Theorem 5**.: _Let \(\zeta=(\theta,\rho)\) be a rational critical point. If \(\rho\neq 0,1\) then \(\zeta\) is an interior point of the dominant curves of \(\zeta\), and the common Farey point of four infinite pencils of curves, arranged pairwise as adjacent curves in two pencils of chains of opposite sign. Furthermore, all Farey points distinct from \(\zeta\) in a pencil belong to (the closure of) a dominant curve of a neighbour of \(\zeta\), this association being bi-unique --see figure 2. The same applies if \(\rho=0,1\), but in this case one neighbour and two pencils are missing (three pencils, if \(\theta=0,1\))._
Proof. By assumption, \(\zeta\) has the form \((p/q,r/s)\) with \(s\) dividing \(q\) and it belongs to some curve or Farey point; we begin to construct all chains through \(\zeta\). From (3) with \(q=su\) we find \(ru=\mathrm{i}p-\mathrm{j}q\), with solutions
\[\mathrm{i}_{t}=ruq^{\prime}+tq,\quad\mathrm{j}_{t}=rup^{\prime}+tp\qquad t\in \mathbb{Z}, \tag{13}\]
where \(p^{\prime}=p_{n-1}(-1)^{n-1}\) and \(q^{\prime}=q_{n-1}(-1)^{n-1}\). Thus \(pq^{\prime}-qp^{\prime}=1\).
The sequences (13) are the affine parameters of all chains containing \(\zeta\). Assume first that \(\rho\neq 0,1\), that is, \(s\neq 1\). If we had \(\mathrm{i}_{t}=0\) for some \(t\), then \(s\) would also divide \(q_{n-1}\), which in turn would yield \(s=1\), contrary to the assumption. So there are exactly two values \(t^{\pm}\) of \(t\) in (13) for which \(0<|\mathrm{i}_{t}|<q\), namely
\[t^{+}=\lceil-q^{\prime}r/s\rceil,\qquad t^{-}=\lfloor-q^{\prime}r/s\rfloor=t^ {+}-1 \tag{14}\]
while all other values of \(t\) give \(|\mathrm{i}_{t}|>q\).
With reference to (14) define
\[\mathrm{i}^{+}(\ell)=\mathrm{i}_{t^{+}+\ell}\qquad\mathrm{i}^{-}(\ell)= \mathrm{i}_{t^{-}-\ell}\qquad\ell=0,1,2,\ldots \tag{15}\]
and similarly for \(\mathrm{j}^{\pm}(\ell)\), so that the sign of \(\mathrm{i}^{\pm}(\ell)\) is \(\pm 1\) for all \(\ell\). To obtain explicit formulae for \(\mathrm{i}^{\pm}(\ell)\) and \(\mathrm{j}^{\pm}(\ell)\) as functions of \(p/q\) and \(r/s\), we use (13-15), keeping in mind that \(ru=qr/s\) and \(\lceil-x\rceil=-\lfloor x\rfloor\). We obtain the dominant affine parameters of \(\zeta\):
\[\mathbf{i}^{\pm}=\mathrm{i}^{\pm}(0)=\pm q\{\pm\tau\},\qquad\mathbf{j}^{\pm}= \mathrm{j}^{\pm}(0)=\pm p\{\pm\tau\}-\frac{r}{s}, \tag{16}\]
where \(\{\cdot\}\) denotes the fractional part, and \(\tau\) was defined in (11). Finally, we obtain, for \(\ell=0,1,2,\ldots\)
\[\mathrm{i}^{\pm}(\ell)=\pm q(\{\pm\tau\}+\ell)\qquad\mathrm{j}^{\pm}(\ell)=\pm p (\{\pm\tau\}+\ell)-\frac{r}{s}. \tag{17}\]
The above expressions give all affine parameters of all chains through \(\zeta\) in the case \(\rho\neq 0,1\). From (6) the sign of \(\mathrm{i}_{t}\) is the sign of the chain containing \(\zeta\), and \(|\mathrm{i}_{t}|=|w|\). Then, formulae (17) and lemma 3 give two infinite sequences of chains of opposite sign, such that \(\zeta\) is an interior point of a curve in the chains \([\mathrm{i}^{\pm}(0),\mathrm{j}^{\pm}(0)]\), and a Farey point in all other chains, as claimed.
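The dominant affine parameters, and with them the chains (17), can also be generated without continued fractions: by definition they are the minimal-\(|\mathrm{i}|\) solutions of \(\rho=\mathrm{i}\theta-\mathrm{j}\) of each sign, and consecutive solutions differ by \((q,p)\) as in (13). The sketch below (illustrative names; it requires \(0<\rho<1\)) uses this characterisation and, for the point \(\zeta=(3/5,2/5)\) of figure 2, returns the values \(\mathbf{i}^{+}=4\), \(\mathbf{i}^{-}=-1\) also given by (16).

```python
from fractions import Fraction

def dominant_pairs(theta, rho):
    """Dominant affine parameters (i+, j+), (i-, j-) of a rational critical point with
    0 < rho < 1: the minimal-|i| solutions of rho = i*theta - j of each sign."""
    p, q = theta.numerator, theta.denominator
    r, s = rho.numerator, rho.denominator
    ru = r * (q // s)                         # rho = ru/q, since s divides q
    i_pos = (ru * pow(p, -1, q)) % q          # modular inverse of p (Python >= 3.8)
    return [(i, (i * p - ru) // q) for i in (i_pos, i_pos - q)]

def pencil(theta, rho, ell):
    """(i^+(ell), j^+(ell)) and (i^-(ell), j^-(ell)) of (17): consecutive solutions
    of (3) differ by (q, p), cf. (13)."""
    (ip, jp), (im, jm) = dominant_pairs(theta, rho)
    p, q = theta.numerator, theta.denominator
    return (ip + ell * q, jp + ell * p), (im - ell * q, jm - ell * p)

zeta = (Fraction(3, 5), Fraction(2, 5))       # the point of figure 2
print(dominant_pairs(*zeta))                  # -> [(4, 2), (-1, -1)], as (16) gives
print([pencil(*zeta, ell) for ell in (1, 2)]) # chains on which zeta is a Farey point
```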
We now assume further that \(\rho\neq 1/q,(q-1)/q\). As indicated above, we shall use the superscripts \(\mathrm{I},\mathrm{II},\mathrm{III},\mathrm{IV}\), which refer to the four quadrants with origin at \(\zeta\), to label the points in the four sequences and other relevant quantities. We denote the dominant affine parameters of the upper neighbour \(\zeta\uparrow\) --see (12)-- by \((\mathbf{i}^{\mathrm{I}},\mathbf{j}^{\mathrm{I}})\) and \((\mathbf{i}^{\mathrm{II}},\mathbf{j}^{\mathrm{II}})\), and those of \(\zeta\downarrow\) by \((\mathbf{i}^{\mathrm{III}},\mathbf{j}^{\mathrm{III}})\) and \((\mathbf{i}^{\mathrm{IV}},\mathbf{j}^{\mathrm{IV}})\). (Thus \(\mathrm{I},\mathrm{III}\) are positive and \(\mathrm{II},\mathrm{IV}\) are negative.) Considering
Figure 2. The critical curves through (or adjacent to) the rational point \(\zeta=(3/5,2/5)\) (see theorem 5), with its two dominant lines (black) and four pencils of curves concurrent at \(\zeta\) (grey), the latter being their common Farey point. The other Farey points of the curves in a pencil lie on a dominant line of one of \(\zeta\)’s neighbours (blue), each pencil being paired with a different line. The six dominant lines feature two concurrent triples, as detailed in theorem 7.
(11) and (16), we find
\[\mathbf{i}^{\mathrm{I}}=q\{\tau^{+}\}\qquad\mathbf{i}^{\mathrm{II}}=-q\{-\tau^{+} \}\qquad\mathbf{i}^{\mathrm{III}}=q\{\tau^{-}\}\qquad\mathbf{i}^{\mathrm{IV}}=-q \{-\tau^{-}\}, \tag{18}\]
and \(\mathbf{j}^{\sigma}=\mathbf{i}^{\sigma}\frac{p}{q}-\rho^{\sigma}\), where \(\rho^{\sigma}=r/s+1/q\) if \(\sigma=\mathrm{I},\mathrm{II}\) and \(\rho^{\sigma}=r/s-1/q\) if \(\sigma=\mathrm{III},\mathrm{IV}\).
Next, for each pencil, we determine the external Farey point of each curve having \(\zeta\) as the common Farey point. The corresponding parameters are given in (17) with \(\ell\geqslant 1\). Then \(|\mathrm{i}^{\pm}(\ell)|>q>|\mathbf{i}^{\sigma}|\) for every \(\sigma\), and therefore, from lemma 3 iii) (with \((\mathrm{i}^{\prime},\mathrm{j}^{\prime})=(\mathbf{i}^{\sigma},\mathbf{j}^{ \sigma})\) and \((\mathrm{i},\mathrm{j})=(\mathrm{i}^{\pm}(\ell),\mathrm{j}^{\pm}(\ell))\)), we see that the curves of the pencil in sector \(\sigma\) cannot extend further than the dominant line \((\mathbf{i}^{\sigma},\mathbf{j}^{\sigma})\). We will now show that all Farey points in fact belong to that line.
Let us consider the Farey sequence \(\mathcal{F}\), given in (10), for the affine parameters \((\mathrm{i}^{\pm}(\ell),\mathrm{j}^{\pm}(\ell))\). From theorem 1 i) and the fact that \(p/q\neq\theta^{\pm}\), the rotation numbers of the Farey points of the curves adjacent to \(\zeta\) are three successive terms of \(\mathcal{F}\): \(p^{l}/q^{l}<p/q<p^{r}/q^{r}\). Then (see [8, section 3.1]), we have \(p^{r}q-q^{r}p=1\). Moreover \(p/q\) is the mediant of \(p^{r}/q^{r}\) and \(p^{l}/q^{l}\), and we shall compute the latter from the former.
With the notation as above, we have the candidate values for \(p^{r}\) and \(q^{r}\):
\[q^{r}_{k}=-q^{\prime}+kq\qquad p^{r}_{k}=-p^{\prime}+kp\qquad k\in\mathbb{Z}. \tag{19}\]
As \(|k|\) increases, \(p^{r}_{k}/q^{r}_{k}\) approaches \(p/q\), and the approach is from the right if \(q^{r}_{k}\) is positive. The fraction \(p^{r}_{k}/q^{r}_{k}\) belongs to the \(|\mathrm{i}^{\pm}(\ell)|\)th Farey sequence if \(0<q^{r}_{k}\leqslant|\mathrm{i}^{\pm}(\ell)|\), and so we let \(k^{\pm}\) be the largest value of \(k\) for which this property holds. We find
\[k^{\pm}(\ell)=\ell\pm t^{\pm}+\lfloor\pm\tau^{\pm}\rfloor, \tag{20}\]
where \(\tau^{\pm}\) was defined in (11). Using the quadrant superscripts, we let
\[p^{\mathrm{I}}(\ell)=p^{r}_{k^{+}},\quad q^{\mathrm{I}}(\ell)=q^{r}_{k^{+}}, \quad p^{\mathrm{IV}}(\ell)=p^{r}_{k^{-}},\quad q^{\mathrm{IV}}(\ell)=q^{r}_{k ^{-}}.\]
To harmonise the notation, we shall also use the symbols
\[\mathrm{i}^{\sigma}(\ell)=\begin{cases}\mathrm{i}^{+}(\ell)&\sigma=\mathrm{I},\mathrm{III}\\ \mathrm{i}^{-}(\ell)&\sigma=\mathrm{II},\mathrm{IV}\end{cases}\]
where \(\mathrm{i}^{\pm}(\ell)\) was defined in (17).
To compute \(p^{\mathrm{II},\mathrm{III}}(\ell)\) and \(q^{\mathrm{II},\mathrm{III}}(\ell)\) using the mediant property we let \(p^{\mathrm{II},\mathrm{III}}(\ell)=a^{\pm}p-p^{\mathrm{IV},\mathrm{I}}(\ell)\) and \(q^{\mathrm{II},\mathrm{III}}(\ell)=a^{\pm}q-q^{\mathrm{IV},\mathrm{I}}(\ell)\), where \(a^{\pm}\) is a positive integer. The required value of \(a^{\pm}\) is the largest such that \(q^{\mathrm{III},\mathrm{II}}\leqslant|\mathrm{i}^{\pm}(\ell)|\), which is
\[a^{\pm}(\ell)=2(\ell\pm t^{\pm})+\lfloor\pm\tau^{\pm}\rfloor+\lfloor\mp\tau^{ \mp}\rfloor.\]
Performing the calculation explicitly gives:
\[\begin{array}{rclclclcl}p^{\mathrm{I}}(\ell)&=&-p^{\prime}+p(\ell+t^{+}+ \lfloor\tau^{+}\rfloor)&&q^{\mathrm{I}}(\ell)&=&-q^{\prime}+q(\ell+t^{+}+ \lfloor\tau^{+}\rfloor)\\ p^{\mathrm{II}}(\ell)&=&+p^{\prime}+p(\ell-t^{-}+\lfloor-\tau^{+}\rfloor)&&q^ {\mathrm{II}}(\ell)&=&+q^{\prime}+q(\ell-t^{-}+\lfloor-\tau^{+}\rfloor)\\ p^{\mathrm{III}}(\ell)&=&+p^{\prime}+p(\ell+t^{+}+\lfloor\tau^{-}\rfloor)&&q^ {\mathrm{III}}(\ell)&=&+q^{\prime}+q(\ell+t^{+}+\lfloor\tau^{-}\rfloor)\\ p^{\mathrm{IV}}(\ell)&=&-p^{\prime}+p(\ell-t^{-}+\lfloor-\tau^{-}\rfloor)&&q^ {\mathrm{IV}}(\ell)&=&-q^{\prime}+q(\ell-t^{-}+\lfloor-\tau^{-}\rfloor).\end{array} \tag{21}\]
We have constructed four infinite sequences of Farey points, and we must now show that the Farey points of each sequence belong to (the closure of) a dominant curve of a
neighbour of \(\zeta\). We begin by showing that these points are collinear; to this end, we let
\[\theta(\ell)=\frac{p(\ell)}{q(\ell)}\qquad\rho(\ell)=\mathrm{i}(\ell)\theta(\ell )-\mathrm{j}(\ell)\qquad\zeta(\ell)=(\theta(\ell),\rho(\ell)), \tag{22}\]
where all quantities refer to the same superscript. Using the above and (21), we find
\[\lim_{\ell\to\infty}\zeta^{\sigma}(\ell)=\begin{cases}\zeta\!\uparrow&\sigma=\mathrm{I},\mathrm{II}\\ \zeta\!\downarrow&\sigma=\mathrm{III},\mathrm{IV}\end{cases} \tag{23}\]
where \(\zeta\!\uparrow\) and \(\zeta\!\downarrow\) were defined in (12).
From (22) and (21) we find, after some manipulations,
\[\alpha^{\sigma}:=\frac{\rho(\ell+1)-\rho(\ell)}{\theta(\ell+1)-\theta(\ell)}= \mathrm{i}^{\sigma}(\ell)-\gamma q^{\sigma}(\ell)\qquad\gamma=\begin{cases}+1& \sigma=\mathrm{I},\mathrm{III}\\ -1&\sigma=\mathrm{II},\mathrm{IV}\end{cases} \tag{24}\]
which is valid for \(\ell\geqslant 1\). Evaluating the last expression gives \(\alpha^{\sigma}=\mathbf{i}^{\sigma}\), as in (18), independent of \(\ell\). Thus the line to which the points \(\zeta^{\sigma}(\ell)\) as well as the limit point belong has affine parameters \((\mathbf{i}^{\sigma},\mathbf{j}^{\sigma})\), where \(\mathbf{j}^{\sigma}=\mathbf{i}^{\sigma}\frac{p}{q}-\rho^{\sigma}(\infty).\) Comparing (18) with (17), we see that \(\sigma=\mathrm{I},\mathrm{II}\) correspond to the dominant affine parameters of \(\zeta\!\uparrow\), while \(\sigma=\mathrm{III},\mathrm{IV}\) are those of \(\zeta\!\downarrow\).
We have proved that the Farey points distinct from \(\zeta\) in the \(\sigma\)-pencil, namely \(\zeta^{\sigma}(\ell),\ell=1,2\ldots\), belong to the dominant line with parameters \((\mathbf{i}^{\sigma},\mathbf{j}^{\sigma})\). It remains to show that all these Farey points belong to the closure of the dominant curve (infinitely many of them do, since their limit, the corresponding neighbour of \(\zeta\), is an interior point of the dominant curve). By theorem 1 i), we must show that the element of the \(|\mathbf{i}^{\sigma}|\)th Farey sequence which lies to the right (for \(\sigma=\mathrm{I},\mathrm{IV}\)) or to the left (for \(\sigma=\mathrm{II},\mathrm{III}\)) of \(p/q\) lies at least as far out as \(p^{\sigma}(1)/q^{\sigma}(1)\). From (17) and (18) we have \(|\mathbf{i}^{\sigma}|\leqslant q\leqslant|\mathrm{i}^{\sigma}(1)|\), and hence the corresponding Farey sequences satisfy \(\mathcal{F}_{|\mathbf{i}^{\sigma}|}\subset\mathcal{F}_{q}\subset\mathcal{F}_{|\mathbf{i}^{\sigma}(1)|}\), which proves our assertion.
We now discuss the cases \(\rho=1/q,(q-1)/q\). Let \(\rho=1/q\), i.e., \(\zeta=(\theta,\rho)=(p/q,1/q)\). The Farey points distinct from \(\zeta\) in the I- and II-pencils can be treated in the same way as before. Thus, we focus on those in the III- and IV-pencils. Since \(\rho\neq 1\), we have \(q\geqslant 2\), which gives \(\theta\neq 0,1\), i.e., \(1\leqslant p\leqslant q-1\). Thus, \(\zeta\) is in the triangular region specified by \(\theta-\rho\geqslant 0\), \(\theta+\rho\leqslant 1\), and \(\rho>0\). Since \(|\mathrm{i}^{\pm}(\ell)|>q\) for \(\ell\geqslant 1\), the line with parameters \((\mathrm{i}^{\pm}(\ell),\mathrm{j}^{\pm}(\ell))\), \(\ell\geqslant 1\) intersects the segment \(\rho=0\) (\(0\leqslant\theta\leqslant 1\)), i.e., the positive dominant curve of \(\zeta\!\downarrow=(p/q,0)\). We will now show that all the intersection points are the Farey points distinct from \(\zeta\) in the III- and IV-pencils. The \(\theta\)-coordinate of the intersection point of the lines \(\rho=\mathrm{i}^{\pm}(\ell)\theta-\mathrm{j}^{\pm}(\ell)\) and \(\rho=0\) is given by \(\theta=\mathrm{j}^{\pm}(\ell)/\mathrm{i}^{\pm}(\ell)\). Using the condition for neighbouring terms in a Farey sequence, we see that \(\mathrm{j}^{+}(\ell)/\mathrm{i}^{+}(\ell)\) and \(p/q\) (resp. \(p/q\) and \(-\mathrm{j}^{-}(\ell)/-\mathrm{i}^{-}(\ell)\)) are neighbours in \(\mathcal{F}_{|\mathrm{i}^{+}(\ell)|}\) (resp. \(\mathcal{F}_{|\mathrm{i}^{-}(\ell)|}\)). This proves the assertion for the case \(\rho=1/q\). In the case of \(\rho=(q-1)/q\), i.e., \(\zeta=(\theta,\rho)=(p/q,(q-1)/q)\), we can similarly show that the line with parameters \((\mathrm{i}^{\pm}(\ell),\mathrm{j}^{\pm}(\ell))\), \(\ell\geqslant 1\) intersects the segment \(\rho=1\) (\(0\leqslant\theta\leqslant 1\)), i.e., the negative dominant curve of \(\zeta\!\uparrow=(p/q,1)\) and that all the intersection points are the Farey points distinct from \(\zeta\) in the I- and II-pencils.
The final item in the proof are the boundary cases \(\rho=0,1\). By definition, a rational critical point \(\zeta\) with \(\rho=0\) has the following affine parameters for positive and negative
chains: for \(\ell=0,1,2,\ldots\)
\[\mathrm{i}^{+}(\ell)=q\ell\qquad\mathrm{j}^{+}(\ell)=p\ell\qquad\mathrm{i}^{-}( \ell)=-q(\ell+1)\qquad\mathrm{j}^{-}(\ell)=-p(\ell+1). \tag{25}\]
Likewise, a rational critical point \(\zeta\) with \(\rho=1\) has the following affine parameters for positive and negative chains: for \(\ell=0,1,2,\ldots\)
\[\mathrm{i}^{+}(\ell)=q(\ell+1)\qquad\mathrm{j}^{+}(\ell)=p(\ell+1)-1\qquad \mathrm{i}^{-}(\ell)=-q\ell\qquad\mathrm{j}^{-}(\ell)=-p\ell-1. \tag{26}\]
We first consider the case where \(\rho=0\) and \(\theta\neq 0,1\), i.e., \(\zeta\) is of the form \(\zeta=(\theta,\rho)=(p/q,0)\) with \(q\geqslant 2\). In this case, the III- and IV-pencils are missing. Using (18) and (25), we see that the \(\theta\)-coordinate, denoted by \(\theta^{\mathrm{I}}(\ell)\), of the intersection point of the line \(\rho=\mathrm{i}^{+}(\ell)\theta-\mathrm{j}^{+}(\ell)\), \(\ell\geqslant 1\) and the positive dominant line of \(\zeta\!\uparrow=(p/q,1/q)\) is given by
\[\theta^{\mathrm{I}}(\ell)=\begin{cases}\frac{p\ell-p^{\prime}}{q\ell-q^{ \prime}}&q^{\prime}>0\\ \frac{p\ell-p-p^{\prime}}{q\ell-q-q^{\prime}}&q^{\prime}<0.\end{cases}\]
Similarly, the \(\theta\)-coordinate, denoted by \(\theta^{\mathrm{II}}(\ell)\), of the intersection point of the line \(\rho=\mathrm{i}^{-}(\ell)\theta-\mathrm{j}^{-}(\ell)\), \(\ell\geqslant 1\) and the negative dominant line of \(\zeta\!\uparrow\) is given by
\[\theta^{\mathrm{II}}(\ell)=\begin{cases}\frac{p\ell+p^{\prime}}{q\ell+q^{\prime}}&q^{\prime}>0\\ \frac{p\ell+p+p^{\prime}}{q\ell+q+q^{\prime}}&q^{\prime}<0.\end{cases}\]
Using the condition for neighbouring terms in a Farey sequence, we see that \(p/q\) and \(\theta^{\mathrm{I}}(\ell)\) (resp. \(\theta^{\mathrm{II}}(\ell)\) and \(p/q\)) are neighbours in \(\mathcal{F}_{|\mathrm{i}^{+}(\ell)|}\) (resp. \(\mathcal{F}_{|\mathrm{i}^{-}(\ell)|}\)). Thus, all the intersection points are the Farey points distinct from \(\zeta\) in the I- and II-pencils. Since \(\mathcal{F}_{|\mathbf{i}^{\mathrm{I}}|}\subset\mathcal{F}_{q}=\mathcal{F}_{|\mathrm{i}^{+}(1)|}\) and \(\mathcal{F}_{|\mathbf{i}^{\mathrm{II}}|}\subset\mathcal{F}_{q}\subset\mathcal{F}_{|\mathrm{i}^{-}(1)|}\), all these Farey points belong to the closure of the dominant curves of \(\zeta\!\uparrow\). In the case where \(\rho=1\) and \(\theta\neq 0,1\), i.e., \(\zeta\) is of the form \(\zeta=(\theta,\rho)=(p/q,1)\) with \(q\geqslant 2\), the missing pencils are I and II. In this case, we can do the same to show that the Farey points distinct from \(\zeta\) in the III- and IV-pencils, respectively, belong to the positive and negative dominant curves of \(\zeta\!\downarrow\). Lastly, we consider \(\zeta\) with \(q=1\), i.e., the four corners \((0,0)\), \((1,0)\), \((0,1)\), \((1,1)\) of the parameter space, each of which has only one pencil. It is easy to see that the Farey points of the I-pencil of \((0,0)\) and those of the II-pencil of \((1,0)\) belong to \(\rho=1\) (\(0\leqslant\theta\leqslant 1\)), i.e., the negative dominant curve of \((0,1)\) and \((1,1)\). We can also see that the Farey points of the IV-pencil of \((0,1)\) and those of the III-pencil of \((1,1)\) belong to \(\rho=0\) (\(0\leqslant\theta\leqslant 1\)), i.e., the positive dominant curve of \((0,0)\) and \((1,0)\).
We remark that the particular choice of continued fraction representation for \(p/q\) has little effect on the above argument. It merely causes a shift by one unit in the quantities \(t^{\pm}\) in (14).
To complete our analysis of the critical curves of the map g, we now characterise the words of all curves incident to a rational point \(\zeta\), in terms of the word at \(\zeta\).
**Theorem 6**.: _Let \(\zeta\) be a rational critical point with denominator \(q\), let \(w^{\sigma}(\ell)\), \(\ell\geqslant 0\) be the word of the \(\ell\)th curve adjacent to \(\zeta\) in the quadrant \(\sigma\). Let \(u^{+}\) (resp. \(u^{-}\)) be the word of the positive (resp. negative) dominant curve of \(\zeta\). Let \(v^{+}\) (resp. \(v^{-}\)) be the word obtained
_by switching the first letter of \(u^{+}\) (resp. \(u^{-}\)). Then \(|u^{+}u^{-}|=|u^{-}u^{+}|=q\) and_
\[w^{\sigma}(\ell)=\begin{cases}u^{+}\left(v^{-}u^{+}\right)^{\ell}&\sigma=\mathrm{ I}\\ u^{-}\left(u^{+}v^{-}\right)^{\ell}&\sigma=\mathrm{II}\\ u^{+}\left(u^{-}v^{+}\right)^{\ell}&\sigma=\mathrm{III}\\ u^{-}\left(v^{+}u^{-}\right)^{\ell}&\sigma=\mathrm{IV}.\end{cases}\]
Proof. Assume first that \(\rho\neq 0,1\). Let
\[u^{\sigma}=\begin{cases}u^{+}u^{-}&\sigma=\mathrm{I},\mathrm{III}\\ u^{-}u^{+}&\sigma=\mathrm{II},\mathrm{IV}.\end{cases}\]
Then \(u^{\sigma}\) is a periodic boundary word at \(\zeta\), for the initial conditions \(0\) or \(\rho\). The periodic orbit has no other boundary point because \(\zeta\) lies in the interior of the curve of both \(u^{+}\) and \(u^{-}\). Thus \(|u^{\sigma}|=q\). Let \(w^{\sigma}(\ell)\) be the word of the curve adjacent to \(\zeta\) in the quadrant \(\sigma\), and let \(n^{\sigma}(\ell)\) be the length of this word. From (17) we find
\[n^{\sigma}(\ell)=|\mathrm{i}^{\pm}(\ell)|=q(\{\pm\tau\}+\ell),\]
with the usual convention on sign, and the quotient of division of \(n^{\sigma}(\ell)\) by \(q\) is given by
\[\lfloor n^{\sigma}(\ell)/q\rfloor=\lfloor\{\pm\tau\}+\ell\rfloor=\ell.\]
The word \(w^{\sigma}(\ell)\) is now computed from theorem 1, part ii). Then \(w^{\sigma}(\ell)\) will consist of \(\ell\) repetitions of a modification of \(u^{\sigma}\), followed by a modification of \(u^{\pm}\), where the modifications are performed on the symbols congruent to \(|\mathrm{i}^{\sigma}(0)|\) modulo \(q\) if the curve is on the right of \(\zeta\) (\(\sigma=\mathrm{I},\mathrm{IV}\)), or those congruent to \(0\) modulo \(q\), except the first symbol, if the curve is on the left of \(\zeta\) (\(\sigma=\mathrm{II},\mathrm{III}\)). This gives \(w^{\sigma}(\ell)\) in the statement.
We now consider the case \(\rho=0\). By definition, \(u^{+}=\varepsilon\) and \(u^{-}=b^{q}\), where \(\varepsilon\) denotes the empty word [see discussion preceding (12)]. Obviously, \(|u^{+}u^{-}|=|u^{-}u^{+}|=q\). For \(\sigma=\mathrm{I},\mathrm{II}\), we denote by \(w^{\sigma}(\ell)\) the word of the curve adjacent to \(\zeta\) in the quadrant \(\sigma\) and put \(n^{\sigma}(\ell)=|w^{\sigma}(\ell)|\), as above. From (25) we have
\[n^{\mathrm{I}}(\ell)=|\mathrm{i}^{+}(\ell)|=q\ell\qquad\quad n^{\mathrm{II}}( \ell)=|\mathrm{i}^{-}(\ell)|=q(\ell+1).\]
The word \(w\) of length \(n^{\mathrm{I}}(\ell)=q\ell\) at \(\zeta\) is given by \(w=b^{q\ell}\). Thus, theorem 1 ii) gives
\[w^{\mathrm{I}}(\ell)=\left(ab^{q-1}\right)^{\ell}=\varepsilon\left(ab^{q-1} \varepsilon\right)^{\ell}=u^{+}\left(v^{-}u^{+}\right)^{\ell}.\]
Likewise, the word \(w\) of length \(n^{\mathrm{II}}(\ell)=q(\ell+1)\) at \(\zeta\) is \(w=b^{q(\ell+1)}\), and the same theorem gives
\[w^{\mathrm{II}}(\ell)=b^{q}\left(ab^{q-1}\right)^{\ell}=b^{q}\left(\varepsilon ab ^{q-1}\right)^{\ell}=u^{-}\left(u^{+}v^{-}\right)^{\ell}.\]
This completes the proof of the case \(\rho=0\). The case \(\rho=1\) is proved in the same way.
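Theorem 6 can be tested numerically under the assumed coding of the earlier sketches: compute \(u^{\pm}\) by coding the centre at \(\zeta\) itself, build \(w^{\sigma}(\ell)\) from the stated formulae, and compare with the word obtained by coding a point inside the corresponding adjacent curve. The sketch below (illustrative names) does this for the quadrant-I pencil of \(\zeta=(3/5,2/5)\).

```python
from fractions import Fraction

def code(theta, rho, start, length):
    # assumed coding of (2) along the assumed rotation (1)
    x, w = start, []
    for _ in range(length):
        w.append('a' if x < rho else 'b')
        x = (x + theta) % 1
    return ''.join(w)

theta, rho = Fraction(3, 5), Fraction(2, 5)          # the point of figure 2
p, q = theta.numerator, theta.denominator
r, s = rho.numerator, rho.denominator
ip = (r * (q // s) * pow(p, -1, q)) % q              # dominant i+, as before
jp = (ip * p - r * (q // s)) // q
im = ip - q                                          # dominant i-

u_plus = code(theta, rho, Fraction(0), ip)           # word u+ of the positive dominant curve
u_minus = code(theta, rho, rho, -im)                 # word u- of the negative dominant curve
v_minus = ('b' if u_minus[0] == 'a' else 'a') + u_minus[1:]
assert len(u_plus) + len(u_minus) == q               # |u+ u-| = q, as in the theorem

for ell in range(4):
    predicted = u_plus + (v_minus + u_plus) * ell    # quadrant-I formula of theorem 6
    i_l, j_l = ip + ell * q, jp + ell * p            # chain of the ell-th curve, cf. (17)
    th = theta + Fraction(1, 2 * q * i_l)            # a point inside that adjacent curve
    assert predicted == code(th, i_l * th - j_l, Fraction(0), i_l)
print(u_plus, u_minus, "theorem 6 verified for quadrant I")
```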
## 5. Triple points and convergents
In the previous section we considered the solutions of (3) for fixed rational \((\theta,\rho)\), resulting in infinitely many chains passing through that point. A global view on symbolic
dynamics may be gained by considering a different set of chains, namely those corresponding to the set \(\mathcal{N}_{n}\) of solutions of (3) with \(|\mathrm{i}|\) bounded by \(n\), and \(\mathrm{j}\) subject to the bounds \(J(\mathrm{i})\) given in (9):
\[\mathcal{N}_{n}=\bigcup_{\substack{|\mathrm{i}|\leqslant n\\ J(\mathrm{i})}}\mathcal{L}_{\mathrm{i},\mathrm{j}}. \tag{27}\]
By construction, the boundary words associated to the chains in \(\mathcal{N}_{n}\) contain all possible critical words of length not exceeding \(n\) and either sign. For brevity, we shall not develop the analysis of \(\mathcal{N}_{n}\) here, but merely prove a geometric theorem, illustrated in figure 3, which deals with a subset of \(\mathcal{N}_{n}\) consisting of six chains near a rational critical point. This result displays a connection between intersections of chains and convergents of continued fractions.
Let \(\zeta=(p/q,r/s)\) be a rational critical point with two neighbours \(\zeta\!\uparrow\) and \(\zeta\!\downarrow\), that is, \(r/s\neq 0,1\). We recall (see beginning of section 4) that \(p_{k}/q_{k}\) denote the convergents of the continued fraction expansion of \(p/q=p_{n}/q_{n}\), chosen so as to have the last coefficient equal to one. Then \(p_{n-1}/q_{n-1}\) is the rational closest to \(p/q\) among those with denominator less than \(q\). (This is not the case if the last coefficient is greater than one.) We define
\[\mu=2\lfloor\tau\rfloor-\lfloor\tau^{-}\rfloor-\lfloor\tau^{+}\rfloor\qquad \quad\psi_{\mu}(x)=\begin{cases}\lfloor x\rfloor&\mu=+1\\ \lceil x\rceil&\mu=-1,\end{cases} \tag{28}\]
where \(\tau,\tau^{-},\tau^{+}\) are given in (11).
A **triple point** of \(\zeta\) is a common point of three concurrent dominant lines, which comprise one line from each of \(\zeta,\zeta\!\uparrow,\zeta\!\downarrow\). The next result characterises all triple points.
Figure 3. Examples of triple points for \(\zeta=(3/5,2/5)\) (left, type I) and for \(\zeta=(3/7,2/7)\) (right, type II), as specified in theorem 7. The coordinates are normalised by shifting \(\zeta\) to the origin, and then scaling \(\rho\) by \(q\), and \(\theta\) by \(q^{2}\).
**Theorem 7**.: _Let \(\zeta=(p/q,r/s)\) be a rational critical point with two neighbours, let \(\mu\) and \(\psi\) be given by (28), and let_
\[\chi_{\mu}^{(1)}=\bigg{(}\frac{p_{n-1}}{q_{n-1}},\frac{\psi_{\mu}(\tau)}{q_{n-1} }(-1)^{n-1}\bigg{)}\qquad\quad\chi_{\mu}^{(2)}=\bigg{(}\frac{p_{n-2}}{q_{n-2}}, \frac{rq_{n}}{sq_{n-2}}-\frac{\psi_{\mu}(\tau)}{q_{n-2}}(-1)^{n-1}\bigg{)}.\]
_Then all \(\zeta\) have two triple points, specified below. The triple points of \(\zeta=(p/q,1/q)\) are:_
\[\begin{array}{ll}\chi_{-1}^{(1)}\mbox{ and }\chi_{-1}^{(2)}&\mbox{if $n$ is odd \ (type I);}\\ \chi_{+1}^{(2)}\mbox{ and }\chi_{-1}^{(2)}&\mbox{if $n$ is even \ (type II).}\end{array}\]
_The triple points of \(\zeta=(p/q,(q-1)/q)\) are:_
\[\begin{array}{ll}\chi_{+1}^{(1)}\mbox{ and }\chi_{+1}^{(2)}&\mbox{if $n$ is odd \ (type I);}\\ \chi_{+1}^{(2)}\mbox{ and }\chi_{-1}^{(2)}&\mbox{if $n$ is even \ (type II).}\end{array}\]
_The triple points of all other \(\zeta\) are:_
\[\begin{array}{ll}\chi_{\mu}^{(1)}\mbox{ and }\chi_{\mu}^{(2)}&\mbox{if $\mu \neq 0\ \ (type I);$}\\ \chi_{+1}^{(2)}\mbox{ and }\chi_{-1}^{(2)}&\mbox{if $\mu=0\ \ (type II).}\end{array}\]
The cases listed above as I and II will be referred to as the _generic cases_ --see figure 3.
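Before turning to the proof, the statement can be checked on an example by brute force: take the two dominant lines of each of \(\zeta\), \(\zeta\!\uparrow\), \(\zeta\!\downarrow\) and intersect one line from each triple. The sketch below (illustrative names) does this; for \(\zeta=(3/5,2/5)\) it returns the two triple points \((2/3,1/3)\) and \((1/2,1/2)\), in agreement with theorem 7 and figure 3 (type I).

```python
from fractions import Fraction
from itertools import product

def dominant_lines(theta, rho):
    """Both dominant lines (i, j) of a rational critical point; the boundary
    conventions for rho = 0, 1 follow the definitions preceding (12)."""
    p, q = theta.numerator, theta.denominator
    if rho == 0:
        return [(0, 0), (-q, -p)]
    if rho == 1:
        return [(q, p - 1), (0, -1)]
    r, s = rho.numerator, rho.denominator
    ip = (r * (q // s) * pow(p, -1, q)) % q
    jp = (ip * p - r * (q // s)) // q
    return [(ip, jp), (ip - q, jp - p)]

def triple_points(theta, rho):
    q = theta.denominator
    up, down = rho + Fraction(1, q), rho - Fraction(1, q)   # the neighbours (12)
    pts = set()
    for (i1, j1), (i2, j2), (i3, j3) in product(dominant_lines(theta, down),
                                                dominant_lines(theta, rho),
                                                dominant_lines(theta, up)):
        if i1 == i2:                                         # parallel lines: no triple point
            continue
        th = Fraction(j1 - j2, i1 - i2)                      # intersection of lines 1 and 2
        ro = i1 * th - j1
        if i3 * th - j3 == ro:                               # third line concurrent
            pts.add((th, ro))
    return pts

print(triple_points(Fraction(3, 5), Fraction(2, 5)))  # -> the points (2/3, 1/3) and (1/2, 1/2)
```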
Proof. Assume first that \(r/s\neq 1/q,(q-1)/q\). We choose a dominant line from each point, and form the matrix whose rows represent the chosen lines. We will show that the determinant of exactly two such matrices must vanish. These determinants take the form [cf. eq.(17)]
\[D(\mu_{1},\mu_{2},\mu_{3})=\begin{vmatrix}\mu_{1}\{\mu_{1}\tau^{-}\}&-1&1\\ \mu_{2}\{\mu_{2}\tau\}&0&1\\ \mu_{3}\{\mu_{3}\tau^{+}\}&1&1\end{vmatrix}\qquad\mu_{1},\mu_{2},\mu_{3}\in \{+1,-1\},\]
where the rows represent the lines through \(\zeta\!\downarrow\!,\zeta\) and \(\zeta\!\uparrow\), respectively, and for each row there is one choice of sign, to select the positive or negative line through that point. To normalise the determinant, we have placed the origin at \(\zeta\), divided the first column by \(q\), and multiplied the second column by \(q\).
Using the identities \(\tau^{+}-2\tau+\tau^{-}=0\), \(\{x\}=x-\lfloor x\rfloor\), and \(-\lfloor-x\rfloor=\lceil x\rceil\), we obtain
\[D(\mu_{1},\mu_{2},\mu_{3}) = -\mu_{1}\{\mu_{1}\tau^{-}\}+2\mu_{2}\{\mu_{2}\tau\}-\mu_{3}\{\mu_{ 3}\tau^{+}\}\] \[= \mu_{1}\lfloor\mu_{1}\tau^{-}\rfloor-2\mu_{2}\lfloor\mu_{2}\tau \rfloor+\mu_{3}\lfloor\mu_{3}\tau^{+}\rfloor\] \[= \psi_{\mu_{1}}(\tau^{-})-2\psi_{\mu_{2}}(\tau)+\psi_{\mu_{3}}(\tau ^{+}),\]
where \(\psi\) is defined in (28). Thus all eight determinants are integers, and the vanishing of a determinant is equivalent to \(\psi_{\mu_{2}}(\tau)\) being the mid-point of \(\psi_{\mu_{1}}(\tau^{-})\) and \(\psi_{\mu_{3}}(\tau^{+})\), that is, to \(\psi_{\mu_{1}}(\tau^{-}),\psi_{\mu_{2}}(\tau),\psi_{\mu_{3}}(\tau^{+})\) forming an arithmetic progression.
The sequence \(\lfloor\tau^{-}\rfloor\), \(\lfloor\tau\rfloor\), \(\lfloor\tau^{+}\rfloor\) is non-decreasing if \(q^{\prime}>0\) and non-increasing if \(q^{\prime}<0\). Since \(|q^{\prime}|/q<1\), we have \(|\lfloor\tau^{\pm}\rfloor-\lfloor\tau\rfloor|\leqslant 1\), and hence such a sequence can assume only one of the following values
\[(k,k,k),\quad(k,k,k\pm 1),\quad(k,k\pm 1,k\pm 1),\quad(k,k\pm 1,k\pm 2) \tag{29}\]
for some integer \(k\), where the sign is that of \(q^{\prime}\).
Since \(r/s\neq 1/q,(q-1)/q\), none of the \(\tau\)s is an integer, and so we have
\[\psi_{-1}(x)=\lceil x\rceil=\lfloor x\rfloor+1=\psi_{1}(x)+1,\hskip 28.452756ptx \in\{\tau^{-},\tau,\tau^{+}\}.\]
With this in mind, we match each sequence in (29) with two \(\mu\)-sequences so as to transform it into an arithmetic progression.
\[\begin{array}{cccccc}&\lfloor\tau^{-}\rfloor&\lfloor\tau\rfloor&\lfloor\tau^{+}\rfloor&\mu_{1}\mu_{2}\mu_{3}&\mu_{1}\mu_{2}\mu_{3}\\ \\ 1&k&k&k&+++&---\\ 2&k&k+1&k+2&+++&---\\ 3&k&k-1&k-2&+++&---\\ 4&k&k&k+1&--+&+--\\ 5&k&k+1&k+1&-++&++-\\ 6&k&k&k-1&++-&-++\\ 7&k&k-1&k-1&+--&--+\end{array} \tag{30}\]
The above argument shows that in general a rational critical point has two \(\mu\)-sequences corresponding to two triple points, which we now compute. We recall that \(p_{k}/q_{k}\) are the convergents of \(p/q=p_{n}/q_{n}\). We let \(p^{\prime}=(-1)^{n-1}p_{n-1}\) and \(q^{\prime}=(-1)^{n-1}q_{n-1}\) [cf. (11)], whence \(pq^{\prime}-p^{\prime}q=1\).
We choose to consider the intersection of the line through \(\zeta\!\downarrow\) with slope \(q\mu_{1}\{\mu_{1}\tau^{-}\}\) with the line through \(\zeta\) with slope \(q\mu_{2}\{\mu_{2}\tau\}\). Letting
\[\nabla=\mu_{1}\lfloor\mu_{1}\tau^{-}\rfloor-\mu_{2}\lfloor\mu_{2}\tau\rfloor= \psi_{\mu_{1}}(\tau^{-})-\psi_{\mu_{2}}(\tau), \tag{31}\]
we obtain for the rotation number
\[\theta = \frac{p}{q}+\frac{1}{q^{2}(\mu_{1}\{\mu_{1}\tau^{-}\}-\mu_{2}\{ \mu_{2}\tau\})}=\frac{pq^{\prime}+pq\nabla-1}{q(q^{\prime}+q\nabla)}=\frac{p^ {\prime}+p\nabla}{q^{\prime}+q\nabla}\] \[= \frac{(-1)^{n-1}p_{n-1}+p_{n}\nabla}{(-1)^{n-1}q_{n-1}+q_{n} \nabla}.\]
Having assumed that the last coefficient of the continued fractions of \(p_{n}/q_{n}\) is unity, we have \(p_{n}-p_{n-1}=p_{n-2}\) and \(q_{n}-q_{n-1}=q_{n-2}\). We find
\[\theta=\begin{cases}p_{n-1}/q_{n-1}&\nabla=0\\ p_{n-2}/q_{n-2}&\nabla=(-1)^{n}.\end{cases} \tag{32}\]
We now show that \(\nabla\) does not assume any other value. With reference to table (30) we find:
\[\begin{array}{ccccc}\mu_{1}\mu_{2}&\nabla&\nabla=0&\nabla=-1&\nabla=+1\\ \\ ++&\lfloor\tau^{-}\rfloor-\lfloor\tau\rfloor&1,6&2,5&3\\ --&\lceil\tau^{-}\rceil-\lceil\tau\rceil&1,4&2&3,7\\ +-&\lfloor\tau^{-}\rfloor-\lceil\tau\rceil&7&4&\\ -+&\lceil\tau^{-}\rceil-\lfloor\tau\rfloor&5&&6\end{array} \tag{33}\]
where the last three columns list the cases in (30) where each value of \(\nabla\) is attained. Considering that \(n\) is odd in cases 2,4,5 and \(n\) is even in cases 3,6,7, we see that, if \(\nabla\neq 0\), then \(\nabla\) is given by \((-1)^{n}\).
We now divide the rows of table (30) into two groups, namely the rows 1-3 where \(\mu=0\) [\(\mu\) is defined in (28)] (that is, \(\lfloor\tau^{-}\rfloor,\lfloor\tau\rfloor,\lfloor\tau^{+}\rfloor\) form an arithmetic progression), and the rows 4-7 where \(\mu\neq 0\). In the latter cases we verify that both values of \(\theta\) listed in (32) occur. Thus the triple points are located on opposite sides of \(\zeta\).
In the former case only one value of \(\theta\) occurs, and both triple points lie on the same side of \(\zeta\). To determine the value of \(\theta\), we note that case 1 in (30) does not actually occur. Indeed this would require
\[|\tau^{+}-\tau^{-}|=2\frac{q_{n-1}}{q_{n}}<1\]
or \(q_{n-1}/q_{n}<1/2\). Since \(q_{n-2}+q_{n-1}=q_{n}\), this would give \(q_{n-2}>q_{n-1}\), which is a contradiction. Therefore \(\theta=p_{n-2}/q_{n-2}\) for both points.
It remains to determine the value of \(\rho\) at the triple point. The dominant lines of \(\zeta\) have equation
\[\rho-\frac{r}{s}=q_{n}\mu_{2}\{\mu_{2}\tau\}\Big{(}\theta-\frac{p_{n}}{q_{n}} \Big{)},\]
where \(\mu_{2}\) is determined according to table (30). Letting \(\theta=p_{n-1}/q_{n-1}\), we find
\[\rho = \frac{r}{s}+q_{n}\mu_{2}(\mu_{2}\tau-\lfloor\mu_{2}\tau\rfloor) \frac{(-1)^{n}}{q_{n-1}q_{n}}\] \[= -\mu_{2}\lfloor\mu_{2}\tau\rfloor\frac{(-1)^{n}}{q_{n-1}}=\psi_ {\mu_{2}}(\tau)\frac{(-1)^{n-1}}{q_{n-1}},\]
and we have obtained the triple point \(\chi_{\mu}^{(1)}\). One verifies that \(\mu_{2}=\mu\). Letting \(\theta=p_{n-2}/q_{n-2}\), we find, using [8, theorem 151]
\[\rho=\frac{rq_{n}}{sq_{n-2}}-\psi_{\mu_{2}}(\tau)\frac{(-1)^{n-1}}{q_{n-2}},\]
which is \(\chi_{\mu}^{(2)}\). In this case, from table (30), we see that if \(\mu\neq 0\), then \(\mu_{2}=\mu\), while for \(\mu=0\), both signs are allowed.
We now discuss the cases \(r/s=1/q\) and \(r/s=(q-1)/q\). If \(q=2\), then \(\zeta=(1/2,1/2)\), and we see immediately that the triple points of \(\zeta\) are \((0,0)\) and \((0,1)\) [see discussion preceding (12)].
Let \(r/s=(q-1)/q\) with \(q\geqslant 3\). Then, \(\zeta\!\uparrow\) is on \(\rho=1\), and the slope of a dominant line of \(\zeta\!\uparrow\) is \(q(\mu_{3}+1)/2\) from (26). This gives the condition for three concurrent dominant lines as follows:
\[\psi_{\mu_{1}}(\tau^{-})-2\psi_{\mu_{2}}(\tau)+\tau^{+}-\frac{\mu_{3}+1}{2}=0. \tag{34}\]
As noted above, case 1 in (30) does not occur. Besides case 1, cases 3, 4, 5, 6 do not occur either: in fact, cases 3, 5, 6 do not occur since \(\tau^{+}=q^{\prime}\) is an integer and \(q_{n-1}/q<1\) for \(q\geqslant 2\). Likewise, case 4 does not occur since \(|\tau^{+}-\tau^{-}|=2q_{n-1}/q>1\) for \(q\geqslant 3\). For the remaining cases 2, 7, we give \(\mu\)-sequences satisfying (34):
\[\begin{array}{cccccc}&\lfloor\tau^{-}\rfloor&\lfloor\tau\rfloor&\lfloor\tau^{+}\rfloor=\tau^{+}&\mu_{1}\mu_{2}\mu_{3}&\mu_{1}\mu_{2}\mu_{3}\\ \\ 2&k&k+1&k+2&-++&++-\\ 7&k&k-1&k-1&+++&---\end{array}\]
In case 2, \(n\) is odd, and \(\nabla\) defined in (31) is 0 if \(\mu_{1}\mu_{2}\mu_{3}=-++\), and \(-1=(-1)^{n}\) if \(\mu_{1}\mu_{2}\mu_{3}=++-\). Thus, (32) also specifies the \(\theta\) value of a common point for this case. Since \(\mu_{2}=+1\) for both \(\mu\)-sequences, we have \(\chi_{+1}^{(1)}\) and \(\chi_{+1}^{(2)}\) as the triple points for case 2. In case 7, \(n\) is even, and \(\nabla=1=(-1)^{n}\) for both \(\mu\)-sequences. Thus, by (32), only \(\theta=p_{n-2}/q_{n-2}\) occurs, and we have \(\chi_{+1}^{(2)}\) and \(\chi_{-1}^{(2)}\) as the triple points for case 7.
In the case of \(r/s=1/q\) with \(q\geqslant 3\), \(\zeta\,\downarrow\) is on \(\rho=0\), and the slope of a dominant line of \(\zeta\!\downarrow\) is \(q(\mu_{1}-1)/2\) [cf. (25)]. This gives the following condition for three concurrent dominant lines:
\[\tau^{-}-\frac{\mu_{1}-1}{2}-2\psi_{\mu_{2}}(\tau)+\psi_{\mu_{3}}(\tau^{+})=0. \tag{35}\]
Incidentally, \(\tau^{-}=0\). By an argument similar to that given above, we can see that only cases 3, 4 in (30) occur. By (35), we have the following table:
\[\begin{array}{cccccc}&\lfloor\tau^{-}\rfloor=\tau^{-}&\lfloor\tau\rfloor&\lfloor\tau^{+}\rfloor&\mu_{1}\mu_{2}\mu_{3}&\mu_{1}\mu_{2}\mu_{3}\\ \\ 3&k&k-1&k-2&+++&---\\ 4&k&k&k+1&--+&+--\end{array}\]
where \(k\) is actually 0. Let
\[\nabla^{\prime}=\mu_{3}\lfloor\mu_{3}\tau^{+}\rfloor-\mu_{2}\lfloor\mu_{2} \tau\rfloor=\psi_{\mu_{3}}(\tau^{+})-\psi_{\mu_{2}}(\tau).\]
Considering the intersection of dominant lines through \(\zeta\!\uparrow\) and \(\zeta\), we have the \(\theta\) value of a common point as follows:
\[\theta=\begin{cases}p_{n-1}/q_{n-1}&\nabla^{\prime}=0\\ p_{n-2}/q_{n-2}&\nabla^{\prime}=(-1)^{n-1}.\end{cases}\]
In case 3, \(n\) is even, and \(\nabla^{\prime}=-1=(-1)^{n-1}\) for both \(\mu\)-sequences. Thus, we have \(\chi_{+1}^{(2)}\) and \(\chi_{-1}^{(2)}\) as the triple points for case 3. In case 4, \(n\) is odd, and \(\nabla^{\prime}=0\) if \(\mu_{1}\mu_{2}\mu_{3}=--+\), and \(\nabla^{\prime}=1=(-1)^{n-1}\) if \(\mu_{1}\mu_{2}\mu_{3}=+--\). Since \(\mu_{2}=-1\) for both \(\mu\)-sequences, we have \(\chi_{-1}^{(1)}\) and \(\chi_{-1}^{(2)}\) as the triple points for case 4.
We note that \(\zeta=(1/2,1/2)\) can be classified as either \(\zeta=(p/q,1/q)\) or \((p/q,(q-1)/q)\), since \(n=2\) is even and its triple points \((0,0)\) and \((0,1)\) are \(\chi_{+1}^{(2)}\) and \(\chi_{-1}^{(2)}\).
The above argument applies to continued fraction representations for which the last coefficient is chosen to be unity. If instead the last coefficient is greater than 1, then we write \(p/q=\bar{p}_{\bar{n}}/\bar{q}_{\bar{n}}\), where \(\bar{n}=n-1\). We find
\[\bar{q}_{\bar{n}-1}=q_{n-2}\qquad\text{and}\qquad\bar{q}_{\bar{n}}-\bar{q}_{ \bar{n}-1}=q_{n-1}\]
and similarly for \(\bar{p}_{\bar{n}}\). It follows that the two values of \(\theta\) in (32) are merely exchanged, yielding the same result.
Considering the slopes of the lines at a triple point, from the above theorem and lemma 3 iii) we obtain at once:
**Corollary 8**.: _A triple point of type I is the Farey point of at least one of the three concurrent curves; for type II, it is the Farey point of at least two curves._
We finally consider sequences of rational critical points approaching an irrational one. Recall (see beginning of section 2) that the point \(\zeta=(\theta,\rho)\) is a critical point iff \(\rho\) belongs to the doubly infinite orbit through the origin. Therefore \(\zeta\) has the form
\[\zeta=(\theta,\{\mathrm{i}\theta\})=(\theta,\mathrm{i}\theta-\mathrm{j}) \qquad\mathrm{j}=\lfloor\mathrm{i}\theta\rfloor, \tag{36}\]
for some \(\mathrm{i}\in\mathbb{Z}\). However, the continued fractions of \(\theta\) and \(\mathrm{i}\theta\) will in general be unrelated, and so it is not possible to construct a sequence of rational critical points that consists of convergents of both \(\theta\) and \(\rho\). We shall therefore prioritise the convergents of \(\theta\), and select rational approximants for \(\rho\) so that the affine parameters \((\mathrm{i},\mathrm{j})\) remain the same throughout the approximation.
Thus choose \(\theta\not\in\mathbb{Q}\) and \(\mathrm{i}\) in (36), and consider the sequence of convergents \(p_{k}/q_{k},k\geqslant 0\) of the continued fraction expansion of \(\theta\). From (36) we obtain the following sequence of rational critical points
\[\zeta_{k}=\Big{(}\frac{p_{k}}{q_{k}},\frac{\mathrm{i}p_{k}-\mathrm{j}q_{k}}{q _{k}}\Big{)}\qquad k\geqslant 0,\]
with \(\zeta_{k}\to\zeta\) as \(k\to\infty\).
By construction, all points \(\zeta_{k}\) are collinear. Now, the dominant affine parameters \((\mathbf{i}^{\pm},\mathbf{j}^{\pm})\) at \(\zeta_{k}\) have the property that \(|\mathbf{i}^{+}-\mathbf{i}^{-}|=q_{k}\) (see beginning of section 4). Since the pair \((\mathrm{i},\mathrm{j})\) is fixed and \(q_{k}\to\infty\), it follows that \((\mathrm{i},\mathrm{j})\) is a pair of dominant affine parameters of the point \(\zeta_{k}\) for all sufficiently large \(k\), while the components of the other dominant pair diverge to infinity.
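The construction is easy to carry out numerically. In the sketch below (illustrative names) we take \(\theta=(\sqrt{5}-1)/2\), whose convergents are ratios of consecutive Fibonacci numbers, fix \((\mathrm{i},\mathrm{j})=(3,1)\), and check that this pair becomes (and remains) the positive dominant pair of \(\zeta_{k}\) once \(q_{k}\) is large enough.

```python
from fractions import Fraction

def dominant_positive(theta, rho):
    # minimal positive solution i of rho = i*theta - j at a rational point, 0 < rho < 1
    p, q = theta.numerator, theta.denominator
    r, s = rho.numerator, rho.denominator
    i = (r * (q // s) * pow(p, -1, q)) % q
    return i, (i * p - r * (q // s)) // q

i, j = 3, 1                                   # fixed affine parameters, cf. (36)
fib = [1, 1]                                  # convergents of (sqrt(5)-1)/2 are F_k/F_{k+1}
for _ in range(12):
    fib.append(fib[-1] + fib[-2])
for pk, qk in zip(fib, fib[1:]):
    theta_k = Fraction(pk, qk)
    rho_k = i * theta_k - j                   # the point zeta_k of the text
    if not (0 < rho_k < 1):
        continue
    print(qk, dominant_positive(theta_k, rho_k) == (i, j))
# False only for the smallest denominators; True once q_k exceeds a few units.
```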
|
2309.12876 | Gravity Network for end-to-end small lesion detection | This paper introduces a novel one-stage end-to-end detector specifically
designed to detect small lesions in medical images. Precise localization of
small lesions presents challenges due to their appearance and the diverse
contextual backgrounds in which they are found. To address this, our approach
introduces a new type of pixel-based anchor that dynamically moves towards the
targeted lesion for detection. We refer to this new architecture as GravityNet,
and the novel anchors as gravity points since they appear to be "attracted" by
the lesions. We conducted experiments on two well-established medical problems
involving small lesions to evaluate the performance of the proposed approach:
microcalcifications detection in digital mammograms and microaneurysms
detection in digital fundus images. Our method demonstrates promising results
in effectively detecting small lesions in these medical imaging tasks. | Ciro Russo, Alessandro Bria, Claudio Marrocco | 2023-09-22T14:02:22Z | http://arxiv.org/abs/2309.12876v1 | [
###### Abstract
This paper introduces a novel one-stage end-to-end detector specifically designed to detect small lesions in medical images. Precise localization of small lesions presents challenges due to their appearance and the diverse contextual backgrounds in which they are found. To address this, our approach introduces a new type of pixel-based anchor that dynamically moves towards the targeted lesion for detection. We refer to this new architecture as GravityNet, and the novel anchors as gravity points since they appear to be "attracted" by the lesions. We conducted experiments on two well-established medical problems involving small lesions to evaluate the performance of the proposed approach: microcalcifications detection in digital mammograms and microaneurysms detection in digital fundus images. Our method demonstrates promising results in effectively detecting small lesions in these medical imaging tasks.
# Gravity Network for end-to-end small lesion detection
Ciro Russo\({}^{a}\), Alessandro Bria\({}^{a}\) and Claudio Marrocco\({}^{a,*}\)
\({}^{a}\)Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
## 1 Introduction
Detection of small lesions in medical images has emerged as a compelling area of research, which holds significant relevance in medicine, especially in fields like Radiology and Oncology, when a timely disease diagnosis is essential [43]. Small lesions are primarily characterized by a limited size and can vary greatly in nature depending on their location and the involved tissue. In numerous real-world scenarios, the identification and classification of small lesions is a challenging and critical diagnostic process. For example, retinal microaneurysms are the earliest sign of diabetic retinopathy and are caused by small local expansion of capillaries in the retina [14]. In ischemic stroke imaging, early identification of small occlusion is crucial to initiate timely treatment [58]. In cancer diagnosis, many forms of cancer originate as small lesions before they grow and spread, such as breast calcifications, which are one of the most important diagnostic markers of breast lesions [46], or pulmonary nodules, which can be the first stage of a primary lung cancer [44]. The ability to early and accurately detect small lesions can make a difference in the treatment and prognosis of patients and have a substantial impact on patient health. Manual interpretation of medical images can be time-consuming and susceptible to human error, especially when the task involves of localization and identification of small lesions within the full image space [13].
There is a long tradition of research on automatic lesion detection methods [55]. Traditional image processing methods, such as thresholding, edge detection, and morphological operations, can be effective for detecting small lesions in images with clear and well-defined structures. However, these methods are limited by the presence of noise and variability in medical images. The use of Machine Learning, and in particular Deep Learning, helps to enhance reliability, performance, and accuracy of diagnosing systems for specific diseases [54]. Actually, the first lesion detection system based on Convolutional Neural Networks (CNNs) was proposed back in 1995 to detect lung nodules in X-ray images [40]. However, only in the last ten years, CNNs have acquired great popularity thanks to their remarkable performance in computer vision [32], rapidly becoming the preferable solution for automated medical lesion detection [59, 17, 21, 22]. The reason for this success is due to the ability of learning hierarchical representations directly from the images, instead of using handcrafted features based on domain-specific knowledge. CNNs are able to build features with increasing relevance, from texture to higher order features like local and global shape [33].
A typical CNN architecture for medical image analysis is applied to subparts of an image containing candidate lesions or background. This means that the image is divided into patches of equal size and partially overlapping, and each patch is processed individually. The output image is formed by reassembling the individually processed patches [25]. Despite patch-based methods being widely used, they suffer from several problems, especially in the case of small lesion detection [7], where accurate detections requires both local information about the appearance of the lesion and global contextual information about its location. This combination is typically not possible in patch-based learning architectures [37], even with a multi-scale approach where the appearance of a small lesion can be missed. An alternative is to use anchoring object detection methods of computer vision [26], like RetinaNet [36], which can be adapted to be used in lesion detection problems [42]. These methods face difficulties when the objects to be detected are very small, mainly for two reasons: (i) lesions have an extremely small size compared to natural objects; (ii) lesions and non-lesions have similar appearances, making it difficult to detect them effectively [6].
We propose a novel one-stage end-to-end detector based on a new type of anchoring technique customised to small
lesion detection in medical images. Differently from classical anchor methods, which make use of anchor boxes to capture scale and aspect ratio of specific classes of objects, the proposed anchor is pixel-based and moves towards the lesion to be detected. We named _GravityNet_ this new architecture and _gravity points_ the new anchors, because they are distributed over the whole image and seem to be "attracted" by hypothetical "gravitational masses" located in the centres of the lesions. Such gravitational anchoring reveals to be particularly effective when small lesions have to be detected in the whole image space. To evaluate the performance of the proposed approach, we focused on two small lesions: microcalcifications on digital mammograms and microaneurysms on digital fundus images. In both cases, the lesions occupy only few pixels within an image, resulting in limited features for them to be distinguished from the surrounding tissues. Thus, their accurate localization becomes a main challenge due to their appearance and to the heterogeneity of their contextual backgrounds.
The paper is organized as follows. Section 2 is a brief overview of object detection techniques in medical images and consequently of small lesion detection methods. Section 3 introduces the proposed method. Section 4 reports the experimental analysis, followed by results in Section 5. Finally, Sections 6 and 7 end the paper with discussion and conclusions.
## 2 Related work
### Object detection in medical images
Object detectors can be divided into two categories: (i) two-stage detector, the most representative is Faster R-CNN [15], (ii) one-stage detector, such as YOLO [49], and SSD [39]. Two-stage detectors are characterized by high localization and object recognition accuracy, whereas the one-stage detectors achieve high inference speed [23, 67]. In a two-stage approach, the first stage is responsible of generating candidates that should contain objects, filtering out most of the negative proposals, whereas the second stage performs the classification into foreground/background classes and regression activities of the proposals from the previous stage.
Recently, the most popular object detection methods in computer vision have been applied to medical imaging [68, 6, 21]. In [12], Faster R-CNN [15] is applied with the VGG-16 [38] network as backbone for pulmonary nodule detection. The YOLO architecture has been modified for lymphocyte detection in immunochemistry [60, 50] and for pneumothorax detection on chest radiographs [47]. In [48], a deep learning algorithm based on the YOLOv5 detection model is proposed for automated detection of intracranial artery stenosis and occlusion in magnetic resonance angiography. Other studies [29, 53, 42] exploited architectures such as RetinaNet and Mask R-CNN for lung nodules and breast masses localization. In [8], Mask R-CNN [19] is used by first assigning bounding boxes for each tumor volume to perform detection and classification of normal and abnormal breast tissue.
### Small lesion detection
Although existing object detection models have been very successful with natural images [23, 67], in medical images the high resolution makes the problem particularly challenging to discover small lesions, requiring complex architectures and the use of more than one stage for multi-resolution detection. In [56], three CNN architectures, each at different scale, are applied to lung nodule detection, whereas in [27] a multi-stream CNN is designed to classify skin lesions, where each individual stream worked on a different image resolution. In [64], a context-sensitive deep neural network is developed to take into account both the local image features of a microcalcification and its surrounding tissue background. In [52], a multi-context architecture is proposed, based on the combination of different CNNs with variable depth and individually trained on image patches of different size. In [3], the problem of class imbalance between lesions and background is addressed by proposing a two-stage deep learning approach where the first stage is a cascade [2] of one-level decision trees, and the second is a CNN, trained to focus on the most informative background configurations identified by the first stage.
Recently, in [63] a hierarchical deep learning framework consisting of three models each with a specific task is proposed for bone marrow nucleated differential count. Some
Figure 1: Gravity-points distribution: on the left, the feature grid of size \(K\times K\); in the middle, the entire image \(H\times W\); on the right, the feature map \(H_{FM}\times W_{FM}\).
studies [10, 18] combined image processing techniques and deep learning algorithms to evaluate lung tumor and liver tumor detection respectively. In [66], the visibility of microcalcifications in mammographic images is increased by difference filtering using the YOLOv4 model. A three-stage multi-scale framework for the microaneurysms detection is designed in [57], whereas multi-scale approach based on YOLOv5 is proposed for the detection of stroke lesions [5].
## 3 GravityNet
This section explains the proposed network architecture and the concept of _gravity points_, a new anchoring technique designed for small lesion detection.
The code is available at this link.
### Gravity points
We define a _gravity point_ as a pixel-based anchor, which inspects its surroundings to detect lesions. The gravity-points distribution is generated with a grid of points spaced by a user-defined _step_ parameter. A base configuration is generated in a squared reference window, named _feature grid_, of size \(K\times K\) where \(K\) is equal to the upper integer of the ratio between the dimensions of the image and the feature map:
\[K\times K=\left\lceil\frac{H}{H_{FM}}\right\rceil\ \times\ \left\lceil\frac{W}{W_{FM}}\right\rceil \tag{1}\]
Assuming that the first gravity point is located in the upper left corner of the feature map, the number of gravity points in a feature grid is equal to:
\[N_{GP}^{FG}=\left(\ \left\lfloor\frac{K-2}{step}\right\rfloor\ +1\right)^{2} \tag{2}\]
where \(0<step\leq K-2\). In cases where \(K-2\) is a multiple of _step_, the distribution will be equispaced in the feature grid.
Since each pixel in the feature map corresponds to a feature grid in the image, the complete configuration is obtained by sliding the base configuration over the whole image. The total number of gravity points \(N_{GP}\) in the image is equal to the base configuration times the number of feature grids:
\[N_{GP}=N_{GP}^{FG}\cdot H_{FM}\cdot W_{FM} \tag{3}\]
Fig. 1 shows an example of gravity-points distribution.
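As a quick sanity check of Eqs. (1)-(3), the following minimal sketch (not part of the released GravityNet code) reproduces the gravity-point counts for the mammogram setting used later in the experiments (\(3,328\times 2,560\) input, \(104\times 80\) feature map).

```python
import math

def gravity_point_counts(H, W, H_FM, W_FM, step):
    K = math.ceil(H / H_FM)                       # Eq. (1); equals ceil(W / W_FM) here
    n_fg = (math.floor((K - 2) / step) + 1) ** 2  # Eq. (2): points per feature grid
    n_total = n_fg * H_FM * W_FM                  # Eq. (3): points over the whole image
    return K, n_fg, n_total

# INbreast mammograms: 3,328 x 2,560 input, 104 x 80 feature map (see Section 4.3)
print(gravity_point_counts(3328, 2560, 104, 80, step=10))   # (32, 16, 133120)
print(gravity_point_counts(3328, 2560, 104, 80, step=6))    # (32, 36, 299520)
```

With _step_ 10 this gives \(16\) gravity points per feature grid and \(133,120\) in total, matching the configuration reported in Table 2.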
### Architecture
GravityNet is a one-stage end-to-end detector composed of a backbone network and two specific subnetworks. The backbone is a convolutional network and plays the role of feature extractor. The first subnet performs convolutional object classification on the backbone output, whereas the second subnet performs convolutional gravity-points regression. Fig. 2 shows the overall architecture.
The backbone is the underlying network architecture of a detection model and provides a feature map containing basic features and representations of input data, which are then processed to perform a specific task. The bottom layers of a backbone net usually extract simple features such as edges and corners, while the top layers learn more complex features like parts of lesions. The feature maps generated by these layers are used as a representation
Figure 2: GravityNet architecture is composed of a backbone (blue) and two subnetworks, attached to the backbone output, one for classification task (orange) and one for regression task (green). The output is a representation of the gravity points in the grid pattern at training time and the subsequent attraction behavior towards the lesion at inference time. Gravity points in light blue correspond to positive candidates trained to collapse toward the ground truth in light green
and fed into two models for classification and regression tasks.
The classification subnet is a fully convolutional network that outputs the probability of lesion presence at each gravity-point location. The subnetwork applies four \(3\times 3\) convolutional layers, each with 256 filters, where the first one maps the number of features output from the backbone, followed by ReLU activations. The last layer applies a filter with \(N_{AP}\cdot 2\) outputs and sigmoid activation to obtain the binary predictions for each gravity point.
The regression subnetwork is connected to the output of the backbone with the purpose of regressing the offset from each gravity point to the closest lesion. The design of the regression subnet is the same of the classification subnet. The last layer outputs \(N_{AP}\cdot 2\) values, indicating the offsets to move each gravity point towards a lesion. It is worth noting that the classification and regression subnets, though sharing a common structure, use separate parameters.
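A minimal PyTorch-style sketch of the two heads is given below; it is not the authors' released implementation, and the kernel size and padding of the output layer, as well as the name `n_gp_per_cell` (standing for the \(N_{AP}\) factor in the text), are assumptions.

```python
import torch.nn as nn

def _head(in_channels, out_channels_last, final_activation):
    """Four 3x3 conv layers (256 filters, ReLU) followed by a 3x3 output layer."""
    layers, ch = [], in_channels
    for _ in range(4):
        layers += [nn.Conv2d(ch, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        ch = 256
    layers.append(nn.Conv2d(256, out_channels_last, kernel_size=3, padding=1))
    if final_activation is not None:
        layers.append(final_activation)
    return nn.Sequential(*layers)

class GravityHeads(nn.Module):
    """Classification and regression subnets: same design, separate parameters."""
    def __init__(self, backbone_channels, n_gp_per_cell):
        super().__init__()
        # 2 scores per gravity point, sigmoid activated
        self.cls = _head(backbone_channels, n_gp_per_cell * 2, nn.Sigmoid())
        # 2 offsets (x, y) per gravity point, no activation
        self.reg = _head(backbone_channels, n_gp_per_cell * 2, None)

    def forward(self, feat):
        return self.cls(feat), self.reg(feat)
```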
### Gravity loss
_Gravity Loss_ (GL) is a multi-task loss that contains two terms: one for regression (denoted as \(GL_{reg}\)) and the other for classification (denoted as \(GL_{cls}\)).
The multi-task loss can be written as:
\[GL=GL_{cls}+\lambda GL_{reg} \tag{4}\]
where \(\lambda\) is a hyperparameter that controls the balance between the two task losses.
#### 3.3.1 Classification loss
Since significant class imbalance between lesion and background is usually present in medical images [3], the classification loss is a variant of Focal Loss [36]. This loss is designed to address the issue of class imbalance in object detection tasks, where the majority of the examples belong to the negative class (e.g., background) and only a few examples belong to the positive class (e.g., lesion).
The classification loss is defined as:
\[GL_{cls}=-a_{t}\cdot(1-p_{t})^{\varphi}\cdot\log(p_{t}) \tag{5}\]
where \(p_{t}\) is the predicted probability of the true class (lesion), \(a_{t}\) is a class-balancing weight, and \(\varphi\) is a focusing parameter that controls the rate at which the modulating factor \((1-p_{t})^{\varphi}\) down-weights well-classified examples as the predicted probability \(p_{t}\) increases.
To evaluate \(p_{t}\) with gravity points, we introduce a criterion based on the Euclidean distance between the gravity points and the ground-truth lesions1. We consider as belonging to the positive class those gravity points with a distance from the closest ground-truth lesion lower than a threshold distance that we named _hooking distance_\(d_{h}\). All the gravity points within that distance are hooked to the lesion and trained to move towards it. Fig. 3 shows an example of gravity-points hooking process.
Footnote 1: Without loss of generality, we consider as ground truth the center of the smallest bounding box containing the lesion.
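In code, the hooking criterion and the focal-style classification term described above can be sketched as follows (an illustration, not the official implementation; the weighting values `alpha` and `phi` are the usual Focal Loss defaults and are not specified in the text).

```python
import torch

def hook_gravity_points(gp_xy, lesion_xy, d_h):
    """Mark a gravity point as positive when its nearest ground-truth lesion centre
    lies closer than the hooking distance d_h (Euclidean distance)."""
    if lesion_xy.numel() == 0:
        return torch.zeros(len(gp_xy), dtype=torch.bool), None
    dists = torch.cdist(gp_xy.float(), lesion_xy.float())   # (N_GP, N_lesions)
    min_d, nearest = dists.min(dim=1)
    return min_d < d_h, nearest

def focal_cls_loss(p, is_lesion, alpha=0.25, phi=2.0):
    """Eq. (5)-style focal term; p is the predicted lesion probability per gravity point."""
    p_t = torch.where(is_lesion, p, 1.0 - p)
    a_t = torch.where(is_lesion, torch.full_like(p, alpha), torch.full_like(p, 1.0 - alpha))
    return (-a_t * (1.0 - p_t) ** phi * torch.log(p_t.clamp(min=1e-8))).mean()
```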
#### 3.3.2 Regression loss
Let us indicate as \((d_{x},d_{y})\) the distance between a gravity point and the relative hooked lesion, and as \((o_{x},o_{y})\) the output of the regression subnetwork, which represents the offset to move each gravity point towards the hooked lesion.
We evaluate the regression loss as:
\[GL_{reg}=\sum_{\forall\,\text{hooked GP}}\ \sum_{i\in\{x,y\}}\mathit{smooth}_{L1}(d_{i}-o_{i}) \tag{6}\]
where \(\mathit{smooth}_{L1}(t)\) is the Smooth L1 loss [15], defined as:
\[\mathit{smooth}_{L1}(t)=\begin{cases}0.5t^{2},&\text{if }|t|<1\\ |t|-0.5,&\text{otherwise}\end{cases} \tag{7}\]
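A possible implementation of the regression term, assuming PyTorch's built-in Smooth L1 (which matches Eq. 7 with its default settings), is sketched below; only hooked gravity points contribute.

```python
import torch
import torch.nn.functional as F

def gravity_reg_loss(offsets, targets, hooked):
    """Eq. (6): Smooth-L1 between the predicted offsets (o_x, o_y) and the distances
    (d_x, d_y) to the hooked lesion, accumulated over hooked gravity points only."""
    if hooked.sum() == 0:
        return offsets.sum() * 0.0     # keeps the graph valid when nothing is hooked
    return F.smooth_l1_loss(offsets[hooked], targets[hooked], reduction="sum")
```

The total loss would then be computed as `GL = GL_cls + lambda_ * GL_reg`, with the \(\lambda=10\) reported in Section 4.3.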
### Inference time
The model produces two output predictions for each gravity point for each subnetwork. _Non-Maxima-Suppression_ (NMS) is applied to reduce the number of false candidates (see Fig. 4): (i) an \(L\times L\) box is built for each hooked gravity point, where \(L\) is chosen equal to the average size of the lesions to be detected; (ii) all boxes with an Intersection over Union (IoU) greater than 0.5 are merged; and (iii) for each merger, the gravity point with the highest score is considered as the final candidate. After NMS, we determine the lesion class with a threshold \(\gamma\) on the classification score: all predictions with scores above \(\gamma\) belong to the positive class (lesion), the remaining ones to the negative class (no lesion).
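The merge-and-keep-maximum step can be approximated with a standard box NMS, as in the sketch below (an illustration, not the released code): each attracted gravity point is wrapped in an \(L\times L\) box and, among boxes overlapping with IoU greater than 0.5, only the highest-scoring point is kept.

```python
import torch
from torchvision.ops import nms

def gravity_nms(points_xy, scores, L, iou_threshold=0.5):
    """Wrap each attracted gravity point in an L x L box and keep, within every group
    of boxes overlapping above the IoU threshold, only the highest-scoring point."""
    half = L / 2.0
    boxes = torch.cat([points_xy - half, points_xy + half], dim=1)  # (N, 4): x1,y1,x2,y2
    keep = nms(boxes.float(), scores.float(), iou_threshold)
    return points_xy[keep], scores[keep]
```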
Figure 4: An example of NMS: on the left, gravity points and corresponding boxes (light blue) hooked to a lesion (green); on the right, the final candidate corresponding to the gravity point with the highest score (blue)
Figure 3: Hooking process where gravity points (light blue) are hooked to a lesion (light green)
## 4 Experiments
We proved the effectiveness of _GravityNet_ on two detection problems in medical image analysis: (i) microcalcifications on full field digital mammograms and (ii) microaneurysms on digital ocular fundus images.
Microcalcifications (MCs) are calcium deposits and are considered as robust markers of breast cancer [41]. MCs appear as fine, white specks, similar to grains of salt, with size between \(0.1\ mm\) and \(1\ mm\). Due to their small dimensions and the inhomogeneity of the surrounding breast tissue, identifying MCs is a very challenging task. Moreover, mammograms contain a variety of linear structures (such as vessels, ducts, etc.) that are very similar to MCs in size and shape, making detection even more complex.
Microaneurysms (MAs) are the earliest visible manifestation of Diabetic Retinopathy, one of the leading causes of vision loss globally [34]. MAs are described as isolated small red dots of 10-100 \(\mu m\) of diameter sparse in retinal fundus images, but sometimes they appear in combination with vessels. Retinal vessels, together with dot-hemorrhages and other objects like the small and round spots resulting from the crossing of thin blood vessels, make MAs hard to distinguish.
### Dataset
#### 4.1.1 Microcalcifications dataset
We used the publicly available INBreast database [45], acquired at the Breast Centre in Centro Hospitalar de S. Joao (CHSJ) in Porto, Portugal. The acquisition equipment was the MammoNovation Siemens FPDM, with a solid-state detector of amorphous selenium, pixel size of \(70\leavevmode\nobreak\ \mu m\) (microns) and 14-bit contrast resolution. The image matrix was \(4,084\times 3,328\) (\(243\) images) or \(3,328\times 2,560\) (\(167\) images), depending on the compression plate used in the acquisition and according to the breast size of the patient.
The database has a total of \(410\) images, amounting to \(115\) cases, from which \(90\) cases are from women with both breasts, and \(25\) are from mastectomy patients. Calcifications can be found in \(313\) images for a total of \(7,142\) individual calcifications. In this work, only calcifications with a radius of less than \(7\) pixels were considered for testing, for a total of \(5,657\) microcalcifications identified in \(296\) images.
Mammograms have been cropped to the size \(3,328\times 2,560\) to have all images in the dataset with equal size. We ensured that no MC was missed after cropping.
#### 4.1.2 Microaneurysms dataset
We used the publicly available E-ophtha database [11], designed for scientific research in Diabetic Retinopathy. The acquired images have dimensions ranging from \(960\times 1,440\) to \(1,696\times 2,544\) with a \(45^{\circ}\) field of view (FOV) and a pixel size of \(7\)-\(15\leavevmode\nobreak\ \mu m\). The database has a total of \(381\) images: \(148\) images from unhealthy patients containing \(1,306\) microaneurysms, and \(233\) images from healthy patients.
The original retinal fundus images are RGB, but in this work, green channel has been extracted due to its rich information and high contrast in comparison with the other two color channels [62]. We also evaluated the average dimensions of all the retinas in the dataset and resized all the images to an average dimensions of \(1,216\times 1,408\).
### Data preparation
For all experiments we applied \(2\)-fold image-based cross-validation. The dataset is divided into two equal-sized folds, where one fold is used as training set and the other as test set and vice versa. A subset of the training set fold is used as validation set for parameter optimisation. See Tab. 1 for more details.
In both datasets, data augmentation techniques are used to address class imbalance and enhance the model robustness and accuracy [31]. Three more samples for each image are generated by using horizontal and vertical flipping. All data are normalized with min-max transformation.
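A minimal sketch of this preparation step (illustrative only; lesion coordinates would need the corresponding flips, which are omitted here) is:

```python
import numpy as np

def augment_and_normalize(img):
    """Return the min-max normalized image plus three flipped copies (h, v, h+v)."""
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo + 1e-8)
    return [norm, np.fliplr(norm), np.flipud(norm), np.flipud(np.fliplr(norm))]
```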
### Architecture parameters
GravityNet uses ResNet [20] as its backbone, in order to solve the well-known vanishing/exploding gradient problems [1] by using residual connections. ResNet is composed of \(5\) max-pooling layers, each halving the dimensions of the feature map. According to the input dimensions, the feature map size is \(104\times 80\) for mammograms and \(38\times 44\) for retina fundus images. As a consequence, according to Eq. 1, we obtain \(K=32\) and a feature grid of \(32\times 32\). We generate gravity-points configurations with \(K-2\) a multiple of _step_ to ensure equi-spatiality. To take into account the computational cost, we chose to use configurations that did not exceed \(300,000\) gravity points. Fig. 5 shows some examples of the initial configurations used in this work. To ensure that at least one gravity point hooks a lesion, the _hooking distance_ \(d_{h}\) was always chosen equal to the _step_. At inference time, we use NMS with \(L=7\) for MCs and \(L=3\) for MAs, which correspond to the average size of the lesions to be detected.
We train ResNet in transfer learning by using a model pretrained on natural images [65]. Both subnetworks are initialised with Xavier technique [16]. During training we
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**INbreast** & \multicolumn{2}{c}{Images} & \multicolumn{2}{c}{Unhealthy} & \multicolumn{2}{c}{MCs} \\ \cline{2-7} & 1-fold & 2-fold & 1-fold & 2-fold & 1-fold & 2-fold \\ \hline \hline Train & 143 & 143 & 108 & 117 & 2,408 & 2,051 \\ Validation & 62 & 62 & 39 & 45 & 516 & 724 \\ Test & 205 & 205 & 154 & 142 & 2,756 & 2,901 \\ \hline \hline \end{tabular}
\begin{tabular}{l c c c c c} \hline \hline
**E-ophtha-MA** & \multicolumn{2}{c}{Images} & \multicolumn{2}{c}{Unhealthy} & \multicolumn{2}{c}{MAs} \\ \cline{2-7} & 1-fold & 2-fold & 1-fold & 2-fold & 1-fold & 2-fold \\ \hline Train & 154 & 151 & 60 & 60 & 542 & 552 \\ Validation & 38 & 38 & 14 & 14 & 105 & 107 \\ Test & 189 & 192 & 74 & 74 & 659 & 647 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data overview
apply Adam optimization algorithm [30]. The learning rate was set to an initial value of \(10^{-4}\) and decreased with a factor of 0.1 with _patience_ equal to 3. The balance between the two task losses (see Eq. 4) is handled by \(\lambda\) equal to 10. The batch size is by default equal to 8. All training parameters were optimized on the validation set. The training was stopped after 50 epochs. Experiments were conducted on a GPU NVIDIA A100 80GB.
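The optimisation setup described above could be reproduced roughly as follows (a sketch; the use of PyTorch's `ReduceLROnPlateau` scheduler is an assumption consistent with the stated factor and patience).

```python
import torch

def training_setup(model):
    """Adam at 1e-4 with the LR reduced by a factor of 0.1 after 3 stagnant epochs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, factor=0.1, patience=3)
    return optimizer, scheduler

LAMBDA = 10     # balance between the two task losses in Eq. (4)
BATCH_SIZE = 8
EPOCHS = 50
```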
### FROC analysis
The detection quality was evaluated in terms of lesion-based Free-Response Operating Characteristic (FROC) curve by plotting True Positive Rate (TPR) against the average number of False Positives per Image (FPiI) for a series of thresholds \(\gamma\) on the classification score associated to each sample.
A prediction with a value higher than \(\theta\) is considered as True Positive (TP) when its distance from the center of a lesion is no larger than the largest side of the bounding box containing the ground-truth lesion. Otherwise, it is considered as False Positive. Notably, (i) if multiple predictions are associated to the same lesion, only the one with the highest classification score is selected as TP, and (ii) all predictions for gravity points outside the tissue were ignored.
To analyze and compare FROC curves, we chose the non-parametric approach suggested in [4]. The figure-of-merit is the Partial Area under the FROC curve (\(AUC_{\gamma}\)) to the left of \(FPiI=\gamma\) calculated by trapezoidal integration. We normalized \(AUC_{\gamma}\) by dividing by \(\gamma\) to obtain an index in the range \([0,1]\). In particular, for both MCs and MAs detection, we selected \(\gamma=10\), a commonly used value in the literature of the respective fields [9, 52]. All results are presented in percentage values.
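The figure-of-merit can be computed from an FROC curve as in the sketch below (not the original evaluation code), which clips the curve at \(FPiI=\gamma\), integrates with the trapezoidal rule and normalises by \(\gamma\).

```python
import numpy as np

def normalized_partial_aufc(fppi, tpr, gamma=10.0):
    """Partial area under the FROC curve up to FPiI = gamma, divided by gamma."""
    fppi, tpr = np.asarray(fppi, dtype=float), np.asarray(tpr, dtype=float)
    order = np.argsort(fppi)
    fppi, tpr = fppi[order], tpr[order]
    tpr_at_gamma = np.interp(gamma, fppi, tpr)     # TPR interpolated at the cut-off
    mask = fppi <= gamma
    x = np.append(fppi[mask], gamma)
    y = np.append(tpr[mask], tpr_at_gamma)
    return np.trapz(y, x) / gamma
```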
## 5 Results
### Model analysis
To verify the effectiveness of the model, for both small lesion detection problems, experiments were conducted using different gravity-points configurations for all different depths of ResNet2. Results are reported in Tab. 2 for MCs, and in Tab. 3 for MAs together with the parameters of the gravity-points configurations. The best result for each backbone is shown in bold, whereas the best of all in italic. FROC curves of the best ResNet configurations are shown in Fig. 6.
Footnote 2: It is worth noting that, due to memory constraints, for MCs detection, we use in training a batch size equal to 4 for _ResNet-50_ and 2 for _ResNet-101_ and _ResNet-152_
For MCs, the best result is an \(AUC_{\gamma}\) equal to 72.25% by using _ResNet-34_ and _step_ 10. The configuration with _step_ 10 turns out to be the best for all backbones, except _ResNet-50_, which achieves an \(AUC_{\gamma}\) equal to 71.25% with _step_ 6. Dense configurations present better results with shallower backbones, e.g. with a _ResNet-18_ we obtain an \(AUC_{\gamma}\) equal to 70.89% and 71.47% respectively with _step_ 6 and 10 as opposed to 65.58% and 55.90% respectively with _step_ 15 and 30.
For MAs, the highest \(AUC_{\gamma}\) (71.53%) is obtained with a _ResNet-50_ and _step_ 6. The configuration with _step_ 6 turns out to be the best for all backbones, except _ResNet-18_, which achieves an \(AUC_{\gamma}\) equal to 65.36% with _step_ 10. Sparse configurations decrease the performance, even with deeper backbones, e.g. with a _ResNet-152_ we obtain an \(AUC_{\gamma}\) of 67.51% and 54.18% respectively with _step_ 15 and 30 as opposed to 65.81% and 69.86% respectively with _step_ 5 and 6.
Through result analysis, it becomes evident that we need to find the appropriate density configuration for addressing the detection problem at hand. A sparse configuration might fail to identify all lesions, particularly in the case of small ones, whereas a dense configuration could potentially generate a high number of lesion candidates.
### Comparison with the literature
We compare our best models, that are _ResNet-34_ with _step_ 10 for MCs detection and _ResNet-50_ with _step_ 6 for MAs detection, with methods proposed in the scientific literature for the detection problems at hand:
* Context-Sensitive CNN (CSNet) [64]: the architecture comprises two convolutional subnetworks: one for processing the large image context with a window of size 96\(\times\)96 pixels and another for processing the small microcalcification texture with a window of size 9\(\times\)9 pixels. The features extracted from both subnetworks are subsequently merged and fed into a fully connected network.
Figure 5: Examples of initial gravity-points configurations represented in a reference window \(K\times K\), where: **(a)**_step_\(=5\) **(b)**_step_\(=6\) **(c)**_step_\(=10\) **(d)**_step_\(=15\) **(e)**_step_\(=30\)
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Backbone** & \multicolumn{5}{c}{**Configuration**} \\ \cline{2-6} & \(step=6\) & \(d_{h}=6\) & \(step=10\) & \(d_{h}=10\) & \(step=15\) & \(d_{h}=15\) & \(step=30\) & \(d_{h}=30\) \\ & \(N_{GP}\)=299,520 & \(N_{GP}\)=133,120 & \(N_{GP}\)=74,880 & \(N_{GP}\)=33,280 \\ \hline ResNet-18 & 70.89 & **71.47** & 65.58 & 55.90 \\ ResNet-34 & 65.08 & **72.25** & 67.44 & 56.17 \\ ResNet-50 & **71.25** & 67.73 & 69.31 & 57.12 \\ ResNet-101 & 58.85 & **64.90** & 41.69 & 53.05 \\ ResNet-152 & 60.60 & **64.86** & 62.98 & 53.86 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for MCs detection in terms of % of \(AUFC_{\gamma}\)
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Backbone** & \multicolumn{5}{c}{**Configuration**} \\ \cline{2-6} & \(step=5\) & \(d_{h}=5\) & \(step=6\) & \(d_{h}=6\) & \(step=10\) & \(d_{h}=10\) & \(step=15\) & \(d_{h}=15\) & \(step=30\) & \(d_{h}=30\) \\ & \(N_{GP}\)=81,928 & \(N_{GP}\)=60,192 & \(N_{GP}\)=26,752 & \(N_{GP}\)=15,048 & \(N_{GP}\)=6,688 \\ \hline ResNet-18 & 60.95 & 61.42 & **65.36** & 63.17 & 53.88 \\ ResNet-34 & 65.01 & **68.57** & 68.38 & 64.80 & 58.76 \\ ResNet-50 & 68.89 & **71.53** & 64.57 & 68.25 & 54.33 \\ ResNet-101 & 66.07 & **69.77** & 69.13 & 65.59 & 54.97 \\ ResNet-152 & 65.81 & **69.86** & 66.84 & 67.51 & 54.18 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for MAs detection in terms of % of \(AUFC_{\gamma}\)
Figure 6: FROC results with the best gravity-points configurations for each ResNet backbone on INbreast (a) and E-ophtha-MA (b)
* Deep Cascade (DC) [2]: a cascade of decision stumps able to learn effectively from heavily class-unbalanced datasets. It builds on Haar features computed in a small detection window of \(12\times 12\) pixels, which can contain diagnostically relevant lesions, while limiting the exponential growth of the number of features that are extracted during training.
* MCNet with DC hard mining (DC-MCNet) [3]: a two-stage patch-based deep learning framework, which comprises a DC for hard mining the background samples, followed by a second stage represented by a CNN that discriminates between lesions and the more challenging background configurations.
* Multicontext Ensemble of MCNets (ME-MCNet) [52]: a multi-context ensemble of CNNs aiming to learn different levels of image spatial context by training multiple-depth networks on image patches of different dimensions (\(12\times 12\), \(24\times 24\), \(48\times 48\), and \(96\times 96\)).
To evaluate the behavior of the proposed anchoring technique, we also compared with RetinaNet [36], a well-known one-stage object detector based on anchoring technique. We slightly modified the original anchors configuration by using an anchor box size ranging from \(8^{2}\) to \(128^{2}\) in order to be more suitable for small lesion detection. We applied it to the whole image without any kind of rescale or patching.
We applied a statistical comparison by means of the bootstrap method [51] to test the significance of observed performances. Cases were sampled with replacement \(1,000\) times, with each bootstrap containing the same number of cases as the original set. At each bootstrapping iteration, FROC curves were recalculated, and differences in figures-of-merit \(\Delta AUFC_{\gamma}\) between _GravityNet_ and each of the compared methods were evaluated. Finally, the obtained FROC curves were averaged along the TPR axis, and \(p\)-values were computed as the fraction of \(\Delta AUFC_{\gamma}\) populations that were negative or zero. The statistical significance level was chosen as \(\alpha=0.05\). Average FROC curves are shown in Fig. 7.
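The bootstrap comparison can be sketched as follows (illustrative only; `metric_fn` stands for any function that recomputes the figure-of-merit from a resampled list of cases and is not a name used in the paper).

```python
import numpy as np

def bootstrap_p_value(metric_fn, cases_a, cases_b, n_boot=1000, seed=0):
    """Case-level paired bootstrap: resample cases with replacement, recompute the
    figure-of-merit difference, and report the fraction of non-positive differences."""
    rng = np.random.default_rng(seed)
    n = len(cases_a)
    deltas = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        deltas.append(metric_fn([cases_a[i] for i in idx]) -
                      metric_fn([cases_b[i] for i in idx]))
    return float(np.mean(np.asarray(deltas) <= 0.0))
```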
The statistical comparison results for MCs and MAs detection are shown in Tab. 4, where significant performances are indicated in bold. Results of the proposed architecture were statistically significantly better than all the other considered approaches. The highest improvement in terms of \(AUFC_{\gamma}\) is +50.04% with RetinaNet for MAs and +42.25% with CSNet for MCs. Compared to patch-based methods such as DC, DC-MCNet and ME-MCNet, the improvement is respectively +41.00%, +19.52%, +11.90% for MCs and +15.72%, +10.15%, +5.90% for MAs.
## 6 Discussion
### Gravity points configuration
The gravity points configuration depends directly on the size of the input image and is managed by the _step_ parameter. This implies a higher number of gravity points for images with larger dimensions. In the cases studied, mammograms have a larger size than retina images and consequently have a higher \(N_{GP}\), thus requiring much more computational effort.
Depending on the chosen configuration, gravity-points will behave differently. We chose to train all the configurations with a \(d_{h}\) equal to the _step_ to measure the capacity of gravity-points to move towards ground-truth lesions. A small \(d_{h}\) will have less impact on the movement of gravity-points, compared to a large \(d_{h}\) that lets them move more widely, always within the specified distance value. For MCs detection, the best configurations are those with _step_ 10 and thus \(d_{h}\) 10 because these values are more representative of the size and distribution of MCs in mammograms. On the other hand, for MAs detection, where lesions are usually isolated, configurations with a higher density, such as _step_ 5 and _step_ 6, are needed. Fig. 8 shows two detection outputs of the best GravityNet models for MCs with _step_ 10 and _ResNet-34_ and for MAs with _step_ 6 and _ResNet-50_. We can see the gravitational behaviour towards the centres of the lesions in Fig. 8(b) for MCs and Fig. 8(d) for MAs. Hooked gravity-points that are, at inference time, within the radius of the lesion to be detected are shown in light blue and are defined as predictions of possible TP. The NMS, whose output can be seen in the right panel of the same figures, merges all hooked gravity-points into a single detection so as to obtain a single prediction (in blue) for each small lesion (in green).
### Comparison with anchoring methods
We compared the proposed one-stage detector with a widespread exponent of one-stage object detection methods, i.e. _RetinaNet_, which has also been usefully applied to medical detection problems [24, 42]. Small lesions such as MCs and MAs are often less than 10 pixels in diameter and, in this case, anchoring methods face two main obstacles: (i) the number and size of anchor boxes, and (ii) the pyramidal approach for multi-scale resolution.
Regarding the first issue, we tried to train _RetinaNet_ with the original range of anchor box sizes (from \(32^{2}\) to \(512^{2}\) according to the _Feature Pyramid Network_ (FPN) level), but due to the small size of the lesions, the training failed; thus, we reduced the sizes to the range \(8^{2}\) to \(128^{2}\). The proposed anchoring technique is based on pixel-shaped gravity points, which only require an initial configuration setting without specifying a box size. This is advantageous, especially in the case of small lesions with variable sizes, as demonstrated in the MA results. In addition, considering all FPN resolution levels, _RetinaNet_ generates a number of anchor boxes more than 10 times the number of gravity points. This is a considerable advantage in computational and temporal terms (see Section 6.4).
As to the second issue, _RetinaNet_ adopts a multi-scale architecture. However, this approach proves to be ineffective because positive anchors (those containing a lesion) only belong to the first level of FPN, which corresponds to the highest resolution level. In _GravityNet_, we decided to not use a multi-scale approach given the shape of the lesions to be detected. For the sake of comparison, we tried to use
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & **Method** & \(AUFC_{{}_{\gamma}}\) & **Compared to** & \(\Delta AUFC_{{}_{\gamma}}\) & **p-Value** \\ \hline \multirow{8}{*}{**MCs detection**} & RetinaNet & 66.47 & & & \\ & CSNet & 30.00 & & & \\ & DC & 31.25 & & & \\ & DC-MCNet & 52.73 & & & \\ & ME-MCNet & 60.35 & & & \\ \cline{2-5} & **GravityNet** & **72.25** & RetinaNet & **+5.78** & \(=0.037\) \\ & & CSNet & **+42.25** & \(<0.001\) \\ & & DC & **+41** & \(<0.001\) \\ & & DC-MCNet & **+19.52** & \(<0.001\) \\ & & ME-MCNet & **+11.9** & \(<0.001\) \\ \hline \multirow{8}{*}{**MAs detection**} & RetinaNet & 21.48 & & & \\ & CSNet & 40.03 & & & \\ & DC & 55.80 & & & \\ & DC-MCNet & 61.38 & & & \\ & ME-MCNet & 65.63 & & & \\ \cline{2-5} & **GravityNet** & **71.53** & RetinaNet & **+50.04** & \(<0.001\) \\ & & CSNet & **+31.49** & \(<0.001\) \\ & & DC & **+15.72** & \(<0.001\) \\ & & DC-MCNet & **+10.15** & \(<0.001\) \\ & & ME-MCNet & **+5.9** & \(<0.001\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results comparison in terms of % of \(AUFC_{\gamma}\)
Figure 7: Average FROC curves for INbreast (a) and E-ophtha-MA (b) obtained from \(1,000\) bootstrap iterations. Confidence bands (semi-transparent) indicate 95% confidence intervals along the TPR axis.
Figure 8: Examples of MCs and MAs detections. (a) and (c): ground-truth annotations; (b) and (d): GravityNet outputs
_RetinaNet_ without FPN, considering only the outputs of the first level, but this did not improve the performance.
### Comparison with patch-based methods
Patch-based methods have the computational disadvantage of assembling all the individual results to obtain the final one, as opposed to end-to-end systems like _GravityNet_ that obtain the final result directly.
The class imbalance between lesions and background is another issue that affects small lesion detection. We compared our approach with two existing methods, _DC_ and _DC-MCNet_, which are designed to manage this problem. _DC_ discards the majority of easily detectable background samples early in the process, while _DC-MCNet_ utilizes a CNN on the output of _DC_ to enhance detection performance. In this work, we propose the _Gravity Loss_, a variant of _Focal Loss_ typically applied in deep learning methods to address class imbalance issues.
Since small lesions do not have a clear appearance and are similar to the surrounding background, _CSNet_ and _ME-MCNet_ propose two context-sensitive patch-based approaches, where the model is trained with patches of different sizes and then combined. In contrast, our proposal works with the full image without patches and is able to identify small lesions thanks to the new anchoring technique and the regression subnet, which focus more on the distance to the lesion rather than its appearance.
### Computational and inference time
Computational and inference time are important aspects of medical imaging systems, as they affect interactivity and the time taken to formulate a diagnosis. We evaluated the computational time for all the compared methods by measuring the average _Time per Epoch_ (TpE) in training, and the _Time per Image_ (TpI) and the Throughput3 in test. Tab. 5 shows the results. We can see how patch-based methods are computationally time-consuming, whereas our proposal has a very high Throughput and an average TpI below one second.
Footnote 3: Throughput is defined as the maximum number of input instances that the method can process in one second
### Limitations
Although our method achieves excellent results in the detection of small lesions, there are some limitations to be considered:
* Clinical applicability: we require a dataset with individually annotated lesions for the training phase, and this can be difficult to meet in a real clinical scenario. In addition, further post-processing (e.g. benign vs. malignant lesion classification) is needed to build a full CAD system.
* Configuration limit: by employing an equispaced grid configuration, the distribution of gravity points becomes uniform, even in areas of the image where there is no tissue. In training this might not be advantageous. Different approaches to generate the initial configuration can be investigated.
* Computational requirements: the number of gravity points directly increases with the size of the image. In case of large images, _GravityNet_ can require considerable computational resources. A solution can be to limit the number of gravity points by using sparse initial configuration, but this can affect the detection performance of the method.
* Memory constraints: the use of a backbone in the proposed model necessitates remarkable resource requirements. As the backbone architecture becomes more complex and deeper, it requires a larger memory allocation, which can be a significant limitation for training the model.
## 7 Conclusions and future work
In this work, we introduced _GravityNet_, a new one-stage end-to-end detector specifically designed to detect small lesions in medical images. The accurate localization of small lesions, given their appearance and diverse contextual backgrounds, is a challenge in several medical applications. To address this point, our approach employed a novel pixel-based anchor that dynamically moves towards the targeted lesion during detection. Through a comparative evaluation with state-of-the-art anchoring and patch-based methods, our proposed approach demonstrated promising results in effectively detecting small lesions.
Our primary future direction will involve testing _GravityNet_ in various detection problems, particularly those where the target object is point-like, such as nuclei localization in whole-slide images [22]. We will also explore the possibility of extending the proposed architecture to address other tasks or image dimensionality involving small lesions, such as segmentation [35] or three-dimensional images [28, 61].
\begin{table}
\begin{tabular}{l l c c c} \hline \hline & **Method** & **TpE (s)** & **TpI (s)** & **Throughput** \\ \hline \multirow{8}{*}{**MCs detection**} & RetinaNet & 1254 & 0.121 & 14.70 \\ & CSNet & 6959 & 822 & \(1.2\times 10^{-3}\) \\ & DC & n.a. & 1.1 & 0.9 \\ & DC-MCNet & 32 & 1.2 & 0.8 \\ & ME-MCNet & 5360 & 386 & \(3.8\times 10^{-3}\) \\ & **GravityNet** & **607** & **0.061** & **19.25** \\ \hline \multirow{8}{*}{**MAs detection**} & RetinaNet & 137 & 0.057 & 34.04 \\ & CSNet & 1260 & 266 & \(3.8\times 10^{-3}\) \\ \cline{1-1} & DC & n.a. & 1.3 & 0.7 \\ \cline{1-1} & DC-MCNet & 6 & 1.4 & 0.7 \\ \cline{1-1} & ME-MCNet & 1564 & 203 & \(4.9\times 10^{-3}\) \\ \cline{1-1} \cline{2-5} & **GravityNet** & **104** & **0.045** & **37.49** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Computational times compared in terms of _Time per Epoch_ (TpE) in training, and _Time per Image_ (TpI) and _Throughput_ in test |
2309.08190 | Learning in the Dark: Privacy-Preserving Machine Learning using Function
Approximation | Over the past few years, a tremendous growth of machine learning was brought
about by a significant increase in adoption and implementation of cloud-based
services. As a result, various solutions have been proposed in which the
machine learning models run on a remote cloud provider and not locally on a
user's machine. However, when such a model is deployed on an untrusted cloud
provider, it is of vital importance that the users' privacy is preserved. To
this end, we propose Learning in the Dark -- a hybrid machine learning model in
which the training phase occurs in plaintext data, but the classification of
the users' inputs is performed directly on homomorphically encrypted
ciphertexts. To make our construction compatible with homomorphic encryption,
we approximate the ReLU and Sigmoid activation functions using low-degree
Chebyshev polynomials. This allowed us to build Learning in the Dark -- a
privacy-preserving machine learning model that can classify encrypted images
with high accuracy. Learning in the Dark preserves users' privacy since it is
capable of performing high accuracy predictions by performing computations
directly on encrypted data. In addition to that, the output of Learning in the
Dark is generated in a blind and therefore privacy-preserving way by utilizing
the properties of homomorphic encryption. | Tanveer Khan, Antonis Michalas | 2023-09-15T06:45:58Z | http://arxiv.org/abs/2309.08190v1 | # Learning in the Dark: Privacy-Preserving Machine Learning using Function Approximation
###### Abstract
Over the past few years, a tremendous growth of machine learning was brought about by a significant increase in adoption and implementation of cloud-based services. As a result, various solutions have been proposed in which the machine learning models run on a remote cloud provider and not locally on a user's machine. However, when such a model is deployed on an untrusted cloud provider, it is of vital importance that the users' privacy is preserved. To this end, we propose Learning in the Dark - a hybrid machine learning model in which the training phase occurs in plaintext data, but the classification of the users' inputs is performed directly on homomorphically encrypted ciphertexts. To make our construction compatible with homomorphic encryption, we approximate the ReLU and Sigmoid activation functions using low-degree Chebyshev polynomials. This allowed us to build Learning in the Dark - a privacy-preserving machine learning model that can classify encrypted images with high accuracy. Learning in the Dark preserves users' privacy since it is capable of performing high accuracy predictions by performing computations directly on encrypted data. In addition to that, the output of Learning in the Dark is generated in a blind and therefore privacy-preserving way by utilizing the properties of homomorphic encryption.
Activation Function, Homomorphic Encryption, Neural Networks, Polynomial Approximation, Privacy,
## I Introduction
Machine Learning (ML), specifically Deep Learning (DL), has garnered significant attention from researchers due to its solid performance in many tasks, such as speech recognition, spam detection, image classification, traffic analysis, face recognition, financial detection, and genomics prediction [1, 2, 3, 4, 5, 6]. To meet the growing demand for ML services, Cloud Service Providers (CSPs) such as Google Prediction API [7], Microsoft Azure ML [8], and Ersatz Lab [9] also offer Machine Learning as a Service (MLaaS), enabling users to train and test the ML models using the CSP infrastructure. Typically, these models involve a training phase where the model learns from a dataset and a testing phase where the model predicts outputs based on unseen inputs. Once the model is trained and deployed on the CSP, the users can use it for online prediction services. However, the adoption of MLaaS raises concerns about the privacy of data being outsourced, in sensitive domains such as finance and healthcare [10]. There is a risk of data misuse or theft when sending data to prediction models hosted by CSPs. To address these privacy concerns, researchers proposed various methods to protect user data in MLaaS settings [11, 12, 4, 13, 14].
This work aims to demonstrate the application of Neural Network (NN) on Encrypted Data (ED) using Homomorphic Encryption (HE). HE allows performing arithmetic operations (addition and multiplication) over ED without decryption, enabling the homomorphic evaluation of functions relying on these operations. More specifically, our focus is to evaluate the Convolution Neural Network (CNN) on ED, where most operations, except for Non-linear Activation Functions (NLAF), can be homomorphically evaluated.
Enabling the homomorphic evaluation of CNNs on ED has been an active area of research, with significant efforts dedicated to designing efficient support for NLAFs [15]. Various approaches have been proposed, including the utilization of power functions [4], look-up table [16], and polynomial approximations [17, 18, 19]. In this work, we employ low-degree Chebyshev polynomials to approximate NLAF.
### _Background on Polynomial Approximations_
Approximating continuous functions is a problem that has drawn mathematicians' attention for a very long time. While there are several ways to approximate a continuous function, in this work we are only interested in polynomial approximations. More specifically, we are using Chebyshev polynomials to approximate the Sigmoid and the ReLU functions. However, there are various works that use different approaches such as the \(x^{2}\) function [4], the _Piecewise_ approximation [18], lookup tables [16] etc. Unfortunately, all these methods face certain limitations. For example, the \(x^{2}\) method can cause instability during the training phase and the creation of a piecewise linear approximation can sometimes be a complex optimization problem. With this in mind, we chose to work with Chebyshev polynomials. The general form of these polynomials is:
\[T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x) \tag{1}\]
where \(T_{n}(x)\) represents a polynomial of degree \(n\). Chebyshev polynomials allow us to efficiently compute any continuous
function in a given interval, using only low-degree polynomials. This feature significantly boosts efficiency and lowers the overall computational complexity.
### _Our Contribution_
The main contributions of this paper are manifold.
* We show how to approximate NLAFs like ReLU and Sigmoid using Chebyshev polynomials. By substituting these NLAFs with the Chebyshev polynomials, we conduct a comprehensive analysis to compare the differences in terms of efficiency and accuracy.
* We design a PPML model in which the CNN is trained on plaintext data while the classification process operates directly on homomorphically encrypted data.
* To illustrate the effectiveness of our model, we conducted extensive experiments and provided a comparative analysis with other state-of-the-art works in the field of PPML.
* We designed a protocol that demonstrates the practical application of our PPML model in a realistic scenario while ensuring its security under a malicious threat model.
### _Organization_
The rest of the paper is organized as follows: In Section II, we present important published works in the area of PPML. In Section III, we provide the necessary background information needed for our construction. Then, in Section IV, we show how to approximate the ReLU and Sigmoid AFs using low-degree Chebyshev polynomials. The methodology of our work is illustrated in Section V, followed by extensive experimental results in Section VI. In Section VII, we design a protocol that demonstrates the applicability of our work and, finally, in Section VIII we conclude the paper.
## II Related Work
The first step in preserving the privacy of the ML model is achieved through Multiparty Computation (MPC). This approach allows parties to jointly compute a function while keeping the original inputs private. Several methods based on MPC have been proposed for preserving the privacy of ML models, such as K-means clustering, linear regression, SVM classifiers, decision trees, etc. [20, 21, 22, 23, 24].
One approach called SecureML, designed by Mohassel _et al._[25], uses a two-server model in which data is distributed among two non-colluding servers. It is an efficient protocol for preserving the privacy of various ML models using MPC. These servers train various models on the joint data using secure MPC with support for approximating Activation Functions (AF) during training. Since SecureML requires changes in the training phase, the model does not apply to the problem of making the existing NN model oblivious. Another approach, MiniONN [26], converts any NN into an oblivious NN using MPC, providing privacy-preserving predictions. While MiniONN uses cryptographic primitives, such as garbled circuits and secret sharing, it still reveals information about the network (e.g. size of the filter) [27]. MOBIUS is another secure prediction protocol for binarized NN [28], allowing a fast and scalable PPML model by delegating a protected model to a resource provider. The resource provider offers predictions to the client without knowing anything about the client's input.
Due to the high communication cost associated with the MPC techniques mentioned above, alternative methods using HE have been explored. Wu _et al._[29] proposed a privacy-preserving logistic regression model. As the logistic function is _not_ linear, the authors use polynomial fitting to achieve a good approximation. However, it lowers the accuracy of the model. Graepel _et al._[30] used Somewhat Homomorphic Encryption (SHE) [31] to train two simple classifiers on ED, employing low-degree polynomials for efficient computations.
Ehsan _et al._[32] proposed a technique based on Leveled HE (LHE) [33] to preserve the privacy of CNN while at the same time keeping the accuracy as close as possible to the original model. They approximated the Sigmoid, ReLU and Tanh functions and achieved an accuracy of 99.52% on the MNIST dataset [34]. This is good, as the accuracy of the original model was measured at 99.56%. Unfortunately, their approach is computationally expensive, as the training and testing phases are both performed on ED.
In [35], the authors present Fast Homomorphic Evaluation of Deep Discretized NN (FHE-DiNN). Their design utilizes HE to evaluate an NN. The user encrypts the data using HE and transfers it to the cloud. The cloud blindly classifies the ED using HE and sends the ED back to the user. Upon reception, the user uses her secret key to decrypt it. In this scheme, the encryption parameters are dependent on the model structure. So, if the server updates its model, the client is forced to re-encrypt all of its data. While communication-wise HE schemes are very efficient, the computation cost at the server-side is very large.
A notable related work is CryptoNets [4] which applies an NN model to ED. While CryptoNets achieves remarkable accuracy, the construction is based on the use of square AF. Hence, approximating a non-linear function causes instability during the training phase when the interval1 is large. In our work, we address this issue by using Chebyshev approximation, which accurately approximates AFs even in larger intervals. Additionally, we adapt an approach where the client's input is encrypted but the model remains in plaintext, aiming for better efficiency in the classification process.
Footnote 1: By interval we mean the domain of definition of the AF.
## III Preliminaries
_Notation._ If \(x\) and \(y\) are two strings, by \(x||y\) we denote the concatenation of \(x\) and \(y\). A _probabilistic polynomial time_ (PPT) adversary \(\mathcal{ADV}\) is a randomized algorithm for which there exists a polynomial \(p(\cdot)\) such that for all inputs \(x\), the running time of \(\mathcal{ADV}(x)\) is bounded by \(p(|x|)\). A neuron is a mathematical function that takes one or more inputs, multiplies them by some values called "weights" and adds them together. This value is then passed to an NLAF to become the neuron's output.
### _Convolutional Neural Network (CNN)_
A typical NN is a combination of neurons arranged in layers. Each neuron receives input from other neurons with an associated weight \(w\) and a bias \(b\), as shown in Figure 1. It then uses equation 2 to compute some function \(f\) on the weighted sum of its input. The output of this neuron is given as input to other neurons.
\[y=f\left(\sum_{i=1}^{3}x_{i}w_{i}+b\right) \tag{2}\]
In equation 2, \(x_{i}\) is the input, \(w_{i}\) is the weight, \(b\) is the bias term and \(f\) is the AF.
In our work, we focus on CNN, a deep NN algorithm primarily used for _image classification_. In CNN, each input passes through a series of layers during the training and testing phases2. These layers consist of convolutional layers (Conv), AFs, pooling layers, Fully Connected (FC) layers and a softmax layer [36]: Footnote 2: [https://shorturl.at/nzHK1](https://shorturl.at/nzHK1)
* Convolution Layer: Conv is the first layer in CNN and acts as a feature detector (used for feature mapping). To generate a feature map, convolution is performed by moving the filter over the input with a certain stride. On a single input, multiple convolutions can be performed using numerous filters to extract more than one feature from the input. Also, padding is performed to make the size of the convolved features the same as that of the input.
* Activation Function: In NN, all operations are linear except the AF. These functions are used to introduce non-linearity in the network. The most commonly used AFs are Sigmoid, ReLU and Tanh, as shown in the Table I.
* Pooling: This layer is responsible for extracting the dominant features (maximum or average pixel values) to reduce the size of the input image. The popular pooling operations are _max-pooling_ and _average-pooling_. In max-pooling, the maximum value, and in average pooling, the average values are extracted from the part of the image covered by the filter.
* Flattening: It converts data into a 1-dimensional array that is given as input to the next layer. There, the image matrix is converted into a vector and fed to an FC NN.
* Fully Connected: The FC layers are activated at the last phase of the process after Conv, pooling and AFs. This layer connects every neuron in one layer to every neuron in the next layer and performs a weighted sum of the inputs and adds a bias.
* Softmax: In a classification problem, softmax is the final output layer with discrete class labels. It assigns to each class a probability, with the probabilities summing to 1. The class with the highest probability is taken as the most likely class for the given input.
Our CNN (Figure 2) consists of two Conv layers with ReLU AFs, two pooling operations, two FC layers and a softmax layer.
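For illustration, a network of this shape can be written down directly in TensorFlow/Keras, which we use for training (Section VI). The filter counts and layer widths below are placeholders rather than our exact configuration, which is given in Figure 4.

```
import tensorflow as tf

# Two Conv + ReLU blocks, two pooling operations, two FC layers and a softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, (5, 5), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.AveragePooling2D((2, 2)),
    tf.keras.layers.Conv2D(16, (5, 5), activation="relu"),
    tf.keras.layers.AveragePooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # fully connected layers
    tf.keras.layers.Dense(10, activation="softmax"),   # softmax over the 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```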
### _Homomorphic Encryption (HE)_
HE is an encryption scheme that allows users to perform computations on ED. Given two ciphertexts \(c\) and \(c^{\prime}\), a user can compute \(f(c,c^{\prime})\) where \(f\) is a function associated either with an addition or multiplication operation. A typical HE scheme consists of the following four algorithms:
* \(\mathsf{HE.KeyGen}(1^{\lambda})\rightarrow(\mathsf{pk},\mathsf{sk})\): The key generation algorithm takes as input a security parameter \(\lambda\) and outputs a public/private key pair (\(\mathsf{pk}\), \(\mathsf{sk}\)).
* \(\mathsf{HE.Enc}(\mathsf{pk},m)\to c\): This algorithm takes as input a \(\mathsf{pk}\) and a message \(m\) and outputs a ciphertext \(c\).
Fig. 1: Structure of a Neuron in a Neural Network
Fig. 2: Convolutional Neural Network
* HE.Eval(pk,\(f,c,c^{{}^{\prime}}\)) \(\to c_{eval}\): This algorithm takes as an input two ciphertexts \(c\) and \(c^{{}^{\prime}}\), a pk and a homomorphic function \(f\) and outputs an evaluated ciphertext \(c_{eval}\).
* HE.Dec(sk,\(c\)) \(\to m\): The decryption algorithm takes as input the private key sk and a ciphertext; decrypting a fresh ciphertext \(c\) yields \(m\), while decrypting an evaluated ciphertext \(c_{eval}\) yields \(f(m,m^{\prime})\). A minimal interface mirroring these four algorithms is sketched below.
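For reference, the four algorithms can be summarised as the following abstract interface; this is a sketch, and any concrete scheme (such as the one used in Section V-A) instantiates it.

```
from abc import ABC, abstractmethod

class HEScheme(ABC):
    """Abstract interface mirroring the four HE algorithms above."""

    @abstractmethod
    def keygen(self, security_parameter):
        """HE.KeyGen(1^lambda) -> (pk, sk)"""

    @abstractmethod
    def encrypt(self, pk, message):
        """HE.Enc(pk, m) -> c"""

    @abstractmethod
    def evaluate(self, pk, f, c1, c2):
        """HE.Eval(pk, f, c, c') -> c_eval"""

    @abstractmethod
    def decrypt(self, sk, ciphertext):
        """HE.Dec(sk, c) -> m, or f(m, m') when c is an evaluated ciphertext"""
```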
Currently, there are three different kinds of HE schemes: Partially HE (PHE), Fully HE (FHE) and Somewhat HE (SHE). PHE schemes allow users to perform an _unlimited number_ of operations on the ciphertexts [37, 38]. However, they support only one type of operation (either addition or multiplication) and hence are not suitable for our work. Furthermore, while FHE schemes offer the possibility to perform an unlimited number of both additions and multiplications [39], they are computationally expensive [40]. As a result, we choose to work with SHE, which offers similar functionality to FHE but in a more efficient manner [41, 31]. The key difference between FHE and SHE is that in SHE schemes users can only perform a limited number of operations.
## IV Chebyshev Polynomials
In this section, we show how low-degree Chebyshev polynomials can be utilized to approximate the AFs. As mentioned in [42], using these polynomials the AFs can be approximated at a given interval. The first few Chebyshev polynomials are given below while their generalization is given in equation 1.
\[T_{0}(x)=1,T_{1}(x)=x,T_{2}(x)=2x^{2}-1,T_{3}(x)=4x^{3}-3x\]
Chebyshev approximation is closely related to minimax approximation. The minimax polynomial approach is used for function approximation, improving the accuracy and lowering the overall computational complexity [43]. Instead of minimizing the error at the point of expansion, as Taylor's polynomial approximation does, the minimax approach minimizes the error across a given input segment; that is, it finds a function that minimizes the maximum error. As an example, for a function \(f\) defined over the interval \([a,b]\), the minimax approximation finds a polynomial \(p(x)\) that minimizes \(\max_{a\leq x\leq b}|f(x)-p(x)|\).
The first order minimax polynomial is defined as:
\[p(x)=c_{0}+c_{1}x\approx f(x)\]
where \(c_{0}\) and \(c_{1}\) are the coefficients of the polynomial.
### _Chebyshev Approximation_
To approximate a continuous function \(f\) defined over \([a,b]\), we first need to express \(f\) as a series of Chebyshev polynomials on \([-1,1]\). More precisely, \(f\) is expressed as \(f(x)=\sum_{k=0}^{n}c_{k}T_{k}(x),\quad x\in[-1,1]\), where \(c_{k}\) is the \(k\)-th Chebyshev coefficient and \(T_{k}(x)\) can be calculated from equation 1. As a next step, we calculate the coefficients of the polynomial and finally express the polynomial over the original interval \([a,b]\). This procedure is illustrated in algorithm 1.
```
Input:\(f(x),T_{k}(x)\), interval \([a,b]\), degree \(n\) Output:\(p(x)\)
1 Express \(f\) as: \(f(x)=\sum_{k=0}^{n}c_{k}T_{k}(x),\quad x\in[-1,1]\)
2 Chebyshev coefficients: \(c_{k}=\frac{2}{\pi}\int_{-1}^{1}f(x)\frac{T_{k}(x)}{\sqrt{1-x^{2}}}\,dx\) (the \(k=0\) term is halved)
3 Map the approximation domain from \([-1,1]\) back to \([a,b]\): \(x=\frac{2z-a-b}{b-a}\), \(z\in[a,b]\)
```
**Algorithm 1**Chebyshev Polynomial Approximation
Our results for approximating Sigmoid and ReLU, using algorithm 1, are illustrated in Table II. The approximation error for both AFs is calculated using equation \(E(x)=f(x)-p(x)\).
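As an illustration, the whole procedure fits in a few lines of NumPy. Here the coefficients are obtained with the standard Chebyshev-Gauss quadrature, a discrete version of step 2 of Algorithm 1, and the degree-9, \([-10,10]\) setting matches the configurations of Tables III and IV; the reported maximum error corresponds to \(E(x)\) above.

```
import numpy as np

def chebyshev_approx(f, degree, a, b):
    # Coefficients via Chebyshev-Gauss quadrature at the nodes x_j = cos(pi*(j+1/2)/n).
    n = degree + 1
    j = np.arange(n)
    x_nodes = np.cos(np.pi * (j + 0.5) / n)
    z_nodes = 0.5 * (b - a) * x_nodes + 0.5 * (a + b)      # nodes mapped to [a, b]
    fx = f(z_nodes)
    k = np.arange(n)[:, None]
    c = (2.0 / n) * (fx * np.cos(np.pi * k * (j + 0.5) / n)).sum(axis=1)

    def p(z):
        x = (2.0 * np.asarray(z) - a - b) / (b - a)        # map [a, b] back to [-1, 1] (step 3)
        return np.polynomial.chebyshev.chebval(x, c) - 0.5 * c[0]

    return p

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu = lambda z: np.maximum(z, 0.0)
z = np.linspace(-10, 10, 2001)
for name, f in (("Sigmoid", sigmoid), ("ReLU", relu)):
    p = chebyshev_approx(f, degree=9, a=-10, b=10)
    print(name, "max |E(x)| on [-10, 10]:", np.abs(f(z) - p(z)).max())
```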
## V Methodology
We start this section by describing our system model. We assume a client-server model involving the following entities:
* _Users_: We consider users who own a list of images and wish to use a cloud-based ML service to classify them in a privacy-preserving way (i.e. without revealing anything about the content of the images to the cloud).
* _Cloud Service Provider (CSP)_: The CSP can receive a large number of _encrypted_ images from users and classify them in a privacy-preserving way by giving them as input to a ML algorithm.
The topology of our work is illustrated in Figure 3.
In our model, we consider a CNN capable of analyzing large volumes of data (images) in a variety of domains. The CNN is deployed in a privacy-preserving manner.
Fig. 3: Learning in the Dark High Level Overview
To preserve the privacy of users' data, we use HE. Using an HE scheme allows us to perform computations on ED. However, HE schemes face certain limitations, as they only support addition and multiplication operations. Most of the operations in a CNN are simple additions and multiplications and can thus be evaluated using HE. However, AFs are non-linear and, as a result, we cannot use HE to evaluate them directly. To this end, we replace the AFs with polynomial approximations, as already discussed in section IV. While higher-degree polynomials would provide a better approximation, they also introduce higher computation and communication costs and would hence render our construction inefficient.
_Flow:_ The CNN model is deployed at the CSP and is trained using plaintext data. The weights and biases of this model are computed and made available to the CSP. For the training phase, we use the CNN given in Figure 4. The user generates a public/private key pair for the HE scheme, encrypts an image and sends it to the CSP. Upon reception, the CSP runs the ML model and performs the classification in a privacy-preserving way.
### _Inference Phase_
Although the operations performed in the inference phase are nearly the same as in the training phase, there are a few fundamental differences. For example, all operations in the inference phase take place on ED, in contrast to the training phase, where plaintext data is used. Similarly, the softmax layer, which is part of the training network, is no longer present in the inference phase, as shown in Figure 5.
For the inference phase, we use the Fan-Vercauteren SHE scheme [31]. The reason for using this specific scheme is that it allows us to perform _both_ addition and multiplication. It is important to note that this scheme has three important parameters that affect the security level, and its performance:
* Polynomial Modulus: This is an important parameter that affects the security level of the scheme. Polynomial modulus uses a power of two cyclotomic polynomial [44] and the recommended degrees for these polynomials are 1024, 2048, 4096, 8192 and beyond. On one side, a higher degree gives more security to the scheme while on the other side it degrades its performance.
* Coefficient Modulus: This parameter determines the Noise Budget (NB) in the encrypted ciphertext. The coefficient modulus is directly proportional to NB and inversely proportional to the security level of the scheme.
* Plaintext Modulus: The plaintext modulus affects NB in the freshly encrypted ciphertext. Additionally, it affects the NB consumption of homomorphic multiplications. For good performance, the recommendation is to keep the plaintext modulus as small as possible.
Each ciphertext in this encryption scheme has a specific quantity called NB, measured in bits. The NB is determined by the above parameters and is consumed by the homomorphic operations, with the consumption depending on the chosen encryption parameters. For addition operations, the budget consumption is almost negligible in comparison to multiplication operations. In the sequential multiplications that occur at the Conv and FC layers, the consumption of NB is very high. Hence, it is important to reduce the multiplicative depth of the circuit by considering appropriate encryption parameters. Once the NB drops to zero, decryption of the ciphertext is no longer possible. Therefore, it is necessary to choose the parameters large enough to avoid this, but not so large that the scheme becomes inefficient and impractical.
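For concreteness, the parameter selection and the noise-budget behaviour could be expressed along the following lines. This is a sketch that assumes a Python binding whose names mirror Microsoft SEAL's C++ API; exact class and method names vary between SEAL versions and wrappers, so the snippet is illustrative rather than the configuration used in our experiments.

```
# Sketch only: assumes a Python wrapper mirroring SEAL's C++ API; names vary by version.
from seal import (EncryptionParameters, scheme_type, CoeffModulus, Plaintext,
                  SEALContext, KeyGenerator, Encryptor, Evaluator, Decryptor)

parms = EncryptionParameters(scheme_type.bfv)
parms.set_poly_modulus_degree(4096)                     # larger degree: more security, slower
parms.set_coeff_modulus(CoeffModulus.BFVDefault(4096))  # determines the initial noise budget
parms.set_plain_modulus(1032193)                        # kept small to limit budget consumption

context = SEALContext(parms)
keygen = KeyGenerator(context)
secret_key = keygen.secret_key()
public_key = keygen.create_public_key()
encryptor = Encryptor(context, public_key)
evaluator = Evaluator(context)
decryptor = Decryptor(context, secret_key)

ct = encryptor.encrypt(Plaintext("6"))                  # a freshly encrypted (encoded) value
print("fresh noise budget:", decryptor.invariant_noise_budget(ct), "bits")
evaluator.multiply_inplace(ct, ct)                      # each multiplication consumes budget
print("after one multiplication:", decryptor.invariant_noise_budget(ct), "bits")
```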
While the HE scheme operates on polynomials, the user's input is provided as real numbers. Therefore, there is a clear mismatch between the two, and it is important to use an encoding scheme that maps one to the other. To this end, the user encodes the input using the plaintext modulus and then encrypts it using the public key. The user also generates the encryption parameters and shares them with the CSP, since the CSP must have access to them in order to perform computations on ED.
Using the weights and biases calculated during the training phase, together with the encryption parameters, the CSP runs the inference phase on the encrypted image. The inference network is the same as the training network, except that the AFs are replaced by polynomial approximations and all layers are evaluated using HE operations.
The AFs are substituted by polynomials. Since these polynomials only involve addition and multiplication operations, both of which are supported by HE, we can evaluate them on encrypted data. Similarly, the pooling operation in the inference phase is straightforward: calculate the average value of four ciphertexts and multiply it by the appropriate values. However, Conv is expensive in terms of NB, as it is a sequence of multiplication operations.
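Concretely, once an AF is replaced by a polynomial it can be evaluated on a ciphertext with Horner's rule, using only the two operations HE supports. The helper names below are placeholders for the corresponding library calls rather than a specific API.

```
def eval_poly_encrypted(coeffs, enc_x, hom_mul, hom_add_plain, encrypt_scalar):
    # Evaluate p(x) = coeffs[0] + coeffs[1]*x + ... + coeffs[d]*x^d on an encrypted input
    # with Horner's rule. hom_mul, hom_add_plain and encrypt_scalar are placeholders for
    # the HE library's ciphertext multiplication, plaintext addition and encryption calls.
    result = encrypt_scalar(coeffs[-1])
    for c in reversed(coeffs[:-1]):
        result = hom_mul(result, enc_x)       # one ciphertext-ciphertext multiplication per degree
        result = hom_add_plain(result, c)     # plaintext additions consume almost no noise budget
    return result                             # multiplicative depth = deg(p): low degrees keep it cheap
```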
Softmax is not part of the inference network; the CSP runs this network on ED and obtains an encrypted output. The CSP does not have access to the secret key and thus cannot read the result. Furthermore, as the softmax layer is removed from the inference network, the CSP is not able to predict the final output of the model.
At the end, the encrypted result of the output layer - an array of 10 values which are homomorphically encrypted - is sent back to the user. The user decrypts the results using the secret key and finds the output of the model which is the index corresponding to the highest among the 10 values.
Fig. 4: Convolutional Neural Network for Training Phase
At this point, it is important to highlight that the user utilized the ML model offered by the CSP and received the results without obtaining any valuable information about the underlying model. Similarly, the CSP ran the model on the encrypted image but was unable to extract any valuable information either about the content of the image or about the actual prediction that was sent back to the user. Hence, our model is privacy-preserving.
## VI Performance Evaluation
We now present our experimental results. In the first part, we provide experimental results on function approximation using Chebyshev polynomials. Then, we evaluate the performance of the proposed ML model and compare it with CryptoNets.
_Experimental Setup:_ All experiments were conducted in Python 3 using Ubuntu 18.04 LTS 64-bit (Intel Core i7, 2.80 GHz, 32GB). For the training phase, we used TensorFlow to train our CNN model, while the actual experiments for that phase were conducted on Google Colab (with GPU enabled). Finally, for the inference phase we used Microsoft's Simple Encrypted Arithmetic Library (SEAL) [45].
_Dataset:_ To evaluate our model, similar to other works in the area, we used the MNIST dataset [34], which consists of 60,000 images of handwritten digits. To train our CNN model we used 50,000 images, while the remaining 10,000 were used for testing. Each image is a \(28\times 28\) pixel array, with each pixel represented by its gray level in the range 0-255.
### _Activation Function Approximation_
As we mentioned in the previous sections, in our approach we use Chebyshev polynomials to approximate the ReLU and Sigmoid AFs, where the inputs are images encrypted with an SHE scheme. The polynomial approximation of the ReLU AF is shown in Table III and that of the Sigmoid AF in Table IV. Since the choices of the degree and the interval affect the performance of the model, it is necessary to choose suitable parameters. For this purpose, we conducted a series of experiments using different degrees and intervals. As can be seen in Table III, the AFs are more accurately approximated when using polynomials of higher degree over small intervals. For example, the polynomial of degree 9 on the interval \([-10,10]\) approximates the ReLU function more accurately than the rest of the polynomials. The same applies to the Sigmoid AF, where degree 9 and the small interval \([-10,10]\) give a better approximation, as can be seen in Table IV. However, the use of higher-degree polynomials introduces a significant computation overhead, and small intervals limit the range over which the approximation can be used.
Furthermore, we performed a number of additional experiments on the CNN model. We trained different networks by increasing the number of Conv layers and the size of the filters. We noticed that changing the number of Conv layers and filters affects the overall accuracy of the network: as the filter size and number of layers increase, the accuracy of the network also increases, but its efficiency drops significantly. Hence, for the training phase, we considered the network given in Figure 4. First, we trained the CNN model using the ReLU AF; the measured accuracy was 99.2%. Then the same network was trained using the polynomial approximation function, where we obtained an accuracy of 98.5% - a result that is very close to that of the original AF.
For comparison, we used the model proposed in CryptoNets [4], which is similar to the one proposed in our paper - a Conv layer, FC layers and an average pooling layer, as shown in Table V. Training the model with the ReLU AF, the accuracy of our model was 99.20% whereas CryptoNets achieved 99%. Similarly, for the approximated function we obtained an accuracy of 98.5% while CryptoNets achieved 98.95%. For the same network, the accuracy of the model proposed in [46] was 99.02% using the ReLU AF and 99% using the approximated function.
### _Performing Computation on Encrypted Data_
Now, we proceed by discussing how the use of HE can affect the performance of the NN model. In our work, we trained the CNN model on plaintext data while the classification was performed on the ciphertexts. As a result, we had to perform computations on two types of data - plaintext and ciphertext. For this purpose, we used the SEAL library that allowed us to perform computations on ciphertext. Although the use of SEAL is straightforward, we still had to define certain parameters (see Section V-A).
We performed a series of experiments using different encryption parameters. First, we looked at the polynomial modulus - the encryption parameter used in SEAL. During the experiments we observed that a smaller value of polynomial modulus leads to a more efficient result but at the same time the accuracy is decreased. In contrast, a higher value of the polynomial modulus gives more accurate results, however, degrades the performance. The second encryption parameter is the coefficient modulus that decides the NB in the freshly encrypted ciphertext. This parameter is automatically set by SEAL based on the value of security level and polynomial
Fig. 5: Convolutional Neural Network for Inference Phase
modulus. Finally, increasing the value of the plaintext modulus, decreases the consumption of the NB.
### _Comparison with the Existing Model_
Finally, we compared our results with state-of-the-art privacy-preserving NNs that utilize HE. The work proposed in CryptoNets [4] is similar to ours: the model is trained on plaintext data and the trained model is then used for the classification of encrypted instances. In order to have a fair comparison, it is important to use the same network in both works. To this end, we used the CryptoNets model. Instead of comparing only the overall performance of the models, we decided to compare each layer. As can be seen in Table V, our model outperforms CryptoNets in both the encryption and decryption times as well as in the activation layer.
## VII Learning in the Dark Protocol
In the first part of this section, we formalize the communication between the user and the CSP by designing a detailed protocol. Then, we prove the security of our construction in the presence of a malicious adversary. For the rest of the section, we assume the existence of a cryptographic hash function that is first and second pre-image resistant. Before we proceed to the formal description of our protocol, we present a high-level overview of our construction.
_High-Level Overview:_ We assume that a user \(u\) wishes to classify an image in a privacy-preserving way. To this end, \(u\) first encrypts the image using an HE scheme. As a next step, \(u\) sends the encrypted image to the CSP. Upon reception, the CSP commences the classification process directly on the encrypted image, without the need to decrypt it. To achieve this, the CSP runs the evaluation algorithm of the HE scheme on the encrypted image and finally outputs an encrypted vector. Each element of the vector represents the probability that the image belongs to a certain class. Finally, the CSP sends the encrypted vector back to \(u\). Upon reception, \(u\) decrypts the vector and assigns her image to the class with the highest probability.
### _Construction_
As already stated in Section V, we assume a client-server model. Our protocol consists of two phases: a _Setup_ phase and a _Running_ phase.
_Setup Phase:_ In the Setup phase, the user \(u\) and the CSP establish a shared symmetric key \(\mathsf{K}\). This key will be used to secure the communication between the two entities. Apart from that \(u\) also generates a public/private key pair for a HE scheme. More specifically, \(u\) executes \(\mathsf{HE.KeyGen}(1^{\lambda})\rightarrow(\mathsf{pk},\mathsf{sk})\), for some \(\lambda\). We assume that upon its generation, pk is publicly known while sk remains secret.
_Running Phase:_ After the successful execution of the _Setup_ phase, \(u\) can start communicating with the CSP. To do so, \(u\) first encrypts an image \(img\) by running \(\mathsf{HE.Enc}(\mathsf{pk},img)\to c_{img}\). Moreover, \(u\) generates an unpredictable random number \(r_{1}\) and sends \(m_{1}=\langle r_{1},c_{img},\mathsf{HMAC}(\mathsf{K},r_{1}||c_{img})\rangle\) to the CSP. Upon reception, the CSP checks the freshness of the message by looking at \(r_{1}\) and verifies the \(\mathsf{HMAC}\) using the shared key \(\mathsf{K}\). If any of these verifications fail, the CSP outputs \(\bot\) and aborts the protocol. Otherwise, the CSP proceeds with the execution of the ML model described in Section V. In particular, the CSP runs \(\mathsf{HE.Eval}\) and finally outputs an encrypted vector \(c_{eval}\). The encrypted vector is then sent back to \(u\) via \(m_{2}=\langle r_{2},c_{eval},\mathsf{HMAC}(\mathsf{K},r_{2}||c_{eval}||c_{img})\rangle\). Upon receiving \(m_{2}\), \(u\) verifies both the freshness of the message and the \(\mathsf{HMAC}\). If the verification fails, \(u\) outputs \(\bot\) and aborts the protocol. Otherwise, \(u\) decrypts \(c_{eval}\) by running \(\mathsf{HE.Dec}(\mathsf{sk},c_{eval})\to v\). Having the plaintext vector at her disposal, \(u\) can now classify her image in accordance with the probabilities included in the vector. Our construction is illustrated in Figure 6.
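For illustration, the message construction and verification can be sketched as follows, using Python's standard hmac module. The object `he` stands for any implementation of the HE interface of Section III-B, and ciphertexts are assumed to be serialized to bytes before being concatenated; names and message layout follow the protocol above.

```
import hmac, hashlib, os

def user_request(K, pk, img, he):
    # m1 = <r1, c_img, HMAC(K, r1 || c_img)>
    r1 = os.urandom(16)                                       # unpredictable random number
    c_img = he.encrypt(pk, img)
    tag = hmac.new(K, r1 + c_img, hashlib.sha256).digest()
    return r1, c_img, tag

def csp_reply(K, message, seen_nonces, pk, model, he):
    r1, c_img, tag = message
    expected = hmac.new(K, r1 + c_img, hashlib.sha256).digest()
    if r1 in seen_nonces or not hmac.compare_digest(tag, expected):
        return None                                           # output "bottom" and abort
    seen_nonces.add(r1)
    c_eval = he.evaluate(pk, model, c_img, None)              # encrypted inference (Section V)
    r2 = os.urandom(16)
    tag2 = hmac.new(K, r2 + c_eval + c_img, hashlib.sha256).digest()
    return r2, c_eval, tag2                                   # m2 = <r2, c_eval, HMAC(K, r2 || c_eval || c_img)>
```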
### _Security Analysis_
We prove the security of our protocol in the presence of a malicious adversary \(\mathcal{ADV}\). We start by defining the threat model:
_Threat Model:_ Our threat model is similar to the one described in [47], which is based on the Dolev-Yao adversarial model [48]. Moreover, we extend the above threat model by defining a set of attacks available to \(\mathcal{ADV}\).
**Attack 1** (Image Substitution Attack (ISA)).: _Let \(\mathcal{ADV}\) be an adversary that overhears the communication between the user and the CSP. \(\mathcal{ADV}\) successfully launches an ISA if she manages to replace the encrypted image sent from the user to the CSP, in a way that is indistinguishable for the CSP._
**Attack 2** (Vector Substitution Attack (VSA)).: _Let \(\mathcal{ADV}\) be an adversary that overhears the communication between the user and the CSP. \(\mathcal{ADV}\) successfully launches a VSA, if she manages to replace the encrypted vector sent from the CSP to the user, in a way that is indistinguishable for the user._
We now proceed with proving that our protocol is secure against the defined threat model.
**Proposition 1** (Image Substitution Attack Soundness).: _Let \(\mathcal{ADV}\) be a malicious adversary. Then \(\mathcal{ADV}\) cannot successfully launch an ISA._
Proof.: For \(\mathcal{ADV}\) to successfully launch an ISA, she needs to tamper with \(m_{1}=\langle r_{1},c_{img},\mathsf{HMAC}(\mathsf{K},r_{1}||c_{img})\rangle\). To do so, \(\mathcal{ADV}\) has two options:
1. Forge a new \(m_{1}\) message.
2. Replay an old \(m_{1}\) message.
We will show that in both cases, \(\mathcal{ADV}\) can successfully launch her attack with negligible probability.
* Since we assume that the pk of the HE scheme is publicly known, \(\mathcal{ADV}\) can generate a valid ciphertext \(c^{\prime}_{img}\) that is indistinguishable from the original \(c_{img}\). As a next step, \(\mathcal{ADV}\) replaces the original \(c_{img}\) with the newly generated \(c^{\prime}_{img}\) and forwards \(m^{\prime}_{1}=\langle r_{1},c^{\prime}_{img},\mathsf{HMAC}(\mathsf{K},r_{1}||c_{img})\rangle\) to the CSP. Upon reception, the CSP will try to verify the HMAC. However, as \(c^{\prime}_{img}\neq c_{img}\), the verification will fail and the CSP will abort the protocol. Hence, \(\mathcal{ADV}\) also needs to forge a valid HMAC. However, as \(\mathcal{ADV}\) does not possess the shared key \(\mathsf{K}\), this can only happen with negligible probability and thus the attack fails.
* The only other alternative for \(\mathcal{ADV}\), is to replay an older message. To do so, \(\mathcal{ADV}\) replaces the \(m_{1}\) message sent from the user to the CSP with an older \(m^{\prime}_{1}\) message from a previous session. Upon receiving \(m^{\prime}_{1}\), the CSP will verify the validity of the HMAC but it will fail to verify the freshness of the message. An alternative for \(\mathcal{ADV}\), would be to generate a fresh random number and to replace the old one. However, since the random number is also included in the HMAC, \(\mathcal{ADV}\) would also need to forge a valid HMAC. Given the fact that \(\mathcal{ADV}\) does not possess the shared key \(\mathsf{K}\), this can only happen with negligible probability and thus, the attack fails.
**Proposition 2** (Vector Substitution Attack Soundness).: _Let \(\mathcal{ADV}\) be a malicious adversary. Then \(\mathcal{ADV}\) cannot successfully launch a VSA._
Proof.: The proof is omitted as it is similar to the previous one. More specifically, the security properties of the HMAC, and the fact that \(\mathcal{ADV}\) does not know the shared \(\mathsf{K}\) are enough to ensure that \(\mathcal{ADV}\) cannot successfully launch a VSA.
**Open Science and Reproducible Research:** To support open science and reproducible research, and to provide researchers with the opportunity to use, test, and hopefully extend our work, our source code has been made available online3.
Footnote 3: [https://gitlab.com/nisc/blind_faith](https://gitlab.com/nisc/blind_faith)
## VIII Conclusion
Undoubtedly, ML models and their underlying applications are driving the big-data economy. In practice, however, many of the systems built on these models can introduce biases or rely on proxies such as gender or race, leading to unfair outcomes. With this work, we aim to contribute to a more equitable and unbiased approach to decision-making. Learning in the Dark allows us to apply ML models directly to encrypted data, so the underlying information remains secure. We accomplished this by approximating the behavior of activation functions, which are core components of ML models. Our experiments and evaluations showed promising results, demonstrating that Learning in the Dark can effectively analyze encrypted data while maintaining high accuracy. We believe this research can inspire further advancements in privacy-preserving machine learning and contribute to systems that promote fairness, privacy, and transparency in our increasingly data-driven world.
|
2307.00060 | Connection Between SDSS Galaxies and ELUCID Subhaloes in the Eye of
Machine Learning | We explore the feasibility of learning the connection between SDSS galaxies
and ELUCID subhaloes with random forest (RF). ELUCID is a constrained $N$-body
simulation constructed using the matter density field of SDSS. Based on an
SDSS-ELUCID matched catalogue, we build RF models that predict $M_r$ magnitude,
colour, stellar mass $M_*$, and specific star formation rate (sSFR) with
several subhalo properties. While the RF can predict $M_r$ and $M_*$ with
reasonable accuracy, the prediction accuracy of colour and sSFR is low, which
could be due to the mismatch between galaxies and subhaloes. To test this, we
shuffle the galaxies in subhaloes of narrow mass bins in the local
neighbourhood using galaxies of a semi-analytic model (SAM) and the TNG
hydrodynamic simulation. We find that the shuffling only slightly reduces the
colour prediction accuracy in SAM and TNG, which is still considerably higher
than that of the SDSS. This suggests that the true connection between SDSS
colour and subhalo properties could be weaker than that in the SAM and TNG
without the mismatch effect. We also measure the Pearson correlation
coefficient between galaxy properties and the subhalo properties in SDSS, SAM,
and TNG. Similar to the RF results, we find that the colour-subhalo correlation
in SDSS is lower than both the SAM and TNG. We also show that the
galaxy-subhalo correlations depend on subhalo mass in the galaxy formation
models. Advanced surveys with more fainter galaxies will provide new insights
into the galaxy-subhalo relation in the real Universe. | Xiaoju Xu, Xiaohu Yang, Haojie Xu, Youcai Zhang | 2023-06-30T18:00:22Z | http://arxiv.org/abs/2307.00060v1 | # Connection Between SDSS Galaxies and ELUCID Subhaloes in the Eye of Machine Learning
###### Abstract
We explore the feasibility of learning the connection between SDSS galaxies and ELUCID subhaloes with random forest (RF). ELUCID is a constrained \(N\)-body simulation constructed using the matter density field of SDSS. Based on an SDSS-ELUCID matched catalogue, we build RF models that predict \(M_{r}\) magnitude, colour, stellar mass \(M_{*}\), and specific star formation rate (sSFR) with several subhalo properties. While the RF can predict \(M_{r}\) and \(M_{*}\) with reasonable accuracy, the prediction accuracy of colour and sSFR is low, which could be due to the mismatch between galaxies and subhaloes. To test this, we shuffle the galaxies in subhaloes of narrow mass bins in the local neighbourhood using galaxies of a semi-analytic model (SAM) and the TNG hydrodynamic simulation. We find that the shuffling only slightly reduces the colour prediction accuracy in SAM and TNG, which is still considerably higher than that of the SDSS. This suggests that the true connection between SDSS colour and subhalo properties could be weaker than that in the SAM and TNG without the mismatch effect. We also measure the Pearson correlation coefficient between galaxy properties and the subhalo properties in SDSS, SAM, and TNG. Similar to the RF results, we find that the colour-subhalo correlation in SDSS is lower than both the SAM and TNG. We also show that the galaxy-subhalo correlations depend on subhalo mass in the galaxy formation models. Advanced surveys with more fainter galaxies will provide new insights into the galaxy-subhalo relation in the real Universe.
keywords: dark matter -- galaxies: haloes -- methods: statistical
## 1 Introduction
Understanding the formation and evolution of galaxies is a crucial aspect of modern cosmology. In recent years, large-volume galaxy surveys such as Sloan Digital Sky Survey (SDSS, York et al., 2000), SDSS-III (Eisenstein et al., 2011) and SDSS-IV (Dawson et al., 2016), and the Dark Energy Spectroscopic Instrument (DESI, DESI Collaboration et al., 2016) provide high-precision measurements of galaxy observables, leading to significant progress in this field. Since galaxies are believed to form within dark matter haloes, studying the connection between them can provide valuable insights into galaxy formation and evolution. However, unlike galaxy properties such as magnitude and colour which can be observed directly, the inner structure and formation histories of dark matter haloes are challenging to measure through observations.
In contrast, the formation history of dark matter halo and subhalo can be easily traced through \(N\)-body simulations, which evolve dark matter particles under gravity (Springel et al., 2005; Prada et al., 2012; Wang et al., 2020). To simulate galaxies, semi-analytic models (SAM) of galaxy formation processes can be implemented on the subhalo merger tree extracted from \(N\)-body simulations (Guo et al., 2011, 2013; Croton et al., 2016; Cora et al., 2018). Furthermore, hydrodynamic simulations are developed to produce galaxies in dark matter haloes by adding baryonic particles beyond dark matter particles (Vogelsberger et al., 2014; Schaye et al., 2015; Nelson et al., 2015, 2019). Both SAM and hydrodynamic simulations can be tuned to reproduce statistical galaxy observables such as abundance and clustering. However, since the galaxy formation processes are not yet fully understood, simulated galaxies may still deviate from those in the real Universe. Additionally, it is difficult to compare simulated galaxies individually with the real ones, as the one-to-one correspondence between them is not guaranteed.
One approach to address these issues is to construct constrained simulations based on the observed distribution of galaxies in the local universe. Using the group catalogue built from SDSS galaxies (Yang et al., 2007, 2012), the matter density field at low redshift can be constructed and treated as the final output of the constrained simulations (Wang et al., 2009, 2012). To infer the initial condition of the final density field, Wang et al. (2014) proposed a method that utilises the Hamiltonian Markov Chain Monte Carlo algorithm to sample the posterior distribution of the initial condition, together with a Particle-Mesh model that evolves the initial condition to the final state. With the constrained initial condition, Wang et al. (2016) carry out the ELUCID \(N\)-body simulation, which accurately reproduces the observed large-scale structures in SDSS Data Release 7
(DR7, Abazajian et al., 2009). Based on this similarity, Yang et al. (2018) implements a neighbourhood abundance matching method that matches the observed galaxies in DR7 to the subhaloes in the ELUCID simulation.
The one-to-one matching between observed galaxies and simulated subhaloes provides a novel path for investigating the galaxy-halo relation. It is shown that this approach can recover the massive haloes to a large extent (Tweed et al., 2017), and the haloes linked to the bright galaxies are therefore likely to represent the actual haloes in the Universe. This provides an opportunity to compare galaxies in observation with those in the SAM implemented on the ELUCID simulation and the upcoming ELUCID hydrodynamic simulation on an individual level. Such a comparison is helpful for understanding the differences between galaxy formation models and the actual galaxy formation processes in the Universe. In addition, studying the galaxy-halo relation of the SDSS-ELUCID matching pairs statistically also provides insights into galaxy formation and evolution in the real Universe. In this work, we aim to capture this relation with machine learning and predict galaxy properties based on subhalo properties.
Machine learning models are widely used in cosmological studies in the literature due to the ability to efficiently learn non-linear multivariate dependencies between input and output variables. Efforts have been made on predicting halo occupations or galaxy properties with dark matter halo or subhalo properties based on SAM or hydrodynamic simulations (Kamdar et al., 2016, 2016; Agarwal et al., 2018; Lovell et al., 2022; Xu et al., 2021, 2022). Once trained, these machine learning models can be applied to large-volume \(N\)-body simulations to create mock galaxy catalogues that reproduce the galaxy-halo connection in corresponding galaxy formation models. In this work, we focus on predicting galaxy properties from subhalo properties based on the SDSS-ELUCID matching catalogue in Yang et al. (2018), and we evaluate the feasibility of using machine learning to produce realistic mock catalogues with large-volume \(N\)-body simulations. However, the accuracy of the dark matter halo reconstruction in ELUCID, particularly for low-mass haloes, is not guaranteed, which may affect the robustness of our analysis. Therefore, we perform tests to estimate the impact of uncertainties in subhalo properties (or in other words, the mismatching between subhaloes and galaxies) on the prediction of galaxy properties. This study is helpful for revealing discrepancies between observed galaxies and modeled galaxies and shedding light on galaxy-subhalo relation in the real Universe.
The structure of this paper is as follows. We provide an overview of the ELUCID \(N\)-body simulation, the SDSS-ELUCID matching catalogue, galaxy formation models, and the machine learning method we implemented in Section 2. The main results of predicting SDSS galaxy properties are shown in Section 3. We then investigate the possible effect of mismatching between galaxies and subhaloes with a SAM implemented on the ELUCID simulation and a hydrodynamic simulation in Section 4. Finally, we summarise and discuss our results in Section 5.
## 2 Data and Methods
### ELUCID simulation and SDSS-ELUCID matching catalogue
In this study, we utilise the SDSS-ELUCID matching catalogue from Yang et al. (2018) (Match2 method), which links observed galaxies to subhaloes in the ELUCID \(N\)-body simulation. ELUCID is a constrained simulation designed to reproduce the large-scale distributions of galaxies observed in the Northern Galactic Cap (NGC) region of SDSS DR7 (Abazajian et al., 2009), in the range of \(99^{\circ}<\mathrm{R.A.}<283^{\circ}\), \(-7^{\circ}<\mathrm{dec.}<75^{\circ}\) and \(0.01<z<0.12\). To achieve this, the matter density field reconstructed from the Yang et al. (2007) group catalogue, which is built based on the New York University Value-Added Galaxy Catalogue (NYU-VAGC, Blanton et al., 2005), is used as the final condition for inferring the corresponding initial condition. For this purpose, the Hamiltonian Markov Chain Monte Carlo method (HMC, Duane et al., 1987) and PM dynamics (White et al., 1983; Jing & Suto, 2002) are used. The former samples the posterior distribution of linear initial conditions with a specific final condition, and the latter evolves the initial condition to the final density field by efficiently evaluating gravitational forces at each time step. With the inferred initial condition, the ELUCID simulation evolves \(3072^{3}\) dark matter particles of mass \(3.0875\times 10^{8}\,h^{-1}\,\mathrm{M}_{\odot}\) in a box with a comoving length of 500 \(h^{-1}\,\mathrm{Mpc}\) on a side using an updated version of the GADGET-2 code (Springel et al., 2005). The simulation adopts the _WMAP5_ cosmology with cosmological parameters \(\Omega_{\mathrm{m}}=0.258\), \(\Omega_{\mathrm{b}}=0.044\), \(h=0.72\), \(n_{\mathrm{s}}=0.963\), and \(\sigma_{8}=0.796\) (Dunkley et al., 2009).
In each snapshot of the simulation, dark matter haloes and subhaloes are identified using the Friend-of-Friend (FOF) algorithm (Davis et al., 1985)) and SUBFIND method (Springel et al., 2001), respectively. Subhalo merger tree is then constructed by linking subhaloes from SUBFIND in each snapshot. Yang et al. (2018) match the SDSS DR7 galaxies in the above survey area to the ELUCID subhaloes at \(z\)=0 with a novel neighbourhood abundance matching technique, which we refer to as the SDSS-ELUCID matching catalogue in the following. This approach is similar to the traditional subhalo abundance matching (SHAM, Conroy et al., 2006; Behroozi et al., 2010; Moster et al., 2010; Reddick et al., 2013; Guo et al., 2016) that links the galaxies and subhaloes through their luminosity (or stellar mass) and subhalo mass (or circular velocity). In addition to this, it takes into account the separation between galaxies and subhaloes and prefers to match the galaxy to the subhalo of appropriate mass in the neighbourhood. As a result, 296,488 galaxies out of 396,069 are assigned to central subhaloes as central galaxies, and 99,581 are assigned to satellite subhaloes as satellite galaxies. We refer the reader to Yang et al. (2018) for more details regarding the neighbourhood abundance matching method and the SDSS-ELUCID matching catalogue. The ELUCID simulation and SDSS-ELUCID matched catalogue are available on the ELUCID website 1.
Footnote 1: [https://gax.ajtu.edu.cn/data/ELUCID.html](https://gax.ajtu.edu.cn/data/ELUCID.html)
We investigate the connection between galaxy properties and subhalo properties in the SDSS-ELUCID matching catalogue with machine learning. The galaxy properties we mainly focused on are r-band absolute magnitude \(M_{r}\) and the \(g-r\) colour, with the magnitudes being K-corrected with evolution corrections to \(z=0.1\) according to Blanton et al. (2003) and Blanton & Roweis (2007). We also consider derived physical galaxy properties such as stellar mass and specific star formation rate (sSFR). The subhalo properties we focused on are:
1. \(M_{\mathrm{sub}}\), the subhalo mass, in units of \(\,h^{-1}\,\mathrm{M}_{\odot}\);
2. \(M_{\mathrm{peak}}\), the peak value of \(M_{\mathrm{sub}}\) over the formation history of the subhalo;
3. \(M_{\mathrm{acc}}\), the value of \(M_{\mathrm{sub}}\) when the subhalo accretes onto its host (\(M_{\mathrm{acc}}=0\) for central subhalo);
4. \(r_{\mathrm{half}}\), the half mass radius of the subhalo;
5. \(V_{\mathrm{max}}\), the maximum circular velocity of the subhalo;
6. \(V_{\mathrm{peak}}\), the peak value of \(V_{\mathrm{max}}\) over the formation history of the subhalo;
7. \(V_{\rm max,acc}\), the value of \(V_{\rm max}\) when the subhalo accretes onto its host;
8. \(V_{\rm disp}\), the velocity dispersion of the subhalo;
9. \(z_{\rm speak}\), the redshift when \(V_{\rm max}(z_{\rm speak})=V_{\rm peak}\);
10. \(z_{\rm mepeak}\), the redshift when \(M_{\rm sub}(z_{\rm mepeak})=M_{\rm peak}\);
11. \(z_{\rm acc}\), the redshift when the subhalo accretes onto its host halo;
12. \(z_{\rm 0.1/0.3/0.5/0.7/0.9}\), the formation redshift of subhalo, defined by the redshift when the subhalo reaches 0.1/0.3/0.5/0.7/0.9 of its peak mass for the first time;
13. \(N_{\rm merge}\), the total number of major mergers (defined by a mass ratio of 1/3 between the progenitors) on the main branch of the subhalo merger tree;
14. \(z_{\rm first}\), the redshift of the first major merger of the subhalo;
15. \(z_{\rm last}\), the redshift of the last major merger of the subhalo;
16. \(t_{\rm fast}\), the total time during which the subhalo is a satellite around the central subhalo, in the unit of Gyr;
17. \(\lambda\), the spin parameter of the subhalo.
The environmental properties included are:
1. \(\delta_{2.1}\), the matter density smoothed by a Gaussian filter with a smoothing scale of \(2.1\,h^{-1}\,{\rm Mpc}\);
2. \(T_{\rm web}\), cosmic web type, classified as one of knot, filament, sheet, and void according to the eigenvalues of the Hessian matrix (Zhang et al., 2009; Paranjape et al., 2018) calculated with \(\delta_{2.1}\).
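For reference, the \(T_{\rm web}\) classification amounts to counting, for each point, how many eigenvalues of the Hessian of the smoothed density field exceed a threshold. A minimal sketch is given below; a zero threshold is one common (but not unique) choice, and the exact convention of Zhang et al. (2009) may differ.

```
import numpy as np

def classify_cosmic_web(hessian_eigenvalues, threshold=0.0):
    # Number of eigenvalues above the threshold: 3 -> knot, 2 -> filament, 1 -> sheet, 0 -> void.
    labels = np.array(["void", "sheet", "filament", "knot"])
    n_collapsing = (np.asarray(hessian_eigenvalues) > threshold).sum(axis=-1)
    return labels[n_collapsing]
```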
### SAM and hydrodynamic simulation
To examine the impact of the mismatch between SDSS galaxies and ELUCID subhaloes on our results, we make use of the Luo et al. (2016) SAM implemented on the subhalo merger tree of ELUCID. As an L-Galaxies model (Guo et al., 2011, 2013; Fu et al., 2013), it accounts for various galaxy formation processes such as gas cooling, star formation, gas stripping, and feedback from AGN and supernova. In comparison with other SAMs, it introduces an analytic approach to trace the evolution of low-mass subhaloes that fall below the mass resolution of the simulation, improving the modeling of satellite quenching and galaxy clustering.
To further assess the impact of the mismatch, we also perform tests with the TNG-300 hydrodynamic simulation (Marinacci et al., 2018; Naiman et al., 2018; Nelson et al., 2018, 2019; Pillepich et al., 2018; Springel et al., 2018). This simulation evolves \(2500^{3}\) dark matter particles of mass \(5.9\times 10^{7}\,h^{-1}\,{\rm M}_{\odot}\) and the same number of baryonic particles of mass \(1.1\times 10^{7}\,h^{-1}\,{\rm M}_{\odot}\) in a cubic box with a length of \(205\,h^{-1}\,{\rm Mpc}\) on a side using the AREPO moving-mesh code (Springel, 2010). The Planck cosmology (Planck Collaboration et al., 2016) is adopted, with cosmological parameters \(\Omega_{\rm m}=0.31\), \(\Omega_{\rm b}=0.0486\), \(h=0.677\), \(n_{\rm s}=0.97\), and \(\sigma_{8}=0.816\). The TNG-300 simulation is an updated version of the original Illustris simulation (Vogelsberger et al., 2014; Nelson et al., 2015), with improvements to the AGN feedback, galactic winds, and magnetic fields. Compared to the original Illustris, the galaxy colour distribution in TNG is found to be more consistent with observations.
We use the subhaloes from TNG-300-dark, which is a dark-matter-only (DMO) counterpart of the full-physics (FP) TNG-300 simulation. We adopt subhalo properties similar to those in Section 2.1, calculated from the SubLink merger tree of TNG-300-dark, including \(M_{\rm sub}\), \(M_{\rm peak}\), \(M_{\rm acc}\), \(V_{\rm max}\), \(V_{\rm peak}\), \(V_{\rm disp}\), \(z_{\rm speak}\), \(z_{\rm mepeak}\), \(z_{\rm acc}\), \(z_{\rm 0.1}\), \(z_{\rm 0.3}\), \(z_{\rm 0.5}\), \(z_{\rm 0.7}\), \(z_{\rm 0.9}\), \(N_{\rm merge}\), \(z_{\rm first}\), \(z_{\rm last}\), \(t_{\rm fast}\), \(\lambda\). To assign galaxies to DMO subhaloes, we apply the matching catalogue between the subhaloes of the DMO and FP runs in Rodriguez-Gomez et al. (2015). In the case that multiple galaxies are matched to one subhalo, we assign the most massive galaxy to the subhalo. To reduce matching noise, we exclude outliers with \(|\log M_{\rm sub,DMO}-\log M_{\rm sub,FP}|>1\) for the galaxies. The TNG snapshot data, group catalogue, and SUBFIND catalogue are all available on the TNG website 2.
Footnote 2: [https://www.tng-project.org/](https://www.tng-project.org/)
### Random forest
We focus on reproducing galaxy properties based on subhalo properties with machine learning techniques to better understand the connection between the two. To accomplish this, we utilise the random forest (RF) model (Breiman, 2001), which is highly efficient in capturing complex multi-variate dependencies between input and output variables. The RF model is widely used in galaxy formation studies and shows promising results in reproducing galaxy properties based on halo or subhalo properties (Kamdar et al., 2016; Agarwal et al., 2018; Xu et al., 2021, 2022).
RF is an ensemble of decision trees (Breiman et al., 1984), which are constructed by recursively splitting the training data into hierarchical nodes. At each node, the training data, consisting of feature variables and the target variable, is split into lower-level nodes in a way that minimises a cost function (e.g. the Gini impurity for a classification tree and the mean squared error for a regression tree), until the specified maximum depth of the tree is reached or the minimum number of samples in a node is reached. The predicted output is then calculated from the bottom level of nodes, also known as leaves. For a classification tree, the output is the majority class of the data in the leaf, and for a regression tree, the output is the mean of the target variable of the data in the leaf. Once trained, the RF can be evaluated on a test sample, and the prediction performance can be quantified by scores such as \(F_{1}\) for classification and \(R^{2}\) for regression. To predict galaxy properties, we employ the regression RF in the sklearn package of Python and the \(R^{2}\) score. For all the RF analyses in this work, we use 60% of the original data as the training sample and the rest as the test sample.
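In practice, the setup described above reduces to a few lines of sklearn. In the sketch below, `X` holds one row of subhalo properties (Section 2.1) per subhalo and `y` the galaxy property to predict (e.g. \(M_{r}\)); the hyper-parameter values are illustrative rather than the exact ones adopted in this work.

```
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# 60/40 train/test split, as in the text; X and y are assumed to be prepared beforehand.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.6, random_state=0)
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, n_jobs=-1)
rf.fit(X_train, y_train)
print("R2 (train):", r2_score(y_train, rf.predict(X_train)))
print("R2 (test): ", r2_score(y_test, rf.predict(X_test)))
```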
## 3 Predicting SDSS galaxy properties
We construct RF models for predicting galaxy \(r\)-band absolute magnitude \(M_{r}\) and \(g-r\) colour separately. These models are trained using galaxies selected from the SDSS-ELUCID matching catalogue, and the subhalo properties listed in Section 2.1 are used as input variables. With the predicted \(M_{r}\), we compare the luminosity function and galaxy-matter cross-correlation in different \(M_{r}\) bins to those in observation. We then compare the predicted colour distribution to that of the SDSS.
### Subhalo mass completed sample
For training the RF, we first select an appropriate sample from the SDSS-ELUCID matching catalogue. Since only the galaxies brighter than a specific magnitude threshold can be observed at fixed redshift, the low-mass subhaloes with faint galaxies are likely underrepresented in the SDSS-ELUCID matching catalogue, which is also known as the Malmquist bias. With this bias, the number density of subhaloes of fixed mass decreases beyond a certain redshift, which we refer to as the limited redshift \(z_{\rm lim}\). In other words, the subhalo population of this mass is incomplete above \(z_{\rm lim}\). For a specific low subhalo mass, galaxies residing in early-formed subhaloes with luminosities higher than average are more likely to be observed, while
those with luminosities lower than average may fall below the detection limit of the survey. This leads to a biased luminosity-subhalo mass relation for the low-mass subhaloes in the SDSS-ELUCID catalogue. If the RF captures this biased relation, the predicted magnitude would be brighter than expected at fixed subhalo mass. It will also introduce biases in the relationships between other galaxy properties and subhaloes since the early-formed subhaloes are more represented in observation. To avoid this kind of bias, it is necessary to select the subhaloes with redshift smaller than their \(z_{\rm lim}\).
In Figure 1, we compare the number densities of SDSS-ELUCID matched subhaloes (solid) to all the ELUCID subhaloes (dashed) in the SDSS region in log\(M_{\rm sub}\) bins of 0.2 dex as a function of redshift. Different colours represent log\(M_{\rm sub}\) bins in the range of [11, 12]. The total number densities of SDSS region subhaloes are approximately constant across the redshift range, except for a bump near z=0.08, which may be caused by the well-known Sloan "great wall" structure. In contrast, the number densities of SDSS-matched subhaloes deviate from those of the SDSS region subhaloes and decline beyond specific redshifts, which increase with subhalo mass. This indicates again that the subhalo sample matched to SDSS galaxies may be incomplete due to the Malmquist bias. The impact of Malmquist bias vanishes for subhaloes of log\(M_{\rm sub}>12\). The bottom panel shows the ratio between the two number densities \(n_{\rm matched}/n_{\rm total}\). For each subhalo mass bin, we define the limited redshift \(z_{\rm lim}\) at which the ratio drops to 0.9 (shown by the dashed line). By interpolating between the mass bins, a \(z_{\rm lim}\) can be calculated for each galaxy (subhalo) in the SDSS-ELUCID matched sample according to the subhalo mass. We then select the galaxies with redshift below their \(z_{\rm lim}\). As a result, 201,980 galaxies are selected from the original 396,069 galaxies for our RF analysis. We refer to this sample as the \(z_{\rm lim}\)-selected sample. We also perform a test calculating \(z_{\rm lim}\) with log\(M_{\rm peak}\) instead of log\(M_{\rm sub}\), and the result is similar.
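A schematic implementation of this selection is sketched below; the 0.9 completeness threshold and the 0.2 dex mass bins follow the text, while the variable names and binning details are ours. Since both samples occupy the same volume, raw counts can be used in place of number densities.

```
import numpy as np

def limited_redshift(z_matched, logM_matched, z_all, logM_all, z_edges, mass_edges,
                     threshold=0.9):
    # For each logM_sub bin, find the redshift at which n_matched / n_total first drops
    # below the completeness threshold; default to the maximum redshift if it never does.
    z_lim = np.full(len(mass_edges) - 1, z_edges[-1])
    for i in range(len(mass_edges) - 1):
        in_bin_m = (logM_matched >= mass_edges[i]) & (logM_matched < mass_edges[i + 1])
        in_bin_a = (logM_all >= mass_edges[i]) & (logM_all < mass_edges[i + 1])
        n_m, _ = np.histogram(z_matched[in_bin_m], bins=z_edges)
        n_a, _ = np.histogram(z_all[in_bin_a], bins=z_edges)
        ratio = n_m / np.maximum(n_a, 1)
        incomplete = np.where(ratio < threshold)[0]
        if incomplete.size:
            z_lim[i] = z_edges[incomplete[0]]     # first redshift bin that falls below 0.9
    return z_lim
```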
### r-band magnitude
The results of the \(M_{r}\) prediction are shown in Figure 2. In the top-left panel, we compare the luminosity function (LF) of the SDSS \(M_{r}\) (blue solid) of the \(z_{\rm lim}\)-selected sample with the corresponding RF predictions (blue dashed for training and blue dotted for test set). To measure the LF, we adopt the \(V_{\rm max}\) method, which determines the maximum volume in which the galaxy can be observed above the flux limit of the survey (note that this is different from the subhalo maximum circular velocity \(V_{\rm max}\)). For each galaxy, a weight of inverse \(V_{\rm max}\) is assigned for number counting. The RF predictions demonstrate good agreement with the SDSS measurement within the magnitude range of \(-22<M_{r}<-18\). However, discrepancies arise at both the bright end (\(M_{r}<-22\)) and the faint end (\(M_{r}>-18\)), where the prediction is lower than the SDSS. This is not surprising, as the machine learning methods are unable to reproduce 100% variance of the input data and tend to underpredict extreme values (Agarwal et al., 2018). The RF predictions on the training and test sample are in excellent agreement, indicating that the construction of the RF is appropriate and the prediction result is reliable.
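The \(V_{\rm max}\)-weighted estimate described above amounts to the following minimal sketch; the survey-specific computation of \(V_{\rm max}\) itself (from the flux limits) is not shown.

```
import numpy as np

def vmax_luminosity_function(M_r, V_max, mag_edges):
    # 1/V_max estimator: each galaxy contributes 1/V_max (the inverse of the maximum
    # comoving volume in which it would still pass the survey flux limit) to its bin.
    phi, _ = np.histogram(M_r, bins=mag_edges, weights=1.0 / V_max)
    return phi / np.diff(mag_edges)               # number density per unit magnitude
```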
We then apply this trained RF to all subhaloes in the SDSS-ELUCID sample and show the predicted \(M_{r}\) LF by the black solid. For comparison, the direct measurement of the SDSS LF of the same sample is shown by the red solid. Similar to the result of the \(z_{\rm lim}\)-selected sample, the prediction is consistent with the direct measurement within the range of \(-22<M_{r}<-18\). As the SDSS region only covers a fraction of the ELUCID volume, we also apply the trained RF model to all the subhaloes in the ELUCID simulation and show the predicted LF with the black dotted curve. It is again very similar to the SDSS measurement, with the exception of the bright and faint ends. A bump exhibits at \(M_{r}>-18\), which is likely attributed to the low abundance of faint galaxies hosted by low-mass subhaloes in our training sample. In addition to this, the cosmic variances may also contribute to the discrepancy at the faint end. As highlighted in Chen et al. (2019), the faint end slope of the LF was significantly underestimated due to the cosmic variances in the SDSS observation.
The top-right panel presents a direct comparison between the SDSS \(M_{r}\) (x-axis) and the predicted \(M_{r}\) (y-axis) for all galaxies in the \(z_{\rm lim}\)-selected sample. The blue contours show the 20%, 40%, 60%, 80%, 95% of the data distribution, and the black solid (shadow) shows the median (16%-84%) of predicted \(M_{r}\) at fixed SDSS \(M_{r}\). The black dashed line along the diagonal indicates equality between the prediction and SDSS values. Overall, the prediction is consistent with SDSS along the equality except for the faint and bright end. For galaxies fainter than \(M_{r}\sim-20\), the RF tends to predict brighter magnitudes, while the trend is reversed for galaxies brighter than \(M_{r}\sim-20\). Scatters exist in the prediction at fixed SDSS \(M_{r}\), with smaller scatter for brighter galaxies compared to fainter ones. To quantify the performance of the prediction, we provide the \(R^{2}\) score which describes the fraction of the variance in the target variable (e.g. \(M_{r}\) in this case) captured by the prediction at the bottom right of the panel. As \(R^{2}=1\) represents a perfect prediction that recovers the full variance in the target variable, an \(R^{2}\) of 0.8 indicates that
Figure 1: Top: subhalo number density as a function of redshift at fixed log\(M_{\rm sub}\) for SDSS-ELUCID matched subhaloes (solid) and ELUCID subhaloes in the SDSS region (dotted). A few selected log\(M_{\rm sub}\) bins are shown with different colours. Bottom: the ratio between the number density of SDSS-ELUCID matched subhaloes and ELUCID subhaloes in the SDSS region. The complete threshold of 0.9 is indicated by the black dashed line.
our prediction captures a significant fraction of the variance in SDSS \(M_{r}\).
The bottom-left and bottom-right panels show the same comparison for the training sample and test sample, respectively. The \(R^{2}\) of the training sample is slightly higher than that of the full sample, and the \(R^{2}\) of the test sample is slightly lower. This is reasonable since the RF is data-driven, and the model is trained to fit the training sample with a priority. We also perform the same analysis to predict the stellar mass, and the result (shown in Appendix A) is very similar to that of the \(M_{r}\) prediction.
We then proceed to compare the \(M_{r}\)-dependent galaxy clustering in SDSS and the predictions. To measure the SDSS clustering, we construct four volume-limited \(M_{r}\) bin samples in which the sample completeness is ensured. In other words, the apparent magnitudes of all galaxies in each bin fall in the detection limits of the survey from \(m_{r}\)=14.5 to \(m_{r}\)=17.72 (Zehavi et al., 2005). To obtain a higher signal-to-noise signal, we calculate the two-point galaxy-matter cross-correlation using the estimator \(\xi_{\rm{gm}}\)=DD/DR-1 in the ELUCID coordinate instead of the galaxy-galaxy auto-correlation, where DD is the number of galaxy-matter pairs, and DR is the number of galaxy-random pairs. The positions of subhaloes serve as the positions of their matched galaxies.
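The estimator can be evaluated with a simple pair-counting sketch like the following, using a KD-tree; in practice the measurement also involves the jackknife error estimate quoted in Figure 3, which is omitted here.

```
import numpy as np
from scipy.spatial import cKDTree

def xi_gm(gal_pos, matter_pos, rand_pos, r_edges, boxsize=None):
    # xi_gm(r) = DD/DR - 1: DD are galaxy-matter pairs and DR galaxy-random pairs,
    # counted in radial shells; DR is rescaled by N_matter / N_random.
    gal_tree = cKDTree(gal_pos, boxsize=boxsize)
    dd = np.diff(gal_tree.count_neighbors(cKDTree(matter_pos, boxsize=boxsize), r_edges))
    dr = np.diff(gal_tree.count_neighbors(cKDTree(rand_pos, boxsize=boxsize), r_edges))
    dr = dr * len(matter_pos) / len(rand_pos)
    return dd / dr - 1.0
```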
The SDSS clustering of each \(M_{r}\) bin is illustrated by the red solid curve in each panel of Figure 3. The black solid curve shows the
Figure 2: \(M_{r}\) prediction trained on the \(z_{\rm{lim}}\)-selected SDSS-ELUCID catalogue. Top-left: luminosity function of the \(z_{\rm{lim}}\)-selected SDSS galaxies (solid blue) and RF prediction separated into training sample (blue dashed) and test sample (blue dotted). The solid black curve shows the predicted \(M_{r}\) applying the trained RF to all subhaloes in the SDSS-ELUCID sample, and the solid red shows the measurement of galaxies in the same sample. The dotted black indicates the RF prediction on all ELUCID subhaloes. Top-right: distribution of the comparison between SDSS \(M_{r}\) (\(x\)-axis) and predicted \(M_{r}\) (\(y\)-axis) of the \(z_{\rm{lim}}\)-selected sample, shown by the blue contours (20%, 40%, 60%, 80%, 95% of the sample). The black solid and shadow indicate the median and 16%-84% of the prediction at fixed SDSS \(M_{r}\). Equality is shown by the black dashed line along the diagonal direction. Bottom-left and bottom-right: the same comparison between SDSS and prediction for the training and test samples, respectively.
prediction of SDSS-matched subhaloes, and the black dotted curve indicates the prediction from all subhaloes in ELUCID. In the three bright bins where \(-22<M_{r}<-19\), both the prediction for the SDSS-matched subhaloes and that for all subhaloes are consistent with the SDSS, except on very small scales. It is worth noting that for the clustering of the SDSS-matched prediction, we still utilise the positions of the subhaloes, so any clustering discrepancy is solely due to the prediction of \(M_{r}\).
In the faintest bin where \(-19<M_{r}<-18\), the prediction of the SDSS-matched sample still agrees with SDSS measurement. However, the prediction from all subhaloes in ELUCID exhibits a lower clustering amplitude than SDSS on all scales. This discrepancy can be attributed to the bump of the black dotted curve in Figure 2, which could be a result of the scarcity of low-mass subhaloes in the training sample and therefore the low accuracy of \(M_{r}\) prediction in these subhaloes.
### g-r colour
In addition to the \(M_{r}\), we also train the RF model to predict \(g-r\) colour with the subhalo properties, and the results are shown in Figure 4. The top-left panel displays the distribution of the SDSS colour of the \(z_{\rm lim}\)-selected sample (blue solid) and the RF prediction separated into training (blue dashed) and test sets (blue dotted, overlapping with the blue dashed). We then apply this RF on all subhaloes of the SDSS-ELUCID catalogue and show the prediction by the black solid, and also provide the SDSS colour distribution of the same subhaloes by red solid for comparison. The SDSS colour distribution consists of a narrow red peak around \(g-r=1\) and a smooth blue component in the range of \(0.4<g-r<0.7\), and only the red peak remains after the \(z_{\rm lim}\) selection. However, the red peak of the RF prediction shifts towards lower values of \(g-r\). Additionally, the width of the predicted distribution is narrower than that of the SDSS, indicating that extreme red and blue values are not fully recovered by the RF. Since the RF is trained solely on the red peak galaxies, it is not able to recover the blue component when applied to all subhaloes in the SDSS-ELUCID matched catalogue.
In the top-right panel, we show the comparison between the SDSS colour (x-axis) and the predicted colour (y-axis). The overall trend deviates more noticeably from the diagonal compared to that of the \(M_{r}\) prediction, and the \(R^{2}\) score (\(\sim 0.3\)) is significantly lower. The bottom-left and bottom-right panels display the prediction for the training sample and test sample, respectively. The \(R^{2}\) of the training (test) sample is slightly higher (lower) than that of the full sample but still indicates a similar level of prediction accuracy. Instead of using the inferred assembly properties characterising subhalo formation history such as \(z_{0.1/0.3/0.5/0.7/0.9}\), we also input the original merger tree information to the RF by using the subhalo masses of 21 snapshots from \(z=4.86\) to \(z=0\), and masses are set to zero if the subhaloes are not identified at early redshifts. The result is very similar to that using the inferred assembly properties. We also build RF models for central and satellite galaxies separately, but no significant improvements in the prediction are found. This indicates that predicting the SDSS galaxy colour with subhalo properties is more challenging than predicting \(M_{r}\). We also train the RF to predict the SFR and specific star formation rate (sSFR) of SDSS galaxies based on subhalo properties. The results are shown in Appendix A. The \(R^{2}\) of the sSFR prediction is similar to that of the colour, while the \(R^{2}\) of the SFR is much lower.
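For reference, a minimal sketch of the training and evaluation procedure with scikit-learn is given below; `X` stands for the matrix of subhalo properties and `y` for the galaxy property to be predicted, and the split fraction and forest hyperparameters are illustrative rather than the values adopted in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def fit_rf(X, y, seed=0):
    """Train a random forest mapping subhalo properties X to a galaxy property y and report R^2."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, n_jobs=-1, random_state=seed)
    rf.fit(X_tr, y_tr)
    print("train R^2:", r2_score(y_tr, rf.predict(X_tr)))
    print("test  R^2:", r2_score(y_te, rf.predict(X_te)))
    return rf
```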
The reasons for the low-accuracy colour prediction are complicated. Firstly, the correlation between galaxy colour and subhalo properties may be weak in SDSS, and baryonic processes such as AGN feedback could have more significant effects on galaxy colour. Secondly, noise in the galaxy-subhalo relation of the training sample may arise from possible mismatches between the SDSS galaxies and ELUCID subhaloes. It is difficult to test the first possibility directly since subhalo or halo properties such as formation redshift are difficult to measure in observation. Empirical models can be used to infer the correlation between galaxy colour and halo property. For example, Hearin & Watson (2013) propose an age-matching model that assumes a monotonic relation between galaxy colour and subhalo assembly property to reproduce colour-dependent galaxy clustering. On the other hand, Xu et al. (2018) propose a conditional colour-magnitude distribution model that assumes magnitude and colour depend purely on halo mass and find that it can also reproduce the observed galaxy clustering dependence on colour reasonably well. Of these two models, the former suggests a non-zero relation between colour and halo assembly history, while the latter suggests an independent trend. This indicates that the conclusion can be model-dependent, and further investigations are needed to resolve this debate.
Figure 3: Galaxy-matter cross-correlation of SDSS \(M_{r}\) samples and the predictions. Four \(M_{r}\) bins are shown in four panels. In each panel, the original SDSS cross-correlation is shown by the red solid, and the error bars are measured from 16 jackknife samples. The cross-correlation of predicted \(M_{r}\) of SDSS subhaloes (all subhaloes) is shown by the black dashed (dotted).
In this study, we will focus on investigating the second possible reason mentioned above, which is the mismatch between SDSS galaxies and ELUCID subhaloes. It is important to note that the term "mismatch" here refers not only to errors in matching caused by the neighbourhood abundance matching method, but also to other sources of noise that could introduce biases in the galaxy-subhalo relation. All the RF studies above are based on the assumption that the matching is accurate, or in other words, that the true subhalo properties of a galaxy can be accurately recovered by those of the matched ELUCID subhalo. However, this is not guaranteed, especially for the low-mass subhaloes that are expected to host faint galaxies, as these may not be recovered by the constrained simulation. The reconstruction of the matter density from the group catalogue only uses groups of mass above \(\log\)\(M_{\rm group}=12\) and applies a Gaussian kernel with a smoothing scale of 2 \(\,h^{-1}\,\)Mpc (Wang et al., 2016). As a result, information on haloes and subhaloes below this mass scale and length scale is lost, and the reconstructed (sub)haloes could differ from the actual ones. Matching galaxies to these subhaloes could introduce noise into the galaxy-halo relations relative to the true ones. Therefore, it is necessary to consider the mismatch effect when analysing the galaxy-halo relations based on this galaxy-subhalo matching catalogue. In the following section, we will perform tests to investigate
Figure 4: \(g-r\) colour prediction trained on the \(z_{\rm lim}\)-selected SDSS-ELUCID catalogue. Top-left: SDSS colour distribution of the \(z_{\rm lim}\) sample (blue solid) and the RF predictions of the training (blue dashed) and test (blue dotted) sets from this sample. The application of this RF to all SDSS-ELUCID subhaloes is shown by the black solid, and the corresponding true SDSS colour distribution of these subhaloes is shown by the red solid. Top-right: comparison between SDSS colour (x-axis) and predicted colour (y-axis). Bottom-left/right: comparison between SDSS and prediction in the training/test sample.
the impact of the mismatch effect on our RF results using SAM and hydrodynamic simulation.
## 4 Mismatch effect in prediction
### Mismatch effect using SAM
In this section, we aim to test the potential impact of the mismatch effect between SDSS galaxies and ELUCID subhaloes by creating a similar mismatch in galaxies of a SAM model implemented on ELUCID (Luo et al., 2016). Since we regard the mismatch effect as noise in the galaxy-subhalo relation, we mimic it by randomly shuffling the SAM galaxies among subhaloes within narrow \(M_{\text{peak}}\) bins of 0.2 dex inside 5 \(h^{-1}\) Mpc cubic cells. The constraint of a narrow \(M_{\text{peak}}\) bin maintains a relatively reasonable stellar mass-\(M_{\text{peak}}\) relation, consistent with the principle of neighbourhood abundance matching when assigning SDSS galaxies to ELUCID subhaloes. Shuffling in the neighbourhood of 5 \(h^{-1}\) Mpc cells is in line with the advantage of the constrained simulation that it can recover the subhalo distribution at this scale. For example, Yang et al. (2018) investigate the separation between galaxy and subhalo pairs in the SDSS-ELUCID matched catalogue and find that most of the pairs are separated by less than \(\sim 5\,h^{-1}\) Mpc in both \(r_{p}\) and \(\pi\) directions. The shuffling breaks the original connection between galaxy properties and subhalo properties other than \(M_{\text{peak}}\), thus adding noise to the true galaxy-subhalo relation.
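A minimal sketch of this shuffling step is shown below (our own illustration; the array names are hypothetical, and galaxies are simply permuted among the occupied subhaloes that share a cell and mass bin, which is one of several ways to implement the idea).

```python
import numpy as np

def shuffle_links(sub_pos, log_mpeak, gal_host, cell=5.0, dlogm=0.2, seed=0):
    """Permute galaxy->subhalo links inside groups of fixed 5 Mpc/h cell and 0.2 dex M_peak bin.

    sub_pos   : (N_sub, 3) subhalo positions [Mpc/h]
    log_mpeak : (N_sub,)   log10 M_peak of each subhalo
    gal_host  : (N_gal,)   index of the subhalo hosting each galaxy
    """
    rng = np.random.default_rng(seed)
    keys = np.column_stack([np.floor(sub_pos / cell).astype(int),
                            np.floor(log_mpeak / dlogm).astype(int)])
    _, group = np.unique(keys, axis=0, return_inverse=True)  # group label of every subhalo
    new_host = gal_host.copy()
    host_group = group[gal_host]                             # group of each galaxy's current host
    for g in np.unique(host_group):
        sel = np.where(host_group == g)[0]
        new_host[sel] = rng.permutation(gal_host[sel])       # reshuffle hosts within the group
    return new_host
```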
We subsequently construct RF models to predict galaxy colour using the original SAM galaxy-subhalo pairs and the shuffled pairs, respectively. To obtain a reasonable estimate of the mismatch effect, we use the SAM galaxies hosted by the subhaloes of the \(z_{\text{lim}}\)-selected sample before shuffling. The left panel of Figure 5 shows the SAM colour distributions of galaxies in \(z_{\text{lim}}\)-selected subhaloes (red solid) and the RF prediction (black solid), which are highly similar. The middle panel displays the two-dimensional distribution. Generally, the contours align with the equality line with only small deviations. The black solid with shadow indicates the median and 16%-84% of prediction at fixed SAM colour bins. The large deviation at SAM \(g-r<0.2\) is possibly due to the low number of extreme blue galaxies in this range. The \(R^{2}\) of the prediction is \(\sim 0.8\), significantly higher than that in the SDSS prediction. This indicates that the galaxy-subhalo connection in the SAM is much stronger, which is consistent with the construction of the SAM. Xu et al. (2022) find that adding galaxy properties such as black hole mass and cold gas mass can further improve the prediction of the SAM colour.
Recently, Jespersen et al. (2022) propose a graph neural network method to predict several SAM galaxy properties based on halo merger trees. Unlike traditional machine learning methods where the input features are halo properties extracted from the merger tree, their model uses the merger tree itself as input, maximizing the information obtained from the growth history of the halo. Their prediction performance for SFR is impressively higher (\(R^{2}=0.876\)) compared to previous studies in the literature. We also perform a test predicting the SFR with RF and find that the \(R^{2}\) score is 0.864, which is very similar to that of the graph neural network. This indicates that the RF is capable of capturing the connections between galaxy properties and halo or subhalo properties if they exist.
Back to the left panel of Figure 5, the black dashed curve indicates the prediction based on the shuffled sample. The prediction still features the blue and red peaks, but the red peak is slightly lower than that in the original SAM, and the blue peak is slightly higher. The overall recovered colour range is narrower, and some of the extreme blue and red values are missing compared to the prediction before shuffling. The right panel is the two-dimensional comparison between the shuffled prediction and the original SAM. The deviation from equality is larger than that in the middle panel, especially at \(g-r<0.4\), and the scatter in the prediction at fixed SAM colour is also larger. The \(R^{2}\) value of 0.655 is lower than that before shuffling.
With the noise in the galaxy-subhalo relation introduced by the shuffling, the performance of the RF colour prediction is degraded. However, even with shuffling, the \(R^{2}\) score of the prediction is still higher than that of the SDSS prediction. We find from the RF that the most important subhalo feature for predicting SAM colour is \(V_{\text{peak}}\), which is highly correlated with \(M_{\text{peak}}\) and likely remains similar after shuffling. Other relatively important subhalo features for the prediction are subhalo assembly properties such as \(z_{\text{acc}}\) and \(z_{0.1/0.3/0.5/0.7/0.9}\). Although the shuffling process reassigns these subhalo properties for a given galaxy, the correlations between galaxy and subhalo properties may not be completely removed due to the constraints of the shuffling. This will be further demonstrated by the correlation coefficients before and after shuffling in Section 4.3. As a result, the galaxy colour can still be partially reproduced after the shuffling. If the colour-subhalo relation in the real Universe is similar to that in the SAM, the RF could capture this relation with an \(R^{2}\) of approximately 0.6, accounting for possible mismatches. Thus, it is likely that the connection between colour and subhalo properties in the real Universe is not as strong as that in the SAM. As a further step, we perform a similar test using the TNG300 hydrodynamic simulation and compare it with the SDSS prediction in the following section.
### Mismatch effect using TNG300
Since TNG300 does not reproduce the SDSS region, comparisons between the TNG300 predictions and the SDSS or SAM predictions are necessarily indirect. Since the SDSS galaxies are matched to a fraction of ELUCID subhaloes in the corresponding SDSS region, and we select SDSS galaxies according to \(z_{\text{lim}}\) to ensure the completeness of subhaloes, some subhaloes in the SDSS region of ELUCID are empty (i.e. not occupied by \(z_{\text{lim}}\)-selected SDSS galaxies). Since more massive subhaloes tend to host brighter galaxies that are more likely to be observed, the occupied fraction of subhaloes will increase with \(\log M_{\text{sub}}\). To account for this effect in TNG300, we measure the occupied fraction as a function of \(\log M_{\text{sub}}\) in the SDSS \(z_{\text{lim}}\) catalogue and select a random sample of galaxies in TNG300 which can reproduce this trend.
Figure 6 displays the occupied fraction in both the SDSS \(z_{\text{lim}}\) sample (red solid) and the selected TNG300 sample (black solid). The occupied fraction is \(\sim 0\) for \(\log M_{\text{sub}}<11\) and rapidly increases to \(\sim 1\) at \(\log M_{\text{sub}}\sim 12\). This implies that the abundance of low-mass subhaloes is largely suppressed in observation, while the subhaloes of \(\log M_{\text{sub}}>12\) are barely affected. The advantage of this selection in TNG300 is that it can create a training sample where the subhalo population is similar to that of the SDSS training sample. This is important because the machine learning performance of colour prediction might depend on \(\log M_{\text{sub}}\).
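A minimal sketch of this selection is given below (our own illustration with hypothetical array names): the occupied fraction is measured in bins of \(\log M_{\rm sub}\) from the SDSS \(z_{\rm lim}\) sample and then used as a Bernoulli acceptance probability for the TNG300 subhaloes.

```python
import numpy as np

def select_like_sdss(logm_tng, logm_elucid_all, logm_elucid_occ, bins, seed=0):
    """Keep TNG subhaloes with probability equal to the SDSS occupied fraction at their log M_sub."""
    rng = np.random.default_rng(seed)
    n_all, _ = np.histogram(logm_elucid_all, bins)   # all ELUCID subhaloes in the SDSS region
    n_occ, _ = np.histogram(logm_elucid_occ, bins)   # those hosting z_lim-selected galaxies
    frac = np.where(n_all > 0, n_occ / np.maximum(n_all, 1), 0.0)
    ibin = np.clip(np.digitize(logm_tng, bins) - 1, 0, len(frac) - 1)
    return rng.random(len(logm_tng)) < frac[ibin]    # boolean mask of selected TNG subhaloes
```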
To investigate the effect of mismatch on RF colour prediction using the TNG300 simulation, we implement the shuffling strategy described in Section 4.1 on the selected sample, shuffling galaxies in subhaloes of fixed subhalo mass bins (0.2 dex) in cells of 5 \(h^{-1}\) Mpc. Similar to the SAM, we construct RF models to predict galaxy colour with subhalo properties before and after shuffling and present the results in Figure 7. The left panel shows the colour distribution of the selected TNG sample (red solid) and the corresponding prediction (black solid). The TNG colour distribution shows a narrow red peak at \(g-r=0.75\) and a broad blue peak around \(g-r=0.4\). The prediction
successfully captures the red peak, but the predicted blue peak is narrower and higher than that in TNG, and the number of extreme blue galaxies with \(g-r<0.3\) is underestimated. In the middle panel, the deviation of the prediction is mainly seen at \(g-r<0.4\) where the predicted values are higher, and the prediction at the red end aligns more closely with TNG. The \(R^{2}\) score for the prediction is 0.726, which is comparable to that in the SAM. However, the performance of the TNG RF in recovering the blue colour is relatively worse than that of the SAM.
The black dashed in the left panel represents the RF prediction based on the shuffled sample. Compared to the prediction before shuffling, both the predicted red and blue peaks deviate more from those in the original TNG, in the way that the red peak is lower and the blue peak is higher. Moving to the right panel which illustrates the two-dimensional distribution of the shuffled prediction and the original TNG, we find that the deviation from equality is also more pronounced, with an \(R^{2}\) score of 0.588. Compared to the SAM results in Figure 5, the mismatch effect shows a similar impact on the RF prediction of the TNG sample.
It is worth noting that both the \(R^{2}\) of SAM and TNG prediction after shuffling are higher than that of the SDSS prediction. Assuming that the SDSS-ELUCID matched catalogue is also subject to a similar mismatch effect, it is reasonable to infer that the true connection between galaxy colour and subhalo properties in SDSS is weaker than those in the SAM and TNG before shuffling. This suggests that the galaxy colour in the real Universe may also depend on baryonic processes such as AGN feedback. In the next subsection, we will compare the galaxy-subhalo relation in SDSS, SAM, and TNG in more detail in terms of the correlation coefficient between galaxy properties and subhalo properties.
### Comparison between SDSS, SAM, and TNG
To further investigate the differences in the galaxy-subhalo relations between the SDSS, SAM, and TNG samples, we calculate the Pearson correlation coefficient \(\rho\) between each pair of galaxy properties and halo properties. The correlation coefficient is a statistical measure that quantifies the strength and direction of the correlation between two variables. It ranges from -1 to 1, and values close to 1 (-1) indicate strong positive (negative) correlations, while values close to 0 indicate weak correlations. In Figure 8, we show the correlation coefficients between SDSS or SAM galaxy properties (\(y\)-axis) and ELUCID subhalo properties (\(x\)-axis). Subhaloes with non-physical \(z_{0.1/0.3/0.5/0.7/0.9}\) (i.e. main branch starts with a fraction of peak mass larger than 0.1/0.3/0.5/0.7/0.9) are excluded when measuring the correlation coefficients related to these properties. The colour coding indicates the correlation coefficients, with reddish for positive correlations and blueish for negative correlations.
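The correlation matrices in Figures 8 and 9 can be computed with a few lines (a sketch with hypothetical inputs; each property is a 1D array with one entry per galaxy-subhalo pair, and pairs with non-physical values are masked as described above):

```python
import numpy as np

def corr_matrix(gal_props, sub_props):
    """Pearson correlation of every (galaxy property, subhalo property) pair; dicts of 1D arrays."""
    rho = np.zeros((len(gal_props), len(sub_props)))
    for i, g in enumerate(gal_props.values()):
        for j, s in enumerate(sub_props.values()):
            ok = np.isfinite(g) & np.isfinite(s)     # exclude e.g. undefined z_0.1 ... z_0.9
            rho[i, j] = np.corrcoef(g[ok], s[ok])[0, 1]
    return rho
```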
The top panel shows the correlation coefficients in the SDSS-ELUCID matched sample with \(z_{\rm lim}\) selection. The \(M_{r}\) values of SDSS galaxies exhibit a strong correlation with subhalo mass indicators
Figure 5: \(g-r\) colour prediction based on original SAM and shuffled SAM. Left: SAM colour distribution of SDSS \(z_{\rm lim}\)-selected subhaloes (solid red), predicted colour of these subhaloes (black solid), and the prediction based on the shuffled sample (black dashed). Middle: comparison of SAM colour and predicted colour for SDSS-matched subhaloes. Right: comparison of SAM colour and prediction based on the shuffled sample for these subhaloes.
Figure 6: The fraction of subhaloes occupied by SDSS \(z_{\rm lim}\)-selected galaxies as a function of log\(M_{\rm sub}\). Red solid indicates the occupied fraction in the SDSS \(z_{\rm lim}\) sample, which is defined as the ratio of the number of subhaloes in this sample to the total number of subhaloes in the SDSS region of ELUCID. The black solid indicates the occupied fraction in the selected sample of TNG300.
(i.e. mass properties and circular velocity properties), in the way that more massive subhaloes host brighter galaxies. Subhalo assembly properties such as \(z_{0.1/0.3/0.5/0.7/0.9}\) and \(z_{\rm first/last}\) also demonstrate a correlation with \(M_{r}\). In comparison to \(M_{r}\), correlations between colour and subhalo properties are generally weaker. Moderate correlations (\(\sim 0.4\)) are found between colour and mass indicators, and the correlations with subhalo assembly properties are close to zero. Environmental properties such as \(\delta_{2,1}\) and \(t_{\rm web}\) correlate weakly with both \(M_{r}\) and colour.
The second panel displays the correlations of the original SAM sample using the \(z_{\rm lim}\)-selected subhaloes. Compared to the findings in SDSS, \(M_{r}\) in the SAM sample correlates more weakly with mass indicators, and the correlations with subhalo assembly properties are negligible. In contrast, galaxy colour in the SAM correlates more strongly with mass indicators than in SDSS. This may be why the RF provides a more accurate prediction of galaxy colour in the SAM. Both \(M_{r}\) and colour in the SAM correlate very weakly with halo assembly properties.
In the corresponding shuffled sample shown in the third panel, all the correlation coefficients involving subhalo mass indicators and environmental properties are almost maintained from the original sample due to the shuffling constraints. Since the correlations relating
Figure 8: Pearson correlation coefficient between galaxy properties (\(y\)-axis) and subhalo properties (\(x\)-axis). From top to bottom, the samples are \(z_{\rm lim}\)-selected SDSS galaxies (subhaloes), original SAM galaxies of these subhaloes, shuffled SAM, and original SAM of all subhaloes above \(\log M_{\rm sub}=10\).
Figure 7: \(g-r\) colour prediction based on original TNG and shuffled TNG. Left: TNG colour distribution of selected subhaloes (red solid), prediction of these subhaloes (black solid), and prediction based on the shuffled sample (black dashed). Middle: comparison of the TNG colour and the prediction. Right: comparison of TNG colour and prediction based on the shuffled sample.
to subhalo assembly properties are weak in the original SAM, the overall correlations between \(M_{r}\) or colour and subhalo properties are essentially unchanged. However, it is important to note that the correlation coefficient captures the correlations between individual pairs of variables instead of the multi-variate dependence. With the shuffling, the multi-variate dependence between colour and subhalo properties experiences small variations, as indicated by the slightly lower \(R^{2}\) after shuffling compared to that before the shuffling.
The fourth panel shows the correlations in the original SAM sample using all subhaloes above \(\log M_{\rm sub}=10\). This sample contains more low-mass subhaloes compared to the \(z_{\rm lim}\)-selected sample. Compared to the second panel, this sample shows tighter correlations between \(M_{r}\) and mass indicators, as well as with merger tree properties such as \(N_{\rm merge}\) and \(z_{\rm first/last}\). The correlations between colour and mass indicators are weaker in this sample, while the correlations between colour and subhalo assembly properties are stronger. Positive correlation coefficients suggest that red galaxies tend to reside in early-formed subhaloes. Interestingly, the colour correlates more strongly with late formation stage properties (i.e. \(z_{0.7}\)) than with those characterising the early formation stage of subhaloes (i.e. \(z_{0.1}\)).
Considering the differences between the second panel and the fourth panel, it is important to acknowledge that generalizing ML models based on the \(z_{\rm lim}\)-selected subhaloes to the entire ELUCID simulation may introduce biases if the galaxy-subhalo relation also depends on subhalo mass in the real Universe. Observations including more faint galaxies (and thus low-mass subhaloes) such as DESI and constrained \(N\)-body simulation recovering smaller mass and length scales could be helpful for investigating galaxy-subhalo relations in the low mass range.
We also conduct the same analysis with TNG galaxies. The top panel of Figure 9 presents the correlation coefficients of the selected subhaloes of TNG. \(M_{r}\) is highly correlated with mass indicators and weakly correlated with assembly properties, which are both stronger than those in the SAM. Notably, the \(M_{r}\) correlations with \(z_{0.1}\sim z_{0.9}\) gradually decrease, suggesting that \(M_{r}\) depends more on the early formation stage than the late formation stage of the subhalo. This trend is absent in the \(z_{\rm lim}\)-selected SAM sample but is also present in the SDSS sample. Similar to the SAM, the TNG colour moderately correlates with mass indicators and is nearly independent of assembly properties. The second panel shows the results of the shuffled sample. We find again that the shuffling barely affects the correlations related to mass indicators. Additionally, the correlations between \(M_{r}\) and \(z_{0.1}\sim z_{0.9}\) remain partially intact, along with the gradually decreasing trend. This is possibly due to the shuffling constraint which limits the shuffling within 5 \(h^{-1}\) Mpc cells, and the subhaloes assembly properties of similar mass may exhibit minimal variations within these cells.
In the third panel of Figure 9, which includes all subhaloes above \(\log M_{\rm sub}=10\), the \(M_{r}\) correlations with mass indicator properties are slightly stronger, and correlations with assembly properties can be both higher (e.g., \(N_{\rm merge}\), \(z_{\rm first/last}\)) and lower (e.g., \(z_{0.1}\sim z_{0.9}\)) compared to the selected sample. With a large number of low-mass subhaloes, the colour correlations with mass indicators are much lower than those in the selected sample. However, the colour correlations with assembly properties are higher. Overall, the colour-subhalo correlations are weaker in the TNG compared to those in the SAM in the sample of all subhaloes above \(\log M_{\rm sub}=10\).
Comparing the results of the SDSS sample in the top panel of Figure 8 with the corresponding SAM (second and third panels of Figure 8) and TNG results (top two panels of Figure 9), we find that the \(M_{r}\)-subhalo relation in the SDSS is more similar to that in TNG, in terms of the dependence on mass indicators and some of the assembly properties. The SDSS colour-subhalo correlation is weaker than both the SAM and TNG, even after shuffling. So it is possible that the true underlying colour-subhalo connection in SDSS without the mismatch effect is lower than those in the SAM and TNG before shuffling, and baryonic processes such as AGN feedback and other stochastic processes may have significant impacts on SDSS galaxies. Further comparison between the SDSS and TNG galaxies can be carried out with the upcoming ELUCID hydrodynamic simulation (HELUCID, Cui in prep), which can provide new insights into galaxy-subhalo relation in the real Universe.
## 5 Summary
Using a catalogue matching SDSS galaxies with ELUCID subhaloes, we employ random forest to predict galaxy magnitude and colour based on a few subhalo properties that characterise subhalo mass, assembly history, and environment. Before training the RF, we select a sample of galaxy-subhalo pairs from the SDSS-ELUCID matched catalogue according to the redshift limitation that corresponds to subhalo mass completeness. This eliminates most of galaxies with subhaloes of \(\log M_{\rm sub}<11\) and a fraction of galaxies with subhaloes of \(11<\log M_{\rm sub}<12\). Training on this selected sample, the RF model can predict the \(M_{r}\) reasonably accurately with an \(R^{2}\) score of \(\sim\)0.8, with deviations mainly arising from extremely bright and faint galaxies. The prediction can recover the luminosity function and galaxy-matter cross-correlation in the range of \(-22<M_{r}<-18\). Extending the predictions to all ELUCID subhaloes results in slightly larger deviations, especially at the faint end. In contrast, the accuracy of colour prediction is significantly lower, with an \(R^{2}\) score of \(\sim 0.3\). The RF model fails to reproduce the position of the red peak in SDSS \(z_{\rm lim}\)-selected sample, leading to large deviations in predicted colour values from the true colour. We also train RF models to predict physical galaxy properties such as \(M_{*}\) and sSFR. The prediction performance of \(M_{*}\) is similar to that of the \(M_{r}\), and the prediction performance of sSFR is similar to that of the colour.
One possible explanation for the low accuracy of colour prediction is the difference between the matched subhaloes and the underlying true subhaloes of SDSS galaxies, or in other words, the mismatch between SDSS galaxies and subhaloes. To investigate this effect, we utilise galaxies from a SAM model implemented on ELUCID. We shuffle the galaxies among subhaloes in \(\log M_{\rm peak}\) bins of 0.2 dex and in cubic cells of 5 \(h^{-1}\) Mpc. RF models are trained on the \(z_{\rm lim}\)-selected subhaloes both before and after the shuffling. Before the shuffling, the colour prediction is reasonable with an \(R^{2}\) of 0.79, and the bimodal distribution of colour is reproduced. The shuffling lowers the \(R^{2}\) score to 0.66, which is still higher than that of the SDSS sample.
We also perform the same test using galaxies in TNG300. Since the density field of TNG300 is not directly matched to the SDSS, we select random fractions of subhaloes as a function of \(\log M_{\rm sub}\) to ensure that the selected subhalo sample reproduces the subhalo abundance in the \(z_{\rm lim}\)-selected subhaloes of SDSS. Before shuffling, the \(R^{2}\) of colour prediction is 0.73, and it decreases to 0.59 after shuffling. The impact of shuffling in TNG is comparable to that in the SAM, which slightly lowers the colour-subhalo connection. This finding suggests that the colour-subhalo connection in SDSS may be weaker than both the SAM and TNG, even in the absence of the mismatch effect.
In the end, we measure the Pearson correlation coefficients between \(M_{r}\) or colour and the subhalo properties for SDSS, SAM, and TNG samples. In the SDSS and selected TNG, \(M_{r}\) shows a strong
correlation with subhalo mass indicators such as mass properties and circular velocity properties and a weak correlation with subhalo assembly properties. However, these correlations appear weaker in the selected SAM sample. The colour in both selected SAM and TNG correlates moderately with mass indicators and exhibits small dependence on assembly properties, and the correlations between SDSS colour and subhalo properties are weaker than both the SAM and TNG. The shuffling shows minimal effects on the correlation coefficients of both the SAM and TNG samples. In terms of the correlation coefficients, the colour in SDSS also demonstrates a lower connection with subhaloes compared to both the SAM and TNG, taking the mismatch effect into consideration.
We also show that the correlation coefficients in SAM and TNG depend on the subhalo mass. Including more low-mass subhaloes, the \(M_{r}\) correlation coefficients increase in both SAM and TNG, especially those with mass indicators. The colour correlations with subhalo assembly properties also increase, but those with mass indicators decrease, especially those in the TNG. It is possible that the galaxy-subhalo correlation in SDSS also depends on subhalo mass. Appropriate care should be taken when generalizing studies of galaxy-halo connection from SDSS-like subhaloes to a broader range of halo masses.
The results above suggest that it is reasonable to learn the \(M_{r}\)-subhalo relation with machine learning using a galaxy-subhalo matched catalogue built on a constrained simulation, but it is difficult to capture the colour-subhalo relation. The HELUCID simulation, which includes baryonic particles in addition to the dark matter particles, will be available in the future. It will be helpful for investigating the difference between simulated galaxies and real galaxies on a one-to-one level and will provide new insights into galaxy-halo relations and galaxy formation and evolution. Advanced surveys such as DESI, which include more faint galaxies, are also important for extending studies of the galaxy-halo relations to the low-mass end.
## Acknowledgements
This work is supported by the National Science Foundation of China (grant Nos. 11833005, 11890692, 11621303), 111 project No. B20019, and Shanghai Natural Science Foundation, grant No.19ZR1466800. We acknowledge the science research grants from the China Manned Space Project with No.CMS-CSST-2021-A02. XX acknowledges the support from Shanghai Post-doctoral Excellence Program (2021231) and China Postdoctoral Science Foundation (2022M712085). YZ acknowledges the support from the National Science Foundation of China (grant No. 12273088)
## Data availability
The ELUCID simulation and the SDSS-ELUCID matched catalogue used in this work can be accessed at [https://gar.sjtu.edu.cn/data/ELUCID.html](https://gar.sjtu.edu.cn/data/ELUCID.html). TNG simulation data can be accessed at [http://www.TNG-project.org](http://www.TNG-project.org).
|
2302.14813 | Multirate Spectral Domain Optical Coherence Tomography | Optical coherence tomography is state-of-the-art in non-invasive imaging of
biological structures. Spectral Domain Optical Coherence Tomography is the
popularly used variation of this technique, but its performance is limited by
the bandwidth and resolution of the system. In this work, we theoretically
formulate the use of phase modulators and delay lines to act as filters on the
tomography system and scan multiple channels. Various channels are then
combined in a digital computer using filter bank theory to improve the sampling
rate. The combination of multiple channels allows for increasing the axial
resolution and maximum unambiguous range beyond the Nyquist limit. We then
simulate the multirate spectral domain optical coherence tomography with 2
channels. We show that a single delay line can improve the axial resolution
while a pair of phase modulators can improve the maximum unambiguous range of
the system. We also show the use of multirate filter banks to carry out this
process. Thus, by using a few extra components in the spectral domain optical
coherence tomography, its performance can be increased manifold depending on
the number of channels used. The extra cost is the time taken to perform the
extra scans that is trivial for stationary objects like biological tissues. | Prabhav Gaur, Andrew Grieco, Yeshaiahu Fainman | 2023-02-28T18:06:15Z | http://arxiv.org/abs/2302.14813v1 | # Multirate Spectral Domain Optical Coherence Tomography
###### Abstract
Optical coherence tomography is state-of-the-art in non-invasive imaging of biological structures. Spectral Domain Optical Coherence Tomography is the popularly used variation of this technique, but its performance is limited by the bandwidth and resolution of the system. In this work, we theoretically formulate the use of phase modulators and delay lines to act as filters on the tomography system and scan multiple channels. Various channels are then combined in a digital computer using filter bank theory to improve the sampling rate. The combination of multiple channels allows for increasing the axial resolution and maximum unambiguous range beyond the Nyquist limit. We then simulate the multirate spectral domain optical coherence tomography with 2 channels. We show that a single delay line can improve the axial resolution while a pair of phase modulators can improve the maximum unambiguous range of the system. We also show the use of multirate filter banks to carry out this process. Thus, by using a few extra components in the spectral domain optical coherence tomography, its performance can be increased manifold depending on the number of channels used. The extra cost is the time taken to perform the extra scans that is trivial for stationary objects like biological tissues.
Optical Coherence Tomography; Spectral Domain; Multirate filter bank; Phase modulation; Delay line; Signal processing
## 1 Introduction
Optical Coherence Tomography (OCT) is an important and powerful 3D imaging tool for several biomedical applications. The first OCT was demonstrated three decades ago by [1], but the technique has evolved since then and has remained a major technique to study layered structures in ophthalmology. OCT was initially performed as Time Domain OCT (TD-OCT) [2] which is analogous to ultrasound. TD-OCT uses an echo time delay of backscattered light to create a cross-sectional image of the tissue under investigation. With the improvement in hardware technology, TD-OCT led to Fourier Domain OCT (FD-OCT) [3] which relies on multiple wavelengths to make the axial scan, also called the A-scan. The FD-OCT can be implemented in two different ways, namely Spectral Domain OCT (SD-OCT) [4] and Swept Source OCT (SS-OCT) [5]. Both of them use interferometry between the reflected and reference light waves, and measurement of several wavelengths to gather depth information. SD-OCT works with a partially coherent broadband source, and a spectrometer is used to detect the interference of different wavelengths. SS-OCT utilizes a wavelength sweep from a coherent laser, and the interference is detected using a photodetector. The Fourier transform of the detected signal yields the depth information in both of the above cases. The A-scan can measure up to a depth of a few mm with a resolution of a few \(\upmu\)m [6,7]. Multiple A-scans acquired while moving in the other two dimensions, also called a B-scan, can be used to determine the 3D structure of the object.
SD-OCT is currently one of the most popular commercial techniques [8] to evaluate tissue structure due to its high data acquisition rate, high axial resolution, good SNR and the simplicity of the hardware required to perform an A-scan. SD-OCT has most extensively been used to diagnose diseases and disorders in the brain [9] and retina [10-12] but has also found application for other tissues such as the breast [13], kidney [14,15], and skin [16]. Recently, a wide variety of modifications have been applied to SD-OCT to improve its performance [17-30]. Given enough SNR [31] in the SD-OCT system, the axial resolution and the maximum unambiguous range are determined by the Nyquist-Shannon sampling theorem [32], where the frequency and time (length) domains form a Fourier pair. Let \(f_{o}\) be the resolution of the spectrometer being used and \(B\) be the bandwidth of the system, limited either by the source or by the spectrometer. The axial resolution \(l_{o}\) and maximum unambiguous distance \(L\) can be given by
\[l_{o}=c/B\ \ \ ;\ \ L=c/2f_{o} \tag{1}\]
where \(c\) is the speed of light in vacuum. Thus, the system behaves like an analog system before photodetection and like a digital system after it.
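As a quick numerical illustration of equation (1) (a sketch only; the small-bandwidth conversion \(B\approx c\Delta\lambda/\lambda^{2}\) and the values of a 1300 nm centre wavelength, 200 nm bandwidth and 0.5 nm spectrometer resolution anticipate the example of Section 3):

```python
c = 3e8                       # speed of light [m/s]
lam0 = 1300e-9                # centre wavelength [m]
B = c * 200e-9 / lam0**2      # optical bandwidth [Hz] for a 200 nm wavelength span
f_o = c * 0.5e-9 / lam0**2    # spectrometer resolution [Hz] for 0.5 nm resolution
print(f"axial resolution l_o = {c / B * 1e6:.2f} um")         # ~8.45 um
print(f"maximum depth      L = {c / (2 * f_o) * 1e3:.2f} mm") # ~1.69 mm
```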
In this work, we theoretically formulate the use of a delay line and phase modulators, along with multiple A-scans at the same position, to improve the resolution and maximum unambiguous depth of SD-OCT. We show that the use of optical components at various positions in an SD-OCT system can be treated as applying filters to the depth information.
By making several scans with different filters, multiple channels can be created that have depth information encoded in them. After detection, the theory of multirate filter banks [33] is used to combine the channels and increase the resolution in the frequency or length domain. We then simulate simple cases of this system to demonstrate the multirate SD-OCT. This work is based on the principles of linear digital signal processing and statistical optics, which although well known, have not been combined in such a way to improve SD-OCT to the best of our knowledge.
## 2 Materials and Methods
Multirate filter banks are sets of filters, decimators, and interpolators used widely in conventional digital systems [34]. Usually, decimators downsample the signal after passing through analysis filters. This compressed information is stored or transmitted via a channel. On the other end of the channel, the signal is interpolated or upsampled and passed through synthesis filters to retrieve the original information. The downsampling process means decreasing the system's resolution, which is similar to an undersampled tomography system. Tomography systems are also discrete after detection, and filters can be implemented on the optical carrier signal by phase modulation or a delay line before detection, and by digital processing after detection. Hence, the imaging system can be considered as a multirate filter bank with each scanning cycle representing a single channel and carrying object information in a compressed form. In this work, we formulate the underlying equations governing the SD-OCT to determine the analog filter applied to it. We use the theory of multirate filter banks to determine the digital filters needed to combine back the information. For proof of concept, we simulate a 2-channel filter bank implementation that results in a twofold improvement in both the length and frequency resolution of the tomography system. In our previous work [35], we have demonstrated theoretically and experimentally a similar multirate system for SS-OCT. For SS-OCT, a single phase modulator is sufficient to improve the frequency and length resolution. In this work we simulate a similar result for SD-OCT, where an extra phase modulator and a delay line are needed. Also, the formulation requires the inclusion of the statistics of the broadband source, which is not necessary for SS-OCT.
### Spectral Domain OCT
SD-OCT uses a broadband source for illumination as shown in Figure 1(a). Let \(r_{s}(t)\) be the complex field emitted by this source as a function of time \(t\). We assume that \(r_{s}(t)\) is an ergodic process, and thus by Wiener-Khinchin theorem [36] the power spectral density \(S(f)\) of this source can be given by
\[PSD_{s}(f)=\mathbb{F}\left\{R_{s}\left(\tau\right)\right\}(f)=S(f) \tag{2}\]
where \(\mathbb{F}\{.\}\) represents the Fourier transform, \(f\) is the frequency and \(R_{s}(\tau)\) is the autocorrelation function of the complex field emitted by the source, given by
\[R_{s}\left(\tau\right)=\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}r_{s}\left(t+\tau\right)r_{s}^{*}\left(t\right)dt \tag{3}\]
Next, consider a heterogeneous object with multiple optical media and their corresponding surface present only at spacings of effective optical distance \(il_{o}\) from the first surface (Figure 1(a)). Since a reflected beam will pass through each section twice (once in the transmission direction, and once in the reflected direction), the effective optical path length of each section is defined as twice the distance multiplied by the effective index of the medium. \(i\) is an integer in interval [1, _N_-1], where \(N\) is the total number of surfaces that can be present, including the calibrated first surface which lies on the balance point of the interferometer and is a free parameter determined by the path difference between the sample and reference arm.. \(l_{o}\) determines the axial resolution of the imaging system. The
light source is incident on this sample. The reflection from the \(i^{\text{th}}\) surface is given by the following relation
\[r_{i}\left(t\right)=a\left(i\right)r_{s}\left(t-t_{i}\right) \tag{4}\]
\(a(i)\) is a complex number describing reflection from the \(i^{\text{th}}\) surface. The reflection coefficient can be calculated from Fresnel equations. Theoretically, \(a(i)\) can have contributions from surfaces other than the \(i^{\text{th}}\) surface. This is due to possible multiple reflections in between the surfaces in the multiple layers that give the same delay as the \(i^{\text{th}}\) surface would have produced. But these extra terms can be neglected in biological samples with small refractive index changes because usually \(r\) (reflection coefficient) \(\ll t\) (transmission coefficient), which will attenuate the multiple reflections. This approximation was first used in Fizeau interferometer [37] and is often used in interferometry. If the \(i^{\text{th}}\) surface is absent
the \(a(i)\) can be considered to be zero. \(t_{i}\) is the time delay corresponding to reflection from the \(i^{\text{th}}\) surface. Scattering is neglected to keep the formulation simple. The total reflection coming from the object is given by equation
\[r_{\text{total}}\left(t\right)=\sum_{i=1}^{N-1}r_{i}\left(t\right) \tag{5}\]
The field in the sample arm will be proportional to \(r_{\text{total}}\). The proportionality constant depends on 1) the coupling coefficient of the 3 dB fiber coupler, losses, etc., which are neglected as they correspond to scaling terms, and 2) the sample arm length, which is assumed to be equal to that of the reference arm and is also neglected.
Figure 1: Setup for SD-OCT along with the variants studied in this work. (a) Regular SD-OCT setup in fiber and the object that is under investigation. (b) Addition of a delay line to SD-OCT in the sample arm. This implements a transfer function in the frequency domain and improves resolution in the length domain. (c) Addition of a couple of phase modulators to sample and reference arms. This implements a transfer function in the length domain and improves the maximum unambiguous range. (d) Representation of SD-OCT and the signal processing in form of a block diagram. The Z domain corresponds to the frequency or length domain depending on the implementation of (b) or (c). \(a(n)\) represents the optical signal that the interference term of SD-OCT carries and our aim is to recover this signal digitally. The green blocks represent analysis filters that are provided optically using a delay line or phase modulators. The blue blocks represent downsampling which is naturally present in the system when the spectrometer has worse bandwidths or resolution than desired. Broken lines represent spectrum measurement using a spectrometer. The red blocks represent upsampling which is implemented on a digital computer. The yellow blocks represent synthesis filters that can be calculated from the theory of multirate signal processing and are also implemented digitally. Such scans are made M times with different filters as shown as M different channels. Finally, all the channels are combined to give \(y(n)\) which should be close to a perfect reconstruction of \(a(n)\) with desire resolution/maximum unambiguous range.
\[r_{\text{sample}}\left(t\right)=r_{\text{total}}\left(t\right)=\sum_{i=1}^{N-1}r_{i}\left(t\right) \tag{6}\]
The field in the reference arm is the original field of the source that is transmitted to the object and is given by
\[r_{\text{reference}}=r_{s}\left(t\right) \tag{7}\]
The complex field at the spectrometer is
\[r\left(t\right)=r_{s}\left(t\right)+\sum_{i=1}^{N-1}a\left(i\right)r_{s}\left( t-t_{i}\right) \tag{8}\]
To calculate the power spectral density, we will use the Wiener-Khinchin theorem and assume that the statistics of the broadband source is ergodic. The autocorrelation function of this field is given by
\[R\left(\tau\right)=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}r\left(t+\tau\right)r^{*}\left(t\right)dt \tag{9}\]
\[R\left(\tau\right)=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}\left[r_{s}\left(t+\tau\right)+\sum_{i=1}^{N-1}a\left(i\right)r_{s}\left(t-t_{i}+\tau\right)\right]\left[r_{s}^{*}\left(t\right)+\sum_{i=1}^{N-1}a^{*}\left(i\right)r_{s}^{*}\left(t-t_{i}\right)\right]dt \tag{10}\]
The interference term is given by
\[R_{\text{int}}\left(\tau\right)=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}\left[\sum_{i=1}^{N-1}a\left(i\right)r_{s}\left(t-t_{i}+\tau\right)r_{s}^{*}\left(t\right)+\sum_{i=1}^{N-1}a^{*}\left(i\right)r_{s}\left(t+\tau\right)r_{s}^{*}\left(t-t_{i}\right)\right]dt \tag{11}\]
Thus, the interference term of the autocorrelation is given by
\[R_{\text{int}}\left(\tau\right)=\sum_{i=1}^{N-1}a\left(i\right)R_{s}\left(\tau-t_{i}\right)+\sum_{i=1}^{N-1}a^{*}\left(i\right)R_{s}\left(\tau+t_{i}\right) \tag{12}\]
Assume a new set of time delay with mapping \(t_{i}\to t_{i}\) for \(i>0\), -\(t_{i}\to t_{i}\) for \(i<0\), and \(t_{0}=0\). Also mapping \(a(i)\to a(i)\) for \(i>0\), \(a^{*}(-i)\to a(i)\) for \(i<0\), and \(a(0)=0\) represents the calibrated first surface.
\[R_{\text{int}}\left(\tau\right)=\sum_{i=-N+1}^{N-1}a\left(i\right)R_{s}\left(\tau-t_{i}\right) \tag{13}\]
The effective distance between the \(1^{\text{st}}\) and \(i^{\text{th}}\) surface (as defined before) is \(il_{o}\) for \(i>0\). Performing mapping for time delay \(t_{i}\) as defined above, it can be shown that equation (14) holds for all possible values of \(i\) in the interval [-\(N+1\),\(N\)-1]
\[t_{i}=\frac{il_{o}}{c} \tag{14}\]
Applying Fourier transform on equation (13)
\[PSD_{\text{int}}\left(f\right)=S\left(f\right)\sum_{i=-N+1}^{N-1}a\left(i\right)\exp\left(\frac{-j2\pi ifl_{o}}{c}\right) \tag{15}\]
where \(j\) is the unit imaginary number. For a spectrometer like a grating spectrometer, we can measure discrete frequencies with resolution \(f_{o}\). As we have \(2N\)-1 terms in the summation, we measure the interference term at \(2N\)-1 frequencies. For an integer \(k\) in [0, 2\(N\)-2]
\[f=kf_{o} \tag{16}\]
Usually, the measured frequencies would not start from \(f=0\), but we ignore an offset term in equation (16) as it would only contribute to a constant phase term in equation (15) and can be omitted as a scaling factor. To comply with the Nyquist sampling condition, which is the result of the Nyquist-Shannon sampling theorem, the frequency resolution is chosen such that \(f_{o}l_{o}c^{-1}=(2N-1)^{-1}\) and the spectrometer measures \(PSD_{\text{int}}(f)\) at \(2N\)-1 points. If \(S(f)\) is known, \(a(i)\) can be obtained using the inverse discrete Fourier transform on equation (15)
\[a(i)=\frac{1}{2N-1}\sum_{k=0}^{2N-2}\frac{PSD_{\text{int}}\left(kf_{o}\right)}{S\left(kf_{o}\right)}\exp\left(\frac{j2\pi ik}{2N-1}\right) \tag{17}\]

[MISSING_PAGE_POST]
\[r\left(t\right)=r_{s}\left(t\right)\exp\left(j\phi_{2}\left(t\right)\right)+\sum_{i=1}^{N-1}a\left(i\right)r_{s}\left(t-t_{i}\right)\exp\left(j\phi_{1}\left(t\right)\right) \tag{27}\]
The field in this case is no longer ergodic (not even wide sense stationary). To determine the power spectral density at the spectrometer, we will rederive the Wiener-Khinchin theorem, as a direct autocorrelation is unable to provide the power spectral density. Let \(U_{T}\) be the Fourier transform of \(r(t)\) windowed in a region of length \(T\) with center at \(t=0\).
\[U_{T}\left(f\right)=\int\limits_{-T/2}^{T/2}r(t)\exp\left(-j2\pi ft\right)\ dt \tag{28}\]
Then the power spectral density is given by
\[PSD\left(f\right)=\lim_{T\rightarrow\infty}\frac{1}{T}E\left[\left|U_{T}\left(f\right)\right|^{2}\right] \tag{29}\]
where \(E[\cdot]\) is the expected value of the statistics.
\[PSD\left(f\right)=\lim_{T\rightarrow\infty}\frac{1}{T}E\left[\int\limits_{-T/2}^{T/2}r\left(t\right)\exp\left(-j2\pi ft\right)\ dt\int\limits_{-T/2}^{T/2}r^{*}\left(t^{*}\right)\exp\left(j2\pi ft^{*}\right)dt^{*}\right] \tag{30}\]
The interference term is given by
\[\begin{array}{l}PSD_{\text{int}}\left(f\right)=\lim_{T\rightarrow\infty}\frac{1}{T}E\Bigg[\int\limits_{-T/2}^{T/2}\int\limits_{-T/2}^{T/2}\bigg[\sum\limits_{i=1}^{N-1}a\left(i\right)r_{s}\left(t-t_{i}\right)r_{s}^{*}\left(t^{*}\right)\exp\left(j\phi_{1}\left(t\right)-j\phi_{2}\left(t^{*}\right)\right)\\ \\ +\sum\limits_{i=1}^{N-1}a^{*}\left(i\right)r_{s}^{*}\left(t^{*}-t_{i}\right)r_{s}\left(t\right)\exp\left(j\phi_{2}\left(t\right)-j\phi_{1}\left(t^{*}\right)\right)\bigg]\exp\left(-j2\pi f\left(t-t^{*}\right)\right)dt^{*}dt\Bigg]\end{array} \tag{31}\]
\[\begin{array}{l}PSD_{\text{int}}\left(f\right)=\lim_{T\rightarrow\infty}\frac{1}{T}\int\limits_{-T/2}^{T/2}\int\limits_{-T/2}^{T/2}\bigg[\sum\limits_{i=1}^{N-1}a\left(i\right)E\left[r_{s}\left(t-t_{i}\right)r_{s}^{*}\left(t^{*}\right)\right]\exp\left(j\phi_{1}\left(t\right)-j\phi_{2}\left(t^{*}\right)\right)\\ \\ +\sum\limits_{i=1}^{N-1}a^{*}\left(i\right)E\left[r_{s}^{*}\left(t^{*}-t_{i}\right)r_{s}\left(t\right)\right]\exp\left(j\phi_{2}\left(t\right)-j\phi_{1}\left(t^{*}\right)\right)\bigg]\exp\left(-j2\pi f\left(t-t^{*}\right)\right)dt^{*}dt\end{array} \tag{32}\]
As \(r_{s}(t)\) is an ergodic process, the expected value in the above equation is the autocorrelation function as defined by equation (3)
\[\begin{array}{l}PSD_{\text{int}}\left(f\right)=\lim_{T\rightarrow\infty}\frac{1}{T}\int\limits_{-T/2}^{T/2}\int\limits_{-T/2}^{T/2}\bigg[\sum\limits_{i=1}^{N-1}a\left(i\right)R_{s}\left(t-t^{*}-t_{i}\right)\exp\left(j\phi_{1}\left(t\right)-j\phi_{2}\left(t^{*}\right)\right)\\ \\ +\sum\limits_{i=1}^{N-1}a^{*}\left(i\right)R_{s}\left(t-t^{*}+t_{i}\right)\exp\left(j\phi_{2}\left(t\right)-j\phi_{1}\left(t^{*}\right)\right)\bigg]\exp\left(-j2\pi f\left(t-t^{*}\right)\right)dt^{*}dt\end{array} \tag{33}\]
Substituting \(t=t^{*}+\tau\) and changing limits accordingly
\[\begin{array}{l}PSD_{\text{int}}\left(f\right)=\sum\limits_{i=1}^{N-1}a\left(i\right)\lim_{T\rightarrow\infty}\frac{1}{T}\int\limits_{\tau=-T}^{T}R_{s}\left(\tau-t_{i}\right)\int\limits_{t^{*}=-T/2-\tau}^{T/2-\tau}\exp\left(j\phi_{1}\left(t^{*}+\tau\right)-j\phi_{2}\left(t^{*}\right)\right)\exp\left(-j2\pi f\tau\right)\ dt^{*}d\tau\\ \\ +\sum\limits_{i=1}^{N-1}a^{*}\left(i\right)\lim_{T\rightarrow\infty}\frac{1}{T}\int\limits_{\tau=-T}^{T}R_{s}\left(\tau+t_{i}\right)\int\limits_{t^{*}=-T/2-\tau}^{T/2-\tau}\exp\left(j\phi_{2}\left(t^{*}+\tau\right)-j\phi_{1}\left(t^{*}\right)\right)\exp\left(-j2\pi f\tau\right)\ dt^{*}d\tau\end{array} \tag{34}\]
The second integral in the limit in both the summation terms becomes a cross-correlation function.
\[\begin{array}{l}PSD_{\text{int}}\left(f\right)=\sum\limits_{i=1}^{N-1}a\left(i\right)\int\limits_{-\infty}^{\infty}R_{s}\left(\tau-t_{i}\right)R_{\phi}\left(\tau\right)\exp\left(-j2\pi f\tau\right)\ d\tau\\ \\ +\sum\limits_{i=1}^{N-1}a^{*}\left(i\right)\int\limits_{-\infty}^{\infty}R_{s}\left(\tau+t_{i}\right)R_{\phi}^{*}\left(-\tau\right)\exp\left(-j2\pi f\tau\right)\ d\tau\end{array} \tag{35}\]
where,
\[R_{\phi}\left(\tau\right)=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}\exp\left(j\phi_{1}\left(t+\tau\right)-j\phi_{2}\left(t\right)\right)dt. \tag{36}\]
The remaining integral in equation (34) resembles a Fourier transform. Applying the same mapping of \(a(i)\) and \(t_{i}\) as before
\[\begin{array}{c}PSD_{\text{int}}\left(f\right)=\left[S\left(f\right)\sum_{i=1}^{N-1}a\left(i\right)\exp\left(-j2\pi ft_{i}\right)\right]\otimes\Phi\left(f\right)\\ \\ +\left[S\left(f\right)\sum_{i=-N+1}^{-1}a\left(i\right)\exp\left(-j2\pi ft_{i}\right)\right]\otimes\Phi^{\ast}\left(f\right)\end{array} \tag{37}\]
where,
\[\Phi\left(f\right)=\mathbb{F}\left\{R_{\phi}\left(\tau\right)\right\}\left(f \right). \tag{38}\]
The second term in equation (37) is the negative part of the information in the length domain. If the support of \(\Phi^{\ast}(f)\), which acts as a filter, is small compared to \(N\), the second summation can be ignored in the post-processing of the first summation, which contains all the information of \(a(i)\). Thus, for the first term, the applied phase modulation results in a convolution in the frequency domain with transfer function \(R_{\phi}(\tau)\). Discretizing the interference term and converting to the \(Z\) domain gives
\[U(z)=H(z)A(z), \tag{39}\]
\[U\left(z\right)=\mathbb{Z}\left\{PSD_{\text{int}}\left(k\right)\right\}\text{ ; }A\left(z\right)=\mathbb{Z}\left\{S\left(k\right)\sum_{i=1}^{N-1}a\left(i\right)\exp\left(-j2\pi ft_{i}\right)\right\}\text{ ; }H\left(z\right)=\mathbb{Z}\left\{\Phi\left(k\right)\right\}. \tag{40}\]
Equation (39) represents a linear system with a transfer function in the time domain. The \(Z\)-transforms can be calculated by equation (40). Hence, phase modulation gives a transfer function in the length domain, unlike the delay line in equation (25), which results in a transfer function in the frequency domain.
### Multirate Filter Bank
Use of a tunable delay line results in a linear system in which multirate signal processing can be used to increase the length resolution of the system as shown in Figure 1(d). Equation (25) corresponds to a transfer function block with the \(Z\)-transform in the frequency domain. As the maximum bandwidth of the spectrometer is usually limited, it may cause the resolution in the length domain (axial resolution) to be less than desired, resulting in under-sampling. Hence phase modulation can be interpreted as a transfer function [ \(H(z)\) ] on the resolution-limited signal [ \(A(z)\) ]. Consequently, equation (25) corresponds to a single channel on the left hand side of Fig. 1(d). Likewise, multiple scans can be used to obtain information for all the channels. Next, the channels on the right-hand side are implemented on a digital computer. The depth information can be retrieved numerically by implementing the synthesis filters [ \(F(z)\) ] and then combining the various channels. Similarly, for cross-arm phase modulation, equation (39) corresponds to a single channel but here the \(Z\) domain represents the time domain. If the resolution of the spectrometer is limited, multiple scans can be used to obtain a spectrum of the desired resolution and thus provide the desired maximum depth of the OCT.
Let the spectrometer have a bandwidth that is M times smaller than required so that the axial resolution is downsampled by a factor of M from the desired \(l_{o}\). This can be depicted by the block diagram as shown in Fig. 1(d). The block diagram resembles a single channel of the M channel filter bank. If we make the measurement M times with M different analysis filters ( \(H_{n}\) ), the ideally sampled signal can be reconstructed using synthesis filters ( \(F_{n}\) ). For demonstration purposes, we discuss the situation when M=2 and thus \(m=\) [0,1]. The perfect reconstruction (PR) of \(a(n)\), which is the ideally sampled signal, is said to be achieved when \(y(n)=a(n-K)\), i.e., \(y(n)\) is a perfect replica of \(a(n)\) and is with a shift of \(K\) points. This removes both aliasing and distortion from the reconstruction. For a two-channel filter bank, the PR condition is given by
\[\begin{bmatrix}F_{o}(z)\\ F_{i}(z)\end{bmatrix}=\frac{2z^{-L}}{\Delta(z)}\begin{bmatrix}H_{1}(-z)\\ -H_{o}(-z)\end{bmatrix} \tag{41}\]
where \(L\)=2\(K\)+1 and \(\Delta(z)\) is given by

\[\Delta(z)=H_{o}(z)H_{1}(-z)-H_{o}(-z)H_{1}(z) \tag{42}\]
Similarly, the spectrometer can have a resolution M times worse than required, so the frequency resolution is downsampled by M. Hence, the block diagram can be again applicable to this case but with inverted domains. The analysis filter \(H_{m}(z)\) can be calculated using equations (26) and (40) depending on the tunable delay line/phase modulation given, and the synthesis filter \(F_{m}(z)\) can be calculated using equations (41) and (42).
We stress that although equation (26) and equation (39) look the same for the delay-line and cross-arm phase-modulation scenarios, their domains are opposite (frequency and length, respectively). By performing multiple scans, the axial resolution can be improved in the delay-line case, while the maximum depth is increased with cross-arm phase modulation. Consequently, the delay line is of interest when it is desired to overcome the bandwidth limitation of the spectrometer,
while the cross-arm phase modulation is of interest when it is desired to overcome the frequency-resolution limit of the system. Conceptually, these two cases are equivalent to the presence of a downsampling block in the system. Analysis filters are implemented optically using the delay line/phase modulators, while the synthesis filters and upsampling blocks are implemented on a digital computer. As the number of channels can only be an integer, the resolution/maximum unambiguous range can only be improved by an integral multiple.
## 3 Simulation Results
First, to demonstrate the working of a regular SD-OCT, we simulate a single A-scan in this section. We assume a source that has a Gaussian shape and is centered around 1300 nm wavelength, as shown in Figure 2(a). The spectrometer used is assumed to have a bandwidth of 200 nm and 0.5 nm resolution. This corresponds to an axial optical resolution of 8.45 \(\mu\)m and a maximum optical depth of 1.7 mm. Consider a simple object that is under investigation and is made up of three reflective surfaces. Two of them are at an optical distance of about 0.69 mm, with the distance between them being 16.9
Figure 2: Demonstration of SD-OCT. (**a**) Spectrum of the broadband source used for SD-OCT. (**b**) Spectrum detected by the spectrometer. (**c**) Inverse Fourier transform of the detected spectrum, which can be used to locate the surfaces and their thicknesses in the object.
\(\mu\)m. The third surface is present at a distance of 1.5 mm. The interference spectrum that is detected at the spectrometer is shown in Figure 2(b). Computing the inverse Fourier transform gives the locations of these surfaces in the form of peaks, as shown in Figure 2(c).
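A minimal numerical sketch of this demonstration is given below; the sampling grid, the source width and the reflectivities are our own assumptions, chosen only to reproduce the qualitative behaviour of Figure 2.

```python
import numpy as np

# Toy SD-OCT A-scan: Gaussian source around 1300 nm, 200 nm bandwidth, 0.5 nm sampling.
lam_c, bw, dlam = 1300e-9, 200e-9, 0.5e-9
n_px = int(bw/dlam)
k = np.linspace(2*np.pi/(lam_c + bw/2), 2*np.pi/(lam_c - bw/2), n_px)  # wavenumber grid

S = np.exp(-((k - k.mean())/(0.25*(k[-1] - k[0])))**2)      # source spectrum (width assumed)
p = np.array([0.69e-3, 0.69e-3 + 16.9e-6, 1.5e-3])          # optical path differences of the surfaces
r = np.array([1.0, 1.0, 0.7])                               # relative reflectivities (assumed)

I = S*(1 + sum(2*np.sqrt(ri)*np.cos(k*zi) for ri, zi in zip(r, p)))  # detected interference spectrum

ascan = np.abs(np.fft.ifft(I))            # inverse Fourier transform of the spectrum
dz = 2*np.pi/(k[-1] - k[0])               # depth-bin spacing (about 8.4 um here)
# plotting ascan[:n_px//2] against np.arange(n_px//2)*dz shows peaks near the three
# surface depths p, analogous to Figure 2(c)
```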
Now, we assume that the bandwidth of the spectrometer is limited to 100 nm, i.e., from 1200 nm to 1300 nm. This would mean that the resolution of the SD-OCT is 16.9 \(\mu\)m, and for the same object described above, the first two surfaces are not resolvable by the SD-OCT. This is shown in Figure 3(a), where only one peak is visible for the first two surfaces. This measurement acts as our first channel, where no delay is present in the sample arm. Next, for the second channel we provide a delay given by:
\[\zeta\left(f\right)=\exp\left(j2\pi ft_{o}\right) \tag{43}\]
where \(t_{o}\)=28.3 fs. The length domain information of the second channel is shown by Figure 3(b). Thus, for these two channels
\[H_{0}\left(z\right)=1\;;\quad H_{1}\left(z\right)=z^{-1}. \tag{44}\]
To fulfill the PR condition, the synthesis filters can be calculated from equations (41) and (42).
Figure 3: Demonstration of SD-OCT with a delay line to improve axial resolution. (**a**) Measured depth information from the first channel. The limited bandwidth of the spectrometer leads to a lower than desired axial resolution, so the closely spaced surfaces cannot be distinguished as separate peaks. (**b**) Measurement from the second channel, which utilizes a delay line in the sample arm. (**c**) The combined result from both channels resolves the closely spaced surfaces, and the positions of all three surfaces are known with 2 times the accuracy compared to a single channel.
Using both channels and the synthesis filters, the depth information of the SD-OCT can be computed with 2 times better resolution than with a single channel. The two closely spaced surfaces can now be resolved, as the resolution has improved enough to distinguish their peaks. The result from the combined channels is shown in Figure 3(c).
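The following minimal sketch illustrates this M=2 recombination numerically; the depth signal is made-up data and the implementation choices are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(64)                    # "ideally sampled" depth profile a(n)

# Analysis (done optically): H0(z) = 1 and H1(z) = z^{-1}, each followed by decimation
# by 2, modelling the halved bandwidth of the spectrometer.
c0 = a[0::2]                                    # channel 0: a(2m)
c1 = np.concatenate(([0.0], a[:-1]))[0::2]      # channel 1: delayed, then decimated -> a(2m-1)

# Synthesis (done digitally): for these H's the PR filters of eq. (41) reduce to pure
# delays, so recombination amounts to interleaving the two half-rate channels.
y = np.empty_like(a)
y[0::2], y[1::2] = c1, c0
assert np.allclose(y[1:], a[:-1])               # perfect reconstruction up to a one-sample shift
```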
Next, we consider the case when the resolution of the spectrometer is limited to 1 nm. This results in a maximum depth of 0.85 mm, and the third peak lies beyond the maximum unambiguous range. Thus, when making the
Figure 4: Demonstration of SD-OCT with phase modulation to improve the maximum unambiguous depth. **(a)** Measured depth information using a limited-resolution spectrometer. The lower than desired resolution results in aliasing, and the location of the third peak is not its true position. **(b)** The analysis filters implemented using phase modulation. **(c)** Measured depth information in the two channels after implementing phase modulation. **(d)** The synthesis filter calculated numerically using filter bank theory. **(e)** Reconstructed signal obtained by combining the two channels after implementing the synthesis filters. The maximum unambiguous range has doubled, and the measured position of the third peak is its true position.
measurement using this limited-resolution system, the aliased version of the third peak appears, which is not at its true position. This is shown in Figure 4(a), where the peak that should be at 1.5 mm appears around 0.2 mm. To increase the maximum depth of the OCT and obtain the true position of the third peak (and also the first two peaks), we use two different channels with different phase modulations. For the first channel we use \(\phi_{11}\left(t\right)\) and \(\phi_{21}\left(t\right)\), while for the second channel we use \(\phi_{12}\left(t\right)\) and \(\phi_{22}\left(t\right)\). The optical phase modulation is usually generated using RF electrical signals. As arbitrary signals are difficult and cost-ineffective to generate at high speed, we assume sinusoidal modulations. Thus, the phase modulation can be given by:
\[\phi_{pq}\left(t\right)=A_{p}\sin\left(2\pi f_{pq}t\right) \tag{46}\]
where \(p,q\in\{1,2\}\). We choose the following values for the demonstration: \(f_{11}=f_{21}=22.1\) GHz; \(f_{12}=f_{22}=44.2\) GHz; \(A_{1}=2\) rad; \(A_{2}=1\) rad. The two analysis filters can be calculated by using equations (36), (39) and (40), and are shown in Figure 4(b). After the analysis filters are implemented via phase modulation, the signals are detected using the spectrometer and are shown in Figure 4(c). The signal is then upsampled by a factor of 2 in the frequency domain. The synthesis filters can be calculated using equation (41) and implemented digitally on the upsampled signal. Note that to implement the synthesis filters, \(\Delta(z)\) should be invertible. This can be ensured either by converting \(\Delta(z)\) to a minimum-phase filter or by engineering stable synthesis filters using various modulation schemes. The coefficients of the synthesis filter are shown in Figure 4(d). After implementing the synthesis filters, the channels are combined to recover the depth information with double the maximum unambiguous length. As shown in Figure 4(e), the true position of the third surface is recovered without any aliasing.
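As an illustrative aside (the exact filter shapes follow from equations (36), (39) and (40)), the sketch below checks numerically that a sinusoidal phase modulation acts as a comb filter whose tap weights are Bessel functions, via the Jacobi-Anger expansion \(\exp(jA\sin(2\pi ft))=\sum_{n}J_{n}(A)\exp(jn2\pi ft)\); the sampling choices are ours.

```python
import numpy as np
from scipy.special import jv

A, f = 2.0, 22.1e9                 # A1 = 2 rad and 22.1 GHz, as in the text
fs, N = 64*f, 4096                 # sampling chosen so harmonics land on exact FFT bins
t = np.arange(N)/fs
spec = np.fft.fft(np.exp(1j*A*np.sin(2*np.pi*f*t)))/N
for n in range(4):
    assert abs(spec[64*n] - jv(n, A)) < 1e-9   # n-th harmonic weight equals J_n(A)
```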
## 4 Discussion
In summary, the multirate SD-OCT was formulated and validated with simulation. In the recent literature, a number of techniques have been developed to improve OCT, either using superior hardware [38; 39] or complex post-processing [40; 41]. The novelty of our multirate SD-OCT is not only that it combines the effect of both additional hardware and post-processing, but also that it is compatible with some of the other existing techniques, since the use of filters and multiple channels is universally applicable to a linear system. Moreover, some of the techniques previously shown in the literature, such as the use of multiple broadband sources [30] to improve axial resolution, are special cases of multirate SD-OCT in which each broadband source can be considered as a separate channel multiplexed in wavelength rather than in time, as in our case.
The resolution and bandwidth of the SD-OCT system are often limited by the spectrometer. Grating spectrometers are widely used in SD-OCT [42], and their performance is determined by various physical parameters such as grating length, number of gratings, material used, wavelengths, etc. [43]. Often these parameters are interdependent and are restricted by technology and feasibility. This, in turn, makes the resolution and bandwidth of the spectrometer interdependent and limited. By using multirate SD-OCT, not only is the cost of the spectrometer reduced for one of bandwidth or resolution, but there is also the opportunity to sacrifice one for the other, as the sacrificed parameter can be recovered by this technique. The price to pay for this technique is the cost of the modulator/delay line and the extra time required to carry out multiple scans. As the objects imaged in SD-OCT are usually stationary, the extra scans do not pose a challenge.
Conceptualization, P.G. and A.G.; methodology, P.G.; software, P.G.; validation, P.G., A.G. and Y.F.; formal analysis, P.G.; investigation, P.G.; resources, Y.F.; data curation, P.G.; writing\(-\)original draft preparation, P.G.; writing\(-\)review and editing, A.G. and Y.F.; visualization, P.G.; supervision, A.G. and Y.F.; project administration, Y.F.; funding acquisition, Y.F. All authors have read and agreed to the published version of the manuscript. This work was partially supported by the National Science Foundation (NSF) grant NSF ECCS-2023730, the San Diego Nanotechnology Infrastructure (SDNI) supported by the NSF National Nanotechnology Coordinated Infrastructure (grant ECCS-2025752), and the ASML/Cymer Corporation. The data presented in this study are available on request from the corresponding author. We acknowledge Alexander Franzen for providing ComponentLibrary
The authors declare no conflict of interest.
|
2309.10907 | On Metrics for Analysis of Functional Data on Geometric Domains | This paper employs techniques from metric geometry and optimal transport
theory to address questions related to the analysis of functional data on
metric or metric-measure spaces, which we refer to as fields. Formally, fields
are viewed as 1-Lipschitz mappings between Polish metric spaces with the domain
possibly equipped with a Borel probability measure. We introduce field
analogues of the Gromov-Hausdorff, Gromov-Prokhorov, and Gromov-Wasserstein
distances, investigate their main properties and provide a characterization of
the Gromov-Hausdorff distance in terms of isometric embeddings in a Urysohn
universal field. Adapting the notion of distance matrices to fields, we
formulate a discrete model, obtain an empirical estimation result that provides
a theoretical basis for its use in functional data analysis, and prove a field
analogue of Gromov's Reconstruction Theorem. We also investigate field versions
of the Vietoris-Rips and neighborhood (or offset) filtrations and prove that
they are stable with respect to appropriate metrics. | Soheil Anbouhi, Washington Mio, Osman Berat Okutan | 2023-09-19T20:01:18Z | http://arxiv.org/abs/2309.10907v2 | # On Metrics for Analysis of Functional Data on Geometric Domains
###### Abstract
This paper employs techniques from metric geometry and optimal transport theory to address questions related to the analysis of functional data on metric or metric-measure spaces, which we refer to as fields. Formally, fields are viewed as 1-Lipschitz mappings between Polish metric spaces with the domain possibly equipped with a Borel probability measure. We introduce field analogues of the Gromov-Hausdorff, Gromov-Prokhorov, and Gromov-Wasserstein distances, investigate their main properties and provide a characterization of the Gromov-Hausdorff distance in terms of isometric embeddings in a Urysohn universal field. Adapting the notion of distance matrices to fields, we formulate a discrete model, obtain an empirical estimation result that provides a theoretical basis for its use in functional data analysis, and prove a field analogue of Gromov's Reconstruction Theorem. We also investigate field versions of the Vietoris-Rips and neighborhood (or offset) filtrations and prove that they are stable with respect to appropriate metrics.
_Keywords:_ functional data, Urysohn field, optimal transport, functional curvature.
_2020 Mathematics Subject Classification:_ 51F30, 60B05, 60B10.
###### Contents
* 1 Introduction
* 1.1 Overview
* 1.2 Main Results
* 1.3 Organization
* 2 Gromov-Hausdorff Distance for Metric Fields
* 3 The Urysohn Field
* 4 Gromov-Prokhorov Distance for Metric-Measure Fields
* 5 Gromov-Wasserstein Distance for Metric-Measure Fields
* 6 Gromov-Wasserstein Through Functional Curvature
* 7 Topological Multifiltrations and Their Stability
* 7.1 Neighborhood Multifiltrations
* 7.2 Vietoris-Rips Multifiltrations
* 8 Summary and Discussion
* A Appendix
## 1 Introduction
### Overview
This paper addresses problems in _functional metric geometry_ that arise in the study of data such as signals recorded on geometric domains or the nodes of a network. Formally, these may be viewed as functions defined on metric spaces, sometimes equipped with additional structure such as a probability measure, in which case the domain is referred to as a _metric-measure space_, or simply \(mm\)-space. Datasets comprising such objects arise in many domains of scientific and practical interest. For example, on a social network, the edges normally represent some form of direct interaction but a metric \(d\) on \(V\), such as the shortest-path distance, the diffusion distance, or the commute-time distance [12, 27], is useful in quantifying indirect interactions as well. A probability distribution \(\mu\) on its set \(V\) of nodes can be used to describe how influential the various members of the network are. The triple \((V,d,\mu)\) defines an \(mm\)-space. Attributes such as individual preferences, traits or characteristics may be viewed as a function \(f\colon V\to B\), where \(B\) is a metric space such as \(\mathbb{R}^{n}\) for vector-valued attributes, or \(\mathbb{Z}_{2}^{n}\) (binary words of length \(n\)) equipped with the \(\ell_{1}\)-norm for discrete attributes such as a like-or-dislike compilation of preferences. The quadruple \((V,d,\mu,f)\) is a functional \(mm\)-space that can be employed for data representation and analysis in many different scenarios. Social networks are dynamic, with individuals joining and leaving the network, their relevance changing over time, as well as their attributes [32, 29, 13]. This leads to a family of functional \(mm\)-space \((V_{t},d_{t},\mu_{t},f_{t})\) parameterized by time. To analyze, visualize and summarize dynamical structural and functional changes, it is important to define metrics that are sensitive to such changes and amenable to computation. Targeting problems such as this involving functional data, our primary goal is threefold: (i) to develop metrics that allow us to model and quantify variation in functional data, possibly with distinct domains, (ii) to investigate principled empirical estimations of these metrics, and (iii) to construct stable filtered spaces or complexes associated with functional data to enable geometric analysis via topological methods.
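As a toy illustration of such a quadruple \((V,d,\mu,f)\), the snippet below builds the shortest-path metric, a node-importance distribution and a binary attribute map for a small four-node network; all numerical values are made up.

```python
import numpy as np

INF = np.inf
W = np.array([[0, 1, INF, 2],
              [1, 0, 1, INF],
              [INF, 1, 0, 1],
              [2, INF, 1, 0]], dtype=float)    # edge lengths (INF = no direct edge)

d = W.copy()                                    # shortest-path metric via Floyd-Warshall
for k in range(4):
    d = np.minimum(d, d[:, [k]] + d[[k], :])

mu = np.array([0.4, 0.3, 0.2, 0.1])             # node-importance distribution mu on V
f = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # attributes in Z_2^2 with the l1 (Hamming) metric;
                                                # chosen here so that f is 1-Lipschitz from (V, d)
```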
The analysis of structural variation in datasets comprising geometric objects has been a subject of extensive study using techniques from areas such as metric geometry, optimal transport theory, and topological data analysis. The Gromov-Hausdorff distance \(d_{GH}\)[7] has played a prominent role in quantifying shape contrasts and similarities in families of compact metric spaces and in addressing the stability of topological signatures of the shape of objects such as point clouds [10, 9]. These topological signatures also provide lower bounds to \(d_{GH}\) that are computationally more accessible [10]. For \(mm\)-spaces, a similar role is played by the Gromov-Prokhorov distance \(d_{GP}\)[19, 18] and the Gromov-Wasserstein distance \(d_{GW}\)[28, 31] that highlight the shape of regions of larger probability mass. As ubiquitous as \(d_{GH}\), \(d_{GP}\) and \(d_{GW}\) have been in geometric and topological data analysis [10, 28, 5], only a few aspects of their functional counterparts have been investigated (cf. [2, 10, 9, 21, 33]) with a more thorough study seemingly lacking in the literature. This paper carries out such a study and also investigates discrete representations of \(mm\)-fields by means of _functional curvature sets_ that encode their structural and functional shape.
The aforementioned topological signatures are frequently derived from filtered complexes or spaces such as the Vietoris-Rips filtration of a metric point cloud [34, 8], the neighborhood filtration (also known as the offset filtration) of a compact subspace of a metric space [10, 20], or the metric-measure bifiltration of an \(mm\)-space [5, 30]. For this reason, we also study variants of such (multiparameter) filtrations in the functional setting, establishing stability results that ensure that they can be used reliably in data analysis.
### Main Results
We study the class of \(1\)-Lipschitz maps \(\pi_{X}\colon X\to B\), where \(X\) and \(B\) are Polish (complete and separable) metric spaces, \(B\) fixed. These mappings are the morphisms in the category Met of metric spaces restricted to Polish spaces. We refer to such \(1\)-Lipschitz mappings as \(B\)_-valued fields_ on \(X\), or simply \(B\)-fields, and denote them as triples \(\mathfrak{X}=(X,d_{X},\pi_{X})\). If \((X,d_{X})\) is also equipped with a Borel probability measure \(\mu_{X}\), we refer to the quadruple \(\mathfrak{X}=(X,d_{X},\pi_{X},\mu_{X})\) as an \(mm\)-field over \(B\).
**Metric Fields.** We define the Gromov-Hausdorff distance \(d_{GH}\) between compact \(B\)-fields as the infimum of the Hausdorff distance between isometric embeddings into common \(B\)-fields. Analogous to the corresponding result for compact metric spaces [7], we provide a characterization of \(d_{GH}\) in terms of functional distortions and use it to prove the following theorem.
**Theorem 2.12**.: _The Gromov-Hausdorff distance \(d_{GH}\) metrizes the moduli space \(\mathcal{F}_{B}\) of compact \(B\)-fields and \((\mathcal{F}_{B},d_{GH})\) is a Polish metric space._
We also show in Proposition 2.14 that \((\mathcal{F}_{B},d_{GH})\) is a geodesic space if and only if \(B\) is a geodesic space. A second characterization of \(d_{GH}\) is in terms of isometric embeddings into a fixed _Urysohn universal field_\(\mathcal{U}_{B}\) modulo the action of isometries. This Urysohn \(B\)-field has the property that any other \(B\)-field can be isometrically embedded in it and any two embeddings of the same \(B\)-field differ by the (left) action of an automorphism of \(\mathcal{U}_{B}\).
Let \(F(\mathcal{U}_{B})\) be the space of compact subfields of a Urysohn field \(\mathcal{U}_{B}=(U,B,\pi_{U})\), equipped with the Hausdorff distance, and \(Aut(B)\) the automorphism group of \(\mathcal{U}_{B}\), which acts on \(\mathcal{U}_{B}\) by isometries. Denote the quotient metric on \(F(\mathcal{U}_{B})/Aut(B)\) by \(d_{F}^{B}\).
**Theorem 3.3**.: _The moduli space \((\mathcal{F}_{B},d_{GH})\) of isometry classes of compact \(B\)-fields equipped with the Gromov-Hausdorff distance is isometric to the quotient space \((F(\mathcal{U}_{B})/Aut(B),d_{F}^{B})\)._
**Metric-Measure Fields.** We define and investigate the main properties of \(mm\)-field analogues of the Gromov-Prokhorov and Gromov-Wasserstein distances that have been studied extensively in the realm of \(mm\)-spaces [19, 18, 28, 31]. For \(mm\)-fields \(\mathcal{X}\) and \(\mathcal{Y}\) over \(B\), the Gromov-Prokhorov distance is denoted \(d_{GP}(\mathcal{X},\mathcal{Y})\), whereas the Gromov-Wasserstein distance depends on a parameter \(1\leq p\leq\infty\) and is denoted \(d_{GW,p}(\mathcal{X},\mathcal{Y})\). Two different approaches to Gromov-Wasserstein have been developed for \(mm\)-spaces in [28] and [31] and the field counterpart we present is along the lines of [28].
We define \(d_{GP}(\mathcal{X},\mathcal{Y})\) as the infimum of the Prokhorov distances between the pushforwards of \(\mu_{X}\) and \(\mu_{Y}\) under isometric embeddings of \(\mathcal{X}\) and \(\mathcal{Y}\) into common \(B\)-fields, and Theorem 4.6 shows how to express \(d_{GP}\) in terms of couplings. This characterization is used in Theorem 4.9 to prove that \(d_{GP}\) metrizes the set \(\widehat{\mathcal{F}}_{B}\) of isometry classes of fully supported \(mm\)-fields over \(B\), making \((\widehat{\mathcal{F}}_{B},d_{GP})\) a Polish space.
The Gromov-Wasserstein distance \(d_{GW,p}(\mathcal{X},\mathcal{Y})\), \(1\leq p\leq\infty\), is defined through couplings and Theorem 5.5 provides the following characterization of \(d_{GW,\infty}\) in terms of equidistributed sequences \((x_{i})\) and \((y_{i})\) in \((X,d_{X},\mu_{X})\) and \((Y,d_{Y},\mu_{Y})\), respectively:
\[d_{GW,\infty}(\mathcal{X},\mathcal{Y})=\inf\max\left\{\frac{1}{2}\sup_{i,j}|d _{X}(x_{i},x_{j})-d_{Y}(y_{i},y_{j})|,\,\sup_{i}d_{B}(\pi_{X}(x_{i}),\pi_{Y}(y _{i}))\right\},\]
with the infimum taken over all equidistributed sequences \((x_{i})\) and \((y_{i})\). (A sequence \((x_{i})\) in \((X,d_{X})\) is \(\mu_{X}\)-equidistributed if the empirical measures \(\sum_{i=1}^{n}\delta_{x_{i}}/n\) converge weakly to \(\mu_{X}\).) To our knowledge, this result is new even for the Gromov-Wasserstein distance between metric-measure spaces, a result that follows by taking \(\pi_{X}=\pi_{Y}=0\).
With an eye toward empirical estimation of the Gromov-Wasserstein distance between \(mm\)-fields, we introduce the notion of _extended distance matrices_, much in the way distance matrices are used to study \(mm\)-spaces (cf. [19, 17]). For an \(mm\)-field \(\mathcal{X}\) over \(B\) and a sequence \((x_{i})\) in \(X\), \(i\geq 1\), form the countably infinite (pseudo) distance matrix \(R=(r_{ij})\in\mathbb{R}^{\mathbb{N}\times\mathbb{N}}\) and the infinite sequence \(b=(b_{i})\in B^{\mathbb{N}}\), where \(r_{ij}=d_{X}(x_{i},x_{j})\) and \(b_{i}=\pi(x_{i})\in B\). We refer to the pair \((R,b)\) as the _augmented distance matrix_ of \(\mathcal{X}\) associated with the sequence \((x_{i})\), which records the shape of the graph of \(\pi_{X}\) restricted to the sequence. This construction let us define a Borel measurable mapping \(F_{\mathcal{X}}\colon X^{\infty}\to\mathbb{R}^{\mathbb{N}\times\mathbb{N}} \times B^{\infty}\), with both the domain and co-domain equipped with the weak topology. The pushforward of \(\mu^{\infty}\) under \(F_{\mathcal{X}}\) yields a probability measure on \(\mathbb{R}^{\mathbb{N}\times\mathbb{N}}\times B^{\infty}\) that we denote by
\[\mathcal{D}_{\mathcal{X}}:=(F_{\mathcal{X}})_{*}(\mu^{\infty})\,, \tag{1}\]
and refer to as the _field curvature distribution_ of \(\mathcal{X}\). This terminology is motivated by the notion of _curvature set_ of a metric space [19]. A similar construction for finite sequences \(\{x_{i}\}\), \(1\leq i\leq n\), gives a measure \(\mathcal{D}_{\mathcal{X}}^{n}\) on \(\mathbb{R}^{n\times n}\times B^{n}\).
Our main result supporting empirical estimation of the Gromov-Wasserstein distance between \(mm\)-fields is the following convergence theorem.
**Theorem 6.5**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be bounded \(mm\)-fields over \(B\). Then, for any \(1\leq p\leq\infty\), we have_
\[\lim_{n\to\infty}d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y} }^{n})=d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})=d_{GW, \infty}(\mathcal{X},\mathcal{Y}).\]
A metric-measure field counterpart to Gromov's Reconstruction Theorem for \(mm\)-spaces [19] is a special case of this more general result. The \(mm\)-field reconstruction theorem is stated and proven in Theorem 6.3.
### Organization
Section 2 introduces the Gromov-Hausdorff distance between compact fields and provides a characterization of \(d_{GH}\) in terms of distortions of (functional) correspondences, whereas Section 3 shows that \(d_{GH}\) can be realized as a Hausdorff distance through isometric embeddings in a Urysohn universal field. The \(d_{GP}\) and \(d_{GW}\) distances for metric-measure fields are studied in Sections 4 and 5, respectively. Section 6 introduces a representation of \(mm\)-fields by distributions of infinite augmented distance matrices, proves a field reconstruction theorem based on these distributions and also addresses empirical approximation questions. Section 7 introduces functional analogues of the Vietoris-Rips and neighborhood filtrations and proves their stability. Section 8 closes the paper with a summary and some discussion.
## 2 Gromov-Hausdorff Distance for Metric Fields
Recall that a \(B\)-field is a \(1\)-Lipschitz map \(\pi\colon X\to B\), where \(X\) and \(B\) are Polish metric spaces. We sometimes denote the field as a triple \(\mathcal{X}=(X,d_{X},\pi)\).
**Definition 2.1**.: Let \(\mathcal{X}=(X,d_{X},\pi_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y})\) be \(B\)-fields.
1. A mapping from \(\mathcal{X}\) to \(\mathcal{Y}\) over \(B\), denoted \(\Phi\colon\mathcal{X}\Rightarrow\mathcal{Y}\), consists of a \(1\)-Lipschitz mapping \(\phi\colon X\to Y\) such that the diagram \[\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{ \tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{ \tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{\tikzcd{X}{ \tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y} \tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tikzcd{Y}\tik
**Lemma 2.4** (Gluing Lemma).: _Let \(\mathcal{Y},\mathcal{Z}_{1},\mathcal{Z}_{2}\) be \(B\)-fields and \(\Phi\colon\mathcal{Y}\hookrightarrow\mathcal{Z}_{1}\), \(\Psi\colon\mathcal{Y}\hookrightarrow\mathcal{Z}_{2}\) be isometric embeddings. Let \(Z\) be the quotient of the disjoint union \(Z_{1}\sqcup Z_{2}\) obtained by identifying \(\phi(y)\) with \(\psi(y)\) for each \(y\in Y\), let \(d_{Z}\colon Z\times Z\to[0,\infty)\) restrict to \(d_{Z_{1}}\) on \(Z_{1}\) and to \(d_{Z_{2}}\) on \(Z_{2}\), and be given by \(d_{Z}(z_{1},z_{2}):=\inf_{y\in Y}d_{Z_{1}}(z_{1},\phi(y))+d_{Z_{2}}(z_{2},\psi(y))\) for \(z_{1}\in Z_{1}\) and \(z_{2}\in Z_{2}\), and let \(\pi_{Z}\colon Z\to B\) restrict to \(\pi_{Z_{1}}\) on \(Z_{1}\) and to \(\pi_{Z_{2}}\) on \(Z_{2}\). Then \(d_{Z}\) is a well-defined metric on \(Z\) and \(\pi_{Z}\) is 1-Lipschitz. Hence, \(\mathcal{Z}=(Z,d_{Z},\pi_{Z})\) is a \(B\)-field, and \(\mathcal{Z}_{1},\mathcal{Z}_{2}\) are isometrically included in \(\mathcal{Z}\)._
Proof.: To show that \(d_{Z}\) is well defined, we verify that \(d_{Z}(z_{1},\psi(y))=d_{Z}(z_{1},\phi(y))\) and \(d_{Z}(z_{2},\psi(y))=d_{Z}(z_{2},\phi(y))\), for any \(z_{1}\in Z_{1}\), \(z_{2}\in Z_{2}\) and \(y\in Y\). Indeed,
\[\begin{split} d_{Z}(z_{1},\psi(y))&=\inf_{y^{\prime }\in Y}d_{Z_{1}}(z_{1},\phi(y^{\prime}))+d_{Z_{2}}(\psi(y^{\prime}),\psi(y))= \inf_{y^{\prime}\in Y}d_{Z_{1}}(z_{1},\phi(y^{\prime}))+d_{Y}(y^{\prime},y)\\ &=\inf_{y^{\prime}\in Y}d_{Z_{1}}(z_{1},\phi(y^{\prime}))+d_{Z_{1 }}(\phi(y^{\prime}),\phi(y))=d_{Z_{1}}(z_{1},\phi(y))=d_{Z}(z_{1},\phi(y)). \end{split} \tag{4}\]
Similarly, \(d_{Z}(z_{2},\psi(y))=d_{Z}(z_{2},\phi(y))\). Thus, \(d_{Z}\) is well defined. To show that \(d_{Z}\) is a metric, we show that definiteness and the triangle inequality hold, since other properties of a metric follow easily from the definition. Assume that \(z_{1}\in Z_{1}\), \(z_{2}\in Z_{2}\) and \(d_{Z}(z_{1},z_{2})=0\). For each integer \(n>0\), there exists \(y_{n}\in Y\) such that \(d_{Z_{1}}(z_{1},\phi(y_{n}))\leq 1/n\), \(d_{Z_{2}}(z_{2},\psi(y_{n}))\leq 1/n\). This shows that \((y_{n})\) is a Cauchy sequence in \(Y\). By completeness, it has a limit \(y\in Y\). Then, \(z_{1}=\phi(y)\) and \(z_{2}=\psi(y)\), implying that \(z_{1}\) is equal to \(z_{2}\) in \(Z\). This shows definiteness.
For the triangle inequality, let \(z_{1},z_{1}^{\prime}\in Z_{1}\), and \(z_{2}\in Z_{2}\). We have
\[\begin{split} d_{Z}(z_{1},z_{1}^{\prime})+d_{Z}(z_{1}^{\prime},z_ {2})&=d_{Z_{1}}(z_{1},z_{1}^{\prime})+\inf_{y\in Y}d_{Z_{1}}(z_{1 }^{\prime},\phi(y))+d_{Z_{2}}(\psi(y),z_{2})\\ &\geq\inf_{y\in Y}d_{Z_{1}}(z_{1},\phi(y))+d_{Z_{2}}(\psi(y),z_{ 2})=d_{Z}(z_{1},z_{2})\end{split} \tag{5}\]
and
\[\begin{split} d_{Z}(z_{1},z_{2})+d_{Z}(z_{2},z_{1}^{\prime})& =\inf_{y,y^{\prime}\in Y}d_{Z_{1}}(z_{1},\phi(y))+d_{Z_{1}}(z_{1}^{ \prime},\phi(y^{\prime}))+d_{Z_{2}}(z_{2},\psi(y))+d_{Z_{2}}(z_{2},\psi(y^{ \prime}))\\ &\geq\inf_{y,y^{\prime}\in Y}d_{Z_{1}}(z_{1},\phi(y))+d_{Z_{1}}(z_ {1}^{\prime},\phi(y^{\prime}))+d_{Z_{2}}(\psi(y),\psi(y^{\prime}))\\ &=\inf_{y,y^{\prime}\in Y}d_{Z_{1}}(z_{1},\phi(y))+d_{Z_{1}}(z_ {1}^{\prime},\phi(y^{\prime}))+d_{Z_{1}}(\phi(y),\phi(y^{\prime}))\\ &\geq\ d_{Z_{1}}(z_{1},z_{1}^{\prime})=d_{Z}(z_{1},z_{1}^{\prime}).\end{split} \tag{6}\]
Similarly, for \(z_{1}\in Z_{1}\) and \(z_{2},z_{2}^{\prime}\in Z_{2}\), we have
\[d_{Z}(z_{1},z_{2})+d_{Z}(z_{2},z_{2}^{\prime})\geq d_{Z}(z_{1},z_{2}^{\prime}),\,d_{Z}(z_{2},z_{1})+d_{Z}(z_{1},z_{2}^{\prime})\geq d_{Z}(z_{2},z_{2}^{ \prime}). \tag{7}\]
Thus, the triangle inequality holds and \(d_{Z}\) is a metric on \(Z\), as claimed.
The map \(\pi_{Z}\) is well defined because if \(z_{1}=\phi(y)\) and \(z_{2}=\psi(y)\), then \(\pi_{Z_{1}}(z_{1})=\pi_{Z_{2}}(z_{2})=\pi_{Y}(y)\). Let us show that \(\pi_{Z}\) is 1-Lipschitz. If \(z_{1}\in Z_{1}\) and \(z_{2}\in Z_{2}\), then
\[\begin{split} d_{Z}(z_{1},z_{2})&=\inf_{y\in Y}d_{Z_ {1}}(z_{1},\phi(y))+d_{Z_{2}}(z_{2},\psi(y))\\ &\geq\inf_{y\in Y}d_{B}(\pi_{Z_{1}}(z_{1}),\pi_{Z_{1}}(\phi(y)))+d_ {B}(\pi_{Z_{2}}(z_{2}),\pi_{Z_{2}}(\psi(y)))\\ &=\inf_{y\in Y}d_{B}(\pi_{Z_{1}}(z_{1}),\pi_{Y}(y))+d_{B}(\pi_{Z_{ 2}}(z_{2}),\pi_{Y}(y))\\ &\geq d_{B}(\pi_{Z_{1}}(z_{1}),\pi_{Z_{2}}(z_{2}))=d_{B}(\pi_{Z}(z_ {1}),\pi_{Z}(z_{2})),\end{split} \tag{8}\]
as desired. This completes the proof.
**Proposition 2.5**.: _Let \(\mathcal{X},\mathcal{Y},\mathcal{W}\) be \(B\)-fields. Then, \(d_{GH}(\mathcal{X},\mathcal{W})\leq d_{GH}(\mathcal{X},\mathcal{Y})+d_{GH}( \mathcal{Y},\mathcal{W})\)._
Proof.: Let \(r>d_{GH}(\mathcal{X},\mathcal{Y})\) and \(s>d_{GH}(\mathcal{Y},\mathcal{W})\). There exists \(B\)-fields \(\mathcal{Z}_{1}\) and \(\mathcal{Z}_{2}\), and isometric embeddings \(\Lambda_{1}\colon\mathcal{X}\hookrightarrow\mathcal{Z}_{1}\), \(\Phi\colon\mathcal{Y}\hookrightarrow\mathcal{Z}_{1}\), \(\Psi\colon\mathcal{Y}\hookrightarrow\mathcal{Z}_{2}\), and \(\Lambda_{2}\colon\mathcal{W}\hookrightarrow\mathcal{Z}_{2}\) such that \(d_{H}^{Z_{1}}(\lambda_{1}(X),\phi(Y))<r\) and \(d_{H}^{Z_{2}}(\psi(Y),\lambda_{2}(W))<s\). Let \(\mathcal{Z}\) be the \(B\)-field described in Lemma 2.4. We show that \(d_{H}^{Z}(\lambda_{1}(X),\lambda_{2}(W))<r+s\). Given \(x\in X\), there exist \(y\in Y\) and \(w\in W\) such that \(d_{Z_{1}}(\lambda_{1}(x),\phi(y))<r\) and \(d_{Z_{2}}(\psi(y),\lambda_{2}(w))<s\). Then,
\[d_{Z}(\lambda_{1}(x),\lambda_{2}(w))\leq d_{Z}(\lambda_{1}(x),\phi(y))+d_{Z}(\phi( y),\psi(y))+d_{Z}(\psi(y),\lambda_{2}(w))<r+s. \tag{9}\]
Similarly, for any \(w\) in \(W\), there exists \(x\) in \(X\) such that \(d_{Z}(\lambda_{1}(x),\lambda_{2}(w))<r+s\). Hence,
\[d_{GH}(\mathcal{X},\mathcal{W})\leq d_{H}^{Z}(\lambda_{1}(X),\lambda_{2}(W))<r+s. \tag{10}\]
Since \(r>d_{GH}(\mathcal{X},\mathcal{Y})\) and \(s>d_{GH}(\mathcal{Y},\mathcal{W})\) are arbitrary, we obtain \(d_{GH}(\mathcal{X},\mathcal{W})\leq d_{GH}(\mathcal{X},\mathcal{Y})+d_{GH}( \mathcal{Y},\mathcal{W})\) as desired.
In analogy with the corresponding result for compact metric spaces, next, we obtain a characterization of the Gromov-Hausdorff distance for \(B\)-fields in terms of correspondences. Recall that a correspondence between two sets \(X\) and \(Y\) is a relation \(R\subseteq X\times Y\) such that the projection to each factor restricted to \(R\) is surjective.
**Definition 2.6** (Metric Field Distortion).: Let \(\mathcal{X}=(X,d_{X},\pi_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y})\) be \(B\)-fields and \(R\) be a relation between \(X\) and \(Y\). The _metric field distortion_ of \(R\), denoted \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)\), is defined as
\[\operatorname{dis}_{\pi_{X},\pi_{Y}}(R):=\max\big{(}\sup_{(x,y),(x^{\prime},y^ {\prime})\in R}|d_{X}(x,x^{\prime})-d_{Y}(y,y^{\prime})|,2\sup_{(x,y)\in R}d_ {B}(\pi_{X}(x),\pi_{Y}(y))\big{)}.\]
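For finite fields and a finite relation, the distortion above can be evaluated directly; the helper below (with \(B=\mathbb{R}\), so that \(d_{B}\) is the absolute difference, and with names of our choosing) is a minimal illustration.

```python
import numpy as np

def field_distortion(dX, dY, piX, piY, R):
    # dX, dY: distance matrices; piX, piY: B-values (B = R); R: list of index pairs (i, j).
    metric_part = max(abs(dX[i, ip] - dY[j, jp]) for (i, j) in R for (ip, jp) in R)
    field_part = max(abs(piX[i] - piY[j]) for (i, j) in R)
    return max(metric_part, 2*field_part)

# Example: two 2-point fields over B = R matched by the identity correspondence.
dX = np.array([[0.0, 1.0], [1.0, 0.0]]); piX = np.array([0.0, 1.0])
dY = np.array([[0.0, 2.0], [2.0, 0.0]]); piY = np.array([0.0, 0.5])
print(field_distortion(dX, dY, piX, piY, [(0, 0), (1, 1)]))   # max(|1-2|, 2*|1-0.5|) = 1.0
```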
The following construction introduces a 1-parameter family of \(B\)-fields associated with a relation and its distortion.
**Definition 2.7**.: Let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(B\)-fields, \(R\) a relation between \(X\) and \(Y\), and \(r>0\) satisfying \(r\geq\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)/2\). Let \(Z\) be the disjoint union of \(X\) and \(Y\), and \(d_{Z}:Z\times Z\to\mathbb{R}\) be given by \(d_{Z}|_{X\times X}=d_{X}\), \(d_{Z}|_{Y\times Y}=d_{Y}\) and
\[d_{Z}(x,y)=d_{Z}(y,x)=r+\inf_{(x^{\prime},y^{\prime})\in R}d_{X}(x,x^{\prime}) +d_{Y}(y,y^{\prime}),\]
for \(x\in X\) and \(y\in Y\). Letting \(\pi_{Z}:Z\to B\) be given by \(\pi_{Z}|_{X}=\pi_{X}\) and \(\pi_{Z}|_{Y}=\pi_{Y}\), we define the \(B\)-field \(\mathcal{X}\coprod_{R,r}\mathcal{Y}\) by \(\mathcal{X}\coprod_{R,r}\mathcal{Y}:=(Z,d_{Z},\pi_{Z})\) (see next lemma).
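For finite spaces, the metric of Definition 2.7 can be written out explicitly; the sketch below (names and data layout are ours) computes \(d_{Z}\) from distance matrices, a list of index pairs representing \(R\), and a parameter \(r\).

```python
import numpy as np

def glued_metric(dX, dY, R, r):
    # Metric of Definition 2.7 on the disjoint union of two finite spaces;
    # requires r >= dis(R)/2 for the result to satisfy the triangle inequality.
    nX, nY = dX.shape[0], dY.shape[0]
    dZ = np.zeros((nX + nY, nX + nY))
    dZ[:nX, :nX], dZ[nX:, nX:] = dX, dY
    for i in range(nX):
        for j in range(nY):
            dZ[i, nX + j] = dZ[nX + j, i] = r + min(dX[i, ip] + dY[j, jp] for (ip, jp) in R)
    return dZ
```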
**Lemma 2.8**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(B\)-fields, \(R\) a relation between \(X\) and \(Y\), and \(r\geq\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)/2\). Then, \(\mathcal{X}\coprod_{R,r}\mathcal{Y}\) is a \(B\)-field._
Proof.: Let \((Z,d_{Z},\pi_{Z}):=\mathcal{X}\coprod_{R,r}\mathcal{Y}\). Definiteness and symmetry of \(d_{Z}\) are clear from the definition. We verify the triangle inequality. Let \(x_{1},x_{2}\in X\), and \(y_{1},y_{2}\in Y\). We have
\[\begin{split} d_{Z}(x_{1},x_{2})+d_{Z}(x_{2},y_{1})& =d_{X}(x_{1},x_{2})+r+\inf_{(x^{\prime},y^{\prime})\in R}d_{X}(x_{ 2},x^{\prime})+d_{Y}(y^{\prime},y_{1})\\ &\geq r+\inf_{(x^{\prime},y^{\prime})\in R}d_{X}(x_{1},x^{\prime })+d_{Y}(y^{\prime},y_{1})=d_{Z}(x_{1},y_{1}).\end{split} \tag{11}\]
Similarly, \(d_{Z}(x_{1},y_{1})+d_{Z}(y_{1},y_{2})\geq d_{Z}(x_{1},y_{2})\). We also have
\[\begin{split} d_{Z}(x_{1},y_{1})+d_{Z}(y_{1},x_{2})& =2r+\inf_{(x^{\prime},y^{\prime}),(x^{\prime\prime},y^{\prime \prime})\in R}d_{X}(x_{1},x^{\prime})+d_{X}(x_{2},x^{\prime\prime})+d_{Y}(y_{ 1},y^{\prime})+d_{Y}(y_{1},y^{\prime\prime})\\ &\geq 2r+\inf_{(x^{\prime},y^{\prime}),(x^{\prime\prime},y^{\prime \prime})\in R}d_{X}(x_{1},x^{\prime})+d_{X}(x_{2},x^{\prime\prime})+d_{Y}(y^{ \prime},y^{\prime\prime})\\ &\geq\inf_{(x^{\prime},y^{\prime}),(x^{\prime\prime},y^{\prime \prime})\in R}d_{X}(x_{1},x^{\prime})+d_{X}(x_{2},x^{\prime\prime})+d_{X}(x^{ \prime},x^{\prime\prime})\\ &\geq d_{X}(x_{1},x_{2})=d_{Z}(x_{1},x_{2}).\end{split} \tag{12}\]
Similarly, \(d_{Z}(y_{1},x_{1})+d_{Z}(x_{1},y_{2})\geq d_{Z}(y_{1},y_{2})\). Hence, \(d_{Z}\) is a metric on \(Z\). Moreover, \(Z\) is complete because any sequence in \(Z\) has a subsequence contained in either \(X\) or \(Y\), which are complete. Since the union of countable dense sets in \(X\) and \(Y\) is dense in \(Z\), it follows that \(Z\) is Polish. Lastly, we show that \(\pi_{Z}\) is 1-Lipschitz. For \(x\in X\), \(y\in Y\), we have
\[\begin{split} d_{B}(\pi_{X}(x),\pi_{Y}(y))&\leq\inf_{(x ^{\prime},y^{\prime})\in R}d_{B}(\pi_{X}(x),\pi_{X}(x^{\prime}))+d_{B}(\pi_{X}(x ^{\prime}),\pi_{Y}(y^{\prime}))+d_{B}(\pi_{Y}(y),\pi_{Y}(y^{\prime}))\\ &\leq r+\inf_{(x^{\prime},y^{\prime})\in R}d_{X}(x,x^{\prime})+d_{Y} (y,y^{\prime})=d_{Z}(x,y).\end{split} \tag{13}\]
This completes the proof.
**Lemma 2.9**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(B\)-fields, \(R\) be a correspondence between \(X\) and \(Y\), \(r>0\) and \(r\geq\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)/2\). If \(\mathcal{Z}:=\mathcal{X}\coprod_{R,r}\mathcal{Y}\), then \(d_{H}^{\mathcal{Z}}(X,Y)=r\)._
Proof.: For \(x\in X\), \(y\in Y\), we have \(d_{Z}(x,y)\geq r\), hence \(d_{H}^{Z}(X,Y)\geq r\). Since \(R\) is a correspondence, for any \(x\in X\), there exists \(y\in Y\) such that \((x,y)\in R\), so \(d_{Z}(x,y)=r\). Similarly, for any \(y\in Y\), there exists \(x\in X\) such that \((x,y)\in R\), so \(d_{Z}(x,y)=r\). Hence, we get \(d_{H}^{Z}(X,Y)\leq r\), implying \(d_{H}^{Z}(X,Y)=r\).
**Theorem 2.10**.: _If \(\mathcal{X}=(X,d_{X},\pi_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y})\) are \(B\)-fields, then_
\[d_{GH}(\mathcal{X},\mathcal{Y})=\inf_{R}\operatorname{dis}_{\pi_{X},\pi_{Y}}(R )/2,\]
_where the infimum is taken over all correspondences between \(X\) and \(Y\)._
Proof.: Let \(r>d_{GH}(\mathcal{X},\mathcal{Y})\). There is a \(B\)-field \(\mathcal{Z}\) and isometric embeddings \(\Phi\colon\mathcal{X}\Rightarrow\mathcal{Z}\) and \(\Psi\colon\mathcal{Y}\Rightarrow\mathcal{Z}\) over \(B\) such that \(d_{H}^{Z}(\phi(X),\psi(Y))<r\). For \(x\in X\) and \(y\in Y\), abusing notation, we write \(d_{Z}(x,y)\) to denote \(d_{Z}(\phi(x),\psi(y))\). We also write \(\pi_{Z}(x)\) and \(\pi_{Z}(y)\) for \(\pi_{Z}(\phi(x))\) and \(\pi_{Z}(\psi(y))\), respectively. Let \(R\) be the correspondence between \(X\) and \(Y\) given by
\[R:=\{(x,y)\in X\times Y:d_{Z}(x,y)<r\}. \tag{14}\]
We have
\[|d_{X}(x,x^{\prime})-d_{Y}(y,y^{\prime})|\leq d_{Z}(x,y)+d_{Z}(x^{\prime},y^{ \prime})\leq 2r \tag{15}\]
and
\[d_{B}(\pi_{X}(x),\pi_{Y}(y))=d_{B}(\pi_{Z}(x),\pi_{Z}(y))\leq d_{Z}(x,y)\leq r. \tag{16}\]
Hence, \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)<2r.\) Since \(r>d_{GH}(\mathcal{X},\mathcal{Y})\) is arbitrary, we get
\[d_{GH}(\mathcal{X},\mathcal{Y})\geq\inf_{R}\operatorname{dis}_{\pi_{X},\pi_{Y }}(R)/2. \tag{17}\]
For the converse inequality, let \(R\) be a correspondence between \(X\) and \(Y\) and \(r\geq\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)/2\). By Lemma 2.9, there exists a \(B\)-field \(\mathcal{Z}\) containing isometric copies of \(\mathcal{X},\mathcal{Y}\) such that \(d_{H}^{\mathcal{Z}}(X,Y)=r\). Hence, \(d_{GH}(\mathcal{X},\mathcal{Y})\leq r\). Since the correspondence \(R\) and \(r>\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)/2\) are arbitrary, we have
\[d_{GH}(\mathcal{X},\mathcal{Y})\leq\inf_{R}\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)/2, \tag{18}\]
as claimed.
Our next goal is to establish the existence of an optimal correspondence that realizes the Gromov-Hausdorff distance between compact fields.
**Proposition 2.11**.: _If \(\mathcal{X},\mathcal{Y}\) are compact \(B\)-fields, then there exists a correspondence \(R\) between \(X\) and \(Y\) such that_
\[d_{GH}(\mathcal{X},\mathcal{Y})=\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)/2\,.\]
Proof.: Endow \(X\times Y\) with the product \(\sup\) metric and let \(\mathcal{C}\) be set of all closed subspaces of \(X\times Y\) equipped with the Hausdorff distance. We claim that \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(\cdot):\mathcal{C}\to[0,\infty)\) is \(4\)-Lipschitz. Indeed, let \(S,T\in\mathcal{C}\), \(\epsilon>d_{H}(S,T)\) and \((x,y),(x^{\prime},y^{\prime})\in S\). Then, there exist \((w,z),(w^{\prime},z^{\prime})\in T\) such that \(d_{X}(x,w),d_{X}(x^{\prime},w^{\prime})\), \(d_{Y}(y,z)\), and \(d_{Y}(y^{\prime},z^{\prime})<\epsilon\). Thus, we have
\[|d_{X}(x,x^{\prime})-d_{Y}(y,y^{\prime})| \leq|d_{X}(x,x^{\prime})-d_{X}(w,w^{\prime})|+|d_{X}(w,w^{\prime}) -d_{Y}(z,z^{\prime})|+|d_{Y}(z,z^{\prime})-d_{Y}(y,y^{\prime})|\] \[\leq d_{X}(x,w)+d_{X}(x^{\prime},w^{\prime})+\operatorname{dis}_{ \pi_{X},\pi_{Y}}(T)+d_{Y}(z,y)+d_{Y}(z^{\prime},y^{\prime}) \tag{19}\] \[\leq\operatorname{dis}_{\pi_{X},\pi_{Y}}(T)+4\epsilon\,.\]
We also have
\[d_{B}(\pi_{X}(x),\pi_{Y}(y)) \leq d_{B}(\pi_{X}(x),\pi_{X}(w))+d_{B}(\pi_{X}(w),\pi_{Y}(z))+d_{ B}(\pi_{Y}(z),\pi_{Y}(y)) \tag{20}\] \[\leq\operatorname{dis}_{\pi_{X},\pi_{Y}}(T)/2+2\epsilon.\]
Since \((x,y),(x^{\prime},y^{\prime})\in S\) and \(\epsilon>d_{H}(S,T)\) are arbitrary, we have \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(S)\leq\operatorname{dis}_{\pi_{X},\pi_{Y}}(T)+4d_{H}(S,T)\). Similarly, we can show that \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(T)\leq\operatorname{dis}_{\pi_{X},\pi_{Y}}(S)+4d_{H}(S,T)\). Therefore,
\[|\operatorname{dis}_{\pi_{X},\pi_{Y}}(S)-\operatorname{dis}_{\pi_{X},\pi_{Y}}(T )|\leq 4d_{H}(S,T)\,; \tag{21}\]
that is, the distortion function is \(4\)-Lipschitz. By Theorem 2.10, there exists a correspondence \(R_{n}\) between \(X\) and \(Y\) such that
\[\operatorname{dis}_{\pi_{X},\pi_{Y}}(R_{n})<2d_{GH}(\mathfrak{X},\mathfrak{Y} )+1/n. \tag{22}\]
Since the distortion of the closure of a relation is the same as the distortion of the original relation, without loss of generality, we can assume that \(R_{n}\) is closed. By [7, Theorem 7.3.8], \(\mathcal{C}\) is compact so that, by passing to a subsequence, we can assume that \(R_{n}\) converges to \(R\) in the Hausdorff metric. This implies that \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)=2d_{GH}(\mathcal{X},\mathcal{Y})\) by (Lipschitz) continuity.
To conclude the proof, we show that the projections of \(R\) to \(X\) and \(Y\) are surjective. For each \(x\in X\), there exists \(y_{n}\in Y\) such that \((x,y_{n})\in R_{n}\). By compactness, we can assume that \(y_{n}\) converges to \(y\in Y\). Given any \(\epsilon>0\), \((x,y_{n})\) is in the closed \(\epsilon\)-neighborhood \(R^{\epsilon}\) of \(R\) for \(n\)-large enough, which implies that \((x,y)\in R^{\epsilon}\). Since \(R\) is closed and \(\epsilon>0\) is arbitrary, \((x,y)\in R\). Similarly, for any \(y\in Y\), there exists \(x\in X\) such that \((x,y)\in R\). Hence, \(R\) is a correspondence.
Since the cardinality of a compact metric space is at most that of a continuum, the isometry classes of compact \(B\)-fields form a set, which we denote by \(\mathcal{F}_{B}\).
**Theorem 2.12**.: _The Gromov-Hausdorff distance \(d_{GH}\) metrizes the moduli space \(\mathcal{F}_{B}\) of compact \(B\)-fields and \((\mathcal{F}_{B},d_{GH})\) is a Polish metric space._
We begin the proof with a lemma.
**Lemma 2.13**.: _Let \(\mathfrak{X}\) be a compact \(B\)-field and \(B_{0}\) a dense subset of \(B\). For each \(\epsilon>0\), there exists a finite \(B\)-field \(\mathcal{Y}=(Y,d_{Y},\pi_{Y})\) such that \(\pi_{Y}\) takes values in \(B_{0}\), \(d_{Y}\) only takes rational values, and \(d_{GH}(\mathfrak{X},\mathfrak{Y})<\epsilon\)._
Proof.: Let \(Y\) be a finite subset of \(X\) such that for all \(x\in X\) there exists \(y\in Y\) satisfying \(d_{X}(x,y)<\epsilon/3\). For each \(y\in Y\), pick \(b_{y}\in B_{0}\) such that \(d_{B}(\pi_{X}(y),b_{y})<\epsilon/3\). Letting \(n>0\) be an integer such that \(1/n<\epsilon/3\), define \(d_{Y}:Y\times Y\to\mathbb{Q}\) by
\[d_{Y}(y,y^{\prime}):=\lceil n\max(d_{X}(y,y^{\prime}),d_{B}(b_{y},b^{\prime}_{ y}))\rceil/n. \tag{23}\]
Symmetry and definiteness of \(d_{Y}\) are clear, so to show that it is a metric it suffices to verify the triangle inequality:
\[d_{Y}(y,y^{\prime})+d_{Y}(y^{\prime},y^{\prime\prime}) =\big{(}\lceil n\max\{d_{X}(y,y^{\prime}),d_{B}(b_{y},b_{y^{ \prime}})\}\rceil+\lceil n\max\{d_{X}(y^{\prime},y^{\prime\prime}),d_{B}(b_{y^ {\prime}},b_{y^{\prime\prime}})\}\rceil\big{)}/n \tag{24}\] \[\geq\lceil n\max\{d_{X}(y,y^{\prime}),d_{B}(b_{y},b_{y^{\prime}}) \}+\max\{d_{X}(y^{\prime},y^{\prime\prime}),d_{B}(b_{y^{\prime}},b_{y^{\prime \prime}})\}\rceil/n\] \[\geq\lceil n\max\{d_{X}(y,y^{\prime})+d_{X}(y^{\prime},y^{\prime \prime}),d_{B}(b_{y},b_{y^{\prime}})+d_{B}(b_{y^{\prime}},b_{y^{\prime\prime}}) \}\rceil/n\] \[\geq\lceil n\max\{d_{X}(y,y^{\prime\prime}),d_{B}(b_{y},b_{y^{ \prime\prime}})\}\rceil/n=d_{Y}(y,y^{\prime\prime}).\]
Define \(\pi_{Y}:Y\to B\) by \(y\mapsto b_{y}\). Note that, by definition, \(\pi_{Y}\) is \(1\)-Lipschitz and takes values in \(B_{0}\). To conclude, we show that the \(B\)-field \(\mathcal{Y}:=(Y,d_{Y},\pi_{y})\) is \(\epsilon\)-close to \(\mathfrak{X}\). Let \(R\) be the correspondence between \(X\) and \(Y\) given by
\[R:=\{(x,y):x\in X,y\in Y,d_{X}(x,y)<\epsilon/3\}. \tag{25}\]
It is enough to show that \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)\leq 2\epsilon\). Let \((x,y),(x^{\prime},y^{\prime})\in R\). Note that \(d_{Y}(y,y^{\prime})\geq d_{X}(y,y^{\prime})\). Using \(d_{B}(b_{y},b_{y^{\prime}})\leq 2\epsilon/3+d_{B}(\pi_{X}(y),\pi_{X}(y^{ \prime}))\leq 2\epsilon/3+d_{X}(y,y^{\prime})\), we can get
\[|d_{Y}(y,y^{\prime})-d_{X}(x,x^{\prime})| \leq d_{Y}(y,y^{\prime})-d_{X}(y,y^{\prime})+|d_{X}(y,y^{\prime})- d_{X}(x,x^{\prime})| \tag{26}\] \[\leq 1/n+\max(d_{X}(y,y^{\prime}),d_{B}(b_{y},b_{y^{\prime}}))-d_{X}(y,y ^{\prime})+d_{X}(x,y)+d_{X}(x^{\prime},y^{\prime})\] \[\leq\max(d_{X}(y,y^{\prime}),d_{X}(y,y^{\prime})+2\epsilon/3)-d_{X }(y,y^{\prime})+\epsilon<2\epsilon.\]
We also have that
\[d_{B}(\pi_{X}(x),\pi_{Y}(y))\leq d_{B}(\pi_{X}(x),\pi_{X}(y))+d_{B}(\pi_{X}(y), \pi_{Y}(y))\leq d_{X}(x,y)+\epsilon/3<\epsilon\,. \tag{27}\]
Therefore, \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)\leq 2\epsilon\). This completes the proof.
Proof of Theorem 2.12.: Symmetry of \(d_{GH}\) is straightforward and the triangle inequality has been established in Proposition 2.5. To show definiteness, suppose that \(d_{GH}(\mathscr{X},\mathscr{Y})=0\). We need to show that \(\mathscr{X}\) is isometric to \(\mathscr{Y}\). By Proposition 2.11, there exists a correspondence \(R\) between \(X\) and \(Y\) such that \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)=0\). This implies that \(R\) is the graph of an isometry between \(\mathscr{X}\) and \(\mathscr{Y}\).
We now argue that \((\mathcal{F}_{B},d_{GH})\) is Polish. Let \(B_{0}\) be a countable dense set in \(B\) and denote by \(\mathcal{F}\) the collection of isometry classes of finite \(B\)-fields that map into \(B_{0}\) and only have rational distances. By Lemma 2.13, \(\mathcal{F}\) is dense in \(\mathcal{F}_{B}\). Note that a subset of the countable set \(\cup_{n>0}(\mathbb{Q}^{n^{2}}\times B_{0}^{n})\) surjects onto \(\mathcal{F}\), which implies that \(\mathcal{F}\) is countable.
It remains to verify completeness. Let \(\mathscr{X}_{n}\) be a Cauchy sequence of compact \(B\)-fields with respect to Gromov-Hausdorff distance. To prove that the sequence is convergent, it suffices to construct a convergent subsequence. By passing to a subsequence if necessary, we can assume that \(d_{GH}(\mathscr{X}_{n},\mathscr{X}_{n+1})<1/2^{n}\) for all \(n\). Then, there exists a correspondence \(R_{n}\) between \(X_{n}\) and \(X_{n+1}\) such that \(\operatorname{dis}_{\pi_{X_{n}},\pi_{X_{n+1}}}(R_{n})<2/2^{n}\) for all \(n\). Letting \(r_{n}=1/2^{n}\), apply the construction described in Definition 2.7 consecutively to get a \(B\)-field
\[\mathscr{Z}_{n}:=\left(\left((\mathscr{X}_{1}\coprod_{R_{1},r_{1}}\mathscr{X} _{2})\coprod_{R_{2},r_{2}}\mathscr{X}_{3}\right)\dots\right)\coprod_{R_{n},r_ {n}}\mathscr{X}_{n+1}. \tag{28}\]
for each \(n>0\). Clearly, \(\mathscr{Z}_{n}\) is a subfield of \(\mathscr{Z}_{n+1}\). Let \(\mathscr{Z}\) be the completion of the co-limit \(\cup_{n>0}\mathscr{Z}_{n}\). Note that the union of countable dense sets in \(Z_{n}\) is a countable dense set in \(Z\) and therefore \(Z\) is Polish. Hence, \(\mathscr{Z}\) is a \(B\)-field. By Lemma 2.9, \(d_{H}^{Z}(X_{n},X_{n+1})=r_{n}=1/2^{n}\) so \((X_{n})\) forms a Cauchy sequence with respect to Hausdorff distance in \(Z\). By [7, Proposition 7.3.7], there exists a closed subspace \(Y\subseteq Z\) such that \(X_{n}\) Hausdorff converges to \(Y\) in \(Z\). Now we show that \(Y\) is compact. Since \(Y\) is complete, it is enough to show that it is totally bounded. Given \(\epsilon>0\), pick \(n>0\) such that \(d_{H}^{Z}(X_{n},Y)<\epsilon/3\) and let \(A\subseteq X_{n}\) be a finite \(\epsilon/3\)-net of \(X_{n}\). For each \(a\in A\), let \(b_{a}\in Y\) be such that \(d_{Z}(a,b_{a})<\epsilon/3\) and set \(B:=\{b_{a}:a\in A\}\). For any \(y\in Y\), there exist \(x\in X_{n}\) such that \(d_{Z}(x,y)<\epsilon/3\) and \(a\in A\) such that \(d_{Z}(x,a)\leq\epsilon/3\). Therefore, we have
\[d_{Z}(y,b_{a})\leq d_{Z}(y,x)+d_{Z}(x,a)+d_{Z}(a,b_{a})<\epsilon\,. \tag{29}\]
This means that \(B\) is a finite \(\epsilon\)-net in \(Y\) so that \(Y\) is totally bounded. Hence, \(Y\) is compact. Letting \(\mathscr{Y}=(Y,d_{Z}|_{Y},\pi_{Z}|_{Y})\), we have that \(d_{GH}(\mathscr{X}_{n},\mathscr{Y})\leq d_{H}^{Z}(X_{n},Y)\). Thus, \(\mathscr{X}_{n}\) converges to \(\mathscr{Y}\) in the Gromov-Hausdorff distance.
It is known that the isometry classes of compact metric spaces form a geodesic space under the Gromov-Hausdorff distance [23, 11]. (A geodesic space is a metric space such that between any two points, there is a shortest path whose length is equal to the distance between its endpoints.) Is \((\mathcal{F}_{B},d_{GH})\) also a geodesic space? We close this section with an extension of the result to \(B\)-fields.
**Proposition 2.14**.: \((\mathcal{F}_{B},d_{GH})\) _is a geodesic space if and only if \(B\) is a geodesic space._
Proof.: Assume \((\mathcal{F}_{B},d_{GH})\) is a geodesic space. To show that this implies that \(B\) is geodesic, by [7, Theorem 2.4.16], it suffices to prove that any pair of distinct points \(b,b^{\prime}\in B\) has a midpoint. Let \(X=Y:=\{*\}\) be the same \(1\)-point space (with the \(0\) metric) and set \(\mathscr{X}=(\{*\},0,*\mapsto b)\), \(\mathscr{Y}=(\{*\},0,*\mapsto b^{\prime})\). There is a unique correspondence \(R\) between \(X\) and \(Y\) whose distortion is \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)=2d_{B}(b,b^{\prime})\). By Proposition 2.10, \(d_{GH}(\mathscr{X},\mathscr{Y})=d_{B}(b,b^{\prime})\). By assumption, there exists a compact field \(\mathscr{W}\) such that \(d_{GH}(\mathscr{X},\mathscr{W})=d_{GH}(\mathscr{Y},\mathscr{W})=d_{B}(b,b^{ \prime})/2\). Let \(S\) and \(T\) be the unique correspondences between \(X\) and \(W\), and \(Y\) and \(W\), respectively. By Proposition 2.10, \(\operatorname{dis}_{\pi_{X},\pi_{W}}(S)=\operatorname{dis}_{\pi_{Y},\pi_{W}}(T )=d_{B}(b,b^{\prime})\). Then, for any \(w\in W\), we have \(d_{B}(b,\pi_{W}(w))\leq d_{B}(b,b^{\prime})/2\) and \(d_{B}(b^{\prime},\pi_{W}(w))\leq d_{B}(b,b^{\prime})/2\). This implies that \(\pi_{W}(w)\) is a midpoint between \(b\) and \(b^{\prime}\).
To prove the converse statement, assume \(B\) is a geodesic space. Given any pair of compact \(B\)-fields \(\mathscr{X}\) and \(\mathscr{Y}\), we construct a Gromov-Hausdorff geodesic between them. We can assume that \(r=d_{GH}(\mathscr{X},\mathscr{Y})>0\). By Proposition 2.11, there is a correspondence \(R\) between \(X\) and \(Y\) such that \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)=2r\). Let \(\mathscr{W}:=\mathscr{X}\coprod_{R,r}\mathscr{Y}\), Then, both \(\mathscr{X}\) and \(\mathscr{Y}\) are contained in \(\mathscr{W}\) and \(d_{H}^{W}(X,Y)=r\) by Lemma 2.9. Now we use the fact that every compact metric space can be isometrically embedded into a compact geodesic space, for example, its injective hull, (cf. [26]). Let \(\overline{W}\) be a compact geodesic space containing \(W\) and \(Z=\overline{W}\times B\) endowed with the product sup-metric. \(Z\) is a geodesic space because both \(\overline{W}\) and \(B\) have this property. Letting \(\pi_{Z}:Z\to B\) denote projection onto the second coordinate, \(\mathscr{Z}:=(Z,B,\pi_{Z})\) defines a \(B\)-field.
Moreover, \(\mathcal{W}\) isometrically embeds into \(\mathcal{Z}\) via \(w\mapsto(w,\pi_{W}(w))\). Thus, we can assume that \(\mathcal{X}\) and \(\mathcal{Y}\) are sub-fields of \(\mathcal{Z}\) satisfying \(d_{H}^{Z}(X,Y)=r\). Define a correspondence between \(X\) and \(Y\) by
\[R:=\{(x,y):x\text{ is a $d_{Z}$-closest point to $y$ in $X$ or $y$ is a $d_{Z}$-closest point to $x$ in $Y$}\}. \tag{30}\]
Any \((x,y)\in R\) satisfies \(d_{Z}(x,y)\leq r\). For \((x,y)\in R\), let \(\gamma_{x,y}\colon[0,1]\to Z\) be a constant speed geodesic from \(x\) to \(y\) in \(Z\). For \(t\in[0,1]\), let \(A_{t}=\{\gamma_{x,y}(t):(x,y)\in R\}\). Since \(R\) is a correspondence, we have \(A_{0}=X\) and \(A_{1}=Y\). Since
\[d_{Z}(\gamma_{x,y}(s),\gamma_{x,y}(t))=|s-t|\,d_{Z}(x,y)\leq|s-t|\,r\,, \tag{31}\]
for all \((x,y)\in R\) and \(s,t\in[0,1]\), we have
\[d_{H}^{Z}(A_{s},A_{t})\leq|s-t|\,d_{GH}(\mathcal{X},\mathcal{Y})\,. \tag{32}\]
Let \(\mathcal{X}_{t}:=(\bar{A}_{t},d_{Z}|_{\bar{A}_{t}},\pi_{Z}|_{\bar{A}_{t}})\), where \(\bar{A}_{t}\) denotes the closure of \(A_{t}\) in \(Z\). Then, \(\mathcal{X}_{t}\) is a compact \(B\)-field, \(\mathcal{X}_{0}=\mathcal{X}\), \(\mathcal{X}_{1}=\mathcal{Y}\) and
\[d_{GH}(\mathcal{X}_{s},\mathcal{X}_{t})\leq d_{H}^{Z}(\bar{A}_{s},\bar{A}_{t} )=d_{H}^{Z}(A_{s},A_{t})\leq|s-t|\,d_{GH}(\mathcal{X},\mathcal{Y}), \tag{33}\]
for all \(s,t\in[0,1]\), which implies that \(d_{GH}(\mathcal{X}_{s},\mathcal{X}_{t})=|s-t|\,d_{GH}(\mathcal{X},\mathcal{Y})\) by the triangle inequality. This shows that \(t\mapsto\mathcal{X}_{t}\) is a geodesic between \(\mathcal{X}\) and \(\mathcal{Y}\) in \((\mathcal{F}_{B},d_{GH})\).
## 3 The Urysohn Field
The primary goal of this section is to obtain a description of the Gromov-Hausdorff distance for compact \(B\)-fields in terms of the Hausdorff distances between subfields of a Urysohn universal \(B\)-field modulo the action of its automorphism group. An _automorphism_ of a field \(\pi\colon X\to B\) is a bijective isometry \(\psi\colon X\to X\) that satisfies \(\pi\circ\psi=\pi\). An analogous result holds for the Gromov-Prokhorov distance.
**Definition 3.1** (Urysohn Field).: A \(B\)-field \(\pi\colon U\to B\) is called a _Urysohn field over \(B\)_ if for each finite subspace \(A\) of \(U\) and \(1\)-Lipschitz map \(\phi\colon A^{*}\to B\) defined on a one-point metric extension \(A^{*}=A\sqcup\{a^{*}\}\), satisfying \(\phi|_{A}=\pi|_{A}\), there exists an isometric embedding \(\imath\colon A^{*}\to U\) such that the restriction \(\imath|_{A}\) is the inclusion map and \(\pi\circ\imath=\phi\).
The next theorem is a special case of a more general result proven in [14] in a model theory framework. For a proof based on metric geometry constructs, which extends a well-known construction of the Urysohn space due to Katetov [24] to the functional setting, the reader may consult [1].
**Theorem 3.2** (Existence and Uniqueness of Urysohn Fields).: _If \(B\) is a Polish space, then the following statements hold:_
1. _there exists a Urysohn field_ \(\pi\colon U\to B\)_, unique up to isometry;_
2. _the Urysohn field is universal, that is, every_ \(B\)_-field isometrically embeds into the Urysohn field;_
3. _every isometry between finite subfields of the Urysohn field extends to an automorphism of the Urysohn field._
Given an equivalence relation \(\sim\) on a metric space \((X,d_{X})\), the quotient metric is the maximal (pseudo) metric on \(X/\!\!\sim\) that makes the quotient map \(\pi\colon X\to X/\!\!\sim\)\(1\)-Lipschitz. Let \(F(\mathcal{U}_{B})\) be the space of compact subfields of \(\mathcal{U}_{B}=(U,B,\pi_{U})\), equipped with the Hausdorff distance, and \(Aut(B)\) the automorphism group of \(\mathcal{U}_{B}\), which acts on \(\mathcal{U}_{B}\) by isometries. On \(F(\mathcal{U}_{B})/Aut(B)\), by [7, Lemma 3.3.6], the quotient metric may be expressed as
\[d_{F}^{B}(\mathcal{X},\mathcal{Y})=\inf_{\Phi,\Psi\in Aut(B)}d_{H}^{U}(\phi(X),\psi(Y))=\inf_{\Psi\in Aut(B)}d_{H}^{U}(X,\psi(Y)). \tag{34}\]
**Theorem 3.3**.: _The moduli space \((\mathcal{F}_{B},d_{GH})\) of isometry classes of compact \(B\)-fields equipped with the Gromov-Hausdorff distance is isometric to the quotient space \((F(\mathcal{U}_{B})/Aut(B),d_{F}^{B})\)._
We provide a proof at the end of this section after establishing some results needed for the argument.
**Lemma 3.4**.: _Let \(\mathcal{X}=(X,B,\pi_{X})\) and \(\mathcal{Y}=(Y,B,\pi_{Y})\) be compact \(B\)-fields and \(\mathcal{U}_{B}=(U,B,\pi_{U})\) be the Urysohn \(B\)-field. Then,_
\[d_{GH}(\mathcal{X},\mathcal{Y})=\inf_{\Phi,\Psi}d_{H}^{U}(\phi(X),\psi(Y)),\]
_where the infimum is taken over all isometric embeddings \(\Phi\colon\mathcal{X}\Rightarrow\mathcal{U}_{B}\) and \(\Psi\colon\mathcal{Y}\Rightarrow\mathcal{U}_{B}\) over \(B\). The corresponding result for the Gromov-Prokhorov distance also holds._
Proof.: The inequality \(d_{GH}(\mathcal{X},\mathcal{Y})\leq\inf_{\Phi,\Psi}d_{H}^{U}(\phi(X),\psi(Y))\) follows from the definition of \(d_{GH}\). To prove the reverse inequality, let \(\epsilon>0\). We show that there are isometric embeddings \(\Phi\colon\mathcal{X}\Rightarrow\mathcal{U}_{B}\) and \(\Psi\colon\mathcal{Y}\Rightarrow\mathcal{U}_{B}\) such that
\[d_{H}^{U}(\phi(X),\psi(Y))\leq d_{GH}(\mathcal{X},\mathcal{Y})+\epsilon\,. \tag{35}\]
By definition of the Gromov-Hausdorff distance, there is a \(B\)-field \(\mathcal{Z}\) and isometric embeddings \(\Phi^{\prime}\colon\mathcal{X}\Rightarrow\mathcal{Z}\) and \(\Psi^{\prime}\colon\mathcal{Y}\Rightarrow\mathcal{Z}\) over \(B\) such that
\[d_{H}^{Z}(\phi^{\prime}(X),\psi^{\prime}(Y))\leq d_{GH}(\mathcal{X},\mathcal{Y })+\epsilon\,. \tag{36}\]
By the universality of \(\mathcal{U}_{B}\), there is an isometric embedding \(\Lambda\colon\mathcal{Z}\Rightarrow\mathcal{U}_{B}\) over \(B\). Letting \(\Phi=\Lambda\circ\Phi^{\prime}\) and \(\Psi=\Lambda\circ\Psi^{\prime}\), we have
\[d_{H}^{U}(\phi(X),\psi(Y))=d_{H}^{Z}(\phi^{\prime}(X),\psi^{\prime}(Y))\leq d _{GH}(\mathcal{X},\mathcal{Y})+\epsilon\,. \tag{37}\]
Clearly, the same inequality holds if the left-hand side of (37) is replaced with the infimum over \(\Phi\) and \(\Psi\). Taking the limit as \(\epsilon\to 0\), we obtain the desired inequality.
**Lemma 3.5**.: _Let \(\mathcal{U}_{B}\) be a Urysohn field over \(B\), \(A\subseteq U\) a compact subset, and \(f\colon A^{*}\to B\) a field defined on a one-point metric extension \(A^{*}=A\sqcup\{a^{*}\}\) satisfying \(f|_{A}=\pi_{U}|_{A}\). Then, there exists an isometric embedding \(\iota\colon A^{*}\to U\) over \(B\) such that the restriction \(\iota|_{A}\) is the inclusion map and \(\pi_{U}\circ\iota=f\)._
Proof.: It suffices to construct a sequence \((u_{n})\) in \(U\) satisfying
* \(\pi_{U}(u_{n})=f(a^{*})\), for \(n\geq 1\);
* \(|d_{*}(a^{*},a)-d_{U}(u_{n},a)|\leq 2^{-n}\), \(\forall a\in A\), where \(d_{*}\) is the metric on \(A^{*}\);
* \(d_{U}(u_{n},u_{n+1})\leq 2^{-n}\), for \(n\geq 1\).
Indeed, the sequence \((u_{n})\) is Cauchy by (iii), so letting \(u=\lim_{n\to\infty}u_{n}\in U\), we may define \(\iota\colon A^{*}\to U\) as the inclusion on \(A\) and \(\iota(a^{*})=u\). By (i) and (ii), the map \(\iota\) gives the desired one-point extension. Now we proceed to the construction of the sequence \((u_{n})\).
Let \(A_{n}\) be an ascending sequence of finite subsets of \(A\), where \(A_{n}\) is a \(2^{-(n+1)}\)-net in \(A\), for \(n\geq 1\). (This means that the balls of radius \(2^{-(n+1)}\) centered at the points in \(A_{n}\) cover \(A\).) Let \(D_{1}=A_{1}\) and denote by \(D_{1}^{*}=D_{1}\sqcup\{a^{*}\}\) the one-point metric extension of \(D_{1}\) induced by \((A^{*},d_{*})\). Since \(\mathcal{U}_{B}\) is Urysohn, applying the one-point extension property to the field \(f|_{D_{1}^{*}}\), we obtain an isometric embedding \(\iota_{1}\colon D_{1}^{*}\to U\) such that \(\pi_{U}\circ\iota_{1}=f|_{D_{1}^{*}}\). Defining \(u_{1}=\iota_{1}(a^{*})\in U\), we have that \(\pi_{U}(u_{1})=f(a^{*})\) and \(d_{*}(a^{*},a)=d_{U}(u_{1},a)\), for any \(a\in A_{1}\); since \(A_{1}\) is a \(2^{-2}\)-net in \(A\), arguing as in (40) below shows that \(u_{1}\) satisfies (i) and (ii) above. Condition (iii) is empty at this stage of the construction. Inductively, suppose that we have constructed \(u_{j}\), \(1\leq j\leq n\), with the desired properties and let
\[D_{n+1}=A_{n+1}\cup\{u_{n}\}\quad\text{and}\quad D_{n+1}^{*}=D_{n+1}\sqcup\{a^ {*}\}. \tag{38}\]
Using the notation \(A_{n+1}^{*}=A_{n+1}\cup\{a^{*}\}\), define a metric \(d_{*}^{\prime}\colon D_{n+1}^{*}\times D_{n+1}^{*}\to\mathbb{R}\), as follows: \(d_{*}^{\prime}\) coincides with \(d_{*}\) on \(A_{n+1}^{*}\times A_{n+1}^{*}\), \(d_{*}^{\prime}(a,u_{n})=d_{U}(a,u_{n})\), for every \(a\in A_{n+1}\), and
\[d_{*}^{\prime}(a^{*},u_{n}):=\sup_{b\in A_{n+1}}|d_{*}(a^{*},b)-d_{U}(u_{n},b)|. \tag{39}\]
Define a field \(f^{\prime}\colon D_{n+1}^{*}\to B\) by \(f^{\prime}|_{D_{n+1}}=\pi_{U}|_{D_{n+1}}\) and \(f^{\prime}(a^{*})=f(a^{*})\). Applying the one-point extension property to \(f^{\prime}\), we obtain an isometric embedding \(\iota_{n+1}\colon D_{n+1}^{*}\to U\) satisfying \(f^{\prime}=\pi_{U}\circ\iota_{n+1}\).
Let \(u_{n+1}=\iota_{n+1}(a^{*})\in U\). By construction, \(\pi_{U}(u_{n+1})=f^{\prime}(a^{*})=f(a^{*})\), so requirement (i) is satisfied. Moreover, \(d_{U}(u_{n+1},b)=d_{*}^{\prime}(a^{*},b)=d_{*}(a^{*},b)\), for any \(b\in A_{n+1}\). Since \(A_{n+1}\) is a \(2^{-(n+2)}\)-net in \(A\), for each \(a\in A\), we can pick \(b\in A_{n+1}\) such that \(d_{U}(a,b)\leq 2^{-(n+2)}\). Then, we have
\[|d_{*}(a^{*},a)-d_{U}(u_{n+1},a)| \leq|d_{*}(a^{*},a)-d_{U}(u_{n+1},b)|+|d_{U}(u_{n+1},b)-d_{U}(u_{n +1},a)|\] \[=|d_{*}(a^{*},a)-d_{*}(a^{*},b)|+|d_{U}(u_{n+1},b)-d_{U}(u_{n+1},a)| \tag{40}\] \[\leq d_{*}(a,b)+d_{U}(a,b)=2d_{U}(a,b)\leq 2^{-(n+1)}\,.\]
This verifies property (ii). By the inductive hypothesis, we also have \(|d_{*}(a^{*},a)-d_{U}(u_{n},a)|\leq 2^{-n}\), for any \(a\in A\). Thus, by (39),
\[d(u_{n+1},u_{n})=d_{*}^{\prime}(a^{*},u_{n})=\sup_{b\in A_{n+1}}|d_{*}(a^{*},b )-d_{U}(u_{n},b)|\leq 2^{-n}. \tag{41}\]
This concludes the proof.
A _partial isometric matching_ of \(\pi\colon X\to B\) is a bijective isometry \(\phi\colon A\to A^{\prime}\) between subspaces of \(X\) that satisfies \(\pi|_{A^{\prime}}\circ\phi=\pi|_{A}\).
**Proposition 3.6**.: _If \(\mathcal{U}_{B}\) is a Urysohn field and \(A,A^{\prime}\subseteq U\) are compact subsets, then any partial isometric matching \(\phi\colon A\to A^{\prime}\) of \(\mathcal{U}_{B}\) extends to an automorphism of \(\mathcal{U}_{B}\)._

Proof.: Let \(C=\{x_{1},x_{2},\ldots\}\) and \(C^{\prime}=\{x_{1}^{\prime},x_{2}^{\prime},\ldots\}\) be countable dense sets in \(U\setminus A\) and \(U\setminus A^{\prime}\), respectively. Using the one-point compact extension property established in Lemma 3.5 and a back-and-forth argument applied to \(C\) and \(C^{\prime}\) as in [22, Section 3.1], \(\phi\) can be extended to an automorphism of \(\mathcal{U}_{B}\).
Proof of Theorem 3.3.: For any compact \(B\)-fields \(\mathcal{X}\) and \(\mathcal{Y}\), Lemma 3.4 shows that
\[d_{GH}(\mathcal{X},\mathcal{Y})=\inf_{\Phi,\Psi}d_{H}^{U}(\phi(X),\psi(Y)), \tag{42}\]
where \(\Phi\colon\mathcal{X}\Rightarrow\mathcal{U}_{B}\) and \(\Psi\colon\mathcal{Y}\Rightarrow\mathcal{U}_{B}\) are isometric embeddings. By Proposition 3.6, any other isometric embeddings \(\Phi^{\prime}\colon\mathcal{X}\Rightarrow\mathcal{U}_{B}\) and \(\Psi^{\prime}\colon\mathcal{Y}\Rightarrow\mathcal{U}_{B}\) differ from \(\Phi\) and \(\Psi\) by the action of automorphisms of \(\mathcal{U}_{B}\). This proves the claim.
## 4 Gromov-Prokhorov Distance for Metric-Measure Fields
Recall that a metric measure space (\(mm\)-space) is a triple \((X,d_{X},\mu_{X})\), where \((X,d_{X})\) is a Polish metric space and \(\mu_{X}\) is a Borel probability measure on \(X\). The next definition introduces an analogue for fields.
**Definition 4.1**.: Let \(B\) be a Polish metric space. A _metric measure field_, or \(mm\)-field, over \(B\) is a quadruple \(\mathcal{X}=(X,d_{X},\pi_{X},\mu_{X})\), where \((X,d_{X})\) is a Polish metric space, \(\pi_{X}\colon X\to B\) is a \(1\)-Lipschitz map, and \(\mu_{X}\) is a Borel probability measure on \(X\). Two \(mm\)-fields are _isomorphic_ if there is a measure-preserving isometry between the underlying \(B\)-fields.
We abuse notation and also denote by \(\mathcal{X}\) the \(B\)-field underlying \(\mathcal{X}=(X,d_{X},\pi_{X},\mu_{X})\).
**Definition 4.2**.: (Gromov-Prokhorov Distance for Fields) Let \(\mathcal{X}=(X,d_{X},\pi_{X},\mu_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y},\mu_{Y})\) be \(mm\)-fields over \(B\). The _Gromov-Prokhorov distance_ is defined by
\[d_{GP}(\mathcal{X},\mathcal{Y}):=\inf_{Z,\Phi,\Psi}d_{P}^{Z}(\phi_{*}(\mu_{X}),\psi_{*}(\mu_{Y})),\]
where the infimum is taken over all \(B\)-fields \(\mathcal{Z}\) and isometric embeddings \(\Phi\colon\mathcal{X}\Rightarrow\mathcal{Z}\) and \(\Psi\colon\mathcal{Y}\Rightarrow\mathcal{Z}\) over \(B\).
**Proposition 4.3**.: _Let \(\mathcal{X}\), \(\mathcal{Y}\), and \(\mathcal{W}\) be \(mm\)-fields over \(B\). Then,_
\[d_{GP}(\mathcal{X},\mathcal{W})\leq d_{GP}(\mathcal{X},\mathcal{Y})+d_{GP}( \mathcal{Y},\mathcal{W}).\]
Proof.: The proof is identical to that of Proposition 2.5 replacing the Hausdorff distance by the Prokhorov distance.
Given \((X,d_{X},\mu_{X})\) and \((Y,d_{Y},\mu_{Y})\), let \(C(\mu_{X},\mu_{Y})\) denote the set of all couplings between \(\mu_{X}\) and \(\mu_{Y}\); that is, the collection of all probability measures \(\mu\) on \(X\times Y\) that marginalize to \(\mu_{X}\) and \(\mu_{Y}\), respectively. Our next goal is to characterize the Gromov-Prokhorov distance between \(B\)-fields in terms of couplings.
**Definition 4.4** (\(\epsilon\)-couplings).: Let \(\mathcal{X}=(X,d_{X},\pi_{X},\mu_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y},\mu_{Y})\) be \(mm\)-fields over \(B\) and \(\epsilon\geq 0\). A coupling \(\mu\in C(\mu_{X},\mu_{Y})\) is called an \(\epsilon\)-_coupling_ if there is a Borel subset \(R\) of \(X\times Y\) such that \(\mu(R)\geq 1-\epsilon/2\) and \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)\leq\epsilon\). (Note: \(R\) is not assumed to be a correspondence.)
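For finite \(mm\)-fields both requirements of Definition 4.4 can be checked directly. The sketch below is our own illustration (the function names and data layout are assumptions, not notation from the text); following how the distortion of a relation is used in the proofs in this section, we take it to be the maximum of the metric distortion and twice the field distortion.

```python
import numpy as np

def distortion(R, dX, dY, piX, piY, dB):
    """dis_{pi_X,pi_Y}(R) for a finite relation R (a list of index pairs),
    taken here as max( max |dX[i,k] - dY[j,l]| , 2 * max dB(piX[i], piY[j]) )."""
    metric_dis = max(abs(dX[i, k] - dY[j, l]) for (i, j) in R for (k, l) in R)
    field_dis = max(dB(piX[i], piY[j]) for (i, j) in R)
    return max(metric_dis, 2 * field_dis)

def is_eps_coupling(mu, R, dX, dY, piX, piY, dB, eps):
    """Definition 4.4: mu(R) >= 1 - eps/2 and dis_{pi_X,pi_Y}(R) <= eps."""
    mass = sum(mu[i, j] for (i, j) in R)
    return mass >= 1 - eps / 2 and distortion(R, dX, dY, piX, piY, dB) <= eps

if __name__ == "__main__":
    # Two 2-point fields over B = R, the product coupling, and the diagonal relation.
    dX = np.array([[0.0, 1.0], [1.0, 0.0]])
    dY = np.array([[0.0, 1.2], [1.2, 0.0]])
    piX, piY = np.array([0.0, 1.0]), np.array([0.1, 1.1])
    mu = np.full((2, 2), 0.25)
    R = [(0, 0), (1, 1)]
    print(is_eps_coupling(mu, R, dX, dY, piX, piY, lambda a, b: abs(a - b), eps=1.0))
```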
The following is an analogue of Lemma 2.9 for the Prokhorov distance.
**Lemma 4.5**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(mm\)-fields over \(B\) and \(r>0\). Suppose that \(\mu\) is a \(2r\)-coupling between \(\mathcal{X}\) and \(\mathcal{Y}\) with respect to a Borel relation \(R\subseteq X\times Y\) and let \(\mathcal{Z}:=\mathcal{X}\coprod_{R,r}\mathcal{Y}\). Then, \(d_{P}^{Z}(\mu_{X},\mu_{Y})\leq r\)._
Proof.: Given a Borel subset \(A\subseteq X\), denote by \(A^{r}\) the closed \(r\)-neighborhood of \(A\) in \(Z\). Viewing \(\mu\) as a measure on \(Z\times Z\) supported on \(X\times Y\) (under the inclusion \(X\times Y\hookrightarrow Z\times Z\)), we have
\[\mu_{Y}(A^{r})=\mu(Z\times A^{r})\geq\mu((A\times Y)\cap R)\geq\mu(A\times Y) -r=\mu_{X}(A)-r\,. \tag{43}\]
Similarly, \(\mu_{X}(A^{r})\geq\mu_{Y}(A)-r\). Hence, \(d_{P}^{Z}(\mu_{X},\mu_{Y})\leq r\).
**Theorem 4.6**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(mm\)-fields over \(B\). The Gromov-Prokhorov distance between \(\mathcal{X}\) and \(\mathcal{Y}\) can be expressed as_
\[d_{GP}(\mathcal{X},\mathcal{Y})=\frac{1}{2}\inf\{\epsilon>0\colon\text{there exists an $\epsilon$-coupling $\mu\in C(\mu_{X},\mu_{Y})$}\}.\]
Proof.: We first show that if \(\epsilon>0\) and \(\mu\) is an \(\epsilon\)-coupling between \(\mathcal{X}\) and \(\mathcal{Y}\), then \(d_{GP}(\mathcal{X},\mathcal{Y})\leq\epsilon/2\). If \(r>\epsilon/2\), then \(\mu\) is a \(2r\)-coupling between \(\mathcal{X}\) and \(\mathcal{Y}\). By Lemma 4.5, there exists a \(B\)-field \(\mathcal{Z}\) containing \(\mathcal{X}\) and \(\mathcal{Y}\) such that \(d_{P}^{Z}(\mu_{X},\mu_{Y})\leq r\), which implies that \(d_{GP}(\mathcal{X},\mathcal{Y})\leq r\). Since \(r>\epsilon/2\) is arbitrary, \(d_{GP}(\mathcal{X},\mathcal{Y})\leq\epsilon/2\). Thus, if we denote the infimum in the statement of the theorem by \(\alpha\), we have that \(d_{GP}(\mathcal{X},\mathcal{Y})\leq\alpha/2\). We now sharpen this to an equality.
Let \(r>d_{GP}(\mathcal{X},\mathcal{Y})\). Let us show that there is a \(2r\)-coupling between \(\mathcal{X}\) and \(\mathcal{Y}\). There exist a \(B\)-field \(\mathcal{Z}\) and isometric embeddings \(\Phi\colon\mathcal{X}\Rightarrow\mathcal{Z}\) and \(\Psi\colon\mathcal{Y}\Rightarrow\mathcal{Z}\) over \(B\) such that \(d_{P}^{Z}(\phi_{*}(\mu_{X}),\psi_{*}(\mu_{Y}))<r\). For \(x\in X\) and \(y\in Y\), abusing notation, write \(d_{Z}(x,y)\) for \(d_{Z}(\phi(x),\psi(y))\), and \(\pi_{Z}(x)\) and \(\pi_{Z}(y)\) for \(\pi_{Z}(\phi(x))\) and \(\pi_{Z}(\psi(y))\), respectively. By [15, Theorem 11.6.2], there exists a coupling \(\nu\) between \(\phi_{*}(\mu_{X})\) and \(\psi_{*}(\mu_{Y})\) such that
\[\nu(\{(w,z)\in Z\times Z\colon d_{Z}(w,z)>r\})\leq r. \tag{44}\]
Since \(\nu\) is supported in \(\phi(X)\times\psi(Y)\), it induces a coupling \(\mu\) between \(\mu_{X}\) and \(\mu_{Y}\) such that
\[\mu(\{(x,y)\in X\times Y\colon d_{Z}(x,y)>r\})\leq r. \tag{45}\]
Let \(R:=\{(x,y)\in X\times Y\colon d_{Z}(x,y)\leq r\}\). By (45), \(\mu(R)\geq 1-r\). Moreover, if \((x,y),(x^{\prime},y^{\prime})\in R\), then
\[|d_{X}(x,x^{\prime})-d_{Y}(y,y^{\prime})|\leq d_{Z}(x,y)+d_{Z}(x^{\prime},y^{ \prime})\leq 2r \tag{46}\]
and
\[d_{B}(\pi_{X}(x),\pi_{Y}(y))=d_{B}(\pi_{Z}(x),\pi_{Z}(y))\leq d_{Z}(x,y)\leq r. \tag{47}\]
Hence, \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)\leq 2r\), so \(\mu\) is a \(2r\)-coupling and therefore \(\alpha\leq 2r\). Since \(r>d_{GP}(\mathcal{X},\mathcal{Y})\) is arbitrary, \(\alpha\leq 2\,d_{GP}(\mathcal{X},\mathcal{Y})\), which combined with the first part gives \(d_{GP}(\mathcal{X},\mathcal{Y})=\alpha/2\), concluding the proof.
The next proposition establishes the existence of optimal couplings for the Gromov-Prokhorov distance between compact fields.
**Proposition 4.7**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be compact \(mm\)-fields over \(B\). There exists an \(\epsilon\)-coupling between \(\mathcal{X}\) and \(\mathcal{Y}\) such that \(d_{GP}(\mathcal{X},\mathcal{Y})=\epsilon/2\)._
Proof.: Let \(d_{GP}(\mathcal{X},\mathcal{Y})=r\). By Theorem 4.6, for each integer \(n>0\), there exists a \((2r+2/n)\)-coupling \(\mu_{n}\) between \(\mathcal{X}\) and \(\mathcal{Y}\). By Proposition A.1, passing to a subsequence if necessary, there exists a coupling \(\mu\) between \(\mathcal{X}\) and \(\mathcal{Y}\) such that \(\mu_{n}\) weakly converges to \(\mu\). It is enough to show that \(\mu\) is a \(2r\)-coupling, as we can take \(\epsilon=2r\).
There exist Borel sets \(R_{n}\subseteq X\times Y\) such that \(\mu_{n}(R_{n})\geq 1-r-1/n\) and \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R_{n})\leq 2r+2/n\), for all \(n\). Without loss of generality, we can assume that each \(R_{n}\) is closed. By [7, Theorem 7.3.8], passing to a subsequence if necessary, \(R_{n}\) Hausdorff converges to a closed subset \(R\) of \(X\times Y\), where \(X\times Y\) is endowed with the product sup-metric. As in the proof of Proposition 2.11, we have \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)\leq 2r\). It remains to show that \(\mu(R)\geq 1-r\). Let \(\delta>0\) and let \(R^{\delta}\) denote the closed \(\delta\)-neighborhood of \(R\) in \(X\times Y\). For \(n\) large enough, \(R_{n}\subseteq R^{\delta}\), so \(\mu_{n}(R^{\delta})\geq 1-r-1/n\). Hence, by [15, Theorem 11.1.1], \(\mu(R^{\delta})\geq 1-r\). Therefore, \(\mu(R)=\mu(\cap_{n}R^{1/n})=\lim_{n\to\infty}\mu(R^{1/n})\geq 1-r\).
Recall from Definition 4.1 that two \(mm\)-fields over \(B\) are isomorphic if there is a measure-preserving isometry between them. The following result shows that, even without a compactness requirement, fully supported fields with zero Gromov-Prokhorov distance are isomorphic.
**Proposition 4.8**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be fully supported \(mm\)-fields over \(B\). Then, \(d_{GP}(\mathcal{X},\mathcal{Y})=0\) if and only if \(\mathcal{X}\) and \(\mathcal{Y}\) are isomorphic._
Proof.: The "if" statement is clear, so assume that \(d_{GP}(\mathcal{X},\mathcal{Y})=0\). By Theorem 4.6, for each integer \(n>0\), there exists a \(2/n\)-coupling \(\mu_{n}\) between \(\mathcal{X}\) and \(\mathcal{Y}\). By Proposition A.1, by passing to a subsequence if necessary, there exists a coupling \(\mu\) between \(\mathcal{X}\) and \(\mathcal{Y}\) such that \(\mu_{n}\) weakly converges to \(\mu\). Note that \(\mu_{n}\otimes\mu_{n}\) converges weakly to \(\mu\otimes\mu\) ([3, Theorem 2.8]).
There exist Borel sets \(R_{n}\subseteq X\times Y\) such that \(\mu_{n}(R_{n})\geq 1-1/n\) and \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R_{n})\leq 2/n\), for all \(n>0\). For an integer \(N>0\), let
\[\begin{split} S_{N}:=\{(x,& y,& x^{\prime},y^{ \prime})\in X\times Y\times X\times Y\colon|d_{X}(x,x^{\prime})-d_{Y}(y,y^{ \prime})|\leq 2/N,\\ & d_{B}(\pi_{X}(x),\pi_{Y}(y))\leq 1/N,\text{ and }d_{B}(\pi_{X}(x^{\prime}),\pi_{Y}(y^{ \prime}))\leq 1/N\}.\end{split} \tag{48}\]
If \(n\geq N\), then \(R_{n}\times R_{n}\subseteq S_{N}\), so \(\mu_{n}\otimes\mu_{n}(S_{N})\geq(1-1/n)^{2}\). Hence, by [15, Theorem 11.1.1], \(\mu\otimes\mu(S_{N})=1\). Since \(S_{N+1}\subseteq S_{N}\), if we let \(S=\cap_{N}S_{N}\), then \(\mu\otimes\mu(S)=1\) and
\[S=\{(x,y,x^{\prime},y^{\prime})\in X\times Y\times X\times Y:d_{X}(x,x^{\prime })=d_{Y}(y,y^{\prime}),\pi_{X}(x)=\pi_{Y}(y),\text{and}\,\pi_{X}(x^{\prime})= \pi_{Y}(y^{\prime})\}. \tag{49}\]
Note that \(\operatorname{supp}(\mu)\times\operatorname{supp}(\mu)=\operatorname{supp}( \mu\otimes\mu)\subseteq S\). Letting \(X_{0}\) and \(Y_{0}\) be the projections of \(\operatorname{supp}(\mu)\) onto \(X\) and \(Y\), respectively, we have that
\[\mu_{X}(\bar{X}_{0})=\mu(\bar{X}_{0}\times Y)\geq\mu(\operatorname{supp}(\mu) )=1, \tag{50}\]
where the bar denotes closure. Since \(\mu_{X}\) is fully supported, \(\bar{X}_{0}=X\). Similarly, \(\bar{Y}_{0}=Y\). For each \(x\in X_{0}\), there exists \(y\in Y_{0}\) such that \((x,y)\in\operatorname{supp}(\mu)\). If \((x,y^{\prime})\in\operatorname{supp}(\mu)\), then \((x,y,x,y^{\prime})\in S\), implying that \(y=y^{\prime}\) and \(\pi_{X}(x)=\pi_{Y}(y)=\pi_{Y}(y^{\prime})\). Similarly, for each \(y\in Y_{0}\), there exists a unique \(x\in X_{0}\) such that \((x,y)\in\operatorname{supp}(\mu)\). Thus, we can define a bijection \(\phi:X_{0}\to Y_{0}\) by requiring that \((x,\phi(x))\in\operatorname{supp}(\mu)\). Furthermore, for all \(x,x^{\prime}\in X_{0}\), we have \((x,\phi(x),x^{\prime},\phi(x^{\prime}))\in S\), so \(\pi_{Y}(\phi(x))=\pi_{X}(x)\) and \(d_{X}(x,x^{\prime})=d_{Y}(\phi(x),\phi(x^{\prime}))\). This implies that \(\phi\) is an isometry between \(X_{0}\) and \(Y_{0}\), which extends to an isometry \(\Phi:\mathcal{X}\to\mathcal{Y}\) of fields. It remains to show that \(\phi\) is measure preserving.
For a Borel subset \(A\subseteq X\), let \(A_{0}=A\cap X_{0}\). Since \(\operatorname{supp}(\mu)\) is contained in the graph of \(\phi\), we have \((A\times Y)\cap\operatorname{supp}(\mu)=(A_{0}\times\phi(A_{0}))\cap\operatorname{supp}(\mu)\), hence \(\mu_{X}(A)=\mu(A\times Y)=\mu(A_{0}\times\phi(A_{0}))\). Similarly, for a Borel subset \(A\) of \(Y\), we have \(\mu_{Y}(A)=\mu(\phi^{-1}(A)\times A)\). Therefore,

\[\mu_{X}(\phi^{-1}(A))=\mu(\phi^{-1}(A)\times\phi(\phi^{-1}(A)))=\mu(\phi^{-1}(A)\times A)=\mu_{Y}(A).\]
This shows that \(\phi\) is measure preserving and completes the proof.
In Theorem 2.12, we have shown that compact \(B\)-fields form a Polish space under the Gromov-Hausdorff distance. In contrast, the space of compact \(mm\)-fields over \(B\) is not complete under the Gromov-Prokhorov distance since every Borel probability measure on a Polish space is the Prokhorov limit of compactly supported Borel probability measures. The next result shows that if we drop the compactness requirement, we still get a Polish space.
**Theorem 4.9**.: _Let \(\widehat{\mathcal{F}}_{B}\) denote the set of isomorphism classes of fully supported \(mm\)-fields over \(B\). Then, \(d_{GP}\) metrizes \(\widehat{\mathcal{F}}_{B}\) and \((\widehat{\mathcal{F}}_{B},d_{GP})\) is a Polish space._
We begin the proof with a finite approximation lemma.
**Lemma 4.10**.: _Let \(\mathcal{X}\) be an \(mm\)-field over \(B\), and \(B_{0}\) be a dense subset of \(B\). For each \(\epsilon>0\), there exists a finite \(mm\)-field \(\mathcal{Y}\) over \(B\) such that \(\mu_{Y}\) is the normalized counting measure, \(\pi_{Y}\) takes values in \(B_{0}\), \(d_{Y}\) only takes rational values, and \(d_{GP}(\mathcal{X},\mathcal{Y})<\epsilon\)._
Proof.: Varadarajan's Theorem [15, Theorem 11.4.1] implies that there are finite subsets of \(X\) whose normalized counting measures are arbitrarily Prokhorov-close to \(\mu_{X}\); the corresponding finite subfields of \(\mathcal{X}\) are then arbitrarily \(d_{GP}\)-close to \(\mathcal{X}\). The result then follows from Lemma 2.13.
Proof of Theorem 4.9.: Symmetry of \(d_{GP}\) is clear, the triangle inequality has been proven in Proposition 4.3 and definiteness in Proposition 4.8. Hence, \(d_{GP}\) is a metric on \(\widehat{\mathcal{F}}_{B}\). Moreover, Lemma 4.10 shows that \(\widehat{\mathcal{F}}_{B}\) has a countable dense set. Thus, it remains to show that \(\widehat{\mathcal{F}}_{B}\) is complete.
Let \(\{\mathcal{X}_{n}\}\) be a Cauchy sequence of \(mm\)-fields over \(B\) with respect to Gromov-Prokhorov distance. To prove that \(\{\mathcal{X}_{n}\}\) is convergent, it suffices to show that it has a convergent subsequence. By passing to a subsequence if necessary, we can assume that \(d_{GP}(\mathcal{X}_{n},\mathcal{X}_{n+1})<1/2^{n}\), for all \(n\). For each \(n>0\), there exists a coupling \(\mu_{n}\in C(\mu_{X_{n}},\mu_{X_{n+1}})\) and a Borel subspace \(R_{n}\subseteq X_{n}\times X_{n+1}\) such that \(\mu_{n}(R_{n})>1-1/2^{n}\) and \(\operatorname{dis}_{\pi_{X_{n}},\pi_{X_{n+1}}}(R_{n})<2/2^{n}\). Let \(r_{n}=1/2^{n}\). By applying the construction described in Definition 2.7 consecutively, we get a \(B\)-field
\[\mathcal{Z}_{n}:=((\mathcal{X}_{1}\coprod_{R_{1},r_{1}}\mathcal{X}_{2})\coprod _{R_{2},r_{2}}\mathcal{X}_{3}\dots)\coprod_{R_{n},r_{n}}\mathcal{X}_{n+1}\]
for each \(n>0\). By construction, \(\mathcal{Z}_{n}\) is a subfield of \(\mathcal{Z}_{n+1}\). Let \(\mathcal{Z}\) be the completion of \(\cup_{n>0}\mathcal{Z}_{n}\). Note that the union of countable dense sets in \(Z_{n}\) is a countable dense set in \(Z\), hence \(Z\) is Polish. Therefore, \(\mathcal{Z}\) is a \(B\)-field. By Lemma 4.5, \(d_{P}^{Z}(\mu_{X_{n}},\mu_{X_{n+1}})\leq r_{n}=1/2^{n}\). Hence, \(\{\mu_{X_{n}}\}\) forms a Cauchy sequence with respect to the Prokhorov distance in \(Z\). By [15, Corollary 11.5.5], there exists a Borel probability measure \(\mu\) on \(Z\) such that \(\{\mu_{X_{n}}\}\) Prokhorov converges to \(\mu\) in \(Z\). Let \(Y\) be the support of \(\mu\). Since \(Y\) is complete, \(\mathcal{Y}:=(Y,d_{Z}|_{Y},\pi_{Z}|_{Y},\mu)\) is a fully supported \(mm\)-field over \(B\). We have \(d_{GP}(\mathcal{X}_{n},\mathcal{Y})\leq d_{P}^{Z}(\mu_{X_{n}},\mu)\), hence \(\{\mathcal{X}_{n}\}\) Gromov-Prokhorov converges to \(\mathcal{Y}\).
**Remark 4.11**.: Analogous to Theorem 3.3, there is a characterization of the Gromov-Prokhorov distance between compact \(mm\)-fields in terms of isometric embeddings into a Urysohn field. The statement and proof are nearly identical and therefore omitted.
## 5 Gromov-Wasserstein Distance for Metric-Measure Fields
Given \(B\)-fields \((X,d_{X},\pi_{X})\) and \((Y,d_{Y},\pi_{Y})\), we let \(m_{X,Y}\colon(X\times Y)\times(X\times Y)\to\mathbb{R}\) and \(d_{X,Y}\colon X\times Y\to\mathbb{R}\) be the functions given by
\[\begin{split} m_{X,Y}(x,y,x^{\prime},y^{\prime})&:= \left|d_{X}(x,x^{\prime})-d_{Y}(y,y^{\prime})\right|,\\ d_{X,Y}(x,y)&:=d_{B}(\pi_{X}(x),\pi_{Y}(y))\,.\end{split} \tag{51}\]
Note that
\[\begin{split}|m_{X,Y}(x_{1},y_{1},x_{1}^{\prime},y_{1}^{\prime})-m_ {X,Y}(x_{2},y_{2},x_{2}^{\prime},y_{2}^{\prime})|&\leq\left|d_{ X}(x_{1},x_{1}^{\prime})-d_{X}(x_{2},x_{2}^{\prime})\right|+\left|d_{Y}(y_{1},y_{1}^{ \prime})-d_{Y}(y_{2},y_{2}^{\prime})\right|\\ &\leq d_{X}(x_{1},x_{2})+d_{X}(x_{1}^{\prime},x_{2}^{\prime})+d_{Y }(y_{1},y_{2})+d_{Y}(y_{1}^{\prime},y_{2}^{\prime})\\ &\leq 4\max\{d_{X}(x_{1},x_{2}),d_{Y}(y_{1},y_{2}),d_{X}(x_{1}^{\prime}, x_{2}^{\prime}),d_{Y}(y_{1}^{\prime},y_{2}^{\prime})\}.\end{split} \tag{52}\]
Therefore, if we endow \((X\times Y)\times(X\times Y)\) with the product sup metric, then \(m_{X,Y}\) is \(4\)-Lipschitz. Similarly, \(d_{X,Y}\) is \(2\)-Lipschitz. Using this notation, we introduce a field version of the Gromov-Wasserstein distance in a manner similar to [33].
**Definition 5.1**.: Let \(\mathcal{X}=(X,d_{X},\pi_{X},\mu_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y},\mu_{Y})\) be \(mm\)-fields over \(B\). For \(1\leq p<\infty\), the _Gromov-Wasserstein distance_\(d_{GW,p}(\mathcal{X},\mathcal{Y})\) is defined as
\[d_{GW,p}(\mathcal{X},\mathcal{Y}):=\inf_{\mu\in C(\mu_{X},\mu_{Y})}\max\bigg{\{} \frac{1}{2}\bigg{(}\int m_{X,Y}^{p}\,d(\mu\otimes\mu)\bigg{)}^{1/p},\bigg{(} \int d_{X,Y}^{p}\,d\mu\bigg{)}^{1/p}\bigg{\}}.\]
For \(p=\infty\),
\[d_{GW,\infty}(\mathcal{X},\mathcal{Y}):=\inf_{\mu\in C(\mu_{X},\mu_{Y})}\max \left\{\frac{1}{2}\sup_{\text{supp}\,(\mu\otimes\mu)}m_{X,Y},\,\sup_{\text{ supp}\,\mu}\,d_{X,Y}\right\}.\]
That \(d_{GW,p}\) and \(d_{GW,\infty}\) are metrics can be argued as in the case of metric measure spaces (see [28, Theorem 5.1]).
**Remark 5.2**.: As \(m_{X,Y}\) and \(d_{X,Y}\) are continuous, their suprema over a set do not change after taking the closure of the set. Since the support is the smallest closed set of full measure, the suprema in the definition of \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\) are essential suprema.
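When \(X\) and \(Y\) are finite and a coupling is given explicitly, the two integrals in Definition 5.1 are finite sums. The following sketch, included purely as an illustration, evaluates the objective \(\max\{\frac{1}{2}(\int m_{X,Y}^{p}\,d(\mu\otimes\mu))^{1/p},(\int d_{X,Y}^{p}\,d\mu)^{1/p}\}\) for a fixed coupling matrix; minimizing over all couplings, which is what \(d_{GW,p}\) requires, would additionally need an optimal transport or quadratic assignment solver and is not attempted here. All names are ours.

```python
import numpy as np

def gw_field_objective(mu, dX, dY, piX, piY, dB, p=2):
    """Objective of Definition 5.1 for a fixed coupling mu (an n x m matrix
    whose marginals are mu_X and mu_Y); dX, dY are distance matrices and
    piX, piY the field values of the two mm-fields."""
    n, m = mu.shape
    # m_{X,Y}(x_i, y_j, x_k, y_l) = |dX[i,k] - dY[j,l]|, integrated against mu x mu
    M = np.abs(dX[:, None, :, None] - dY[None, :, None, :])          # shape (n, m, n, m)
    first = 0.5 * (np.sum(M**p * mu[:, :, None, None] * mu[None, None, :, :]))**(1 / p)
    # d_{X,Y}(x_i, y_j) = d_B(pi_X(x_i), pi_Y(y_j)), integrated against mu
    D = np.array([[dB(piX[i], piY[j]) for j in range(m)] for i in range(n)])
    second = (np.sum(D**p * mu))**(1 / p)
    return max(first, second)

if __name__ == "__main__":
    dX = np.array([[0.0, 1.0], [1.0, 0.0]])
    dY = np.array([[0.0, 1.5], [1.5, 0.0]])
    piX, piY = np.array([0.0, 1.0]), np.array([0.0, 1.4])
    mu = np.diag([0.5, 0.5])                                          # the diagonal coupling
    print(gw_field_objective(mu, dX, dY, piX, piY, lambda a, b: abs(a - b), p=2))
```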
**Proposition 5.3**.: _For each \(1\leq p\leq\infty\), \(d_{GW,p}(\mathcal{X},\mathcal{Y})\) is realized by a coupling. Furthermore,_
\[\lim_{p\to\infty}d_{GW,p}(\mathcal{X},\mathcal{Y})=d_{GW,\infty}(\mathcal{X}, \mathcal{Y}).\]
Proof.: For each integer \(n\geq 1\), there exists \(\mu_{n}\in C(\mu_{X},\mu_{Y})\) such that the expression on the right-hand side in the definition of \(d_{GW,p}(\mathcal{X},\mathcal{Y})\) is \(\leq d_{GW,p}(\mathcal{X},\mathcal{Y})+1/n\). By Proposition A.1 in the Appendix, without loss of generality, we can assume that \(\mu_{n}\) converges weakly to a coupling \(\mu\). This, in turn, implies that \(\mu_{n}\otimes\mu_{n}\) converges weakly to \(\mu\otimes\mu\) ([3, Theorem 2.8]). We show that \(\mu\) is an optimal coupling.
**Case 1.** Suppose that \(1\leq p<\infty\). Since \(m_{X,Y}\) and \(d_{X,Y}\) are continuous and bounded below, by [35, Lemma 4.3] we have
\[\int m_{X,Y}^{p}d(\mu\otimes\mu)\leq\liminf_{n}m_{X,Y}^{p}d(\mu_{n}\otimes\mu _{n})\quad\text{and}\quad\int d_{X,Y}^{p}d\mu\leq\liminf_{n}\int d_{X,Y}^{p}d \mu_{n}\,. \tag{53}\]
Using (53) and the fact that for any sequences \(\{a_{n}\}\) and \(\{b_{n}\}\) of real numbers the inequality
\[\max\left\{\liminf_{n}a_{n},\liminf_{n}b_{n}\right\}\leq\liminf_{n}\max\{a_{n},b_{n}\} \tag{54}\]
holds, we obtain
\[\begin{split} d_{GW,p}(\mathcal{X},\mathcal{Y})&\leq \max\bigg{\{}\frac{1}{2}\bigg{(}\int m_{X,Y}^{p}\,d(\mu\otimes\mu)\bigg{)}^{1 /p},\bigg{(}\int d_{X,Y}^{p}\,d\mu\bigg{)}^{1/p}\bigg{\}}\\ &\leq\liminf_{n}\max\bigg{\{}\frac{1}{2}\left(\int m_{X,Y}^{p}d( \mu_{n}\otimes\mu_{n})\right)^{1/p},\bigg{(}\int d_{X,Y}^{p}d\mu_{n}\bigg{)}^{ 1/p}\bigg{\}}\\ &\leq\liminf_{n}\left(d_{GW,p}(\mathcal{X},\mathcal{Y})+1/n\right) =d_{GW,p}(\mathcal{X},\mathcal{Y}).\end{split} \tag{55}\]
This implies that \(\mu\) realizes the Gromov-Wasserstein distance, as claimed.
**Case 2.** For \(p=\infty\), we adapt the proof of [16, Proposition 3] to the present setting. Note that if \(1\leq q\leq q^{\prime}<\infty\), then
\[d_{GW,q}(\mathcal{X},\mathcal{Y})\leq d_{GW,q^{\prime}}(\mathcal{X},\mathcal{Y })\leq d_{GW,\infty}(\mathcal{X},\mathcal{Y}). \tag{56}\]
Hence, we have
\[\lim_{q\to\infty}d_{GW,q}(\mathcal{X},\mathcal{Y})=\sup_{q}d_{GW,q}(\mathcal{ X},\mathcal{Y})\leq d_{GW,\infty}(\mathcal{X},\mathcal{Y}). \tag{57}\]
Let \(\{q_{n}\}\) be a sequence of real numbers satisfying \(q_{n}\geq 1\) and \(q_{n}\to\infty\), and \(\mu_{n}\) be an optimal coupling realizing \(d_{GW,q_{n}}(\mathcal{X},\mathcal{Y})\). As before, we can assume that \(\mu_{n}\) converges to \(\mu\) weakly so that \(\mu_{n}\otimes\mu_{n}\) converges weakly to \(\mu\otimes\mu\). Let \(0\leq r<\max\left\{\frac{1}{2}\sup_{\text{supp}\,(\mu\otimes\mu)}m_{X,Y},\, \sup_{\text{supp}\,\mu}\,d_{X,Y}\right\}\) and set
\[U=\{(x,y,x^{\prime},y^{\prime})\colon m_{X,Y}(x,y,x^{\prime},y^{\prime})/2>r\} \quad\text{and}\quad V=\{(x,y)\colon d_{X,Y}(x,y)>r\}. \tag{58}\]
Either \(\mu\otimes\mu\left(U\right)>0\) or \(\mu(V)>0\). Let us first assume that \(\mu\otimes\mu(U)=2m>0\). By the Portmanteau Theorem [15, Theorem 11.1.1],
\[2m\leq\liminf\mu_{n}\otimes\mu_{n}(U). \tag{59}\]
Passing to a subsequence if necessary, we can assume that \(\mu_{n}\otimes\mu_{n}(U)\geq m>0\), for all \(n\). We then have
\[d_{GW,q_{n}}(\mathcal{X},\mathcal{Y})\geq\frac{1}{2}\bigg{(}\int m_{X,Y}^{q_{n }}d(\mu_{n}\otimes\mu_{n})\bigg{)}^{1/q_{n}}\geq r\,(\mu_{n}\otimes\mu_{n}(U)) ^{1/q_{n}}\geq r\,m^{1/q_{n}}. \tag{60}\]
Hence,
\[\lim_{p\to\infty}d_{GW,p}(\mathcal{X},\mathcal{Y})=\lim_{n}d_{GW,q_{n}}( \mathcal{X},\mathcal{Y})\geq r. \tag{61}\]
Since \(r<\max\left\{\frac{1}{2}\sup_{\mathrm{supp}\,\left(\mu\otimes\mu\right)}m_{X, Y},\,\sup_{\mathrm{supp}\,\mu}\,d_{X,Y}\right\}\) is arbitrary, we get
\[\max\left\{\frac{1}{2}\sup_{\mathrm{supp}\,\left(\mu\otimes\mu \right)}m_{X,Y},\,\sup_{\mathrm{supp}\,\mu}\,d_{X,Y}\right\} \geq d_{GW,\infty}(\mathcal{X},\mathcal{Y}) \tag{62}\] \[\geq\lim_{p\to\infty}d_{GW,p}(\mathcal{X},\mathcal{Y})\] \[\geq\max\left\{\frac{1}{2}\sup_{\mathrm{supp}\,\left(\mu\otimes \mu\right)}m_{X,Y},\sup_{\mathrm{supp}\,\mu}\,d_{X,Y}\right\}.\]
This shows that the coupling \(\mu\) realizes \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\) and also proves the convergence claim. The case \(\mu(V)>0\) is handled similarly.
The next proposition establishes a standard relation between the Gromov-Wasserstein and the Wasserstein distances in the setting of \(mm\)-fields.
**Proposition 5.4**.: _Let \(\mathcal{X}=(X,d_{X},\pi_{X},\mu_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y},\mu_{Y})\) be \(mm\)-fields over \(B\). Suppose that \(\pi_{Z}\colon Z\to B\) is 1-Lipschitz and \(\iota_{X}\colon X\to Z\) and \(\iota_{Y}\colon Y\to Z\) are isometric embeddings satisfying \(\pi_{X}=\pi_{Z}\circ\iota_{X}\) and \(\pi_{Y}=\pi_{Z}\circ\iota_{Y}\). Then, for any \(1\leq p\leq\infty\), we have_
\[d_{GW,p}(\mathcal{X},\mathcal{Y})\leq d_{W,p}((\iota_{X})_{*}(\mu_{X}),(\iota _{Y})_{*}(\mu_{Y}))\,.\]
Proof.: For \(1\leq p<\infty\), let \(\nu\) be an optimal coupling between \((\iota_{X})_{*}(\mu_{X})\) and \((\iota_{Y})_{*}(\mu_{Y})\) realizing the \(p\)-Wasserstein distance \(d_{W,p}((\iota_{X})_{*}(\mu_{X}),(\iota_{Y})_{*}(\mu_{Y}))\). Since \(\iota_{X}(X)\times\iota_{Y}(Y)\) has full measure in \((Z\times Z,\nu)\), there is a measure \(\mu\) on \(X\times Y\) such that \((\iota_{X}\times\iota_{Y})_{*}(\mu)=\nu\). Since \((\iota_{X})_{*}\) and \((\iota_{Y})_{*}\) are injective, \(\mu\in C(\mu_{X},\mu_{Y})\). We have
\[\bigg{(}\int m_{X,Y}^{p}\,d(\mu\otimes\mu)\bigg{)}^{1/p} =\bigg{(}\iint|d_{Z}(w,w^{\prime})-d_{Z}(z,z^{\prime})|^{p}d\nu(w,z)d\nu(w^{\prime},z^{\prime})\bigg{)}^{1/p} \tag{63}\] \[\leq\bigg{(}\iint(d_{Z}(w,z)+d_{Z}(w^{\prime},z^{\prime}))^{p}d \nu(w,z)d\nu(w^{\prime},z^{\prime})\bigg{)}^{1/p}\] \[\leq\bigg{(}\int d_{Z}(w,z)^{p}d\nu(w,z)\bigg{)}^{1/p}+\bigg{(} \int d_{Z}(w^{\prime},z^{\prime})^{p}d\nu(w^{\prime},z^{\prime})\bigg{)}^{1/p}\] \[=2d_{W,p}((\iota_{X})_{*}(\mu_{X}),(\iota_{Y})_{*}(\mu_{Y})).\]
Similarly,
\[\bigg{(}\int d_{X,Y}^{p}\,d\mu\bigg{)}^{1/p} =\bigg{(}\int d_{B}(\pi_{Z}(w),\pi_{Z}(z))^{p}d\nu(w,z)\bigg{)}^{ 1/p} \tag{64}\] \[\leq\bigg{(}\int d_{Z}(w,z)^{p}d\nu(w,z)\bigg{)}^{1/p}=d_{W,p}(( \iota_{X})_{*}(\mu_{X}),(\iota_{Y})_{*}(\mu_{Y})).\]
Hence,
\[d_{GW,p}(\mathcal{X},\mathcal{Y})\leq\max\left\{\frac{1}{2}\bigg{(}\int m_{X,Y}^{ p}\,d(\mu\otimes\mu)\bigg{)}^{1/p},\bigg{(}\int d_{X,Y}^{p}\,d\mu\bigg{)}^{1/p} \right\}\leq d_{W,p}((\iota_{X})_{*}(\mu_{X}),(\iota_{Y})_{*}(\mu_{Y})). \tag{65}\]
Letting \(p\to\infty\) in (65), we get \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\leq d_{W,\infty}((\iota_{X})_{*}(\mu_ {X}),(\iota_{Y})_{*}(\mu_{Y}))\).
Recall that, in a metric measure space \((X,d_{X},\mu_{X})\), a sequence \((x_{n})\) is called _uniformly distributed_ (u.d.) if \(\sum_{i=1}^{n}\delta_{x_{i}}/n\to\mu_{X}\) weakly. Let \(U_{X}\) denote the set of uniformly distributed sequences in \(X\). It is known that \(U_{X}\) is a Borel set in \(X^{\infty}\) and \(\mu_{X}^{\infty}(U_{X})=1\)[25].
The next result provides a characterization of \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\) in terms of uniformly distributed sequences.
**Theorem 5.5**.: _Let \(\mathcal{X}=(X,d_{X},\pi_{X},\mu_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y},\mu_{Y})\) be bounded \(mm\)-fields over \(B\). Then,_
\[d_{GW,\infty}(\mathcal{X},\mathcal{Y})=\inf_{(x_{n})\in U_{X}\atop(y_{n})\in U _{Y}}\max\left\{\frac{1}{2}\sup_{i,j}m_{X,Y}(x_{i},y_{i},x_{j},y_{j}),\,\sup_{ i}d_{X,Y}(x_{i},y_{i})\right\}.\]
_Furthermore, there are sequences \((x_{n})\in U_{X}\) and \((y_{n})\in U_{Y}\) that realize the infimum on the right-hand side._
Proof.: Let us denote the infimum on the right-hand side by \(\alpha\) and let \(\mu\) be an optimal coupling realizing \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\). Then,
\[d_{GW,\infty}(\mathcal{X},\mathcal{Y})=\max\left\{\frac{1}{2}\sup_{\mathrm{ supp}\,(\mu\otimes\mu)}m_{X,Y},\,\sup_{\mathrm{supp}\,\mu}\,d_{X,Y}\right\}. \tag{66}\]
Let \((x_{n},y_{n})\) be an equidistributed sequence with respect to \(\mu\) in \(\mathrm{supp}\,\mu\). Then, \((x_{n})\in U_{X}\) and \((y_{n})\in U_{Y}\), and we have
\[\alpha \leq\max\left\{\frac{1}{2}\sup_{i,j}m_{X,Y}(x_{i},y_{i},x_{j},y_{ j}),\,\sup_{i}d_{X,Y}(x_{i},y_{i})\right\} \tag{67}\] \[\leq\max\left\{\frac{1}{2}\sup_{\mathrm{supp}\,(\mu\otimes\mu)}m_ {X,Y},\,\sup_{\mathrm{supp}\,\mu}\,d_{X,Y}\right\}=d_{GW,\infty}(\mathcal{X}, \mathcal{Y}).\]
For the converse inequality, let \(p\geq 1\), \(\epsilon>0\), \((x_{n})\in U_{X}\) and \((y_{n})\in U_{Y}\). Let \(\mathcal{E}_{n}\) be the \(mm\)-field with domain \(E_{n}=\{1,\ldots,n\}\) equipped with the (pseudo)-metric \(d_{E_{n}}(i,j)=d_{X}(x_{i},x_{j})\), normalized counting measure, and the \(1\)-Lipschitz function \(\pi_{E}(i)=\pi_{X}(x_{i})\). Similarly, define \(\mathcal{F}_{n}\) using \((y_{n})\). By Proposition 5.4, we have
\[d_{GW,p}(\mathcal{X},\mathcal{E}_{n})\leq d_{W,p}(\mu_{X},\sum_{i=1}^{n} \delta_{x_{i}}/n)\quad\text{and}\quad d_{GW,p}(\mathcal{Y},\mathcal{F}_{n}) \leq d_{W,p}(\mu_{Y},\sum_{i=1}^{n}\delta_{y_{i}}/n). \tag{68}\]
Since the Wasserstein distances in the above inequalities converge to \(0\) as \(n\to\infty\) ([35, Theorem 6.9]), we can choose \(n\) large enough so that the Gromov-Wasserstein distances in the above inequalities are \(<\epsilon\). If we use the diagonal coupling \(\zeta_{n}\in C(\mu_{E_{n}},\mu_{F_{n}})\) given by \(\zeta_{n}(i,i)=1/n\), we get
\[d_{GW,p}(\mathcal{E}_{n},\mathcal{F}_{n})\leq\max\left\{\frac{1}{2}\sup_{i,j}m _{X,Y}(x_{i},y_{i},x_{j},y_{j}),\,\sup_{i}d_{X,Y}(x_{i},y_{i})\right\}. \tag{69}\]
This, in turn, implies that
\[d_{GW,p}(\mathcal{X},\mathcal{Y})\leq\max\left\{\frac{1}{2}\sup_{i,j}m_{X,Y}( x_{i},y_{i},x_{j},y_{j}),\,\sup_{i}d_{X,Y}(x_{i},y_{i})\right\}+2\epsilon. \tag{70}\]
Since \((x_{n})\in U_{X}\), \((y_{n})\in U_{Y}\), and \(\epsilon>0\) are arbitrary, we get \(d_{GW,p}(\mathcal{X},\mathcal{Y})\leq\alpha\). As \(p\geq 1\) is arbitrary, Proposition 5.3 implies that
\[d_{GW,\infty}(\mathcal{X},\mathcal{Y})=\lim_{p\to\infty}d_{GW,p}(\mathcal{X}, \mathcal{Y})\leq\alpha. \tag{71}\]
This also shows that the sequences \((x_{n})\in U_{X}\) and \((y_{n})\in U_{Y}\) realize the infimum.
## 6 Gromov-Wasserstein Through Functional Curvature
Given a sequence \((x_{n})\) in an \(mm\)-space \((X,d,\mu)\), one can form an associated (infinite) distance matrix \(D=(d_{ij})\), where \(d_{ij}=d(x_{i},x_{j})\). Gromov's Reconstruction Theorem [19] states that the distribution of all distance matrices for \((X,d,\mu)\) with respect to the product measure \(\mu^{\infty}\) is a complete invariant. This section introduces _augmented distance matrices_ to establish a similar result for \(mm\)-fields and also studies relationships between the Gromov-Wasserstein distance between \(mm\)-fields and the Wasserstein distance between the corresponding augmented distance matrix distributions. For an integer \(n>0\), let
\[\mathcal{R}^{n}:=\{(r_{ij})\in\mathbb{R}^{n\times n}:r_{ij}=r_{ji},r_{ii}=0,r_ {ik}\leq r_{ij}+r_{jk}\} \tag{72}\]
denote the space of all \(n\times n\) (pseudo) distance matrices. Anticipating Definition 6.1 below, in which a distance matrix is augmented by a tuple of points \(b_{i}\in B\), we endow the space \(\mathcal{R}_{B}^{n}:=\mathcal{R}^{n}\times B^{n}\) of such augmented matrices with the metric
\[\rho_{n}((r_{ij},b_{i}),(r^{\prime}_{ij},b^{\prime}_{i})):=\max\big{(}\frac{1}{2}\sup_{ij}|r_{ij}-r^{\prime}_{ij}|,\,\sup_{i}d_{B}(b_{i},b^{\prime}_{i})\big{)}. \tag{73}\]
Similarly, denoting the natural numbers by \(\mathbb{N}\), let
\[\mathcal{R}:=\{(r_{ij})\in\mathbb{R}^{\mathbb{N}\times\mathbb{N}}:r_{ij}=r_{ ji},r_{ii}=0,r_{ik}\leq r_{ij}+r_{jk}\} \tag{74}\]
be the space of all countably infinite (pseudo) distance matrices equipped with the weak topology; that is, the coarsest topology that makes all projections \(\pi_{n}\colon\mathcal{R}\to\mathcal{R}^{n}\) (onto the northwest \(n\times n\) quadrant) continuous, \(n>0\).
**Definition 6.1**.: Let \(B\) be a Polish space.
1. The space of (countably infinite) _augmented distance matrices_ (ADM) is defined as \(\mathcal{R}_{B}:=\mathcal{R}\times B^{\infty}\).
2. Similarly, for \(n>0\), define the space of \(n\times n\)_augmented distance matrices_ as \(\mathcal{R}_{B}^{n}:=\mathcal{R}^{n}\times B^{n}\).
In the study of \(mm\)-fields \((X,B,\pi,\mu)\), if \((x_{n})\) is a sequence in \(X\), we associate to \((x_{n})\) the ADM defined by \(r_{ij}=d(x_{i},x_{j})\) and \(b_{i}=\pi(x_{i})\).
**Definition 6.2** (ADM Distribution).: Let \(\mathcal{X}=(X,B,\pi,\mu)\) be an \(mm\)-field and \(\mathcal{F}_{\mathcal{X}}:X^{\infty}\to\mathcal{R}_{B}\) the map \((x_{i})\mapsto(d_{X}(x_{i},x_{j}),\pi(x_{i}))\). The _augmented distance matrix distribution_ of \(\mathcal{X}\) is defined as \(\mathcal{D}_{\mathcal{X}}=(\mathcal{F}_{\mathcal{X}})_{*}(\mu^{\infty})\). Similarly, for \(n>0\), define \(\mathcal{F}_{\mathcal{X}}^{n}:X^{n}\to\mathcal{R}_{B}^{n}\) and \(\mathcal{D}_{\mathcal{X}}^{n}:=(\mathcal{F}_{\mathcal{X}}^{n})_{*}(\mu^{n})\).
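On finite samples, the map \(\mathcal{F}_{\mathcal{X}}^{n}\) and the metric \(\rho_{n}\) of (73) can be implemented directly. The sketch below is our own illustration, written for \(B=\mathbb{R}\) with its usual metric and a coordinate-projection field; these choices, as well as the function names, are assumptions made for concreteness.

```python
import numpy as np

def augmented_distance_matrix(points, pi, d):
    """F_X^n: send a sample (x_1, ..., x_n) to its ADM (r_ij, b_i), where
    r_ij = d(x_i, x_j) and b_i = pi(x_i)."""
    n = len(points)
    r = np.array([[d(points[i], points[j]) for j in range(n)] for i in range(n)])
    b = np.array([pi(x) for x in points])
    return r, b

def rho_n(adm1, adm2, dB):
    """The metric (73) on n x n augmented distance matrices."""
    (r1, b1), (r2, b2) = adm1, adm2
    return max(0.5 * np.max(np.abs(r1 - r2)),
               max(dB(b1[i], b2[i]) for i in range(len(b1))))

if __name__ == "__main__":
    d = lambda x, y: float(np.linalg.norm(x - y))
    pi = lambda x: x[0]                      # a 1-Lipschitz field with values in B = R
    rng = np.random.default_rng(1)
    sample1 = list(rng.normal(size=(5, 2)))
    sample2 = list(rng.normal(size=(5, 2)))
    A1 = augmented_distance_matrix(sample1, pi, d)
    A2 = augmented_distance_matrix(sample2, pi, d)
    print(rho_n(A1, A2, lambda s, t: abs(s - t)))
```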
**Theorem 6.3**.: _(Field Reconstruction Theorem) Let \(\mathcal{X}=(X,d_{X},\pi_{X},\mu_{X})\) and \(\mathcal{Y}=(Y,d_{Y},\pi_{Y},\mu_{Y})\) be \(mm\)-fields over \(B\) such that \(\mu_{X}\) and \(\mu_{Y}\) are fully supported. Then,_
\[\mathcal{X}\simeq\mathcal{Y}\text{ if and only if }\mathcal{D}_{\mathcal{X}}= \mathcal{D}_{\mathcal{Y}}.\]
Proof.: Clearly, \(\mathcal{X}\simeq\mathcal{Y}\) implies that \(\mathcal{D}_{\mathcal{X}}=\mathcal{D}_{\mathcal{Y}}\). Suppose that \(\mathcal{D}_{\mathcal{X}}=\mathcal{D}_{\mathcal{Y}}\). The subset \(U_{X}\subseteq X^{\infty}\) of all equidistributed sequences in \(X\) is a Borel measurable set of full measure [25, Lemma 2.4]. Hence, its image \(C_{X}:=\mathcal{F}_{\mathcal{X}}(U_{X})\) is an analytic set that has full measure in the completion of \(\mathcal{D}_{\mathcal{X}}\) [15, Theorem 13.2.6]. Define \(C_{Y}\) similarly. By construction, \(C_{X}\cap C_{Y}\) is of full measure in the completion of \(\mathcal{D}_{\mathcal{X}}=\mathcal{D}_{\mathcal{Y}}\); in particular, it is nonempty, so we can choose \((x_{n})\in U_{X}\) and \((y_{n})\in U_{Y}\) such that \(\mathcal{F}_{\mathcal{X}}((x_{n}))=\mathcal{F}_{\mathcal{Y}}((y_{n}))\). The map \(\phi\colon\{x_{n}\colon n\geq 1\}\to\{y_{n}\colon n\geq 1\}\) given by \(\phi(x_{i})=y_{i}\) is a well-defined isometry satisfying \(\pi_{X}(x_{i})=\pi_{Y}(y_{i})\). Since \((x_{n})\) and \((y_{n})\) are dense in \(X\) and \(Y\), respectively, \(\phi\) induces an isometry \(\Phi\colon\mathcal{X}\Rightarrow\mathcal{Y}\) over \(B\). Moreover, \(\Phi\) maps the uniformly distributed sequence \((x_{n})\) to the uniformly distributed sequence \((y_{n})\), so \(\Phi_{*}(\mu_{X})=\mu_{Y}\); that is, \(\Phi\) is an isomorphism of \(mm\)-fields.
On the space \(\mathcal{R}_{B}\), we also define the (extended) metric
\[\rho((r_{ij},b_{i}),(r^{\prime}_{ij},b^{\prime}_{i})):=\max(\frac{1}{2}\sup_{ ij}|r_{ij}-r^{\prime}_{ij}|,\,\sup_{i}d_{B}(b_{i},b^{\prime}_{i})). \tag{75}\]
The metric \(\rho_{n}\) over \(\mathcal{R}_{B}^{n}\) was given in (73). However, since \((\mathcal{R}_{B},\rho)\) is not separable, instead of using \(\rho\) to define a topology on \(\mathcal{R}_{B}\), we only employ it to formulate the Wasserstein distance on \(\mathcal{R}_{B}\). The next lemma shows that \(\rho\) is sufficiently regular for the Wasserstein distance so defined to satisfy some desirable properties.
**Lemma 6.4**.: _The extended function \(\rho\colon\mathcal{R}_{B}\times\mathcal{R}_{B}\to[0,\infty]\) is lower semicontinuous._
Proof.: Let \(\pi_{n}:\mathcal{R}_{B}\to\mathcal{R}_{B}^{n}\) denote the projection map. Note that \(\rho_{n}\circ\pi_{n}\uparrow\rho\) pointwise. Hence, as a pointwise supremum of a sequence of continuous functions, \(\rho\) is lower semicontinuous.
In the discussion below, the \(p\)-Wasserstein distances \(d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\) and \(d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^{n})\) are taken with respect to the distance functions \(\rho\) and \(\rho_{n}\), respectively.
**Theorem 6.5**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be bounded mm-fields over \(B\). Then, for any \(1\leq p\leq\infty\), we have_
\[\lim_{n\to\infty}d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y }}^{n})=d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})=d_{GW, \infty}(\mathcal{X},\mathcal{Y}).\]
Proof.: The projection map \(\pi_{n}^{n+1}\colon\mathcal{R}_{B}^{n+1}\to\mathcal{R}_{B}^{n}\) is \(1\)-Lipschitz and has the property that
\[(\pi_{n}^{n+1})_{*}(\mathcal{D}_{\mathcal{X}}^{n+1})=\mathcal{D}_{\mathcal{X} }^{n}\quad\text{and}\quad(\pi_{n}^{n+1})_{*}(\mathcal{D}_{\mathcal{Y}}^{n+1})= \mathcal{D}_{\mathcal{Y}}^{n}\,. \tag{76}\]
This implies that \(d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^{n})\leq d_{W, p}(\mathcal{D}_{\mathcal{X}}^{n+1},\mathcal{D}_{\mathcal{Y}}^{n+1})\). By a similar argument, we get \(d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^{n})\leq d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\). Therefore, we have
\[\lim_{n\to\infty}d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y }}^{n})=\sup_{n}d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y }}^{n})\leq d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}}). \tag{77}\]
Since \(\rho\) is lower semicontinuous by Lemma 6.4, using an argument similar to the proof of Proposition 5.3, one can show that \(d_{W,\infty}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})=\lim_{p\to \infty}d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\). Therefore, without loss of generality, we can assume that \(p<\infty\).
Now, we show that \(d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\leq d_{GW,\infty }(\mathcal{X},\mathcal{Y})\). Let \(\mu\) be an optimal coupling between \(\mu_{X}\) and \(\mu_{Y}\) realizing \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\). Let \(\psi_{X}:(X\times Y)^{\infty}\to\mathcal{R}_{B}\) be the map given by \((x_{n},y_{n})_{n=1}^{\infty}\mapsto\mathcal{F}_{\mathcal{X}}((x_{n})_{n=1}^{ \infty})\). Define \(\psi_{Y}\) similarly. Then \(\nu:=(\psi_{X},\psi_{Y})_{*}(\mu^{\infty})\) is a coupling between \(\mathcal{D}_{\mathcal{X}}\) and \(\mathcal{D}_{\mathcal{Y}}\). We have
\[\begin{split} d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{ \mathcal{Y}})&\leq\bigg{(}\int\rho^{p}d\nu\bigg{)}^{1/p}=\bigg{(} \int_{\operatorname{supp}\mu^{\infty}}\rho^{p}(\psi_{X}((x_{n},y_{n})_{n}), \psi_{Y}((x_{n},y_{n})_{n})d\mu^{\infty}((x_{n},y_{n})_{n})\bigg{)}^{1/p}\\ &=\bigg{(}\int_{(\operatorname{supp}\mu)^{\infty}}\max\big{(} \frac{1}{2}\sup_{i,j}m_{X,Y}(x_{i},y_{i},x_{j},y_{j}),\sup_{i}d_{X,Y}(x_{i},y_{ i})\big{)}^{p}d\mu^{\infty}((x_{n},y_{n})_{n})\bigg{)}^{1/p}\\ &\leq\max\bigg{\{}\frac{1}{2}\sup_{\operatorname{supp}\,(\mu\otimes \mu)}m_{X,Y},\;\sup_{\operatorname{supp}\,\mu}d_{X,Y}\bigg{\}}=d_{GW,\infty}( \mathcal{X},\mathcal{Y}).\end{split} \tag{78}\]
It remains to show that \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\leq\lim_{n}d_{W,p}(\mathcal{D}_{ \mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^{n})\). Let \(0<\epsilon<1/2\). By Proposition 5.3, there exists \(1\leq q<\infty\) so that \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\leq d_{GW,q}(\mathcal{X},\mathcal{Y})+\epsilon\). Let
\[U_{X,q}^{n,\epsilon}:=\{(x_{i})\in X^{n}\colon d_{W,q}(\mu_{X},\sum_{i=1}^{n} \delta_{x_{i}}/n)\leq\epsilon\} \tag{79}\]
and define \(U_{Y,q}^{n,\epsilon}\) similarly. By Proposition A.2, if \(n\) is large enough, then \(\mu_{X}^{n}(U_{X,q}^{n,\epsilon})\geq 1-\epsilon\) and \(\mu_{Y}^{n}(U_{Y,q}^{n,\epsilon})\geq 1-\epsilon\). If we define \(C_{X,q}^{n,\epsilon}:=\mathcal{F}_{\mathcal{X}}^{n}(U_{X,q}^{n,\epsilon})\) and \(C_{Y,q}^{n,\epsilon}:=\mathcal{F}_{\mathcal{Y}}^{n}(U_{Y,q}^{n,\epsilon})\), both of these sets are analytic, hence measurable in the completion of \(\mathcal{D}_{\mathcal{X}}^{n}\) and \(\mathcal{D}_{\mathcal{Y}}^{n}\), respectively [15, Theorem 13.2.6]. Moreover, the measures of these sets are \(\geq 1-\epsilon\). Let \(\mu_{n}\) be a coupling realizing \(d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^{n})\). Note that we have
\[\mu_{n}(C_{X,q}^{n,\epsilon}\times C_{Y,q}^{n,\epsilon})\geq 1-2\epsilon. \tag{80}\]
By Proposition A.3, we also have
\[\rho_{n}|_{C_{X,q}^{n,\epsilon}\times C_{Y,q}^{n,\epsilon}}\geq d_{GW,q}( \mathcal{X},\mathcal{Y})-2\epsilon\geq d_{GW,\infty}(\mathcal{X},\mathcal{Y})-3\epsilon. \tag{81}\]
Therefore,
\[d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^{n})=\bigg{(} \int\rho_{n}^{p}\,d\mu_{n}\bigg{)}^{1/p}\geq(d_{GW,\infty}(\mathcal{X}, \mathcal{Y})-3\epsilon)(1-2\epsilon)^{1/p}. \tag{82}\]
This implies that
\[\lim_{n\to\infty}d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^ {n})\geq(d_{GW,\infty}(\mathcal{X},\mathcal{Y})-3\epsilon)(1-2\epsilon)^{1/p}. \tag{83}\]
Since \(0<\epsilon<1/2\) is arbitrary, we get
\[\lim_{n\to\infty}d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y }}^{n})\geq d_{GW,\infty}(\mathcal{X},\mathcal{Y}), \tag{84}\]
as claimed.
**Corollary 6.6**.: _Let \(\mu\) be an optimal coupling realizing \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\) and \(\nu\) be the coupling between \(\mathcal{D}_{\mathcal{X}}\) and \(\mathcal{D}_{\mathcal{Y}}\) induced by \(\mu\) as in the proof of Theorem 6.5. More precisely,_
\[\nu:=(\psi_{X},\psi_{Y})_{*}(\mu^{\infty}),\]
_where \(\psi_{X}:(X\times Y)^{\infty}\to\mathcal{R}_{B}\) is the map given by \((x_{n},y_{n})_{n=1}^{\infty}\mapsto\mathcal{F}_{\mathcal{X}}((x_{n})_{n=1}^{ \infty})\), and \(\psi_{Y}\) is defined similarly. Then, \(\nu\) is the optimal coupling realizing \(d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\), independent of \(p\geq 1\). Furthermore, if \((x_{n},y_{n})\) is a uniformly distributed sequence (with respect to \(\mu\)) in \(\operatorname{supp}\mu\), then_
\[\rho(\mathcal{F}_{\mathcal{X}}((x_{n})),\mathcal{F}_{\mathcal{Y}}((y_{n})))=d_{GW,\infty}(\mathcal{X},\mathcal{Y}). \tag{85}\]
Proof.: By (78), we have
\[d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\leq\bigg{(}\int \rho^{p}d\nu\bigg{)}^{1/p}\leq d_{GW,\infty}(\mathcal{X},\mathcal{Y}), \tag{86}\]
for all \(p\geq 1\). Hence, by Theorem 6.5, the inequalities above are equalities. This shows the optimality of \(\nu\) independent of \(p\geq 1\).
The equality \(\rho(\mathcal{F}_{\mathcal{X}}((x_{n})),\mathcal{F}_{\mathcal{Y}}((y_{n})))=d_{GW,\infty}(\mathcal{X},\mathcal{Y})\) is shown in the proof of Theorem 5.5.
**Remark 6.7**.: By Theorem 6.5, \(d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^{n})\) can be used as an approximation to \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\). To discretize this approximation, one can take i.i.d. samples from \((X^{n},\mu_{X}^{n})\) and \((Y^{n},\mu_{Y}^{n})\) and form empirical measures \(\mathcal{E}_{n,X}\) and \(\mathcal{E}_{n,Y}\). Then, \(d_{W,p}((\mathcal{F}_{\mathcal{X}}^{n})_{*}(\mathcal{E}_{n,X}),(\mathcal{F}_{\mathcal{Y}}^{n})_{*}(\mathcal{E}_{n,Y}))\) can be taken as an approximation to \(d_{W,p}(\mathcal{D}_{\mathcal{X}}^{n},\mathcal{D}_{\mathcal{Y}}^{n})\).
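A minimal numerical sketch of the discretization described in Remark 6.7 is given below, purely as an illustration. We take \(p=1\) and uniform empirical measures built from the same number \(N\) of sampled length-\(n\) sequences on each side, in which case the Wasserstein distance between the two empirical ADM distributions reduces to an optimal assignment problem with respect to \(\rho_{n}\) (a standard fact about \(W_{1}\) for uniform empirical measures of equal size). The choice \(B=\mathbb{R}\), the sample sizes, and all function names are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sample_adms(points, pi, weights, n, N, rng):
    """Draw N i.i.d. length-n samples from a finite mm-field (atoms `points`
    with masses `weights`) and return their augmented distance matrices."""
    adms = []
    for _ in range(N):
        xs = points[rng.choice(len(points), size=n, p=weights)]
        r = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=2)
        b = np.array([pi(x) for x in xs])
        adms.append((r, b))
    return adms

def rho_n(a1, a2):
    (r1, b1), (r2, b2) = a1, a2              # the metric (73), with B = R
    return max(0.5 * np.max(np.abs(r1 - r2)), np.max(np.abs(b1 - b2)))

def empirical_w1(adms_X, adms_Y):
    """W_1 between two uniform empirical ADM measures with N atoms each,
    computed as an optimal assignment with cost rho_n."""
    C = np.array([[rho_n(a, b) for b in adms_Y] for a in adms_X])
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 2))
    Y = 1.1 * rng.normal(size=(30, 2))
    pi = lambda x: x[0]                      # a 1-Lipschitz field with values in B = R
    w = np.ones(30) / 30
    adms_X = sample_adms(X, pi, w, n=4, N=50, rng=rng)
    adms_Y = sample_adms(Y, pi, w, n=4, N=50, rng=rng)
    print("approximate d_{W,1}(D_X^4, D_Y^4):", empirical_w1(adms_X, adms_Y))
```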
**Remark 6.8**.: By Theorem 6.5, \(d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\) is independent of \(p\geq 1\). This can be explained as follows. Since we are using the sup-distance \(\rho\) on \(\mathcal{R}_{B}\) and almost every sequence in a metric measure space is uniformly distributed, if \(d_{W,p}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\leq r\), then there are uniformly distributed sequences in \(\mathcal{X}\) and \(\mathcal{Y}\) whose augmented distance matrices are \(r\)-close to each other, which forces \(d_{W,q}(\mathcal{D}_{\mathcal{X}},\mathcal{D}_{\mathcal{Y}})\leq r\) for any \(q\geq 1\).
The following theorem gives a geometric representation of the isomorphism classes of \(mm\)-fields via the Urysohn universal field.
**Theorem 6.9**.: _Let \(\mathcal{I}_{B}\) denote the moduli space of isomorphism classes of compact \(mm\)-fields over \(B\) endowed with the distance \(d_{GW,\infty}\). Let \(\mathcal{G}_{B}\) be the group of automorphisms of the Urysohn field \(\mathcal{U}_{B}\) and \(\mathcal{L}_{B}\) be the set of compactly supported laws on \(\mathcal{U}_{B}\), endowed with the distance \(d_{W,\infty}\). The group \(\mathcal{G}_{B}\) acts on \(\mathcal{L}_{B}\) by \(g\cdot\mu:=g_{*}(\mu)\). Then, \(\mathcal{I}_{B}\) is isometric to the orbit space of this action; that is, \(\mathcal{I}_{B}\simeq\mathcal{L}_{B}/\mathcal{G}_{B}\), where the orbit space is equipped with the quotient metric, as in (34), which can be expressed as_
\[d([\mu],[\nu])=\inf_{g\in\mathcal{G}_{B}}d_{W,\infty}(\mu,g_{*}(\nu)).\]
Proof.: Given a compact \(mm\)-field \(\mathcal{X}\) over \(B\) and an isometric embedding \(\iota:\mathcal{X}\to\mathcal{U}_{B}\), we have \(\iota_{*}(\mu_{X})\in\mathcal{L}_{B}\). This induces a well-defined map \(\Psi:\mathcal{I}_{B}\to\mathcal{L}_{B}/\mathcal{G}_{B}\) because of the compact homogeneity of \(\mathcal{U}_{B}\) established in Proposition 3.6.
By Proposition 5.4, \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\leq d(\Psi(\mathcal{X}),\Psi(\mathcal{ Y}))\).
To show the opposite inequality, let \(\mu\) be an optimal coupling realizing \(d_{GW,\infty}(\mathcal{X},\mathcal{Y})\) and \(R\subseteq X\times Y\) the support of \(\mu\). If we let \(r:=d_{GW,\infty}(\mathcal{X},\mathcal{Y})\), then \(\operatorname{dis}_{\pi_{X},\pi_{Y}}(R)\leq 2r\). Let \(\mathcal{Z}:=\mathcal{X}\coprod_{R,r}\mathcal{Y}\). If we consider \(\mu_{X}\) and \(\mu_{Y}\) as measures on \(Z\) and \(\mu\) as a measure on \(Z\times Z\), then \(d_{W,\infty}^{Z}(\mu_{X},\mu_{Y})\leq\sup_{\operatorname{supp}(\mu)}d_{Z}=\sup_{R}d_{Z}\leq r\). If we let \(\iota\colon\mathcal{Z}\to\mathcal{U}_{B}\) be an isometric embedding, then
\[d(\Psi(\mathcal{X}),\Psi(\mathcal{Y}))\leq d_{W,\infty}(\iota_{*}(\mu_{X}),\iota_{* }(\mu_{Y}))\leq d_{W,\infty}^{Z}(\mu_{X},\mu_{Y})\leq d_{GW,\infty}(\mathcal{ X},\mathcal{Y}), \tag{87}\]
as desired.
## 7 Topological Multifiltrations and Their Stability
One of the key insights of geometric and topological data analysis is that one can study the geometry of data through associated filtered spaces by analyzing the changes in topological structure along the filtration. The neighborhood and Vietoris-Rips filtrations are two prominent examples of such constructs. Here, we investigate field analogues of these filtrations.
### Neighborhood Multifiltrations
Let \((E,d_{E})\) be a metric space and \(X\subseteq E\) a compact subspace. Given \(r\geq 0\), the _r-neighborhood_\(N^{r}(X,E)\) of \(X\) in \(E\) is defined by
\[N^{r}(X,E):=\{y\in E:\operatorname{dist}(y,X)\leq r\}. \tag{88}\]
Clearly, \(N^{r}(X,E)\subseteq N^{s}(X,E)\) if \(r\leq s\), so \(N^{*}(X,E)\) is a filtration. Since \(X\) is compact, denoting the closed ball in \(E\) of radius \(r\) centered at \(x\in X\) by \(B_{r}(x,E)\), we can express the neighborhood filtration as follows:
\[N^{r}(X,E)=\{y\in E:\exists x\in X\text{ such that }d_{E}(x,y)\leq r\}=\bigcup_{x \in X}B_{r}(x,E)\,. \tag{89}\]
**Definition 7.1** (Field neighborhood bifiltration).: Let \(\mathcal{E}=(E,d_{E},\pi_{E})\) be a \(B\)-field and \(X\subseteq E\) a compact subspace. For \(r,s\geq 0\) define
\[N^{r,s}(X,\mathcal{E}):=\{y\in E:\exists x\in X\text{ such that }d_{E}(x,y)\leq r\text{ and }d_{B}(\pi_{E}(x),\pi_{E}(y))\leq s\}.\]
If \(r\leq r^{\prime}\) and \(s\leq s^{\prime}\), then \(N^{r,s}(X,\mathcal{E})\subseteq N^{r^{\prime},s^{\prime}}(X,\mathcal{E})\) so that \(N^{*,*}(X,\mathcal{E})\) forms a bifiltration that we call the _field neighborhood bifiltration_ of \(X\) in \(\mathcal{E}\).
For \(y\in E\), if we let \(B_{r,s}(y,\mathcal{E}):=\{y^{\prime}\in E:d_{E}(y,y^{\prime})\leq r\text{ and }d_{B}(\pi(y),\pi(y^{\prime}))\leq s\}\), then
\[N^{r,s}(X,\mathcal{E})=\cup_{x\in X}B_{r,s}(x,\mathcal{E}). \tag{90}\]
In many applications, \(X\) is a point cloud and it is desirable to have filtrations that are robust to outliers. This requires taking the density of points into account. From that point of view, instead of considering \(X\) as a subspace of \(E\), it is more informative to consider it as a probability measure on \(E\) given by \(\mu_{X}=\Sigma_{x\in X}\delta_{x}/|X|\). We can then express the field neighborhood filtration in terms of \(\mu_{X}\) as
\[N^{r,s}(X,\mathcal{E}) =\{y\in E:\exists x\in X\text{ such that }d_{E}(x,y)\leq r\text{ and }d_{B}(\pi_{E}(x),\pi_{E}(y))\leq s\}\] \[=\{y\in E:B_{r,s}(y,\mathcal{E})\cap X\neq\emptyset\} \tag{91}\] \[=\{y\in E:\mu_{X}(B_{r,s}(y,\mathcal{E}))>0\}.\]
This last expression motivates the introduction of the following trifiltered space associated with a probability measure on the domain of a \(B\)-field. (This may be viewed as the \(mm\)-field analogue of [5].) We abuse notation and also denote \(mm\)-fields by \(\mathcal{E}\).
**Definition 7.2**.: Let \(\mathcal{E}=(E,d_{E},\pi_{E},\mu_{E})\) be an \(mm\)-field over \(B\). For \(r,s,t\geq 0\), define
\[N^{r,s,t}(\mathcal{E}):=\{y\in E:\mu_{E}(B_{r,s}(y,\mathcal{E}))\geq 1-t\}.\]
If \(r\leq r^{\prime}\), \(s\leq s^{\prime}\) and \(t\leq t^{\prime}\), then \(N^{r,s,t}(\mathcal{E})\subseteq N^{r^{\prime},s^{\prime},t^{\prime}}(\mathcal{E})\), so that \(N^{*,*,*}(\mathcal{E})\) forms a trifiltration that we refer to as the \(mm\)_-field neighborhood trifiltration_ of \(\mathcal{E}\).
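For a finite sample \(E\) carrying a scalar field and an empirical measure, membership in \(N^{r,s}(X,\mathcal{E})\) and \(N^{r,s,t}(\mathcal{E})\) can be computed by brute force from Definition 7.1, (91), and Definition 7.2. The sketch below is only an illustration; \(B=\mathbb{R}\), the coordinate-projection field, and the function names are our own choices.

```python
import numpy as np

def ball_rs(y_idx, E, f, r, s):
    """Indices of B_{r,s}(y, E): points within distance r of E[y_idx] whose
    field values are within s of f[y_idx]."""
    d = np.linalg.norm(E - E[y_idx], axis=1)
    return np.where((d <= r) & (np.abs(f - f[y_idx]) <= s))[0]

def neighborhood_bifiltration(X_idx, E, f, r, s):
    """N^{r,s}(X, E): points of E whose (r, s)-ball meets X, cf. (91)."""
    X_set = set(int(i) for i in X_idx)
    return [j for j in range(len(E))
            if X_set & set(int(i) for i in ball_rs(j, E, f, r, s))]

def neighborhood_trifiltration(E, f, mu, r, s, t):
    """N^{r,s,t}(E): points y with mu(B_{r,s}(y, E)) >= 1 - t (Definition 7.2)."""
    return [j for j in range(len(E)) if mu[ball_rs(j, E, f, r, s)].sum() >= 1 - t]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    E = rng.uniform(-1, 1, size=(200, 2))    # ambient sample
    f = E[:, 0]                              # a 1-Lipschitz scalar field to B = R
    X_idx = rng.choice(200, size=40, replace=False)
    mu = np.zeros(200); mu[X_idx] = 1 / 40   # empirical measure of the point cloud
    print(len(neighborhood_bifiltration(X_idx, E, f, r=0.3, s=0.2)))
    print(len(neighborhood_trifiltration(E, f, mu, r=0.3, s=0.2, t=0.95)))
```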
Figure 1 shows an example in which \(X\) is a set of points sampled from two circles and \(E\) is a rectangle containing \(X\). A scalar field is depicted through its contour lines and \(\mu_{E}\) is the empirical measure on \(E\) induced by \(X\). Panels (a), (b), and (c) compare the neighborhoods \(N^{r}(X,E)\), \(N^{r,s}(X,\mathcal{E})\), and \(N^{r,s,t}(\mathcal{E})\), respectively, for parameter values \(r=0.8\), \(s=0.1\) and \(t=0.99\). Note that in (c), only the more densely populated parts of the field metric neighborhood (shown in (b)) remain. In each plot, the color intensity at each point represents how densely the corresponding neighborhood of that point is populated by elements in the original point cloud.
The main goal of this section is to establish stability properties for these multiparameter filtrations. We denote by \(d_{I}\), \(d_{H}\) and \(d_{P}\) the interleaving [6], Hausdorff [7], and Prokhorov [15] distances, respectively.
**Theorem 7.3**.: _Let \(\mathcal{E}=(E,d_{E},f,\mu)\) and \(\mathcal{F}=(E,d_{E},g,\nu)\) be \(mm\)-fields over \(B\) with the same underlying metric space \((E,d_{E})\), and let \(X,Y\subseteq E\) be compact subspaces. Then, the following inequalities hold:_
* \(d_{I}(N^{*,*}(X,\mathcal{E}),N^{*,*}(Y,\mathcal{F}))\leq d_{H}(X,Y)+2\sup_{y \in E}d_{B}(f(y),g(y)).\)__
* \(d_{I}(N^{*,*,*}(\mathcal{E}),N^{*,*,*}(\mathcal{F}))\leq d_{P}(\mu,\nu)+2\sup _{y\in E}d_{B}(f(y),g(y))\)_._
Proof.: (i) Let \(\epsilon=d_{H}(X,Y)\) and \(\epsilon^{\prime}=\sup_{y\in E}d_{B}(f(y),g(y))\). If \(z\in N^{r,s}(X,\mathcal{E})\), then there exists \(x\in X\) such that \(d_{E}(x,z)\leq r\) and \(d_{B}(f(x),f(z))\leq s\). Moreover, there exists \(y\in Y\) such that \(d_{E}(x,y)\leq\epsilon\). Then, \(d_{E}(y,z)\leq r+\epsilon\) and
\[d_{B}(g(y),g(z))\leq d_{B}(g(y),f(y))+d_{B}(f(y),f(x))+d_{B}(f(x),f(z))+d_{B}( f(z),g(z))\leq s+\epsilon+2\epsilon^{\prime}. \tag{92}\]
Therefore, \(N^{r,s}(X,\mathcal{E})\subseteq N^{r+\epsilon+2\epsilon^{\prime},s+\epsilon+2 \epsilon^{\prime}}(Y,\mathcal{F})\). Similarly, \(N^{r,s}(Y,\mathcal{F})\subseteq N^{r+\epsilon+2\epsilon^{\prime},s+\epsilon+2 \epsilon^{\prime}}(X,\mathcal{E})\). This proves the first assertion.
(ii) Let \(\epsilon=d_{P}(\mu,\nu)\), \(\epsilon^{\prime}=\sup_{y\in E}d_{B}(f(y),g(y))\) and note that \(B_{r,s}(x,\mathcal{E})\subseteq B_{r,s+2\epsilon^{\prime}}(x,\mathcal{F})\). If \(y\in N^{\epsilon}(B_{r,s+2\epsilon^{\prime}}(x,\mathcal{F}))\), then there exists \(z\in B_{r,s+2\epsilon^{\prime}}(x,\mathcal{F})\) such that \(d_{E}(y,z)\leq\epsilon\). Then, \(d_{E}(x,y)\leq r+\epsilon\) and
\[d_{B}(g(y),g(x))\leq d_{B}(g(y),g(z))+d_{B}(g(z),g(x))\leq s+\epsilon+2 \epsilon^{\prime}. \tag{93}\]
This shows that \(N^{\epsilon}(B_{r,s+2\epsilon^{\prime}}(x,\mathcal{F}))\subseteq B_{r+\epsilon+ 2\epsilon^{\prime},s+\epsilon+2\epsilon^{\prime}}(x,\mathcal{F})\). Therefore, if \(x\in N^{r,s,t}(\mathcal{E})\), we have
\[\begin{split}\nu(B_{r+\epsilon+2\epsilon^{\prime},s+\epsilon+2 \epsilon^{\prime}}(x,\mathcal{F}))&\geq\nu(N^{\epsilon}(B_{r,s+2 \epsilon^{\prime}}(x,\mathcal{F})))\\ &\geq\mu(B_{r,s+2\epsilon^{\prime}}(x,\mathcal{F}))-\epsilon\\ &\geq\mu(B_{r,s}(x,\mathcal{E}))-\epsilon\geq 1-(t+\epsilon). \end{split} \tag{94}\]
This shows that \(N^{r,s,t}(\mathcal{E})\subseteq N^{r+\delta,s+\delta,t+\delta}(\mathcal{F})\), where \(\delta=\epsilon+2\epsilon^{\prime}\). Similarly, one can argue that \(N^{r,s,t}(\mathcal{F})\subseteq N^{r+\delta,s+\delta,t+\delta}(\mathcal{E})\), completing the proof of (ii).
### Vietoris-Rips Multifiltrations
For a metric space \((X,d_{X})\), the Vietoris-Rips simplicial filtration \(\mathit{VR}^{*}(X)\) is defined by
\[\mathit{VR}^{r}(X):=\{A\subseteq X\colon A\text{ is finite and diam}(A)\leq r\}.\]
To modify this filtration for fields, recall that the _radius_ of a subspace \(C\) of a metric space \((B,d_{B})\) is defined as
\[\text{rad}(C):=\inf_{b\in B}\sup_{c\in C}d_{B}(b,c).\]
It is simple to verify that \(\text{diam}(C)\leq 2\,\text{rad}(C)\).
**Definition 7.4**.: Let \(\mathcal{X}=(X,d_{X},\pi)\) be a metric field over \(B\). We define the _metric field Vietoris-Rips bifiltration_\(\mathit{VR}^{*,*}(\mathcal{X})\) by
\[\mathit{VR}^{r,s}(\mathcal{X}):=\{A\subseteq X\colon A\text{ is finite, diam}(A)\leq r,\text{ and rad}(\pi(A))\leq s/2\},\]
for \(r,s\geq 0\).
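As an illustration of Definition 7.4 (not part of the original text), the following brute-force sketch enumerates the simplices of \(V\!R^{r,s}(\mathcal{X})\) for a small finite metric field with scalar field values. For a scalar field (\(B=\mathbb{R}\)) the radius of a finite set of values is simply half its spread, while for a general \(B\) the infimum over centers would have to be approximated.

```python
import numpy as np
from itertools import combinations

def vr_bifiltration(points, field, r, s, max_dim=2):
    """List the simplices A of VR^{r,s}: diam(A) <= r and rad(field values of A) <= s/2.
    For a scalar field the radius of a finite value set equals half its spread."""
    n = len(points)
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = []
    for k in range(1, max_dim + 2):                # k vertices give a (k-1)-simplex
        for A in combinations(range(n), k):
            idx = np.array(A)
            diam_ok = D[np.ix_(idx, idx)].max() <= r
            vals = field[idx]
            rad_ok = (vals.max() - vals.min()) / 2.0 <= s / 2.0
            if diam_ok and rad_ok:
                simplices.append(A)
    return simplices

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8], [3.0, 0.0]])
vals = np.array([0.0, 0.2, 0.9, 0.1])              # scalar field on the four points
print(vr_bifiltration(pts, vals, r=1.5, s=1.0))
```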
Figure 1: Neighborhoods associated with a scalar field defined on a finite set of points \(X\) sampled from two circles: (a) \(N^{r}(X,E)\); (b) \(N^{r,s}(X,\mathcal{E})\); (c) \(N^{r,s,t}(\mathcal{E})\). The parameter values are \(r=0.8\), \(s=0.1\) and \(t=0.99\).
For \(mm\)-fields we can define a Vietoris-Rips trifiltration, as follows.
**Definition 7.5**.: Let \(\mathcal{X}=(X,d_{X},\pi,\mu)\) be an \(mm\)-field over \(B\). We define the \(mm\)_-field Vietoris-Rips trifiltration_\(V\!R^{*,*,*}(\mathcal{X})\) by
\[V\!R^{r,s,t}(\mathcal{X}):=\{ \{A_{0},\ldots,A_{n}\}\colon A_{i}\subseteq X\text{ and }A_{i-1}\subseteq A_{i},\,\forall i,\] \[\operatorname{diam}(A_{i})\leq r,\,\operatorname{rad}(\pi(A_{i}))\leq s/2,\,\mu(A_{i})\geq 1-t\},\]
for \(r,s,t\geq 0\).
Note that if \(\mathcal{X}\) is a finite \(mm\)-field, then \(V\!R^{r,s,t}(\mathcal{X})\) is a full subcomplex of the barycentric subdivision of \(V\!R^{r,s}(\mathcal{X})\).
Figure 2 shows an example in which \(X\) is a weighted set of points sampled from a circle with the size of the dots indicating their weights. A scalar field is depicted through its contour lines. Panels (a), (b), and (c) compare the \(V\!R\)-complex \(V\!R^{r}(X)\), the \(m\)-field \(V\!R\)-complex \(V\!R^{r,s}(\mathcal{X})\), and the \(mm\)-field \(V\!R\)-complex \(V\!R^{r,s,t}(\mathcal{X})\), respectively, for parameter values \(r=1.5\), \(s=1\) and \(t=0.1\). Note how the simplices in \(V\!R^{r}(X)\) intersecting with many contour lines get removed in \(V\!R^{r,s}(\mathcal{X})\). As remarked above, \(V\!R^{r,s,t}(\mathcal{X})\) sits in the barycentric subdivision of \(V\!R^{r,s}(\mathcal{X})\) and simplices away from high weight regions get removed.
**Theorem 7.6**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be metric measure \(B\)-fields. Then,_
* \(d_{HI}(V\!R^{*,*}(\mathcal{X}),V\!R^{*,*}(\mathcal{Y}))\leq 2d_{GH}(\mathcal{X}, \mathcal{Y})\) _and_
* \(d_{HI}(V\!R^{*,*,*}(\mathcal{X}),V\!R^{*,*,*}(\mathcal{Y}))\leq 2d_{GP}(\mathcal{X},\mathcal{Y})\)_,_
_where \(d_{HI}\) denotes the homotopy interleaving distance [4]._
Proof.: (i) Let \(\phi:\mathcal{X}\to\mathcal{E}\) and \(\psi:\mathcal{Y}\to\mathcal{E}\) be isometric embeddings. Abusing notation, we write \(d_{E}(x,y):=d_{E}(\phi(x),\psi(y))\) for \(x\in X\), \(y\in Y\). Given \(\epsilon>d_{H}(\phi(X),\psi(Y))\), there exist functions \(f:X\to Y\) and \(g:Y\to X\) such that \(d_{E}(x,f(x))<\epsilon\) and \(d_{E}(g(y),y)<\epsilon\), for any \(x\in X\) and \(y\in Y\). Note that if \(A\subseteq X\), \(\operatorname{diam}(A)\leq r\) and \(\operatorname{rad}(\pi_{X}(A))\leq s/2\), then \(\operatorname{diam}(f(A))\leq r+2\epsilon\) and \(\operatorname{rad}(\pi_{Y}(f(A)))\leq s/2+\epsilon\). Hence \(f\) induces a morphism \(f_{*}:V\!R^{r,s}(\mathcal{X})\to V\!R^{r+2\epsilon,s+2\epsilon}(\mathcal{Y})\). Similarly, \(g\) induces a morphism \(g_{*}:V\!R^{r,s}(\mathcal{Y})\to V\!R^{r+2\epsilon,s+2\epsilon}(\mathcal{X})\). Let us show that \(g_{*}\circ f_{*}:V\!R^{r,s}(\mathcal{X})\to V\!R^{r+4\epsilon,s+4\epsilon}(\mathcal{X})\) is contiguous (hence homotopic) to the inclusion map. First note that if \(A\) is a simplex in \(V\!R^{r,s}(\mathcal{X})\), then \(\operatorname{diam}(N^{2\epsilon}(A,X))\leq r+4\epsilon\) and \(\operatorname{rad}(\pi_{X}(N^{2\epsilon}(A,X)))\leq s/2+2\epsilon\). Since \(A\cup g(f(A))\subseteq N^{2\epsilon}(A,X)\), this implies that \(A\cup g(f(A))\) is a simplex in \(V\!R^{r+4\epsilon,s+4\epsilon}(\mathcal{X})\). This shows the required contiguity. Similarly, \(f_{*}\circ g_{*}:V\!R^{r,s}(\mathcal{Y})\to V\!R^{r+4\epsilon,s+4\epsilon}(\mathcal{Y})\) is contiguous to the inclusion. This completes the proof of part (i).
(ii) Let \(\epsilon>d_{P}(\phi_{*}(\mu_{X}),\psi_{*}(\mu_{Y}))\). Given a closed subset \(A\) of \(X\), let \(F(A)=\psi^{-1}(N^{\epsilon}(\phi(A),E))\subseteq Y\). Similarly, given a closed subset \(A\) of \(Y\), let \(G(A)=\phi^{-1}(N^{\epsilon}(\psi(A),E))\subseteq X\). Note that \(\operatorname{diam}(F(A))\leq\operatorname{diam}(A)+2\epsilon\), \(\operatorname{rad}(F(A))\leq\operatorname{rad}(A)+\epsilon\) and \(\nu(F(A))\geq\mu(A)-\epsilon\). \(G\) satisfies the same inequalities. Hence, \(F\) induces a simplicial map \(F_{*}:V\!R^{r,s,t}(\mathcal{X})\to V\!R^{r+2\epsilon,s+2\epsilon,t+2\epsilon}(\mathcal{Y})\). Similarly, \(G\) induces a simplicial map. We show that \(G_{*}\circ F_{*}:V\!R^{r,s,t}(\mathcal{X})\to V\!R^{r+4\epsilon,s+4\epsilon,t+4\epsilon}(\mathcal{X})\) is homotopic to the inclusion. We do this by constructing simplicial homotopies between both maps and the simplicial map \(\Gamma:V\!R^{r,s,t}(\mathcal{X})\to V\!R^{r+4\epsilon,s+4\epsilon,t+4\epsilon}(\mathcal{X})\) given by \(\Gamma(A):=N^{2\epsilon}(A,X)\).
Figure 2: Simplicial complexes associated with a scalar field defined on a weighted finite set of points \(X\): (a) the \(V\!R\)-complex \(V\!R^{r}(X)\); (b) the \(m\)-field \(V\!R\)-complex \(V\!R^{r,s}(\mathcal{X})\); (c) the \(mm\)-field \(V\!R\)-complex \(V\!R^{r,s,t}(\mathcal{X})\). The parameter values are \(r=1.5\), \(s=1\) and \(t=0.1\).
We first construct a simplicial homotopy between the inclusion and \(\Gamma\). Let \(I\) denote the interval simplicial complex \(I:=\{\{0\},\{1\},\{0,1\}\}\). The simplices of the simplicial product \(V\!R^{r,s,t}(\mathcal{X})\times I\) are of the form \(\{(A_{0},i_{0}),\ldots,(A_{n},i_{n})\}\) where \(A_{0}\subseteq A_{1}\subseteq\cdots\subseteq A_{n}\subseteq X\), \(\{A_{0},\ldots,A_{n}\}\in V\!R^{r,s,t}(\mathcal{X})\), \(i_{0},\ldots,i_{n}\in\{0,1\}\), and \(i_{0}\leq i_{1}\leq\cdots\leq i_{n}\). If we let \(H((A,0))=A\) and \(H((A,1))=\Gamma(A)\), then this defines a simplicial map \(H:V\!R^{r,s,t}(\mathcal{X})\times I\to V\!R^{r+4\epsilon,s+4\epsilon,t+4\epsilon}(\mathcal{X})\), which gives the desired simplicial homotopy between the inclusion and \(\Gamma\).
Since \(G(F(A))\subseteq\Gamma(A)\), if we define \(H\) instead by \(H((A,0))=G(F(A))\) and \(H((A,1))=\Gamma(A)\), then it becomes a simplicial homotopy between \(G_{*}\circ F_{*}\) and \(\Gamma\). Therefore \(G_{*}\circ F_{*}\) is homotopic to the inclusion. Similarly, \(F_{*}\circ G_{*}\) is homotopic to the inclusion. This proves (ii).
## 8 Summary and Discussion
This paper studied functional data, termed fields, defined on geometric domains. More precisely, the objects of study were \(1\)-Lipschitz functions between Polish metric spaces with the domain possibly equipped with a Borel probability measure. We addressed foundational questions and developed new approaches to the analysis of datasets consisting of fields not necessarily defined on the same domains. We studied the basic properties of the Gromov-Hausdorff distance between compact fields and how it relates to the Hausdorff distance in a Urysohn universal field via isometric embeddings. Similarly, we investigated analogues of the Gromov-Prokhorov and Gromov-Wasserstein distances between fields on metric-measure domains.
We introduced a representation of metric-measure fields as probability distributions on the space of (countably) infinite augmented distance matrices and proved a Reconstruction Theorem that extends to \(mm\)-fields a corresponding result for \(mm\)-spaces due to Gromov. This provided a pathway to discrete representations of \(mm\)-fields via distributions of finite-dimensional augmented distance matrices for which we proved a convergence theorem. We also studied field analogues of the neighborhood and Vietoris-Rips filtrations and established stability results with respect to appropriate metrics.
Questions that are also of interest but fall beyond the scope of this paper include: (i) the study of the rate of convergence of the probabilistic model based on finite-dimensional augmented distance matrices; (ii) investigation of alternative cost functions in the formulation of the Gromov-Wasserstein distance between \(mm\)-fields; (iii) the development of computational models and algorithms derived from augmented distance matrices.
## Acknowledgements
This work was partially supported by NSF grant DMS-1722995.
|
2307.16602 | Proximitized insulators from disordered superconductors | We present an experimental study of bilayers of a disordered Ag metal layer
close to the metal-insulator transition and an Indium Oxide film which is on
the insulating side of the superconductor-insulator-transition. Our results
show that superconducting fluctuations within the indium-oxide film, that
proximitize the underlying metal layer, induce insulating rather than
superconducting behavior. This is ascribed to suppression of density of states
(due to the superconducting energy gap) for quasiparticles in the proximitized
regions. Our results present a novel manifestation of the proximity effect
phenomenon and provide important insight into the nature of the insulating
phase of the disorder driven superconductor-insulator-transition. | Moshe Haim, David Dentelski, Aviad Frydman | 2023-07-31T12:06:49Z | http://arxiv.org/abs/2307.16602v1 | # Proximitized insulators from disordered superconductors
###### Abstract
We present an experimental study of bilayers of a disordered \(Ag\) metal layer close to the metal-insulator transition and an Indium Oxide film which is on the insulating side of the superconductor-insulator-transition. Our results show that superconducting fluctuations within the indium-oxide film, that proximitize the underlying metal layer, induce _insulating_ rather than superconducting behavior. This is ascribed to suppression of density of states (due to the superconducting energy gap) for quasiparticles in the proximitized regions. Our results present a novel manifestation of the proximity effect phenomenon and provide important insight into the nature of the insulating phase of the disorder driven superconductor-insulator-transition.
The interplay between superconductivity and disorder is a very active topic of investigation. It was recognized decades ago that s-wave superconductivity is remarkably robust against weak disorder [1]. The situation is different for strong disorder. Experiments show that superconductivity in 2D films can be destroyed by strong enough disorder as well as other non-thermal tuning parameters, g, such as magnetic field, thickness, chemical composition, gate voltage and pressure [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. Once superconductivity is destroyed, the system undergoes a transition to an insulating state (for reviews see [34; 35]). This superconductor-insulator-transition (SIT) is a paradigmatic example for a quantum phase transition that can occur in systems driven by a non-thermal tuning parameter [36].
One of the ongoing deliberations in the field of the SIT is the nature of the insulating phase, \(I_{S}\). It has been shown both theoretically [37; 38; 39; 40] and experimentally [41] that in some nominally s-wave BCS superconductors, the presence of disorder can separate the temperature \(T^{*}\) where pairing occurs accompanied by the development of an energy gap in the local density of states [16; 42], and the actual \(T_{c}\) where the superfluid density becomes finite. The pseudogap region between these temperatures grows with disorder as the SIT is approached and eventually, in the insulating regime, a finite superconducting gap, \(\Delta\), exists even in the absence of superfluid density. Indeed, similar energy gap [41] (as well as vortex motion [43; 17] and Nernst signals [18]) were measured in both the superconductor and \(I_{S}\) phases of disordered films. These results led to the realization that a BCS superconductor can undergo a quantum phase transition to an insulating phase of bosons rather than unpaired electrons. Hence, \(I_{S}\), which is composed of uncorrelated superconducting fluctuations, shows a number of properties similar to those of a bulk superconductor despite being an electrical insulator. In this letter, we study a special aspect of the superconducting nature of \(I_{S}\), i.e. the proximity effect to a normal metal.
The classic proximity effect describes the mutual influence of two "clean" layers, one is a superconductor, \(SC\), and one is a normal metal, \(N\), placed in a good electric contact, resulting in the induction of finite superconductivity into the \(N\) and suppression of the superconducting order parameter in the \(SC\)[44; 45]. Experiments have shown [46; 47] that, when the superconductor is highly disordered, superconducting quantum fluctuations (even within the \(I_{S}\)) can induce superconductivity into a proximitized \(N\). In this letter we report on a more exotic effect which occurs when the \(N\) is highly disordered so that it is close to the metal-insulator-transition. Our main results are the following:
1. Placing a highly disordered metal in proximity to an \(I_{S}\) can induce _insulating_ behavior in the \(N\).
2. This effect becomes more prominent as the \(I_{S}\) is driven towards the SIT.
3. The effect is larger the more disordered is the \(N\) layer.
We provide a simple model to explain these results based on enhanced electronic localization in the disordered metal due to proximitized superconducting fluctuations in the normal region.
The samples for this study were prepared using the following scheme. Six \(Au\) leads were deposited on an insulating \(SiO\) substrate (gold pads in Fig 1a). Then, two \(10nm\) thick \(Ag\) strips were deposited between two sets of leads, to be used as the \(N\) proximity layer (grey strips in Fig. 1a). In order to increase the disorder of the silver, the samples were thinned in an \(Ar\) plasma chamber in short pulses for different amounts of time. Here we present results for three highly disordered \(Ag\) films, \(S1\), \(S2\) and \(S3\), having decreasing \(Ag\) room temperature sheet resistances of 250, 220 and 150 \(\Omega_{\square}\), respectively. Finally, a \(30nm\) thick layer of amorphous Indium Oxide (\(InO\)) was e-beam deposited in a \(10^{-4}mbar\) partial oxygen pressure resulting in a highly disordered, insulating film (purple layer in Fig 1a). The resistance of the \(InO\) film was sequentially reduced via low temperature
thermal annealing, thus driving the film through the insulator to superconductor transition [18; 48]. This setup takes advantage of the fact that the \(Ag\) layer is significantly more conductive than the \(InO\) layer, and allows us to use the \(Ag\) layer as a voltage terminal for four-probe resistance measurements of the bare \(InO\) film and two-probe measurements of the resistance of the \(Ag/InO\) bilayer as the \(InO\) is driven through the SIT.
\(InO\) films, despite being morphologically uniform [48; 49], have been shown to include emergent granularity in the form of superconducting puddles embedded in an insulating matrix [41; 43; 18; 50]. Hence, local superconductivity is present even in the insulating phase of the SIT. Fig. 1b shows the resistance vs temperature curves of the bare \(InO\) of \(S1\) for different annealing stages (sequentially reducing \(R_{\square}\)). The insulating curves are found to follow \(R=R_{0}e^{T_{0}/T}\) behavior. This is a typical feature of \(I_{S}\) insulators which are characterized by emergent granularity [51; 52; 53; 15]. Fig. 1c presents \(T_{0}\) versus \(R\) of the \(InO\) film, which is found to decrease as the sample approaches the SIT and extrapolates to zero close to (but beyond) it, consistent with previously reported works [15].
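The activation scale \(T_{0}\) in \(R=R_{0}e^{T_{0}/T}\) can be read off from a straight-line fit of \(\ln R\) against \(1/T\). The short sketch below illustrates the procedure on synthetic data; the numbers are made up and do not reproduce the measured curves of Fig. 1.

```python
import numpy as np

# synthetic illustration only: temperatures (K) and sheet resistances (Ohm per square)
T = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 10.0])
R = 5e4 * np.exp(3.2 / T)                          # fake data obeying R = R0 exp(T0 / T)

# ln R = ln R0 + T0 * (1/T), so a linear fit in 1/T gives slope T0 and intercept ln R0
slope, intercept = np.polyfit(1.0 / T, np.log(R), deg=1)
T0, R0 = slope, np.exp(intercept)
print(f"T0 = {T0:.2f} K, R0 = {R0:.3g} Ohm/sq")
```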
The key result of this work is presented in Fig. 2, which shows the resistance versus temperature curves for the three \(Ag/InO\) bilayers. As long as the \(InO\) is in the insulating phase, the resistance of such a bilayer is governed by the \(Ag\) layer, which has a much lower sheet resistance (\(\approx 200\Omega_{\square}\)) than that of the \(InO\) (\(\approx 100k\Omega_{\square}-100M\Omega_{\square}\)) for \(T\leq 10K\). Thus, when measuring the bilayer we are, in fact, measuring the \(Ag\) layer almost exclusively. We note that the annealing process may slightly affect the \(Ag\) resistance as well.
We start with considering the results of sample \(S1\) (having the highest \(Ag\) resistance) represented in Fig. 2a. Surprisingly, the addition of \(InO\) causes the resistance to _increase_ with decreasing temperature below \(\sim 10K\). This is very counter-intuitive since one naively expects that the \(InO\) would add conductivity in parallel and, if anything, would reduce the total resistance. Instead, the \(InO\) overlayer seems to be inducing insulating behavior in the underlying \(Ag\) film. This effect gets larger as the \(InO\) is driven towards the SIT, eventually exceeding a 50% amplitude increase before reversing the trend at low temperatures where the resistance starts decreasing with lowering temperature.
A similar effect, though with smaller magnitude, was seen for samples \(S2\) and \(S3\), having decreasing disorder respectively (Fig. 2b and c). Fig 2d shows the peak amplitude versus \(R_{\square}\) for the three samples, indicating that the resistance increase depends on two parameters. It is larger the more disordered the \(Ag\) film is and also the closer the \(InO\) is to the SIT.
In order to understand how inducing superconducting fluctuations in a \(N\) layer results in an increase of resistance we recall that our \(Ag\) films are highly disordered, close to being insulating themselves. The conductivity is thus inhomogeneous due to strong spatial fluctuations of the underlying electronic potential. The current does not flow uniformly through the sample but rather through
Figure 1: **a**: A sketch of the device containing six leads (gold), two silver strips (gray) and the \(InO\) layer (purple). **b**: Resistance vs temperature of the \(InO\) layer of sample \(S1\), for different stages of annealing (as noted in the legend), measured between the silver strips. **c**: \(T_{0}\) (extracted from the \(R=R_{0}e^{\frac{T_{0}}{T}}\) dependency) vs the 1K \(InO\) sheet resistance. The dashed line is a guide to the eye.
Figure 2: Resistance vs temperature of the three \(Ag/InO\) bilayer of samples \(S1\), \(S2\) and \(S3\) (panels **a**, **b** and **c**, respectively), for different stages of annealing. The colors signaling the annealing stage (see legend) apply for S2 and S3 as well, however, the sheet resistances of the InO film may differ between the samples. Black lines are plots for the bare \(Ag\) films. For clarity, the curves are normalized to the resistance at 10K. **d**: Resistance maximum vs the \(InO\) Resistance at 1K of samples S1 (black), S2 (red) and S3 (blue).
preferred high conductance trajectories as illustrated in the conductivity map of Fig. 3a.
Adding a disordered superconducting overlayer induces islands of \(SC\) regions in the \(Ag\) film (purple dots in Fig. 3b) at temperatures below \(T_{c}\). As the temperature is lowered, the density of superconducting regions increases. However, when global phase coherence is absent, the insertion of superconducting islands into a highly disordered metal can actually increase the resistivity. This is due to the formation of a local energy gap, \(\Delta\), within each island, which suppresses the density of states for quasiparticles and thus limits the current flow through these islands. Because the regions that are more prone to the proximity process are, naturally, those with higher conductivity, the current is restricted to one of two options: flowing through the high resistance trajectories (Fig. 3b) or through the puddles of zero resistance, but at an energy 'cost' of \(2\Delta\). Therefore, the sample can be viewed as a network of SIS junctions, where the experimentally observed gap overlays the individual local ones through which the current must tunnel. A similar process was suggested as the origin for the giant magnetoresistance peak observed in these materials at high fields and low temperatures [54, 55].
Lowering the disorder of the \(SC\) and pushing it towards the SIT (e.g. by annealing the \(InO\) layer) increases the density of superconducting puddles (Fig. 3c), thus further limiting the current carrying network and forcing the current to flow through higher resistance trajectories. This results in increasing the bilayer resistivity as the \(InO\) film is pushed towards the SIT as indeed seen in the experiment (Fig. 2d). In addition, reducing the \(N\) disorder smooths the potential background thus suppressing the above process.
The temperature onset of this unique proximity effect is \(\sim 10K\) which is significantly larger than the maximal \(T_{c}\) measured in \(InO\) films (\(\sim 3.5K\)[18]). However, STM measurements on a film of \(InO\) with \(T_{C}\approx 3K\) have detected a finite \(\Delta\) up to temperatures of \(\approx 6.5\) K [16]. In the insulator, \(\Delta\) is predicted to grow further and increase as disorder increases [39]. The real pairing critical temperature, \(T^{*}\), of the \(I_{s}\) phase of \(InO\), is yet unknown, but the results presented here show signs for superconductivity up to \(T\approx 10K\).
The schematic representation of the current flow through the \(Ag\) layer in Fig. 3a-c is modeled here by considering an \(L\times L\) square lattice, where each site can be either a superconductor or a normal metal. We consider only nearest-neighbor connections and assign a resistance \(r=1\) to each bond between two metallic sites. The total sheet resistance, \(R_{\square}\), is then calculated from the minimal-resistance path required for the current to flow from one side of the bilayer to the other, normalized by the size of the lattice. That is, for \(t=\frac{T}{T_{c}}\geq 1\), where all sites are metallic, \(R_{\square}=1\).
With decreasing temperature, proximitized superconducting islands start to form in the \(Ag\) layer. This is manifested by re-weighting all the bonds' resistances connecting a superconducting site to a metallic one by a factor \(Z\geq 1\) while all bonds between two neighbouring superconducting sites (within a SC island) are assigned a resistance \(r=0\), such that \(R_{\square}\to 0\) as the superconducting density, \(n_{sc}\), becomes large. The different sites are chosen randomly to be metallic or superconductors, depending on the value of \(n_{sc}(t)\in[0,1]\).
We include the effect of disorder by introducing sites
Figure 3: Illustration of the current paths (in orange) through the \(Ag\) film. **a**: The 2D conductivity map of the \(Ag\) film prior to the \(InO\) deposition. The current is carried by the most conductive parts (peaks in the 2D map). **b**: Adding an \(InO\) layer proximitizes different parts of \(Ag\) film and induces superconductivity, represented in purple. The current bypasses the SC regions due to suppression of the DOS in these locations. **c**: Annealing the \(InO\) leads to more sections of the \(Ag\) being proximitized thus further limiting the current paths and forcing them to choose less conductive routes, thus further increasing the resistance. **d**: Differential conductivity vs bias voltage of sample \(S1\) for the last insulating stage, normalized to the data at 150 meV. The inset shows the relative peak height and energy separation for different disorder degrees.
in the normal metal that prevent the formation of superconductivity. We assign a resistance \(r(t)=r_{0}e^{T_{0}/T}\) between a site within a superconducting or a metal region to a disordered one. Here, we use \(r_{0}=\exp(-1)\) and \(T_{0}=1.05\ T_{c}\), such that the high-temperature resistivity of the different samples is \(\approx 5\%\) higher than the clean one. The strength of the disorder is defined by the density of these sites, \(N_{d}\).
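A minimal implementation of this lattice model might look as follows. The assignment of site types by independent sampling with densities \(n_{sc}\) and \(N_{d}\), the rule that any bond touching a disordered site carries \(r(t)\), and the normalization by \(L-1\) (which reproduces \(R_{\square}=1\) for an all-metallic lattice) are our reading of the description above rather than details taken from the text.

```python
import numpy as np
import networkx as nx

def sheet_resistance(L, n_sc, N_d, Z, t, r0=np.exp(-1), T0_over_Tc=1.05, seed=0):
    """Cheapest left-to-right path on an L x L lattice, normalized so that an
    all-metallic lattice gives R_square = 1."""
    rng = np.random.default_rng(seed)
    # site labels: 0 = metal, 1 = superconductor, 2 = disordered
    labels = rng.choice([0, 1, 2], size=(L, L), p=[1 - n_sc - N_d, n_sc, N_d])
    r_dis = r0 * np.exp(T0_over_Tc / t)            # bond touching a disordered site

    def bond(a, b):
        if a == 2 or b == 2:
            return r_dis
        if a == 1 and b == 1:
            return 0.0                             # inside a superconducting island
        if a == 0 and b == 0:
            return 1.0                             # metallic bond
        return Z                                   # metal-superconductor interface

    G = nx.grid_2d_graph(L, L)
    for u, v in G.edges():
        G[u][v]["weight"] = bond(labels[u], labels[v])
    # virtual terminals attached with zero resistance to the left and right columns
    for i in range(L):
        G.add_edge("in", (i, 0), weight=0.0)
        G.add_edge((i, L - 1), "out", weight=0.0)
    return nx.dijkstra_path_length(G, "in", "out", weight="weight") / (L - 1)

print(sheet_resistance(L=50, n_sc=0.25, N_d=0.05, Z=3.0, t=0.5))
```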
Fig. 4a shows the resistance as a function of the reduced temperature \(t\) for a fixed value of \(N_{d}\) and different values of \(Z\), where we used the empirical approximate temperature dependence of the superconducting fraction: \(n_{sc}(t)=1-t^{0.4}\). For all values of disorder, decreasing the temperature increases the density of superconducting islands and hence increases \(R_{\square}\). This trend continues down to a disorder-dependent temperature, which marks the onset of global superconductivity, thus leading to the peak in the R-T curve as is found in the experiments. Note that the broad transition is also consistent with the experimental results. Fig. 4b shows the results for constant \(Z=3\) and different degrees of disorder, \(N_{d}\). It is seen that as the disorder increases, the resistivity peak is found to be higher, as indeed observed in the experiments.
The above picture is strongly supported by the differential conductance curve shown in Fig. 3d, which corresponds to an insulating stage of \(S1\). The overall \(dI/dV\) versus \(V\) curve exhibits a suppression of conductance as the bias voltage is lowered, due to the Altshuler-Aronov (AA) mechanism of electron-electron interactions [56] in the disordered \(Ag\) film (dashed curve). At low bias, the curve exhibits a superimposed structure which includes two symmetrical maxima that resemble the coherence peaks of a superconducting gap structure. The peak amplitude and energy scale grow as the sample is further annealed and pushed towards the SIT; however, they are only observed in the most disordered \(Ag\) film (\(S1\)), and when the overlayer \(InO\) film is close to the SIT, as seen in the inset. The energy scales extracted from these features are \(19.6\) and \(30.8\) meV for the two last insulating \(InO\) stages. Interestingly, these are integer multiples (\(14\) and \(22\) respectively) of \(1.4\) meV, which is the value of \(2\Delta\) for amorphous \(InO\)[41]. This is consistent with the suggested model of current flowing through a series of \(SIS\) junctions giving rise to a global gap-like structure, which is the sum of the individual local gaps on each of the SC islands and is superimposed on the AA trend.
The results presented in this paper demonstrate a new type of proximity effect. We show that inducing superconductivity into an \(N\) layer is not limited to 'clean' superconductors but can also be extended to an \(I_{S}\) phase. Moreover, in the case of a highly disordered \(N\), the proximity of a disordered metal to the \(I_{S}\) can induce insulating-like behavior thus reducing its conductance. This effect becomes more prominent, the larger the N disorder. Such a proximitized bilayer can also offer a useful tool to study the \(I_{S}\) deep into the insulating phase. Attempting to measure superconducting fluctuations in an insulating sample by transport is ineffective, since the exponentially increasing resistance screens local superconductivity. Tunneling measurement require a barrier that is much more resistive than the sample itself, limiting the measurement to samples that are close to the transition and at relatively high temperatures. In contrary, by coupling a film that is well within the insulating phase of the SIT to a normal metal, one can access transport and tunneling measurement of the coupled metal, regardless of how insulating the superconductor is. The interplay between the metal and the superconductor is quantified in our numerical model as a single parameter denoted as \(Z\), which can be extracted directly from simple tunneling
Figure 4: Sheet resistance \(R_{\square}(t)\) of an \(L=50\) square lattice, normalized by its value at \(T_{c}\), as a function of the reduced temperature \(t=T/T_{c}\). **a**: Fixed disorder density \(N_{d}=0.05\) for different values of \(Z\). As the temperature decreases, there is an interplay between the gain of a current passing through a zero-resistance superconducting island and the cost, \(Z\), to enter and exit the island. **b**: Fixed \(Z=3\) for samples with different disorder density, \(N_{d}\). The dashed lines mark the sheet resistance at \(T=T_{c}\).
measurements and can be studied for different samples and materials.
We are grateful for technical help from I. Volotsenko, R. Cohen, Y. Stein, A. Fried, A. Roy and M. Laav and useful discussions with J. Ruhman, N. Trivedi and T. Baturina. This work was supported by the US-Israel Binational Science Foundation (BSF) grant No. 2020331.
|
2307.00048 | Learned harmonic mean estimation of the marginal likelihood with
normalizing flows | Computing the marginal likelihood (also called the Bayesian model evidence)
is an important task in Bayesian model selection, providing a principled
quantitative way to compare models. The learned harmonic mean estimator solves
the exploding variance problem of the original harmonic mean estimation of the
marginal likelihood. The learned harmonic mean estimator learns an importance
sampling target distribution that approximates the optimal distribution. While
the approximation need not be highly accurate, it is critical that the
probability mass of the learned distribution is contained within the posterior
in order to avoid the exploding variance problem. In previous work a bespoke
optimization problem is introduced when training models in order to ensure this
property is satisfied. In the current article we introduce the use of
normalizing flows to represent the importance sampling target distribution. A
flow-based model is trained on samples from the posterior by maximum likelihood
estimation. Then, the probability density of the flow is concentrated by
lowering the variance of the base distribution, i.e. by lowering its
"temperature", ensuring its probability mass is contained within the posterior.
This approach avoids the need for a bespoke optimisation problem and careful
fine tuning of parameters, resulting in a more robust method. Moreover, the use
of normalizing flows has the potential to scale to high dimensional settings.
We present preliminary experiments demonstrating the effectiveness of the use
of flows for the learned harmonic mean estimator. The harmonic code
implementing the learned harmonic mean, which is publicly available, has been
updated to now support normalizing flows. | Alicja Polanska, Matthew A. Price, Alessio Spurio Mancini, Jason D. McEwen | 2023-06-30T18:00:02Z | http://arxiv.org/abs/2307.00048v3 | # Learned harmonic mean estimation of the marginal likelihood with normalizing flows
###### Abstract
Computing the marginal likelihood (also called the Bayesian model evidence) is an important task in Bayesian model selection, providing a principled quantitative way to compare models. The learned harmonic mean estimator solves the exploding variance problem of the original harmonic mean estimation of the marginal likelihood. The learned harmonic mean estimator learns an importance sampling target distribution that approximates the optimal distribution. While the approximation need not be highly accurate, it is critical that the probability mass of the learned distribution is contained within the posterior in order to avoid the exploding variance problem. In previous work a bespoke optimization problem is introduced when training models in order to ensure this property is satisfied. In the current article we introduce the use of normalizing flows to represent the importance sampling target distribution. A flow-based model is trained on samples from the posterior by maximum likelihood estimation. Then, the probability density of the flow is concentrated by lowering the variance of the base distribution, i.e. by lowering its "temperature", ensuring its probability mass is contained within the posterior. This approach avoids the need for a bespoke optimisation problem and careful fine tuning of parameters, resulting in a more robust method. Moreover, the use of normalizing flows has the potential to scale to high dimensional settings. We present preliminary experiments demonstrating the effectiveness of the use of flows for the learned harmonic mean estimator. The harmonic code implementing the learned harmonic mean, which is publicly available, has been updated to now support normalizing flows.
Bayesian model selection; harmonic mean estimator; normalizing flows. +
Footnote †: journal: Article
0000-0002-4000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-0000-00000-00000-0000-00000-00000-0000-00000-00000-00000-00000-00000-00000-00000-0000-0000-00000-0000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-00000-000000-00000-000000-00000-000000-00000-00000-00000-00000-00000-00000-000000-00000-00000-000000-00000-00000-000000-000000-00000-000000-000000-000000-000000-000000-000000-000000-000000-000000-000000-000000-000000-000000-000000-000000-000000-0000000-000000-0000000-0000000-0000000-00000000-00000000-000000000-00000000-00000000-0
In this work we introduce the use of normalizing flows [7] for the learned harmonic mean estimator, which addresses some limitations of the models used previously. The use of normalizing flows eliminates the need for bespoke training, resulting in a more robust and scalable approach, which is now implemented in the harmonic code. We first review the learned harmonic mean estimator, before describing how normalizing flows may be used for the estimator and their main advantages in this context. We then present a number of experiments that demonstrate the effectiveness of the use of flows in the learned harmonic mean estimator.
## 2 The harmonic mean estimator
Bayesian model selection requires the computation of the marginal likelihood given by
\[z=p(y\,|\,M)=\int\mathrm{d}\theta\,p(y\,|\,\theta,M)\,p(\theta\,|\,M)=\int\, \mathrm{d}\theta\mathcal{L}(\theta)\pi(\theta), \tag{1}\]
where \(y\) denotes observed data, \(\theta\) the parameters of interest, and \(M\) the model under consideration. We adopt the shorthand notation for the likelihood of \(\mathcal{L}(\theta)=p(y\,|\,\theta,M)\) and prior of \(\pi(\theta)=p(\theta\,|\,M)\).
The harmonic mean estimator was first proposed by [8], who showed that the marginal likelihood \(z\) can be estimated from the harmonic mean of the likelihood, given posterior samples. This follows by considering the expectation of the reciprocal of the likelihood with respect to the posterior distribution, leading to the following estimator of the reciprocal of the marginal likelihood \(\rho=1/z\):
\[\hat{\rho}=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{\mathcal{L}(\theta_{i})},\quad \theta_{i}\sim p(\theta\,|\,y), \tag{2}\]
where \(N\) specifies the number of samples \(\theta_{i}\) drawn from the posterior \(p(\theta\,|\,y)\). The marginal likelihood can then be estimated from its reciprocal straightforwardly [5]. It was soon realized that the original harmonic mean estimator can fail catastrophically [9], as it can suffer from an exploding variance.
The estimator can also be interpreted as importance sampling. Consider the reciprocal marginal likelihood, which may be expressed in terms of the prior and posterior as:
\[\rho=\int\,\mathrm{d}\theta\,\frac{1}{\mathcal{L}(\theta)}\,p(\theta|y)=\int \,\mathrm{d}\theta\,\frac{1}{z}\,\frac{\pi(\theta)}{p(\theta\,|\,y)}\,p(\theta \,|\,y). \tag{3}\]
It is clear the estimator has an importance sampling interpretation where the importance sampling target distribution is the prior \(\pi(\theta)\), while the sampling density is the posterior \(p(\theta|y)\), in contrast to typical importance sampling scenarios.
For importance sampling to be effective, one requires the sampling density to have fatter tails than the target distribution, i.e. to have greater probability mass in the tails of the distribution. Typically the prior has fatter tails than the posterior since the posterior updates our initial understanding of the underlying parameters \(\theta\) that are encoded in the prior, in the presence of new data \(y\). For the harmonic mean estimator the importance sampling density (the posterior) typically does not have fatter tails than the target (the prior) and so importance sampling is not effective. This explains why the original harmonic mean estimator can be problematic.
In [10] an arbitrary density \(\varphi(\theta)\) is introduced to relate the reciprocal of the marginal likelihood to the likelihood through the following expectation:
\[\rho=\mathbb{E}_{p(\theta|y)}\bigg{[}\frac{\varphi(\theta)}{\mathcal{L}( \theta)\pi(\theta)}\bigg{]}. \tag{4}\]
The above expression motivates the estimator:
\[\hat{\rho}=\frac{1}{N}\sum_{i=1}^{N}\frac{\varphi(\theta_{i})}{\mathcal{L}( \theta_{i})\pi(\theta_{i})},\quad\theta_{i}\sim p(\theta|y). \tag{5}\]
The normalised density \(\varphi(\theta)\) can be interpreted as an alternative importance sampling target distribution, hence we refer to this approach as the re-targeted harmonic mean estimator. Note that the original harmonic mean estimator is recovered for the target distribution \(\varphi(\theta)=\pi(\theta)\).
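In practice Eq. (5) is evaluated in log space for numerical stability. The following sketch is illustrative only (the function and variable names are not taken from the harmonic package); the sanity check uses the optimal target \(\varphi\) equal to the normalized posterior, for which the estimator has zero variance.

```python
import numpy as np
from scipy.special import logsumexp

def log_evidence_retargeted(log_L, log_prior, log_phi):
    """Estimate log z from posterior samples via Eq. (5): rho_hat is the sample mean of
    phi(theta_i) / (L(theta_i) pi(theta_i)), and z = 1 / rho."""
    log_terms = log_phi - log_L - log_prior        # one entry per posterior sample
    log_rho_hat = logsumexp(log_terms) - np.log(len(log_terms))
    return -log_rho_hat                            # log z = -log rho

# sanity check: standard normal likelihood, flat prior on [-10, 10], optimal target phi
rng = np.random.default_rng(1)
theta = rng.normal(size=10_000)                    # exact posterior samples
log_L = -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)
log_prior = np.full_like(theta, -np.log(20.0))
log_phi = log_L.copy()                             # normalized N(0, 1) equals the posterior here
print(log_evidence_retargeted(log_L, log_prior, log_phi), -np.log(20.0))
```

The two printed values agree, since the true evidence of this toy problem is essentially \(1/20\).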
The learned harmonic mean estimator is introduced in [5], where the target density \(\varphi(\theta)\) is learned by machine learning techniques. It is shown in [5] that the optimal target distribution is the posterior. Since the target must be normalized, the normalized posterior is clearly not accessible since its normalizing constant is precisely the term of interest. The learned harmonic mean approximates the optimal target of the posterior with a learned model that is normalized. While the approximation need not be highly accurate, it is critical that the probability mass of the learned distribution is contained within the posterior in order to avoid the exploding variance problem. In [5] a bespoke optimization problem is introduced when training models in order to ensure this property is satisfied. Specifically, the model is fitted by minimizing the variance of the resulting estimator, while ensuring it is also unbiased, and with possible regularization. Such an approach requires a careful selection of an appropriate model and its hyperparameters for a problem at hand, determined by cross-validation. Furthermore, only simple classical machine learning models were considered in [5], which in many cases struggle to scale to high-dimensional settings.
## 3 Learning the target distribution using normalizing flows
In this paper we learn the target distribution of the learned harmonic mean estimator [5] using normalizing flows. Using normalizing flows renders the previous bespoke approach to training no longer necessary since it provides an elegant way to ensure the probability mass of the learned distribution is contained within the posterior, thereby resulting in a learned harmonic mean estimator that is more flexible and robust. Futhermore, normalizing flows also offer the potential to scale to higher dimensional settings. We first introduce normalizing flows, before describing how they may be used for the learned harmonic mean estimator and their main advantages in this context.
### Normalizing flows
Normalizing flows are a class of probabilistic models that allow one to evaluate the density of and sample from a learned probability distribution (for a review see [7]). They consist of a series of transformations that are applied to a simple base distribution. A vector \(\theta\) of an unknown distribution \(p(\theta)\), can be expressed through a transformation \(T\) of a vector \(z\) sampled from a base distribution \(q(z)\):
\[\theta=T(z),\text{ where }z\sim q(z). \tag{6}\]
Typically the base distribution is chosen so that its density can be evaluated simply and that it can be sampled from easily. Often a Gaussian is used for the base distribution. The unknown distribution can then be recovered by the change of variables formula:
\[p(\theta)=q(z)|\det J_{T}(z)|^{-1}, \tag{7}\]
where \(J_{T}(z)\) is the Jacobian corresponding to transformation \(T\). In a flow-based model \(T\) consists of a series of learned transformations that are each invertible and differentiable, so that the full transformation is also invertible and differentiable. This allows us to compose multiple simple transformations with learned parameters, into what is called a flow, obtaining a normalized approximation of the unknown distribution that we can
sample from and evaluate. Careful attention is given to construction of the transformations such that the determinant of the Jacobian can be computed easily.
A relatively simple example of a normalizing flow is the real-valued non-volume preserving (real NVP) flow introduced in [11]. It consists of a series of bijective transformations given by affine coupling layers. Consider the \(D\) dimensional input \(z\), split into elements up to and following \(d\), respectively, \(z_{1:d}\) and \(z_{d+1:D}\), for \(d<D\). Given input \(z\), the output \(y\) of an affine couple layer is calculated by
\[y_{1:d}= z_{1:d}; \tag{8}\] \[y_{d+1:D}= z_{d+1:D}\odot\exp\bigl{(}s(z_{1:d})\bigr{)}+t(z_{1:d}), \tag{9}\]
where \(\odot\) denotes Hadamard (elementwise) multiplication. The scale \(s\) and translation \(t\) are typically represented by neural networks with learnable parameters that take as input \(z_{1:d}\). This construction is easily invertible and ensures the Jacobian is a lower-triangular matrix, making its determinant efficient to calculate.
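A bare-bones numpy sketch of a single coupling layer implementing Eqs. (8)-(9), with toy two-layer networks (biases omitted) standing in for \(s\) and \(t\), is given below; the parameter shapes and initialization are illustrative assumptions, not the configuration used later in the experiments.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

class AffineCoupling:
    """One real NVP coupling layer: y_{1:d} = z_{1:d} and
    y_{d+1:D} = z_{d+1:D} * exp(s(z_{1:d})) + t(z_{1:d})."""
    def __init__(self, d, D, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.d = d
        out = D - d
        self.Ws = [rng.normal(0, 0.1, (d, hidden)), rng.normal(0, 0.1, (hidden, out))]
        self.Wt = [rng.normal(0, 0.1, (d, hidden)), rng.normal(0, 0.1, (hidden, out))]

    def _net(self, W, x):                          # toy two-layer network, biases omitted
        return leaky_relu(x @ W[0]) @ W[1]

    def forward(self, z):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self._net(self.Ws, z1), self._net(self.Wt, z1)
        y = np.concatenate([z1, z2 * np.exp(s) + t], axis=1)
        return y, s.sum(axis=1)                    # log|det J| = sum of s (triangular Jacobian)

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self._net(self.Ws, y1), self._net(self.Wt, y1)
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=1)

layer = AffineCoupling(d=1, D=2)
z = np.random.default_rng(2).normal(size=(5, 2))
y, log_det = layer.forward(z)
print(np.allclose(layer.inverse(y), z))            # invertibility check: True
```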
### Concentrating the probability density for the learned harmonic mean estimator
Normalizing flows meet the core requirements of the learned target distribution of the learned harmonic mean estimator: namely, they provide a normalized probability distribution for which one can evaluate probability densities. In this work we use them to introduce an elegant way to ensure the probability mass of the learned distribution is contained within the posterior. We thereby avoid the exploding variance issue of the original harmonic mean estimator and can evaluate the marginal likelihood accurately without the need for fine-tuning.
Reducing the variance of the base distribution, or equivalently lowering its "temperature" in a statistical mechanics perspective, clearly concentrates the probability density of the base distribution. This has the effect of also concentrating the probability density of the transformed distribution due to the continuity and differentiability of the flow. Consequently, once a flow is trained to approximate the posterior, by lowering the temperature of the base distribution (i.e. reducing its variance) we can concentrate the learned distribution to ensure its probability mass is contained within the posterior, as illustrated in Figure 1.
The learned distributions considered previously for the learned harmonic mean estimator [5] required the introduction of a bespoke optimization problem in
Figure 1: Diagram illustrating the concentration of the probability density of a normalizing flow. The flow is trained on samples from the posterior, giving us a normalized approximation of the posterior distribution. The temperature of the base distribution \(T\in(0,1)\) is reduced, which concentrates the probability density of the transformed distribution ensuring that it is contained within the posterior. The concentrated flow can then be used as the target distribution for the learned harmonic mean estimator, avoiding the exploding variance issue of the original harmonic mean estimator.
order to ensure the learned target is contained within the posterior. This requires careful selection of an appropriate model and its hyperparameters, determined by cross-validation. The introduced normalizing flow approach renders bespoke training no longer necessary. Instead, we train a flow in the usual manner, based on maximum likelihood estimation, before concentrating its probability density. There is only one parameter to consider, the temperature \(T\in(0,1)\). Moreover, we expect a common value of \(T\sim 0.9\) to be suitable for most problems. Once we have a flow with its probability density concentrated for \(\varphi(\theta)\), the learned harmonic mean estimator can be computed in the usual manner [5]. Using normalizing flows with the learned harmonic mean thus provides a much more robust method. Furthermore, an added benefit of using flows is that we can draw samples from the flow distribution efficiently, in order to easily visualize the concentrated target distribution and compare it to the posterior.
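Concretely, once a flow has been trained, the concentrated target can be evaluated by replacing the standard Gaussian base with a narrower one. The sketch below assumes the variance (rather than the standard deviation) is scaled by the temperature \(T\in(0,1)\); the exact convention used in the harmonic package is not asserted here, and the helper names are illustrative.

```python
import numpy as np

def concentrated_log_density(theta, flow_inverse, temperature):
    """log phi_T(theta) for a flow whose Gaussian base has its variance scaled by T.
    `flow_inverse` maps theta to (z, log|det J|) of the inverse transformation."""
    z, log_det_inv = flow_inverse(theta)
    D = z.shape[-1]
    log_base = (-0.5 * np.sum(z**2, axis=-1) / temperature
                - 0.5 * D * np.log(2 * np.pi * temperature))
    return log_base + log_det_inv

# with the identity flow as a stand-in, phi_T is just the density of N(0, T I)
identity_flow = lambda x: (x, np.zeros(x.shape[0]))
print(concentrated_log_density(np.array([[0.5, -0.2]]), identity_flow, temperature=0.9))
```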
In this preliminary work we consider real NVP flows only, as described above, which are implemented in the harmonic code. In the future we will consider more expressive and scalable flows. The use of normalizing flows for the learned distribution therefore has the potential to extend the learned harmonic mean estimator to problems with complex, high-dimensional posteriors.
## 4 Experiments
To validate the effectiveness of the method described in Section 3, we repeat a number of the numerical experiments carried out in [5] but using normalizing flows as the target distribution of the learned harmonic mean estimator. In the experiments that follow we consider a real NVP flow where the scale and translation networks of the affine coupling layers are given by two layer dense neural networks with a leaky ReLU in between. The scaling layers additionally include a proceeding softplus activation. We typically consider a flow with six coupling layers, where typically only the first two include scaling, and permute elements of the vector between coupling layers to ensure the flow transforms all elements. We consider a Gaussian base distribution with unit variance. We use the emcee package [12] to generate MCMC samples from the posterior. We then train the real NVP flow on half of the samples by maximum likelihood and calculate the marginal likelihood using the remaining samples by the learned harmonic mean estimator with the flow concentrated to temperature \(T\). In all experiments we consider an identical temperature of \(T=0.9\), which works well throughout, demonstrating that \(T\) does not require fine-tuning. We
Figure 2: Corner plot of samples from the posterior (red) and real NVP flow with temperature \(T=0.9\) (blue) for the Rosenbrock benchmark problem. The target distribution given by the concentrated flow is contained within the posterior and has thinner tails, as required for the learned harmonic mean estimator.
consider a relatively simple flow in this preliminary work and a small number of simple experiments. In future work we will consider more expressive and scalable flows, and further experiments to thoroughly evaluate the robustness and scalability of the method.
### Rosenbrock
A common benchmark problem to test methods that compute the marginal likelihood is a likelihood specified by the Rosenbrock function, which exhibits a narrow curving degeneracy. We consider the Rosenbrock likelihood in \(d=2\) dimensions and a simple uniform prior with \(x_{0}\in[-10,10]\) and \(x_{1}\in[-5,15]\). We sample the resulting posterior distribution, drawing 5,000 samples for 200 chains, with burn in of 2,000 samples, yielding 3,000 posterior samples per chain. Figure 2 shows a corner plot of the training samples from the posterior (red) and from the normalizing flow (blue) at temperature \(T=0.9\). It can be seen that the concentrated flow approximates the posterior well and has thinner tails, as required for the marginal likelihood estimate to be stable and accurate.
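For reference, posterior sampling for this example can be set up with emcee along the following lines. The specific form of the Rosenbrock log-likelihood (and the flat-prior constant) is a common convention assumed here rather than a detail taken from [5]; as described above, half of the resulting samples would be used to train the flow and the other half to evaluate the estimator.

```python
import numpy as np
import emcee

def log_likelihood(x):
    # a common Rosenbrock convention (assumed here)
    return -((1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2)

def log_posterior(x):
    if not (-10.0 <= x[0] <= 10.0 and -5.0 <= x[1] <= 15.0):
        return -np.inf                             # outside the uniform prior support
    return log_likelihood(x) - np.log(20.0 * 20.0)

nwalkers, ndim, nsteps, nburn = 200, 2, 5_000, 2_000
p0 = np.random.default_rng(3).normal([1.0, 1.0], 0.1, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps)
samples = sampler.get_chain(discard=nburn, flat=True)   # 200 chains x 3,000 draws each
print(samples.shape)
```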
This process is repeated 100 times and the marginal likelihood is computed for each trial. Figure 3 shows a summary of the estimates across all the runs. The dashed red line in Figure 3(a) indicates the ground truth computed through numerical integration, which is tractable in two dimensions. It can be seen that the learned harmonic mean estimator using a real NVP flow provides an accurate and unbiased estimate of the marginal likelihood.
### Normal-Gamma
We consider the Normal-Gamma example for which the marginal likelihood can be computed analytically [2; 5]. It was found that the marginal likelihood values computed by the original harmonic mean estimator do not vary with a varying prior [2], highlighting this example as a pathological failure of the original harmonic mean estimator. We consider the same pathological example here and demonstrate that our learned harmonic mean estimator with normalizing flows is highly accurate (as is the learned harmonic mean with other models; [5]). We consider the Normal-Gamma model [5; 13] with data \(y_{i}\sim\mathrm{N}(\mu,\tau^{-1})\), for \(i\in\{1,\dots,n\}\), with mean \(\mu\) and precision (inverse variance) \(\tau\). A normal prior is assumed for \(\mu\) and a Gamma prior for \(\tau\):
\[\mu\sim\mathrm{N}\big{(}\mu_{0},(\tau_{0}\tau)^{-1}\big{)},\ \tau\sim\mathrm{Ga}(a_{0},b_{0}), \tag{10}\]
Figure 3: Marginal likelihood computed by the learned harmonic mean estimator with a concentrated flow for the Rosenbrock benchmark problem. 100 experiments are repeated to recover empirical estimates of the statistics of the estimator. In panel (a) the distribution of marginal likelihood values is shown (measured), along with the estimate of the standard deviation computed by the error estimator (estimated). The ground truth is indicated by the red dashed line. In panel (b) the distribution of the variance estimator is shown (measured), along with the standard deviation computed by the variance-of-variance estimator (estimated). The learned harmonic mean estimator and its error estimators are highly accurate.
with mean \(\mu_{0}=0\), shape \(a_{0}=10^{-3}\) and rate \(b_{0}=10^{-3}\). The precision scale factor \(\tau_{0}\) is varied to observe the impact of changing prior on the computed marginal likelihood.
We draw 1,500 samples for 200 chains, with burn in of 500 samples, yielding 1,000 posterior samples per chain. Figure 4 shows a corner plot of the training samples from the posterior for \(\tau=0.001\) (red) and from the normalizing flow (blue) at temperature \(T=0.9\). Again, it can be seen the concentrated learned target is close to the posterior but with thinner tails, as expected. We consider priors with \(\tau\in\{10^{-4},10^{-3},10^{-2},10^{-1},1\}\). Figure 5 shows the relative accuracy of the marginal likelihood computed by the learned harmonic mean estimator using normalizing flows, that is the ratio of the estimated marginal likelihood to the analytic ground truth. We additionally consider a concentrated flow with \(T=0.95\) to demonstrate that accuracy is not highly dependent on the temperature parameter. It can be seen that the estimate remains accurate and is indeed sensitive to the prior for both temperatures. The estimates for the flow with \(T\) closer to one have a slightly lower variance, as one would expect, since the broader target \(\varphi\) makes more efficient use of samples.
### Logistic regression models: Pima Indian example
We consider the comparison of two logistic regression models using the Pima Indians data, which is another common benchmark problem for comparing estimators of the marginal likelihood. The original harmonic mean estimator has been shown to fail catastrophically for this example [2]. The Pima Indians data [14], originally from the National Institute of Diabetes and Digestive and Kidney Diseases, were compiled from a study of indicators of diabetes in \(n=532\) Pima Indian women aged 21 or over. Seven primary predictors of diabetes were recorded, including: number of prior pregnancies (NP); plasma glucose concentration (PGC); diastolic blood pressure (BP); triceps skin fold thickness (TST); body mass index (BMI); diabetes pedigree function (DP); and age (AGE). The probability of diabetes \(p_{i}\) for person \(i\in\{1,\ldots,n\}\) can be modelled by the logistic function. An independent multivariate Gaussian prior with precision \(\tau=0.01\) is assumed for parameters \(\theta\). Two different logistic regression models are compared, with different subsets of covariates:
\[\text{Model }M_{1}:\text{ covariates = {\{NP, PGC, BMI, DP\}} (\text{and bias});}\] \[\text{Model }M_{2}:\text{ covariates = {\{NP, PGC, BMI, DP, AGE\}} (\text{and bias}).}\]
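For illustration, the unnormalized log-posterior targeted by the sampler for either model can be written as below. The design matrix here is a random stand-in (the actual Pima covariates, their ordering and any standardization are not reproduced), and the prior normalization constant is included only for completeness.

```python
import numpy as np

def make_log_posterior(X, y, tau=0.01):
    """Unnormalized log-posterior for logistic regression with an independent
    N(0, 1/tau) prior on every parameter (bias included as a column of ones in X)."""
    def log_posterior(theta):
        logits = X @ theta
        # Bernoulli log-likelihood written stably: y * logit - log(1 + exp(logit))
        log_like = np.sum(y * logits - np.logaddexp(0.0, logits))
        log_prior = -0.5 * tau * np.sum(theta**2) + 0.5 * len(theta) * np.log(tau / (2 * np.pi))
        return log_like + log_prior
    return log_posterior

# random stand-in for the Model M1 design matrix (bias + 4 covariates, n = 532)
rng = np.random.default_rng(4)
X = np.c_[np.ones(532), rng.normal(size=(532, 4))]
y = rng.integers(0, 2, size=532)
log_post = make_log_posterior(X, y)
print(log_post(np.zeros(X.shape[1])))
```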
Figure 4: Corner plot of samples from the posterior (red) and real NVP flow trained on the posterior samples with temperature \(T=0.9\) (blue) for the Normal-Gamma example with \(\tau=0.001\). The target distribution given by the concentrated flow is contained within the posterior and has thinner tails, as required for the learned harmonic mean estimator.
A reversible jump algorithm [15] is used by [2] to compute a benchmark Bayes factor \(\text{BF}_{12}\) of \(13.96\) (\(\log\text{BF}_{12}=2.6362\)) which is treated as ground truth.
We draw 5,000 samples for 200 chains, with burn-in of 1,000 samples, yielding 4,000 posterior samples per chain. We train a flow consisting of 6 scaled layers followed by 2 unscaled ones. Figure 6 shows a corner plot of the training samples from the posterior (red) and from the normalizing flow (blue) at temperature \(T=0.9\). Again, it can be seen the concentrated learned target is close to the posterior but with thinner tails, as expected. We compute the marginal likelihood for Model 1 and Model 2 using our learned harmonic mean estimator. The log evidence found for Model 1 and 2 is \(-257.2300\pm 0.0020\) and \(-259.8602\pm 0.0031\) respectively, resulting in the estimate \(\log\text{BF}_{12}=2.6302\pm 0.0051\), which is in close agreement with the benchmark.
## 5 Conclusions
In this work we propose using normalizing flows for the learned harmonic mean estimator of the marginal likelihood. The flow may be fitted to posterior samples by the usual maximum likelihood estimation. Its probability density may then be concentrated by lowering the temperature of the base distribution, ensuring the probability mass of the transformed distribution is contained within the posterior to avoid the exploding variance issue of the original harmonic mean estimator. The use of flows therefore results in a more robust learned harmonic mean estimator. We perform a number of experiments to compute the marginal likelihood with the proposed approach, using a real NVP flow, finding excellent agreement with ground truth values. In this preliminary work we consider only simple real NVP flows and a simple set of experiments. In a follow-up article we will consider more expressive and scalable flows to address problems with complex, high-dimensional posteriors. We will also perform a more extensive set of numerical experiments to thoroughly assess performance. This preliminary work nevertheless suggests the learned harmonic mean estimator with normalising flows provides an effective technique to compute the marginal likelihood for Bayesian model selection. Furthermore, it is applicable for any MCMC sampling technique or variational inference approach.
Author contributions: Conceptualization, A.P., M.A.P. and J.D.M.; methodology, A.P., M.A.P. and J.D.M.; software, A.P., M.A.P. and J.D.M.; validation, A.P., M.A.P. and J.D.M.; investigation, A.P.; resources, J.D.M.; writing--original draft preparation, A.P. and J.D.M.; writing--review and editing, A.P., M.A.P., A.S.M. and J.D.M.; visualization, A.P.; supervision, M.A.P., A.S.M. and J.D.M.; funding acquisition, J.D.M. All authors have read and agreed to the published version of the manuscript.
Figure 5: Ratio of marginal likelihood values computed by the learned harmonic mean estimator with a concentrated flow to those computed analytically for the Normal-Gamma problem. Error bars corresponding to the estimated standard deviation of the learned harmonic mean estimator are also shown. Notice that the marginal likelihood values computed by the learned harmonic mean estimator are highly accurate and are indeed sensitive to changes in the prior. Predictions made with the flow at temperature \(T=0.9\) (blue) and \(T=0.95\) (green) are shown, which are slightly offset for ease of visualization, demonstrating that accuracy is not highly sensitive to the choice of \(T\). |
2309.06195 | Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding | Solving linear inverse problems plays a crucial role in numerous
applications. Algorithm unfolding based, model-aware data-driven approaches
have gained significant attention for effectively addressing these problems.
Learned iterative soft-thresholding algorithm (LISTA) and alternating direction
method of multipliers compressive sensing network (ADMM-CSNet) are two widely
used such approaches, based on ISTA and ADMM algorithms, respectively. In this
work, we study optimization guarantees, i.e., achieving near-zero training loss
with the increase in the number of learning epochs, for finite-layer unfolded
networks such as LISTA and ADMM-CSNet with smooth soft-thresholding in an
over-parameterized (OP) regime. We achieve this by leveraging a modified
version of the Polyak-Lojasiewicz, denoted PL$^*$, condition. Satisfying the
PL$^*$ condition within a specific region of the loss landscape ensures the
existence of a global minimum and exponential convergence from initialization
using gradient descent based methods. Hence, we provide conditions, in terms of
the network width and the number of training samples, on these unfolded
networks for the PL$^*$ condition to hold. We achieve this by deriving the
Hessian spectral norm of these networks. Additionally, we show that the
threshold on the number of training samples increases with the increase in the
network width. Furthermore, we compare the threshold on training samples of
unfolded networks with that of a standard fully-connected feed-forward network
(FFNN) with smooth soft-thresholding non-linearity. We prove that unfolded
networks have a higher threshold value than FFNN. Consequently, one can expect
a better expected error for unfolded networks than FFNN. | Shaik Basheeruddin Shah, Pradyumna Pradhan, Wei Pu, Ramunaidu Randhi, Miguel R. D. Rodrigues, Yonina C. Eldar | 2023-09-12T13:03:47Z | http://arxiv.org/abs/2309.06195v1 | # Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding
###### Abstract
Solving linear inverse problems plays a crucial role in numerous applications. Algorithm unfolding based, model-aware data-driven approaches have gained significant attention for effectively addressing these problems. Learned iterative soft-thresholding algorithm (LISTA) and alternating direction method of multipliers compressive sensing network (ADMM-CSNet) are two widely used such approaches, based on ISTA and ADMM algorithms, respectively. In this work, we study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs, for finite-layer unfolded networks such as LISTA and ADMM-CSNet with smooth soft-thresholding in an over-parameterized (OP) regime. We achieve this by leveraging a modified version of the Polyak-Lojasiewicz, denoted PL\({}^{*}\), condition. Satisfying the PL\({}^{*}\) condition within a specific region of the loss landscape ensures the existence of a global minimum and exponential convergence from initialization using gradient descent based methods. Hence, we provide conditions, in terms of the network width and the number of training samples, on these unfolded networks for the PL\({}^{*}\) condition to hold. We achieve this by deriving the Hessian spectral norm of these networks. Additionally, we show that the threshold on the number of training samples increases with the increase in the network width. Furthermore, we compare the threshold on training samples of unfolded networks with that of a standard fully-connected feed-forward network (FFNN) with smooth soft-thresholding non-linearity. We prove that unfolded networks have a higher threshold value than FFNN. Consequently, one can expect a better expected error for unfolded networks than FFNN.
Optimization Guarantees, Algorithm Unfolding, LISTA, ADMM-CSNet, Polyak-Lojasiewicz condition
## I Introduction
Linear inverse problems are fundamental in many engineering and science applications [1, 2], where the aim is to recover a vector of interest, or target vector, from an observation vector. Existing approaches to address these problems can be categorized into two types: model-based and data-driven. Model-based approaches use mathematical formulations that represent knowledge of the underlying model, which connects observation and target information. These approaches are simple, computationally efficient, and require accurate model knowledge for good performance [3, 4]. In data-driven approaches, a machine learning (ML) model, e.g., a neural network, with a training dataset, i.e., a supervised setting, is generally considered. Initially, the model is trained by minimizing a certain loss function. Then, the trained model is used on unseen test data. Unlike model-based methods, data-driven approaches do not require underlying model knowledge. However, they require a large amount of data and huge computational resources while training [3, 4].
By utilizing both domains' knowledge, i.e., the mathematical formulation of the model and ML ability, a new approach, called model-aware data-driven, has been introduced [5, 6]. This approach involves the construction of a neural network architecture based on an iterative algorithm, which solves the optimization problem associated with the given model. This process is called algorithm unrolling or unfolding [6]. It has been observed that the performance, in terms of accurate recovery of the target vector, training data requirements, and computational complexity, of model-aware data-driven networks is better when compared with existing techniques [5, 7]. Learned iterative soft-thresholding algorithm (LISTA) and alternating direction method of multipliers compressive sensing network (ADMM-CSNet) are two popular unfolded networks that have been used in many applications such as image compressive sensing [7], image deblurring [8], image super-resolution [9], super-resolution microscopy [10], clutter suppression in ultrasound [11], power system state estimation [12], and many more.
Nevertheless, the theoretical studies supporting these unfolded networks remain to be established. There exist a few theoretical studies that address the challenges of generalization [13, 14, 15] and convergence rate [16, 17, 18] in unfolded networks. For instance, in [13], the authors showed that unfolded networks exhibit higher generalization capability compared with standard ReLU networks by deriving an upper bound on the generalization and estimation errors. In [16, 17, 18] the authors examined the LISTA network convergence to the ground truth as the number of layers increases i.e., layer-wise convergence (which is analogous to iteration-wise convergence in the ISTA algorithm). Furthermore, in [16, 17, 18], the network weights are not learned but are calculated in an analytical way (by solving a data-free optimization problem). Thus, the network only learns a few parameters, like threshold, step size, etc., from the available data. In this work, we study guarantees to achieve near-zero training loss with an increase in the number of learning epochs, i.e., _optimization guarantees_, by using gradient descent (GD) for both LISTA and ADMM-CSNet
with smooth activation in an over-parameterized regime. Note that our work differs from [16, 17, 18], as we focus on the convergence of training loss with the increase in the number of epochs by fixing the number of layers in the network.
In classical ML theory, we aim to minimize the expected/test risk by finding a balance between under-fitting and over-fitting, i.e., achieving the bottom of the classical U-shaped test risk curve [19]. However, modern ML results establish that large models that fit the training data exactly, i.e., interpolate, _often_ show high test accuracy even in the presence of noise [20, 21, 22, 23, 24, 25]. Recently, ML practitioners proposed a way to numerically reconcile classical and modern ML practice through a performance curve called the double-descent test risk curve [20, 21, 23, 24], which is depicted in Fig. 1. This curve shows that increasing the model capacity (e.g., the number of model parameters) until interpolation results in the classical U-shaped risk curve; further increasing it beyond the interpolation point reduces the test risk. Thus, understanding the conditions, as a function of the training data, that allow perfect data fitting is crucial.
Neural networks can be generally categorized into under-parameterized (UP) and over-parameterized (OP), based on the number of trainable parameters and the number of training data samples. If the number of trainable parameters is less than the number of training samples, then the network is referred to as an UP model, else, referred to as an OP model. The loss landscape of both UP and OP models is generally non-convex. However, OP networks satisfy _essential non-convexity_[26]. Particularly, the loss landscape of an OP model has a non-isolated manifold of global minima with non-convexity around any small neighborhood of a global minimum. Despite being highly non-convex, GD based methods work well for training OP networks [27, 28, 29, 30]. Recently, in [26, 31], the authors provided a theoretical justification for this. Specifically, they proved that the loss landscape, corresponding to the squared loss function, of a typical smooth OP model holds the modified version of the Polyak-Lojasiewicz condition, denoted PL\({}^{*}\), on most of the parameter space. Indeed, a necessary (but not sufficient) condition to satisfy the PL\({}^{*}\) is that the model should be in OP regime. Satisfying PL\({}^{*}\) on a region in the parameter space guarantees the existence of a global minimum in that region, and exponential convergence to the global minimum from the Gaussian initialization using simple GD.
Motivated by the aforementioned PL\({}^{*}\)-based mathematical framework of OP networks, in this paper, we analyze optimization guarantees of finite-layer OP based unfolded ISTA and ADMM networks. Moreover, as the analysis of PL\({}^{*}\) depends on the double derivative of the model [26], we consider a smooth version of the soft-thresholding as an activation function. The major contributions of the paper are summarized as follows:
* As the linear inverse problem aims to recover a vector, we initially extend the gradient-based optimization analysis of the OP model with a scalar output, proposed in [26], to a vector output. In the process, we prove that a necessary condition to satisfy PL\({}^{*}\) is \(P\gg mT\), where \(P\) denotes the number of parameters, \(m\) is the dimension of the model output vector, and \(T\) denotes the number of training samples.
* In [26, 31], the authors provided a condition on the width of a fully-connected feed-forward neural network (FFNN) with scalar output to satisfy the PL\({}^{*}\) condition by utilizing the Hessian spectral norm of the network. Motivated by this work, we derive the Hessian spectral norm of finite-layer LISTA and ADMM-CSNet with smoothed soft-thresholding non-linearity. We show that the norm is on the order of \(\tilde{\Omega}\left(1/\sqrt{m}\right)\), where \(m\) denotes the width of the network which is equal to the target vector dimension.
* By employing the Hessian spectral norm, we derive necessary conditions on both \(m\) and \(T\) to satisfy the PL\({}^{*}\) condition for both LISTA and ADMM-CSNet. Moreover, we demonstrate that the threshold on \(T\), which denotes the maximum number of training samples that a network can memorize, increases as the network width increases.
* We compare the threshold on the number of training samples of LISTA and ADMM-CSNet with that of FFNN, solving a given linear inverse problem. Our findings show that LISTA/ADMM-CSNet exhibits a higher threshold value than FFNN. Specifically, we demonstrate this by proving that the upper bound on the minimum eigenvalue of the tangent kernel matrix at initialization is high for LISTA/ADMM-CSNet compared to FFNN. This implies that, with fixed network parameters, the unfolded network is capable of memorizing a larger number of training samples compared to FFNN. Therefore, we should expect to obtain a better expected error (which is upper bounded by the sum of generalization and training error [32]) for unfolded networks than FFNN.
* Additionally, we numerically evaluate the parameter efficiency of unfolded networks in comparison to FFNNs. In particular, we demonstrate that FFNNs require a higher number of parameters to achieve near-zero empirical training loss compared to LISTA/ADMM-CSNet for a given fixed \(T\) value.
**Outline:** The paper is organized as follows: Section II presents a comprehensive discussion on LISTA and ADMM-CSNet, and also formulates the problem. Section III extends the PL\({}^{*}\)-based optimization guarantees of an OP model with scalar output to a model with multiple outputs. Section IV begins by deriving the Hessian spectral norm of the unfolded networks. Then, it provides conditions on the network width and on the number of training samples to satisfy the \(\text{PL}^{*}\) condition. Further, it also establishes a comparative analysis of the threshold for the number of training samples among LISTA, ADMM-CSNet, and FFNN. Section V discusses the experimental results and Section VI draws conclusions.

Fig. 1: Double descent risk curve.
**Notations:** The following notations are used throughout the paper. The set of real numbers is denoted by \(\mathbb{R}\). We use bold lowercase letters, e.g., \(\mathbf{y}\), for vectors, capital letters, e.g., \(W\), for matrices, and bold capital letters, e.g., \(\mathbf{H}\), for tensors. Symbols \(||\mathbf{z}||_{1}\), \(||\mathbf{z}||\), and \(||\mathbf{z}||_{\infty}\) denote the \(l_{1}\)-norm, \(l_{2}\)-norm, and \(l_{\infty}\)-norm of \(\mathbf{z}\), respectively. The spectral norm and Frobenius norm of a matrix \(W\) are written as \(||W||\) and \(||W||_{F}\), respectively. We use \([L]\) to denote the set \(\{1,2,\ldots,L\}\), where \(L\) is a natural number. The first-order derivative or gradient of a function \(L(\mathbf{w})\) w.r.t. \(\mathbf{w}\) is denoted as \(\nabla_{\mathbf{w}}L(\mathbf{w})\). The asymptotic upper bound and lower bound on a quantity are described using \(O(\cdot)\) and \(\Omega(\cdot)\), respectively. Notations \(\tilde{O}(\cdot)\) and \(\tilde{\Omega}(\cdot)\) are used to suppress the logarithmic terms in \(O(\cdot)\) and \(\Omega(\cdot)\), respectively. For example, \(O\left(\frac{1}{m}\ln(m)\right)\) is written as \(\tilde{O}\left(\frac{1}{m}\right)\). Symbols \(\gg\) and \(\ll\) mean "much greater than" and "much lesser than", respectively. Consider a matrix \(G\) with \(G_{i,j}=\sum_{k}A_{i,j,k}v_{k}\), where \(A_{i,j,k}\) is a component in tensor \(\mathbf{A}\in\mathbb{R}^{m_{1}\times m_{2}\times m_{3}}\). The spectral norm of \(G\) can be bounded as
\[\|G\|\leq\|\mathbf{A}\|_{2,2,1}\|\mathbf{v}\|_{\infty}. \tag{1}\]
Here \(\|\mathbf{A}\|_{2,2,1}\) denotes the \((2,2,1)\)-norm of the tensor \(\mathbf{A}\), which is defined as
\[\|\mathbf{A}\|_{2,2,1}=\sup_{\|\mathbf{r}\|=\|\mathbf{s}\|=1}\sum_{k=1}^{m_{3} }\left|\sum_{i=1}^{m_{1}}\sum_{j=1}^{m_{2}}A_{i,j,k}r_{i}s_{j}\right|, \tag{2}\]
where \(\mathbf{r}\in\mathbb{R}^{m_{1}\times 1}\) and \(\mathbf{s}\in\mathbb{R}^{m_{2}\times 1}\).
## II Problem Formulation
### _LISTA and ADMM-CSNet_
Consider the following linear inverse problem
\[\mathbf{y}=A\mathbf{x}+\mathbf{e}. \tag{3}\]
Here \(\mathbf{y}\in\mathbb{R}^{n\times 1}\) is the observation vector, \(\mathbf{x}\in\mathbb{R}^{m\times 1}\) is the target vector, \(A\in\mathbb{R}^{n\times m}\) is the forward linear operator matrix with \(m>n\), and \(\mathbf{e}\) is noise with \(\|\mathbf{e}\|_{2}<\epsilon\), where the constant \(\epsilon>0\). Our aim is to recover \(\mathbf{x}\) from a given \(\mathbf{y}\).
In model-based approaches, an optimization problem is formulated using some prior knowledge about the target vector and is usually solved using an iterative algorithm. For instance, by assuming \(\mathbf{x}\) is a \(k\)-sparse vector [33], the least absolute shrinkage and selection operator (LASSO) problem is formulated as
\[\min_{\mathbf{x}}\,\frac{1}{2}\|\mathbf{y}-A\mathbf{x}\|^{2}+\gamma\|\mathbf{ x}\|_{1}, \tag{4}\]
where \(\gamma\) is a regularization parameter. Iterative algorithms, such as ISTA and ADMM [34], are generally used to solve the LASSO problem. The update of \(\mathbf{x}\) at the \(l^{\text{th}}\) iteration in ISTA is [35]
\[\mathbf{x}^{l}=S_{\gamma\tau}\left\{\left(\mathbf{I}-\tau A^{T}A\right) \mathbf{x}^{l-1}+\tau A^{T}\mathbf{y}\right\}, \tag{5}\]
where \(\mathbf{x}^{0}\) is a bounded input initialization, \(\tau\) controls the iteration step size, and \(S_{\lambda}(\cdot)\) is the soft-thresholding operator applied element-wise on a vector argument \(S_{\lambda}(x)=\text{sign}(x)\text{max}\left(|x|-\lambda,0\right).\) The \(l^{\text{th}}\) iteration in ADMM is [36]
\[\begin{split}\mathbf{x}^{l}&=\left(A^{T}A+\rho \mathbf{I}\right)^{-1}\left(A^{T}\mathbf{y}+\rho\left(\mathbf{z}^{l-1}-\mathbf{ u}^{l-1}\right)\right),\\ \mathbf{z}^{l}&=S_{\frac{1}{\rho}}\left(\mathbf{x}^{l }+\mathbf{u}^{l-1}\right),\\ \mathbf{u}^{l}&=\mathbf{u}^{l-1}+\left(\mathbf{x}^{l}- \mathbf{z}^{l}\right),\end{split} \tag{6}\]
where \(\mathbf{x}^{0}\), \(\mathbf{z}^{0}\), and \(\mathbf{u}^{0}\) are bounded input initializations to the network and \(\rho>0\) is a penalty parameter. Model-based approaches are in general sensitive to inaccurate knowledge of the underlying model [3, 4]. In turn, data-driven approaches use an ML model to recover the target vector. These approaches generally require a large amount of training data and computational resources [3, 4].
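For reference, a minimal numpy sketch of the ISTA update (5) and the ADMM update (6) for the LASSO problem (4) is given below. It is illustrative only; the step size \(\tau\), penalty \(\rho\), and iteration count are left as user-chosen inputs.

```python
import numpy as np

def soft_threshold(v, lam):
    """Element-wise soft-thresholding S_lam(v) = sign(v) * max(|v| - lam, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(y, A, gamma, tau, num_iters=100):
    """ISTA updates (5) for the LASSO problem (4); tau <= 1/||A||_2^2 ensures convergence."""
    m = A.shape[1]
    W2 = np.eye(m) - tau * (A.T @ A)   # I - tau A^T A
    W1 = tau * A.T                     # tau A^T
    x = np.zeros(m)
    for _ in range(num_iters):
        x = soft_threshold(W2 @ x + W1 @ y, gamma * tau)
    return x

def admm(y, A, gamma, rho, num_iters=100):
    """ADMM updates (6) for the LASSO problem (4), with threshold gamma / rho."""
    m = A.shape[1]
    x, z, u = np.zeros(m), np.zeros(m), np.zeros(m)
    P = np.linalg.inv(A.T @ A + rho * np.eye(m))   # precomputed once
    for _ in range(num_iters):
        x = P @ (A.T @ y + rho * (z - u))
        z = soft_threshold(x + u, gamma / rho)
        u = u + (x - z)
    return x
```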
A model-aware data-driven approach is generally developed using algorithm unfolding or unrolling [6]. In unfolding, a neural network is constructed by mapping each iteration in the iterative algorithm (such as (5) or (6)) to a network layer. Hence, an iterative algorithm with \(L\)-iterations leads to an \(L\)-layer cascaded deep neural network. The network is then trained by using the available dataset containing a series of pairs \(\{\mathbf{y}_{i},\mathbf{x}_{i}\},i\in[T]\). For example, the update of \(\mathbf{x}\) at the \(l^{\text{th}}\) iteration in ISTA, given in (5), is rewritten as
\[\mathbf{x}^{l}=S_{\lambda}\left\{W_{2}^{l}\mathbf{x}^{l-1}+W_{1}^{l}\mathbf{y} \right\}, \tag{7}\]
where \(\lambda=\gamma\tau\), \(W_{1}^{l}=\tau A^{T}\), and \(W_{2}^{l}=\mathbf{I}-\tau A^{T}A\). By considering \(W_{1}^{l}\), \(W_{2}^{l}\), and \(\lambda\) as network learnable parameters, one can map the above \(l^{\text{th}}\) iteration to an \(l^{\text{th}}\) layer in the network as shown in Fig. 2. The corresponding unfolded network is called learned ISTA (LISTA) [5]. Similarly, by considering \(W_{1}^{l}=\left(A^{T}A+\rho\mathbf{I}\right)^{-1}A^{T}\), \(W_{2}^{l}=\left(A^{T}A+\rho\mathbf{I}\right)^{-1}\rho\), and \(\lambda=\frac{\gamma}{\rho}\) as learnable parameters, (6) is rewritten as
\[\begin{split}\mathbf{x}^{l}&=W_{1}^{l}\mathbf{y}+W_{2}^{l }\left(\mathbf{z}^{l-1}-\mathbf{u}^{l-1}\right),\\ \mathbf{z}^{l}&=S_{\lambda}\left(\mathbf{x}^{l}+ \mathbf{u}^{l-1}\right),\\ \mathbf{u}^{l}&=\mathbf{u}^{l-1}+\left(\mathbf{x}^{l}- \mathbf{z}^{l}\right).\end{split} \tag{8}\]
Fig. 2: \(l^{\text{th}}\) layer of the unfolded ISTA network.

The above \(l^{\text{th}}\) iteration in ADMM can be mapped to an \(l^{\text{th}}\) layer in a network as shown in Fig. 3, leading to ADMM-CSNet [7]. Note that, from a network point of view, the inputs of the \(l^{\text{th}}\) layer are \(\mathbf{x}^{l-1}\) and \(\mathbf{y}\) for LISTA, and \(\mathbf{z}^{l-1}\), \(\mathbf{u}^{l-1}\) and \(\mathbf{y}\) for ADMM-CSNet. It has been observed that the performance of LISTA and ADMM-CSNet is better in comparison with ISTA, ADMM, and traditional networks in many applications [5, 7]. For instance, to achieve good performance, the number of layers required in an unrolled network is generally much smaller than the number of iterations required by the iterative solver [5]. In addition, an unrolled network works effectively even if the linear operator matrix, \(A\), is not known exactly. An unrolled network typically requires less data for training compared to standard deep neural networks [3] to achieve a certain level of performance on unseen data. Due to these advantages, LISTA and ADMM-CSNet have been used in many applications [7, 8, 9, 10, 11, 12]. That said, the theoretical foundations supporting these networks remain to be established. While there have been some studies focusing on the generalization [13, 14, 15] and convergence rate [16, 17, 18] of unfolded networks, a comprehensive study of the optimization guarantees is lacking. Here, we analyze the conditions under which finite \(L\)-layer LISTA and ADMM-CSNet achieve near-zero training loss as the number of epochs increases.
### _Problem Formulation_
We consider the following questions: Under what conditions does the training loss in LISTA and ADMM-CSNet converge to zero as the number of epochs tends to infinity using GD? Additionally, how do these conditions differ for FFNNs?
For the analysis, we consider the following training setting: Let \(\mathbf{x}=F(\mathbf{w},\lambda;\mathbf{y})\) be an \(L\)-layer unfolded model, where \(\mathbf{y}\in\mathbb{R}^{n\times 1}\) is the model input vector, \(\mathbf{x}\in\mathbb{R}^{m\times 1}\) is the model output, and \(\mathbf{w}\in\mathbb{R}^{P\times 1}\) and \(\lambda\) are the learnable parameters. To simplify the analysis, \(\lambda\) is assumed to be constant; henceforth, we write \(F(\mathbf{w},\lambda;\mathbf{y})\) as \(F(\mathbf{w};\mathbf{y})\). This implies that \(\mathbf{w}_{P\times 1}=\text{Vec}\left([\mathbf{W}]_{L\times m\times(m+n)}\right)\) is the only learnable (untied) parameter vector, where
\[\mathbf{W}=\left[W^{1}~{}W^{2}~{}\dots~{}W^{L}\right], \tag{9}\]
and \(\left[W^{l}\right]_{m\times(m+n)}=\left[W^{l}_{1}~{}W^{l}_{2}\right]\) is the parameter matrix corresponding to the \(l^{\text{th}}\)-layer. Alternatively, we can write
\[\mathbf{W}=\left[[\mathbf{W}_{1}]_{L\times m\times n}~{}~{}[\mathbf{W}_{2}]_{ L\times m\times m}\right], \tag{10}\]
where \(\mathbf{W}_{1}=\left[W^{1}_{1}~{}\dots~{}W^{L}_{1}\right]\) and \(\mathbf{W}_{2}=\left[W^{1}_{2}~{}\dots~{}W^{L}_{2}\right]\). Consider the training dataset \(\{\mathbf{y}_{i},\mathbf{x}_{i}\}_{i=1}^{T}\). An optimal parameter vector \(\mathbf{w}^{*}\), such that \(F(\mathbf{w}^{*};\mathbf{y}_{i})\approx\mathbf{x}_{i},~{}\forall i\in[T]\), is found by minimizing an empirical loss function \(L(\mathbf{w})\), defined as
\[L(\mathbf{w})=\sum_{i=1}^{T}l(\mathbf{f}_{i},\mathbf{x}_{i}), \tag{11}\]
where \(l(\cdot)\) is the loss function, \(\mathbf{f}_{i}=(\mathcal{F}(\mathbf{w}))_{i}=F(\mathbf{w},\mathbf{y}_{i})\), \(\mathcal{F}(\cdot):\mathbb{R}^{P\times 1}\rightarrow\mathbb{R}^{m\times T}\), and \((\mathcal{F}(\mathbf{w}))_{i}\) is the \(i^{\text{th}}\) column in \(\mathcal{F}(\mathbf{w})\). We consider the squared loss, hence
\[L(\mathbf{w})=\frac{1}{2}\sum_{i=1}^{T}\|\mathbf{f}_{i}-\mathbf{x}_{i}\|^{2}= \frac{1}{2}\|\mathcal{F}(\mathbf{w})-X\|_{F}^{2}, \tag{12}\]
where \(X=[\mathbf{x}_{1},\dots,\mathbf{x}_{T}]\). We choose GD as the optimization algorithm for minimizing \(L(\mathbf{w})\), hence, the updating rule is
\[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta\nabla_{\mathbf{w}}L(\mathbf{w}_{t}),\]
where \(\eta\) is the learning rate.
Our aim is to derive conditions on LISTA and ADMM-CSNet such that \(L(\mathbf{w})\) converges to zero with an increase in the number of epochs using GD, i.e., \(\lim_{t\rightarrow\infty}L(\mathbf{w}_{t})=0\). In addition, we compare these conditions with those of FFNN, where we obtain the conditions for FFNN by extending the analysis given in [26]. Specifically, in Section IV-C, we derive a bound on the number of training samples to achieve near zero training loss for unfolded networks. Further, we show that this threshold is lower for FFNN compared to unfolded networks.
## III Revisiting PL\({}^{*}\)-Based Optimization Guarantees
In [26] the authors proposed PL\({}^{*}\)-based optimization theory for a model with a scalar output. Motivated by this, in this section, we extend this theory to a multi-output model, as we aim to recover a vector in a linear inverse problem.
Consider an ML model, not necessarily an unfolded network, \(\mathbf{x}=F(\mathbf{w};\mathbf{y})\), with the training setup mentioned in Section II-B, where \(\mathbf{y}\in\mathbb{R}^{n\times 1}\), \(\mathbf{x}\in\mathbb{R}^{m\times 1}\), and \(\mathbf{w}\in\mathbb{R}^{P\times 1}\). Further, assume that the model is \(L_{\mathcal{F}}\)-Lipschitz continuous and \(\beta_{\mathcal{F}}\)-smooth. A function \(\mathcal{F}(\cdot):\mathbb{R}^{P}\rightarrow\mathbb{R}^{m\times T}\) is \(L_{\mathcal{F}}\)-Lipschitz continuous if
\[\|\mathcal{F}(\mathbf{w}_{1})-\mathcal{F}(\mathbf{w}_{2})\|_{F}\leq L_{ \mathcal{F}}\|\mathbf{w}_{1}-\mathbf{w}_{2}\|,~{}\forall\mathbf{w}_{1},\mathbf{ w}_{2}\in\mathbb{R}^{P},\]
and is \(\beta_{\mathcal{F}}\)-smooth if the gradient of the function is \(\beta_{\mathcal{F}}\)-Lipschitz, i.e.,
\[\|\nabla_{\mathbf{w}}\mathcal{F}(\mathbf{w}_{1})-\nabla_{\mathbf{w}}\mathcal{F }(\mathbf{w}_{2})\|_{F}\leq\beta_{\mathcal{F}}\|\mathbf{w}_{1}-\mathbf{w}_{2}\|,\]
\(\forall\mathbf{w}_{1},~{}\mathbf{w}_{2}\in\mathbb{R}^{P}\). The Hessian spectral norm of \(\mathcal{F}(\cdot)\) is defined as
\[\|\mathbf{H}_{\mathcal{F}}(\mathbf{w})\|=\underset{i\in[T]}{\text{max}}\| \mathbf{H}_{\mathcal{F}_{i}}(\mathbf{w})\|,\]
where \(\mathbf{H}_{\mathcal{F}}\in\mathbb{R}^{T\times m\times P\times P}\) is a tensor with \((\mathbf{H}_{\mathcal{F}})_{i,j,k,l}=\frac{\partial^{2}(\mathcal{F}(\mathbf{w}))_{j,i}}{\partial w_{k}\partial w_{l}}\) and \(\mathbf{H}_{\mathcal{F}_{i}}=\frac{\partial^{2}(\mathcal{F}(\mathbf{w}))_{i}}{\partial\mathbf{w}^{2}}\). As stated earlier, the loss landscape of the OP model typically satisfies PL\({}^{*}\) on most of the parameter space. Formally, the PL\({}^{*}\) condition is defined as follows [37, 38]:
**Definition 1**.: _Consider a set \(C\subset\mathbb{R}^{P\times 1}\) and \(\mu>0\). Then, a non-negative function \(L(\mathbf{w})\) satisfies \(\mu\)-PL\({}^{*}\) condition on \(C\) if \(\|\nabla_{\mathbf{w}}L(\mathbf{w})\|^{2}\geq\mu L(\mathbf{w}),~{}\forall\mathbf{w}\in C\)._
**Definition 2**.: _The tangent kernel matrix, \([K(\mathbf{w})]_{mT\times mT}\), of the function \(\mathcal{F}(\mathbf{w})\), is a block matrix with \((i,j)^{\text{th}}\) block defined as_
\[(K(\mathbf{w}))_{i,j}=\left[\nabla_{\mathbf{w}}\mathbf{f}_{i}\right]_{m\times P }\left[\nabla_{\mathbf{w}}\mathbf{f}_{j}\right]_{P\times m}^{T},~{}i\in[T]~{} \text{and}~{}j\in[T].\]
Fig. 3: \(l^{\text{th}}\) layer of the unfolded ADMM network.
From the above definitions, we have the following lemma, which is called \(\mu\)-uniform conditioning [26] of a multi-output model \(\mathcal{F}(\mathbf{w})\):
**Lemma 1**.: \(\mathcal{F}(\mathbf{w})\) _satisfies \(\mu\)-PL\({}^{*}\) on set \(C\) if the minimum eigenvalue of the tangent kernel matrix, \(K(\mathbf{w})\), is greater than or equal to \(\mu\), i.e., \(\lambda_{\text{min}}(K(\mathbf{w}))\geq\mu,\ \forall\mathbf{w}\in C\)._
Proof.: From (12), we have
\[\|\nabla_{\mathbf{w}}L(\mathbf{w})\|^{2} =\left[\hat{\mathbf{f}}-\hat{\mathbf{x}}\right]^{T}\left[\nabla_ {\mathbf{w}}\hat{\mathbf{f}}\right]_{mT\times P}\left[\nabla_{\mathbf{w}} \hat{\mathbf{f}}\right]_{P\times mT}^{T}\left[\hat{\mathbf{f}}-\hat{\mathbf{x }}\right]\] \[=\left[\hat{\mathbf{f}}-\hat{\mathbf{x}}\right]^{T}\left[K( \mathbf{w})\right]_{mT\times mT}\left[\hat{\mathbf{f}}-\hat{\mathbf{x}}\right],\]
where \(\hat{\mathbf{f}}=\text{Vec}\left(\mathcal{F}(\mathbf{w})\right)\) and \(\hat{\mathbf{x}}=\text{Vec}\left(X\right)\). The above equation can be lower-bounded as
\[\|\nabla_{\mathbf{w}}L(\mathbf{w})\|^{2}\geq\lambda_{\text{min}} \left(K(\mathbf{w})\right)\|\hat{\mathbf{f}}-\hat{\mathbf{x}}\|_{2}^{2}\geq \mu L(\mathbf{w}).\]
Observe that \(K(\mathbf{w})\) is a positive semi-definite matrix. Thus, a necessary condition to satisfy the PL\({}^{*}\) condition (that is, a necessary condition to obtain a full rank \(K(\mathbf{w})\)) for a multi-output model is \(P\gg mT\). For a scalar output model, the equivalent condition is \(P\gg T\)[26]. Note that if \(P\ll T\), i.e., an UP model with a scalar output, then \(\lambda_{\text{min}}(K(\mathbf{w}))=0\), which implies that an UP model does not satisfy the PL\({}^{*}\) condition.
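The role of over-parameterization can be illustrated numerically. In the sketch below, random matrices stand in for the per-sample Jacobians \(\nabla_{\mathbf{w}}\mathbf{f}_{i}\), and the tangent kernel \(K(\mathbf{w})\) of Definition 2 is assembled from them; its minimum eigenvalue is numerically zero whenever \(P<mT\) and generically positive when \(P\gg mT\).

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 4, 5            # output dimension and number of training samples

def min_eig_tangent_kernel(P):
    """lambda_min of K(w) built from random per-sample Jacobians J_i in R^{m x P}."""
    J = rng.standard_normal((T, m, P))   # stand-in for grad_w f_i at some w
    J_full = J.reshape(m * T, P)         # stack the per-sample Jacobians
    K = J_full @ J_full.T                # (mT x mT) tangent kernel, rank <= P
    return np.linalg.eigvalsh(K).min()

print(min_eig_tangent_kernel(P=10))   # P <  mT = 20 -> ~0 (rank deficient)
print(min_eig_tangent_kernel(P=200))  # P >> mT      -> strictly positive
```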
Practically, computing \(\lambda_{\text{min}}(K(\mathbf{w}))\) for every \(\mathbf{w}\in C\), to verify the PL\({}^{*}\) condition, is not feasible. One can overcome this by using the Hessian spectral norm of the model \(\|\mathbf{H}_{\mathcal{F}}(\mathbf{w})\|\)[26]:
**Theorem 1**.: _Let \(\mathbf{w}_{0}\in\mathbb{R}^{P\times 1}\) be the parameter initialization of an \(L_{\mathcal{F}}\)-Lipschitz and \(\beta_{\mathcal{F}}\)-smooth model \(\mathcal{F}(\mathbf{w})\), and \(B(\mathbf{w}_{0},R)=\{\mathbf{w}\mid\|\mathbf{w}-\mathbf{w}_{0}\|\leq R\}\) be a ball with radius \(R>0\). Assume that \(K(\mathbf{w}_{0})\) is well conditioned, i.e., \(\lambda_{\text{min}}(K(\mathbf{w}_{0}))=\lambda_{\text{0}}\) for some \(\lambda_{0}>0\). If \(\|\mathbf{H}_{\mathcal{F}}(\mathbf{w})\|\leq\frac{\lambda_{0}-\mu}{2L_{ \mathcal{F}}\sqrt{TR}}\) for all \(\mathbf{w}\in B(\mathbf{w}_{0},R)\), then the model satisfies \(\mu\)-uniform conditioning in \(B(\mathbf{w}_{0},R)\); this also implies that \(L(\mathbf{w})\) satisfies \(\mu\)-PL\({}^{*}\) in the ball \(B(\mathbf{w}_{0},R)\)._
The intuition behind the above theorem is that small \(\|\mathbf{H}_{\mathcal{F}}(\mathbf{w})\|\) leads to a small change in the tangent kernel. Precisely, if the tangent kernel is well conditioned at the initialization, then a small \(\|\mathbf{H}_{\mathcal{F}}(\mathbf{w})\|\) in \(B(\mathbf{w}_{0},R)\) guarantees that the tangent kernel is well conditioned within \(B(\mathbf{w}_{0},R)\). The following theorem states that satisfying PL\({}^{*}\) guarantees the existence of a global minimum and exponential convergence to the global minimum from \(\mathbf{w}_{0}\) using GD:
**Theorem 2**.: _Consider a model \(\mathcal{F}(\mathbf{w})\) that is \(L_{\mathcal{F}}\)-Lipschitz continuous and \(\beta_{\mathcal{F}}\)-smooth. If the square loss function \(L(\mathbf{w})\) satisfies the \(\mu\)-PL\({}^{*}\) condition in \(B(\mathbf{w}_{0},R)\) with \(R=\frac{2L_{\mathcal{F}}\|\mathcal{F}(\mathbf{w}_{0})-X\|_{F}}{\mu}=O\left( \frac{1}{\mu}\right)\), then we have the following:_
* _There exist a global minimum,_ \(\mathbf{w}^{*}\)_, in_ \(B(\mathbf{w}_{0},R)\) _such that_ \(\mathcal{F}(\mathbf{w}^{*})=X\)_._
* _GD with step size_ \(\eta\leq\frac{1}{L_{\mathcal{F}}+\beta_{\mathcal{F}}\|\mathcal{F}(\mathbf{w} _{0})-X\|_{F}}\) _converges to a global minimum at an exponential convergence rate, specifically,_ \(L(\mathbf{w}_{t})\leq(1-\eta\mu)^{t}L(\mathbf{w}_{0})\)_._
The proofs of Theorems 1 and 2 are similar to the proofs of Theorems 2 and 6, respectively, in [26]. However, as linear inverse problems deal with vector recovery, the proofs rely on Frobenius norms instead of Euclidean norms.
## IV Optimization Guarantees
We now analyze the optimization guarantees of both LISTA and ADMM-CSNet by considering them in the OP regime. Hence, the aim is further simplified to study under what conditions LISTA and ADMM-CSNet satisfy the PL\({}^{*}\) condition. As mentioned in Theorem 1, one can verify the PL\({}^{*}\) condition using the Hessian spectral norm of the network. Thus, in this section, we first compute the Hessian spectral norm of both LISTA and ADMM-CSNet. The mathematical analysis performed here is motivated by [31], where the authors derived the Hessian spectral norm of an FFNN with a scalar output. Then, we provide the conditions on both the network width and the number of training samples to hold the PL\({}^{*}\) condition. Subsequently, we provide a comparative analysis among unfolded networks and FFNN to evaluate the threshold on the number of training samples.
### _Assumptions_
For the analysis, we consider certain assumptions on the unfolded ISTA and ADMM networks. The inputs of the networks are bounded, i.e., there exist some constants \(C_{x}\), \(C_{u}\), \(C_{z}\), and \(C_{y}\) such that \(|x_{i}^{0}|\leq C_{x},\,|u_{i}^{0}|\leq C_{u},\,|z_{i}^{0}|\leq C_{z}\), \(\forall i\in[m]\), and \(|y_{i}|\leq C_{y},\ \forall i\in[n]\). As the computation of the Hessian spectral norm involves a second-order derivative, we approximate the soft-thresholding activation function, \(S_{\lambda}(\cdot)\), in the unfolded network with the double-differentiable/smooth soft-thresholding activation function, \(\sigma_{\lambda}(\cdot)\), formulated using soft-plus, where \(\sigma_{\lambda}(x)=\log\left(1+e^{x-\lambda}\right)-\log\left(1+e^{-x-\lambda}\right).\) Fig. 4 depicts \(S_{\lambda}(x)\) and \(\sigma_{\lambda}(x)\) for \(\lambda=5\). Observe that \(\sigma_{\lambda}(x)\) approximates the shape of \(S_{\lambda}(x)\) well. There are several works in the literature that approximate the soft-thresholding function with a smooth version of it [39, 40, 41, 42, 43, 44, 45]. The analysis proposed in this work can be extended as is to other smooth approximations. Further, since \(\lambda\) is assumed to be a constant (refer to Section II-B), henceforth, we write \(\sigma_{\lambda}(\cdot)\) as \(\sigma(\cdot)\). It is well known that \(\sigma(\cdot)\) is \(L_{\sigma}\)-Lipschitz continuous and \(\beta_{\sigma}\)-smooth.
Fig. 4: Soft-threshold function, \(S_{\lambda}(x)\), and its smooth approximation, \(\sigma_{\lambda}(x)\) (formulated using the soft-plus function), with \(\lambda=5\).
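A short numpy sketch of the two activations, with \(\lambda=5\) to match Fig. 4, is given below; it is a sanity check of the approximation rather than code from the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    """Hard (non-smooth) soft-thresholding S_lam(x)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def smooth_soft_threshold(x, lam):
    """sigma_lam(x) = log(1 + e^{x - lam}) - log(1 + e^{-x - lam}), built from soft-plus."""
    return np.logaddexp(0.0, x - lam) - np.logaddexp(0.0, -x - lam)

lam = 5.0
x = np.linspace(-15, 15, 7)
# discrepancy is largest near |x| = lam and decays quickly away from it
print(np.max(np.abs(smooth_soft_threshold(x, lam) - soft_threshold(x, lam))))
```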
Let \(\mathbf{W}_{0},\mathbf{W}_{10},\mathbf{W}_{20},W_{10}^{l}\) and \(W_{20}^{l}\) denote the initialization of \(\mathbf{W},\mathbf{W}_{1},\mathbf{W}_{2}\), \(W_{1}^{l}\) and \(W_{2}^{l}\), respectively. We initialize each parameter using random Gaussian initialization with mean \(0\) and variance \(1\), i.e., \(\left(W_{10}^{l}\right)_{i,j}\sim\mathcal{N}(0,1)\) and \(\left(W_{20}^{l}\right)_{i,j}\sim\mathcal{N}(0,1)\), \(\forall l\in[L]\). This guarantees well conditioning of the tangent kernel at initialization [26, 27]. Moreover, the Gaussian initialization imposes certain bounds, with high probability, on the spectral norm of the weight matrices. In particular, we have the following:
**Lemma 2**.: _If \(\left(W_{10}^{l}\right)_{i,j}\sim\mathcal{N}(0,1)\) and \(\left(W_{20}^{l}\right)_{i,j}\sim\mathcal{N}(0,1)\), \(\forall l\in[L]\), then with probability at least \(1-2\exp\left(-\frac{m}{2}\right)\) we have \(\left\|W_{10}^{l}\right\|\leq c_{10}\sqrt{n}=O(\sqrt{n})\) and \(\left\|W_{20}^{l}\right\|\leq c_{20}\sqrt{m}=O(\sqrt{m})\), \(\forall l\in[L]\), where \(c_{10}=1+2\sqrt{m}/\sqrt{n}\) and \(c_{20}=3\)._
Proof.: Any matrix \(W\in\mathbb{R}^{m_{1}\times m_{2}}\) with Gaussian initialization satisfies the following inequality with probability at least \(1-2\exp\left(-\frac{t^{2}}{2}\right)\), where \(t\geq 0\), [46]: \(\left\|W\right\|\leq\sqrt{m_{1}}+\sqrt{m_{2}}+t\). Using this fact and considering \(t=\sqrt{m}\), we get \(\left\|W_{10}^{l}\right\|=O(\sqrt{n})\) and \(\left\|W_{20}^{l}\right\|=O(\sqrt{m})\).
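As a quick numerical sanity check of Lemma 2 (ours, not part of the paper), the sketch below samples Gaussian weight matrices and verifies the spectral-norm bounds with \(t=\sqrt{m}\).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, L = 256, 64, 6
t = np.sqrt(m)

for _ in range(L):
    W1 = rng.standard_normal((m, n))   # entries ~ N(0, 1), as in the initialization above
    W2 = rng.standard_normal((m, m))
    # bound sqrt(m1) + sqrt(m2) + t, i.e. c10*sqrt(n) with c10 = 1 + 2*sqrt(m)/sqrt(n)
    assert np.linalg.norm(W1, 2) <= np.sqrt(m) + np.sqrt(n) + t
    # bound 2*sqrt(m) + t = 3*sqrt(m), i.e. c20*sqrt(m) with c20 = 3
    assert np.linalg.norm(W2, 2) <= 2 * np.sqrt(m) + t
print("spectral-norm bounds of Lemma 2 hold for all sampled layers")
```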
The following lemma shows that the spectral norm of the weight matrices within a finite radius ball is of the same order as at the initialization.
**Lemma 3**.: _If \(\mathbf{W}_{10}\) and \(\mathbf{W}_{20}\) are initialized as stated in Lemma 2, then for any \(\mathbf{W}_{1}\in B(\mathbf{W}_{10},R_{1})\) and \(\mathbf{W}_{2}\in B(\mathbf{W}_{20},R_{2})\), where \(R_{1}\) and \(R_{2}\) are positive scalars, we have \(\left\|W_{1}^{l}\right\|=O(\sqrt{n})\) and \(\left\|W_{2}^{l}\right\|=O(\sqrt{m})\), \(\forall l\in[L]\)._
Proof.: From triangular inequality, we have
\[\left\|W_{1}^{l}\right\| \leq\left\|W_{10}^{l}\right\|+\left\|W_{1}^{l}-W_{10}^{l}\right\| _{F}\leq c_{10}\sqrt{n}+R_{1}=O(\sqrt{n}),\] \[\left\|W_{2}^{l}\right\| \leq\left\|W_{20}^{l}\right\|+\left\|W_{2}^{l}-W_{20}^{l}\right\| _{F}\leq c_{20}\sqrt{m}+R_{2}=O(\sqrt{m}).\]
As the width of the network can be very high (dimension of the target vector), to obtain the constant asymptotic behavior, the learnable parameters \(W_{1}^{l}\) and \(W_{2}^{l}\) are normalized by \(\frac{1}{\sqrt{n}}\) and \(\frac{1}{\sqrt{m}}\), respectively, and the output of the model is normalized by \(\frac{1}{\sqrt{m}}\). This way of normalization is called neural tangent kernel (NTK) parameterization [47, 48]. With these assumptions, the output of a finite \(L\)-layer LISTA network is
\[\mathbf{f}=\frac{1}{\sqrt{m}}\mathbf{x}^{L}, \tag{13}\]
where
\[\mathbf{x}^{l}=\sigma(\tilde{\mathbf{x}}^{l})=\sigma\left(\frac{W_{1}^{l}}{ \sqrt{n}}\mathbf{y}+\frac{W_{2}^{l}}{\sqrt{m}}\mathbf{x}^{l-1}\right)\ \in\mathbb{R}^{m\times 1},\ l\in[L].\]
Likewise, the output of a finite \(L\)-layer ADMM-CSNet is
\[\mathbf{f}=\frac{1}{\sqrt{m}}\mathbf{z}^{L}, \tag{14}\]
where
\[\mathbf{z}^{l} =\sigma\left(\tilde{\mathbf{z}}^{l}\right)=\sigma\left(\mathbf{x }^{l}+\mathbf{u}^{l-1}\right),\] \[\mathbf{x}^{l} =\frac{1}{\sqrt{n}}W_{1}^{l}\mathbf{y}+\frac{1}{\sqrt{m}}W_{2}^{l} \left(\mathbf{z}^{l-1}-\mathbf{u}^{l-1}\right),\] \[\mathbf{u}^{l} =\mathbf{u}^{l-1}+\left(\mathbf{x}^{l}-\mathbf{z}^{l}\right),\ l \in[L].\]
To maintain uniformity in notation, hereafter, we denote the output of the network as \(\mathbf{f}=\frac{1}{\sqrt{m}}\mathbf{g}^{L}\), where \(\mathbf{g}^{l}=\mathbf{x}^{l}\) for LISTA and \(\mathbf{g}^{l}=\mathbf{z}^{l}\) for ADMM-CSNet.
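A minimal numpy sketch of the NTK-parameterized forward passes (13) and (14) is given below. Training would additionally require gradients of the squared loss (12), which are omitted here, and the layer count and dimensions are illustrative.

```python
import numpy as np

def sigma(x, lam=1.0):
    """Smooth soft-thresholding built from soft-plus (Section IV-A)."""
    return np.logaddexp(0.0, x - lam) - np.logaddexp(0.0, -x - lam)

def lista_forward(W1, W2, y, x0):
    """NTK-parameterized LISTA output (13): W1[l] is (m, n), W2[l] is (m, m)."""
    m, n = W2[0].shape[0], W1[0].shape[1]
    x = x0
    for W1l, W2l in zip(W1, W2):
        x = sigma(W1l @ y / np.sqrt(n) + W2l @ x / np.sqrt(m))
    return x / np.sqrt(m)

def admm_csnet_forward(W1, W2, y, z0, u0):
    """NTK-parameterized ADMM-CSNet output (14)."""
    m, n = W2[0].shape[0], W1[0].shape[1]
    z, u = z0, u0
    for W1l, W2l in zip(W1, W2):
        x = W1l @ y / np.sqrt(n) + W2l @ (z - u) / np.sqrt(m)
        z = sigma(x + u)
        u = u + (x - z)
    return z / np.sqrt(m)

# Example: L = 3 layers, m = 100, n = 20, Gaussian N(0, 1) initialization.
rng = np.random.default_rng(0)
L, m, n = 3, 100, 20
W1 = [rng.standard_normal((m, n)) for _ in range(L)]
W2 = [rng.standard_normal((m, m)) for _ in range(L)]
y = rng.standard_normal(n)
f_lista = lista_forward(W1, W2, y, np.zeros(m))
f_admm = admm_csnet_forward(W1, W2, y, np.zeros(m), np.zeros(m))
```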
### _Hessian Spectral Norm_
For better understanding, we first compute the Hessian spectral norm of one layer, i.e., \(L=1\), unfolded network.
#### IV-B1 Analysis of \(1\)-Layer Unfolded Network
The Hessian matrix of a \(1\)-layer LISTA or ADMM-CSNet for a given training sample \(i\) is1
Footnote 1: Note that, to simplify the notation, we denoted \(\mathbf{H}_{\mathcal{F}_{i}}\) as \(\mathbf{H}\).
\[\left[\mathbf{H}_{\mathcal{F}_{i}}\right]=\left[\mathbf{H}\right]_{m\times P \times P}=\left[\begin{array}{cccc}H_{1}&H_{2}&\cdots&H_{m}\end{array}\right], \tag{15}\]
where \([H_{s}]_{P\times P}=\frac{\partial^{2}f_{s}}{\partial\mathbf{w}^{2}}\), \(\mathbf{w}=\text{Vec}(W^{1})=\text{Vec}\left([W_{1}^{1},W_{2}^{1}]\right)\), \(f_{s}\) denotes the \(s^{\text{th}}\) component in the network output vector \(\mathbf{f}\), i.e., \(f_{s}=\frac{1}{\sqrt{m}}\mathbf{v}_{s}^{T}\mathbf{g}^{1}\), and \(\mathbf{v}_{s}\) is a vector with the \(s^{\text{th}}\) element set to \(1\) and the others to \(0\).
\[H_{s}=\frac{\partial f_{s}}{\partial\mathbf{g}^{\text{l}}}\frac{\partial^{2} \mathbf{g}^{\text{l}}}{\partial\mathbf{w}^{2}}. \tag{16}\]
We can bound \(H_{s}\), as given below, by using the inequality given in (1),
\[\left\|H_{s}\right\|\leq\left\|\frac{\partial f_{s}}{\partial\mathbf{g}^{\text{l}} }\right\|_{\infty}\left\|\frac{\partial^{2}\mathbf{g}^{\text{l}}}{\partial \mathbf{w}^{2}}\right\|_{2,2,1}. \tag{17}\]
From (13) or (14), we get
\[\left\|\frac{\partial f_{s}}{\partial\mathbf{g}^{\text{l}}}\right\|_{\infty}= \left\|\frac{1}{\sqrt{m}}\mathbf{v}_{s}^{\prime}\right\|_{\infty}=O\left(\frac{1}{ \sqrt{m}}\right). \tag{18}\]
In addition,
\[\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial\mathbf{w}\right)^{2}}\right\|_{2,2,1}=\left\|\left[\begin{array}{cc}\partial^{2}\mathbf{g}^{1}/\left(\partial W_{1}^{1}\right)^{2}&\partial^{2}\mathbf{g}^{1}/\partial W_{1}^{1}\partial W_{2}^{1}\\ \partial^{2}\mathbf{g}^{1}/\partial W_{2}^{1}\partial W_{1}^{1}&\partial^{2}\mathbf{g}^{1}/\left(\partial W_{2}^{1}\right)^{2}\end{array}\right]\right\|_{2,2,1} \tag{19}\] \[\leq\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{1}^{1}\right)^{2}}\right\|_{2,2,1}+2\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\partial W_{1}^{1}\partial W_{2}^{1}}\right\|_{2,2,1}+\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{2}^{1}\right)^{2}}\right\|_{2,2,1}.\]
We now compute the \((2,2,1)\)-norms in the above equation for both LISTA and ADMM-CSNet. To begin with, for LISTA, we have the following second-order partial derivatives of the layer-wise output, where \(\mathbf{\xi}^{1}\) denotes the pre-activation of the first layer and \(\mathbb{I}\) the indicator:

\[\left(\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{1}^{1}\right)^{2}}\right)_{i,j^{\prime},k^{\prime}}=\frac{1}{n}\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\mathbf{y}_{j^{\prime}}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k=j},\] \[\left(\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{2}^{1}\right)^{2}}\right)_{i,j^{\prime},k^{\prime}}=\frac{1}{m}\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\mathbf{x}_{j^{\prime}}^{0}\mathbf{x}_{k^{\prime}}^{0}\mathbb{I}_{i=k=j},\] \[\left(\frac{\partial^{2}\mathbf{g}^{1}}{\partial W_{2}^{1}\partial W_{1}^{1}}\right)_{i,j^{\prime},k^{\prime}}=\frac{1}{\sqrt{mn}}\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\mathbf{x}_{j^{\prime}}^{0}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k=j}.\]

Then, by using these expressions, the boundedness of the inputs of the network, and the smoothness of the activation function, the \((2,2,1)\)-norms of the above quantities are obtained as shown below:
\[\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{1}^{1} \right)^{2}}\right\|_{2,2,1} =\sup_{\|V_{1}\|_{F}=\|V_{2}\|_{F}=1}\frac{1}{n}\sum_{i=1}^{m} \left|\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\left(V_{1} \mathbf{y}\right)_{i}\left(V_{2}\mathbf{y}\right)_{i}\right|\] \[\leq\sup_{\|V_{1}\|_{F}=\|V_{2}\|_{F}=1}\frac{1}{2n}\beta_{\sigma }\left(\|V_{1}\mathbf{y}\|^{2}+\|V_{2}\mathbf{y}\|^{2}\right)\] \[\leq\frac{1}{2n}\beta_{\sigma}\left(\|\mathbf{y}\|^{2}+\| \mathbf{y}\|^{2}\right)\leq\beta_{\sigma}C_{y}^{2}=O(1)\]
\[\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{2}^{1}\right)^{2}}\right\|_{2,2,1} =\sup_{\|V_{1}\|_{F}=\|V_{2}\|_{F}=1}\frac{1}{m}\sum_{i=1}^{m}\left|\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\left(V_{1}\mathbf{x}^{0}\right)_{i}\left(V_{2}\mathbf{x}^{0}\right)_{i}\right|\] \[\leq\frac{1}{2m}\beta_{\sigma}\left(\left\|\mathbf{x}^{0}\right\|^{2}+\left\|\mathbf{x}^{0}\right\|^{2}\right)\leq\beta_{\sigma}C_{x}^{2}=O(1)\]
\[\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\partial W_{2}^{1} \partial W_{1}^{1}}\right\|_{2,2,1}\] \[=\sup_{\|V_{1}\|_{F}=\|V_{2}\|_{F}=1}\frac{1}{\sqrt{mn}}\sum_{i=1 }^{m}\left|\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\left(V_{1} \mathbf{x}^{0}\right)_{i}\left(V_{2}\mathbf{y}\right)_{i}\right|\] \[\leq\frac{1}{2\sqrt{mn}}\beta_{\sigma}\left(\left\|\mathbf{x}^{0 }\right\|^{2}+\left\|\mathbf{y}\|^{2}\right)\leq\sqrt{\frac{1}{4n}}\beta_{ \sigma}C_{x}^{2}+\sqrt{\frac{n}{4m}}\beta_{\sigma}C_{y}^{2}=O(1).\]
Substituting the above bounds in (19) implies \(\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W^{1}\right)^{2}} \right\|_{2,2,1}=O(1)\).
Similarly, for ADMM-CSNet, the equivalent second-order partial derivatives are
\[\left(\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{1}^{1}\right)^{2}}\right)_{i,j^{\prime},k^{\prime}} =\frac{1}{n}\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\mathbf{y}_{j^{\prime}}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k=j},\] \[\left(\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{2}^{1}\right)^{2}}\right)_{i,j^{\prime},k^{\prime}} =\frac{1}{m}\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\left(\mathbf{z}^{0}-\mathbf{u}^{0}\right)_{j^{\prime}}\left(\mathbf{z}^{0}-\mathbf{u}^{0}\right)_{k^{\prime}}\mathbb{I}_{i=k=j},\] \[\left(\frac{\partial^{2}\mathbf{g}^{1}}{\partial W_{2}^{1}\partial W_{1}^{1}}\right)_{i,j^{\prime},k^{\prime}} =\frac{1}{\sqrt{mn}}\sigma^{\prime\prime}\left(\mathbf{\xi}_{i}^{1}\right)\left(\mathbf{z}^{0}-\mathbf{u}^{0}\right)_{j^{\prime}}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k=j}.\]
The corresponding \((2,2,1)\)-norm bounds are
\[\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{1}^{1} \right)^{2}}\right\|_{2,2,1}\leq\frac{1}{2m}\beta_{\sigma}\left(\|\mathbf{y} \|^{2}+\|\mathbf{y}\|^{2}\right)\leq\beta_{\sigma}C_{y}^{2}=O(1),\] \[\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W_{2}^{1 }\right)^{2}}\right\|_{2,2,1}\leq\frac{1}{2m}\beta_{\sigma}\left(2mC_{z}^{2}+2 mC_{u}^{2}\right)=O(1),\] \[\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\partial W_{1}^{1} \partial W_{2}^{1}}\right\|_{2,2,1}\leq\beta_{\sigma}\sqrt{\frac{m}{4n}} \left(C_{y}^{2}+\left(C_{z}+C_{u}\right)^{2}\right)=O(1).\]
Using the above bounds, we get \(\left\|\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W^{1}\right)^{2}}\right\|_{2,2,1}=O(1)\). From the above analysis, we conclude that the \((2,2,1)\)-norm of the tensor, \(\frac{\partial^{2}\mathbf{g}^{1}}{\left(\partial W^{1}\right)^{2}}\), is of the order of \(O(1)\) and the \(\infty\)-norm of the vector, \(\frac{\partial f_{s}}{\partial\mathbf{g}^{1}}\), is of the order of \(O\left(\frac{1}{\sqrt{m}}\right)\). This implies,
\[\|H_{s}\|=O\left(\frac{1}{\sqrt{m}}\right)\text{ and }\|\mathbf{H}\|=\Omega \left(\frac{1}{\sqrt{m}}\right)=O\left(\sqrt{m}\right). \tag{20}\]
Therefore, the Hessian spectral norm of a 1-layer LISTA or ADMM-CSNet depends on the width (dimension of the target vector) of the network. We now generalize the above analysis for an \(L\)-layer unfolded network.
#### IV-B2 Analysis of L-Layer Unfolded Network
The Hessian matrix of an \(L\)-layer unfolded ISTA or ADMM network for a given \(i^{\text{th}}\) training sample is written as
\[\left[\mathbf{H}\right]_{m\times P\times P}=\left[\begin{array}{cccc}H_{1} &H_{2}&\cdots&H_{m}\end{array}\right], \tag{21}\]
where \(H_{s}\) for \(s\in[m]\) is
\[\left[H_{s}\right]_{P\times P}=\left[\begin{array}{cccc}H_{s}^{1,1}&H_{s}^{1,2 }&\cdots&H_{s}^{1,L}\\ H_{s}^{2,1}&H_{s}^{2,2}&\cdots&H_{s}^{2,L}\\ \vdots&\vdots&\ddots&\vdots\\ H_{s}^{L,1}&H_{s}^{L,2}&\cdots&H_{s}^{L,L}\end{array}\right], \tag{22}\]
\(\left[H_{s}^{l_{1},l_{2}}\right]_{P_{1}\times P_{1}}=\frac{\partial^{2}f_{s}}{\partial\mathbf{w}^{l_{1}}\partial\mathbf{w}^{l_{2}}}\), where \(P_{1}=m^{2}+mn\), \(l_{1}\in[L]\), \(l_{2}\in[L]\), \(\mathbf{w}^{l}=\text{Vec}(W^{l})=\text{Vec}\left([W_{1}^{l}\;W_{2}^{l}]\right)\) denotes the weights of the \(l^{\text{th}}\) layer, and \(f_{s}=\frac{1}{\sqrt{m}}\mathbf{v}_{s}^{T}\mathbf{g}^{L}\). From (21) and (22), the spectral norm of \(\mathbf{H}\), \(\|\mathbf{H}\|\), is bounded by its block-wise spectral norm, \(\|H_{s}\|\), as stated in the following theorem:
**Theorem 3**.: _The Hessian spectral norm, \(\|\mathbf{H}\|\), of an \(L\)-layer unfolded ISTA (ADMM) network, defined as in (13) ((14)), is bounded as \(\underset{s\in[m]}{\text{max}}\left\{\|H_{s}\|\right\}\leq\|\mathbf{H}\|\leq \sum_{s\in[m]}\|H_{s}\|\,,\) where_
\[\|H_{s}\|\leq\sum_{l_{1},l_{2}}\left\|H_{s}^{l_{1},l_{2}}\right\| \leq\sum_{l_{1},l_{2}}C_{1}\mathcal{Q}_{2,2,1}\left(f_{s}\right)\mathcal{Q}_{ \infty}\left(f_{s}\right) \tag{23}\] \[\leq C\mathcal{Q}_{2,2,1}\left(f_{s}\right)\mathcal{Q}_{\infty} \left(f_{s}\right).\]
_The constant \(C_{1}\) depends on \(L\) and \(L_{\sigma}\), \(C=L^{2}C_{1}\),_
_and \(\mathcal{Q}_{\infty}\left(f_{s}\right)\), \(\mathcal{Q}_{2,2,1}\left(f_{s}\right)\) are layer-wise quantities built from, respectively, the first-order and second-order partial derivatives of \(f_{s}\), analogous to those appearing in the one-layer analysis above._

Bounding these quantities requires bounds on the hidden-layer outputs, which are given next.

**Lemma 4**.: _Let \(\mathbf{W}_{1}\in B(\mathbf{W}_{10},R_{1})\) and \(\mathbf{W}_{2}\in B(\mathbf{W}_{20},R_{2})\). Then the hidden-layer outputs of the unfolded ISTA and ADMM networks satisfy \(\|\mathbf{x}^{l}\|\leq c_{\text{ISTA;x}}^{l}\), \(\|\mathbf{z}^{l}\|\leq c_{\text{ADMM;z}}^{l}\), and \(\|\mathbf{u}^{l}\|\leq c_{\text{ADMM;u}}^{l}\), \(\forall l\in[L]\), where_

\[\begin{split} c_{\text{ISTA;x}}^{l}&=L_{\sigma}\left(c_{10}+\frac{R_{1}}{\sqrt{n}}\right)\sqrt{n}C_{y}+L_{\sigma}\left(c_{20}+\frac{R_{2}}{\sqrt{m}}\right)c_{\text{ISTA;x}}^{l-1}\\ &+\sigma(0)=O\left(\sqrt{m}\right),\\ c_{\text{ADMM;z}}^{l}&=L_{\sigma}\left(c_{10}+\frac{R_{1}}{\sqrt{n}}\right)\sqrt{n}C_{y}+L_{\sigma}\left(c_{20}+\frac{R_{2}}{\sqrt{m}}\right)c_{\text{ADMM;z}}^{l-1}\\ &+L_{\sigma}\left(1+c_{20}+\frac{R_{2}}{\sqrt{m}}\right)c_{\text{ADMM;u}}^{l-1}+\sigma(0)=O\left(\sqrt{m}\right),\\ c_{\text{ADMM;u}}^{l}&=\left(c_{10}+\frac{R_{1}}{\sqrt{n}}\right)\sqrt{n}C_{y}+\left(c_{20}+\frac{R_{2}}{\sqrt{m}}\right)c_{\text{ADMM;z}}^{l-1}\\ &+\left(c_{20}+\frac{R_{2}}{\sqrt{m}}+1\right)c_{\text{ADMM;u}}^{l-1}+c_{\text{ADMM;z}}^{l}=O\left(\sqrt{m}\right),\end{split}\]

_where \(c_{\text{ISTA;x}}^{0}=\sqrt{m}C_{x}\), \(c_{\text{ADMM;z}}^{0}=\sqrt{m}C_{z}\), \(c_{\text{ADMM;u}}^{0}=\sqrt{m}C_{u}\), \(|x_{i}^{0}|\leq C_{x}\), \(|u_{i}^{0}|\leq C_{u}\), and \(|z_{i}^{0}|\leq C_{z}\), \(\forall i\in[m]\)._
Refer to the Appendix for proof of the above lemma. The three updating rules in Lemma 4 are of the order of \(\sqrt{m}\) and \(\sqrt{n}\) w.r.t. \(m\) and \(n\), respectively. However, as the width of the unfolded network is controlled by \(m\), we consider the bounds on \(\mathcal{Q}_{2,2,1}\left(f_{s}\right)\) and \(\mathcal{Q}_{\infty}\left(f_{s}\right)\) w.r.t. \(m\) in this work.
The following theorem gives the bound on \(\left\|\mathbf{H}\right\|\) by deriving the bounds on the quantities \(\mathcal{Q}_{2,2,1}\left(f_{s}\right)\) and \(\mathcal{Q}_{\infty}\left(f_{s}\right)\). The proof of Theorem 4 basically uses the bounds on the weight matrices (Lemma 2 and Lemma 3), bound on the hidden layer output (Lemma 4), and properties of the activation function (\(L_{\sigma}\)-Lipschitz continuous and \(\beta_{\sigma}\)-smooth).
**Theorem 4**.: _Consider an \(L\)-layer unfolded ISTA or ADMM network, \(\mathbf{F}(\mathbf{W})\), with random Gaussian initialization \(\mathbf{W}_{0}\). Then, the quantities \(\mathcal{Q}_{2,2,1}\left(f_{s}\right)\) and \(\mathcal{Q}_{\infty}\left(f_{s}\right)\) satisfy the following equality w.r.t. \(m\), over initialization, at any point \(\mathbf{W}\in B\left(\mathbf{W}_{0},R\right)\), for some fixed \(R>0\):_
\[\mathcal{Q}_{2,2,1}\left(f_{s}\right)=O(1)\text{ and }\mathcal{Q}_{\infty} \left(f_{s}\right)=\tilde{O}\left(\frac{1}{\sqrt{m}}\right), \tag{26}\]
_with probabilities \(1\) and \(1-me^{-c\ln^{2}(m)}\) for some constant \(c>0\), respectively. This implies_
\[\left\|H_{s}\right\|\leq\sum_{l_{1},l_{2}}\left\|H_{s}^{l_{1},l_{2}}\right\|= \tilde{O}\left(\frac{1}{\sqrt{m}}\right) \tag{27}\]
_and the Hessian spectral norm satisfies_
\[\left\|\mathbf{H}\right\|=\tilde{\Omega}\left(\frac{1}{\sqrt{m}}\right)= \tilde{O}\left(\sqrt{m}\right). \tag{28}\]
The proof of Theorem 4 is motivated by [31] and is lengthy. Thus, the readers are directed to the supplementary material [49], which provides the complete proof. In summary, from both \(1\)-layer and \(L\)-layer analyses, we claim that the Hessian spectral norm bound of an unfolded network is proportional to the square root of the width of the network.
### _Conditions on Unfolded Networks to Satisfy \(\text{PL}^{*}\)_
From Theorem 1, the Hessian spectral norm of a model should hold the following condition to satisfy \(\mu\)-uniform conditioning in a ball \(B(\mathbf{w}_{0},R)\): \(\left\|\mathbf{H}_{\mathcal{F}}(\mathbf{w})\right\|\leq\frac{\lambda_{0}-\mu }{2L_{\mathcal{F}}\sqrt{TR}},\ \forall\mathbf{w}\in B(\mathbf{w}_{0},R)\). Since \(\left\|\mathbf{H}_{\mathcal{F}}(\mathbf{w})\right\|=\max\limits_{i\in[T]} \left\|\mathbf{H}_{\mathcal{F}_{i}}(\mathbf{w})\right\|\), the above condition can be further simplified as
\[\left\|\mathbf{H}_{\mathcal{F}_{i}}(\mathbf{w})\right\|\leq\frac{\lambda_{0}- \mu}{2L_{\mathcal{F}}\sqrt{TR}},\ \forall i\in[T]\text{ and }\mathbf{w}\in B(\mathbf{w}_{0},R). \tag{29}\]
Substituting the Hessian spectral norm bound of LISTA and ADMM-CSNet, stated in Theorem 4, in (29) provides a constraint on the network width such that the square loss function satisfies the \(\mu\)-PL\({}^{*}\) condition in \(B(\mathbf{w}_{0},R)\):
\[m=\tilde{\Omega}\left(\frac{TR^{2}}{(\lambda_{0}-\mu)^{2}}\right),\text{ where }\mu\in(0,\lambda_{0}). \tag{30}\]
Therefore, from Theorem 2, we claim that for a given fixed \(T\) one should consider the width of the unfolded network as given in (30) to achieve near-zero training loss. However, the \(m\) (target vector dimension) value is generally fixed for a given linear inverse problem. Hence, we provide the constraint on \(T\) instead of \(m\). Substituting the \(\left\|\mathbf{H}_{\mathcal{F}_{i}}(\mathbf{w})\right\|\) bound in (29) also provides a threshold on \(T\), which is summarized in the following theorem:
**Theorem 5**.: _Consider a finite \(L\)-layer unfolded network as given in (13) or (14) with \(m\) as the network width. Assume that the model is well-conditioned at initialization, i.e., \(\lambda_{\text{min}}(K_{\text{Unfolded}}(\mathbf{w}_{0}))=\lambda_{0,\text{ Unfolded}}\), for some \(\lambda_{0,\text{Unfolded}}>0\). Then, the loss landscape corresponding to the square loss function satisfies the \(\mu\)-PL\({}^{*}\) condition in a ball \(B(\mathbf{w}_{0},R)\), if the number of training samples, \(T_{\text{Unfolded}}\), satisfies the following condition:_
\[T_{\text{Unfolded}}=\tilde{O}\left(\frac{m(\lambda_{0,\text{ Unfolded}}-\mu)^{2}}{R^{2}}\right),\ \mu\in(0,\lambda_{0,\text{ Unfolded}}). \tag{31}\]
Thus, while addressing a linear inverse problem using unfolded networks, one should consider the number of training samples as given in (31), to obtain zero training loss as the number of GD epochs increases to infinity. Observe that the threshold on \(T\) increases with the increase in the network width. We attribute this to the fact that a high network width is associated with more trainable parameters in the network, which provides the ability to handle/memorize more training samples. Conversely, a smaller network width leads to fewer trainable parameters, thereby impacting the network's performance in handling training samples.
**Comparison with FFNN:** In [26], the authors computed the Hessian spectral norm of an FFNN with a scalar output, which is of the order of \(\tilde{O}\left(\frac{1}{\sqrt{m}}\right)\). Following the analysis procedure of an \(m\)-output model given in Section IV-B, one can obtain the Hessian spectral norm of an FFNN with \(m\)-output and smoothed soft-thresholding non-linearity as given below:
\[\left\|\mathbf{H}\right\|=\tilde{\Omega}\left(\frac{1}{\sqrt{m}}\right)=\tilde {O}\left(\sqrt{m}\right). \tag{32}\]
This implies that the bound on the number of training samples, \(T_{\text{FFNN}}\), for an \(m\)-output FFNN to satisfy the \(\mu\)-PL\({}^{*}\) is
\[T_{\text{FFNN}}=\tilde{O}\left(\frac{m(\lambda_{0,\text{FFNN}}-\mu)^{2}}{R^{2}} \right),\ \mu\in(0,\lambda_{0,\text{FFNN}}) \tag{33}\]
Note that \(m\) is a fixed value in both (31) and (33), \(R\) is of the order of \(O\left(\frac{1}{\mu}\right)\) (refer to Theorem 2), and \(\mu\) depends on \(\lambda_{0}=\lambda_{\text{min}}\left(K\left(\mathbf{w}_{0}\right)\right)\). Therefore, from (31) and (33), the parameter that governs the number of training samples of a network is the minimum eigenvalue of the tangent kernel
matrix at initialization. Hence, we compare both \(T_{\text{Unfolded}}\) and \(T_{\text{FFNN}}\) by deriving the upper bounds on \(\lambda_{0,\text{Unfolded}}\) and \(\lambda_{0,\text{FFNN}}\). Specifically, in the following theorem, we show that the upper bound of \(\lambda_{0,\text{Unfolded}}\) is higher compared to \(\lambda_{0,\text{FFNN}}\).
**Theorem 6**.: _Consider an L-layered FFNN, defined as_
\[\mathbf{f}_{\text{FFNN}}=\frac{1}{\sqrt{m}}\mathbf{x}^{L},\mathbf{x}^{l}= \sigma\left(\frac{W^{l}}{\sqrt{m}}\mathbf{x}^{l-1}\right)\ \in\mathbb{R}^{m},\ l\in[L], \tag{34}\]
_with \(\mathbf{x}^{0}=\sqrt{\frac{m}{n}}\mathbf{y}\in\mathbb{R}^{n},\ W^{1}\in\mathbb{R}^{m\times n}\), and \(W^{l}\in\mathbb{R}^{m\times m}\quad\forall l\in[L]-\{1\}\). Also, consider the unfolded network defined in (13) or (14). Then, the upper bound on the minimum eigenvalue of the tangent kernel matrix at initialization for the unfolded network, UB\({}_{\text{Unfolded}}\) (either UB\({}_{\text{LISTA}}\) or UB\({}_{\text{ADMM-CSNet}}\)), is greater than that of FFNN, UB\({}_{\text{FFNN}}\), i.e., UB\({}_{\text{Unfolded}}>\) UB\({}_{\text{FFNN}}\)._
Proof of the above theorem is given in the Appendix. To better understand Theorem 6, substitute \(L=2\) in equations (38), (39), and (40), which leads to
\[\text{UB}_{\text{FFNN}}=\hat{L}^{4}\hat{y}\left[\|W_{0}^{1}\|^{2}+ \|\mathbf{v}_{s}^{T}W_{0}^{2}\|^{2}\right],\] \[\text{UB}_{\text{LISTA}}=\hat{L}^{4}\hat{y}\left[\|W_{10}^{1}\|^{ 2}+\|\mathbf{v}_{s}^{T}W_{20}^{2}\|^{2}\right]+\hat{L}^{2}\hat{y}+\] \[\hat{L}^{4}\hat{x}\left[\|W_{10}^{2}\|^{2}+\|\mathbf{v}_{s}^{T}W_ {20}^{2}\|^{2}\right]+2\hat{L}^{4}\sqrt{\hat{x}}\hat{y}\|W_{10}^{1}\|\|W_{20}^{ 1}\|,\]
and
\[\text{UB}_{\text{ADMM-CSNet}}=L^{4}\hat{y}\left[\|W_{10}^{1}\|^{2}+ \|\mathbf{v}_{s}^{T}W_{20}^{2}\|^{2}\right]+L^{2}\hat{y}+\frac{\|\mathbf{u}^{( 1)}\|^{2}}{m}+\] \[\hat{L}^{4}\hat{a}^{(0)}\left[\|W_{20}^{2}\|^{2}+\|\mathbf{v}_{s} ^{T}W_{20}^{2}\|^{2}\right]+2\hat{l}\|\mathbf{\dot{x}}^{(0)}\|\mathbf{u}^{( 1)}\|+\hat{L}^{4}\|\mathbf{u}^{(0)}\|^{2}+\hat{L}^{4}\] \[\left[2\sqrt{\hat{y}\hat{a}^{(0)}}\|W_{10}^{1}\|\|W_{20}^{1}\|+2 \sqrt{\hat{a}^{(0)}}\|W_{20}^{1}\|\|\mathbf{u}^{(0)}\|+2\sqrt{\hat{y}}\|W_{10} ^{1}\|\|\mathbf{u}^{(0)}\|\right].\]
Since the dimensions of \(W_{1}^{1}\) (\(W_{2}^{2}\)) of the unfolded networks are the same as those of \(W^{1}\) (\(W^{2}\)) of the FFNN, we conclude that UB\({}_{\text{Unfolded}}>\) UB\({}_{\text{FFNN}}\) for \(L=2\). One can verify that this relation holds for any value of \(L\) using the generalized expressions given in (39), (40), and (41). Figures 5 (a) and 5 (b) depict the variation of \(10\log_{10}\left(\lambda_{\text{min}}\left(K(\mathbf{w}_{0})\right)\right)\) w.r.t. \(L\) (here we considered \(T=10\), \(m=100\), \(n=20\), and \(k=2\)) and \(P\) (here we vary \(m\), \(n\), and \(k\) by fixing \(T=10\), \(L=6\) for the unfolded networks, and \(L=8\) for FFNN), respectively, for LISTA, ADMM-CSNet, and FFNN. From these figures, we see that \(\lambda_{0,\text{Unfolded}}>\lambda_{0,\text{FFNN}}\). Consequently, from Theorem 6, (31), and (33), we claim that the upper bound on \(T_{\text{Unfolded}}\) is higher than that on \(T_{\text{FFNN}}\); that is, \(T_{\text{Unfolded}}>T_{\text{FFNN}}\) whenever \(\lambda_{0,\text{Unfolded}}>\lambda_{0,\text{FFNN}}\). Moreover, from the aforementioned equations, it is evident that UB\({}_{\text{ADMM-CSNet}}\) exceeds UB\({}_{\text{LISTA}}\); consequently, it is reasonable to anticipate that \(\lambda_{0,\text{ADMM-CSNet}}\) will surpass \(\lambda_{0,\text{LISTA}}\). This inference is substantiated by the data depicted in Figures 5 (a) and 5 (b), and it implies that the upper bound on \(T_{\text{ADMM-CSNet}}\) exceeds the upper bound on \(T_{\text{LISTA}}\). Through simulations, we show that \(T_{\text{ADMM-CSNet}}>T_{\text{LISTA}}>T_{\text{FFNN}}\) in the following section. Since the threshold on \(T\) that guarantees memorization is higher for unfolded networks than for FFNN, we should obtain a better expected error, which is upper bounded by the sum of the generalization and training errors [32], for unfolded networks than for FFNN for a given \(T\) such that \(T_{\text{FFNN}}<T\leq T_{\text{Unfolded}}\): in such scenarios the training error is zero and the generalization error is smaller for unfolded networks [13].
## V Numerical Experiments
We perform the following simulations to support the proposed theory. For all the simulations in this section, we fix the following for LISTA, ADMM-CSNet, and FFNN: \(1\). Parameters are initialized independently and identically (i.i.d.) from a Gaussian distribution with zero mean and unit variance, i.e., \(\mathcal{N}(0,1)\). \(2\). Networks are trained with the aim of minimizing the square loss function (12) using stochastic GD. Note that the theoretical analysis proposed in this work is for GD, however, to address the computation and storage issues, we considered stochastic GD for the numerical analysis. \(3\). Modified soft-plus activation function (refer to IV-A) with \(\lambda=1\) is used as the non-linear activation function. \(4\). A batch size of \(\frac{T}{5}\) is considered. \(5\). All the simulations are repeated for \(10\) trials.
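For reference, one plausible smooth surrogate of soft-thresholding built from soft-plus is sketched below; it is only meant to illustrate the kind of activation in use, and the exact construction of Section IV-A may differ.

```python
# Sketch of a soft-plus-based smooth surrogate of soft-thresholding
# (an assumption for illustration; not necessarily the exact activation
# of Section IV-A).
import numpy as np

def soft_threshold(x, lam=1.0):
    # exact (non-smooth) soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def smooth_soft_threshold(x, lam=1.0):
    # softplus(x - lam) - softplus(-x - lam): twice differentiable,
    # Lipschitz, and close to soft_threshold away from the kinks
    return np.logaddexp(0.0, x - lam) - np.logaddexp(0.0, -x - lam)

x = np.linspace(-4, 4, 9)
print(np.round(soft_threshold(x), 3))
print(np.round(smooth_soft_threshold(x), 3))
```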
**Threshold on \(T\):** From (31), the choice of \(T\) plays a vital role in achieving near-zero training loss. To illustrate this, consider two linear inverse models: \(\mathbf{y}_{1}=A_{1}\mathbf{x}_{1}+\mathbf{e}_{1}\) and \(\mathbf{y}_{2}=A_{2}\mathbf{x}_{2}+\mathbf{e}_{2}\), where \(\mathbf{y}_{1}\in\mathbb{R}^{20\times 1}\), \(\mathbf{x}_{1}\in\mathbb{R}^{100\times 1}\), \(A_{1}\in\mathbb{R}^{20\times 100}\), \(\|\mathbf{x}_{1}\|_{0}=2\), \(\mathbf{y}_{2}\in\mathbb{R}^{200\times 1}\), \(\mathbf{x}_{2}\in\mathbb{R}^{1000\times 1}\), \(A_{2}\in\mathbb{R}^{200\times 1000}\), and \(\|\mathbf{x}_{2}\|_{0}=10\). We generate synthetic data using random linear operator matrices whose entries follow the uniform distribution, and then normalize them to ensure \(\|A_{1}\|_{F}=\|A_{2}\|_{F}=10\). Both models are subjected to Gaussian noise (\(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\)) with a signal-to-noise ratio (SNR) of \(10\) dB. Construct an \(L\)-layer LISTA and ADMM-CSNet with \(L=11\). Here, we train LISTA for \(30\)K epochs and ADMM-CSNet for \(40\)K epochs. For the first model, we choose \(0.12\) and \(0.09\) as learning rates for LISTA and ADMM-CSNet, respectively. For the second model, we choose \(1.2\) for LISTA and \(0.9\) for ADMM-CSNet. Figures 6 and 7 depict the variation of the mean square loss/error (MSE) w.r.t. \(T\) for LISTA and ADMM-CSNet, respectively. Note that for a fixed \(m\) there exists a threshold (by considering a specific MSE value) on \(T\) such that choosing a \(T\) value below this threshold leads to near-zero training loss. Moreover, observe that this threshold increases as the network width grows.
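A minimal sketch (with assumed details, not our actual data pipeline) of how the first model \(\mathbf{y}_{1}=A_{1}\mathbf{x}_{1}+\mathbf{e}_{1}\) can be generated is given below: a uniform random operator rescaled so that \(\|A_{1}\|_{F}=10\), \(k\)-sparse ground-truth vectors, and additive Gaussian noise at \(10\) dB SNR.

```python
# Sketch of the synthetic data generation for the first model
# (details such as the support distribution are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_sig, k, T = 20, 100, 2, 50

A = rng.uniform(-1.0, 1.0, size=(n_meas, n_sig))
A *= 10.0 / np.linalg.norm(A, "fro")          # enforce ||A||_F = 10

X = np.zeros((T, n_sig))
for t in range(T):
    support = rng.choice(n_sig, size=k, replace=False)
    X[t, support] = rng.normal(size=k)        # k-sparse ground truth

Y_clean = X @ A.T                             # noiseless measurements
snr_db = 10.0
sig_power = np.mean(Y_clean ** 2)
noise_power = sig_power / (10 ** (snr_db / 10))
Y = Y_clean + rng.normal(scale=np.sqrt(noise_power), size=Y_clean.shape)
```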
For comparison, construct an \(L\)-layer FFNN, to recover \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), that has the same number of parameters as the unfolded networks; hence, we choose \(L=14\). Here, we train the network for \(40\)K epochs with a learning rate of \(0.04\) for the first model and \(0.3\) for the second model. Fig. 8 shows the variation of MSE w.r.t. \(T\). From Fig. 8, we can conclude that the threshold for FFNN is lower compared to LISTA and ADMM-CSNet.

Fig. 5: Variation of the minimum eigenvalue of the tangent kernel matrix at initialization: (a) with respect to the number of layers; (b) with respect to the number of network learnable parameters.
**Comparison Between Unfolded and Standard Networks:** We compare LISTA and ADMM-CSNet with FFNN in terms of parameter efficiency. To demonstrate this, consider the first linear inverse model given in the above simulation. Then, construct LISTA, ADMM-CSNet, and FFNN with a fixed number of parameters and consider \(T=30\). Also, consider the same learning rates that are associated with the first model in the above simulation for LISTA, ADMM-CSNet, and FFNN. Here we choose \(L=6\) for both LISTA and ADMM-CSNet, and \(L=8\) for FFNN, resulting in a total of \(72K\) parameters. As shown in Fig. 9, the convergence of training loss to zero is better for LISTA and ADMM-CSNet compared to FFNN. Fig. 9 also shows the training loss convergence of FFNN with \(L=11\). Now, FFNN has \(102K\) learnable parameters, and its performance is comparable to LISTA for higher epoch values. Therefore, to achieve a better training loss FFNN requires more trainable parameters.
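As a sanity check on the parameter counts quoted above, the sketch below counts weights under the assumption that each unfolded layer carries \(W_{1}^{l}\in\mathbb{R}^{m\times n}\) and \(W_{2}^{l}\in\mathbb{R}^{m\times m}\), while the FFNN carries one \(m\times n\) matrix in its first layer and \(m\times m\) matrices afterwards (biases and any thresholds are ignored):

```python
# Parameter counting for the setting m = 100, n = 20 used above.
m, n = 100, 20

def params_unfolded(L):
    # each LISTA/ADMM-CSNet layer: W_1 (m x n) and W_2 (m x m)
    return L * (m * n + m * m)

def params_ffnn(L):
    # FFNN: first layer m x n, remaining L - 1 layers m x m
    return m * n + (L - 1) * m * m

print(params_unfolded(6))   # 72000  -> the "72K" figure for L = 6
print(params_ffnn(8))       # 72000  -> FFNN needs L = 8 to match
print(params_ffnn(11))      # 102000 -> the "102K" figure for L = 11
```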
**Generalization:** In this simulation, we show that zero training error leads to better generalization. To demonstrate this, consider LISTA/ADMM-CSNet/FFNN with a fixed \(T\) and observe the variation of the expected mean absolute error (MAE) w.r.t. \(m\). If the generalization performance is better, then it is anticipated that the expected MAE reduces as \(m\) increases, since an increase in \(m\) improves the possibility of getting near-zero training loss for a fixed \(T\). In Fig. 10, we present the results for LISTA, ADMM-CSNet, and FFNN with \(T=100\). Notably, the expected MAE diminishes as \(m\) increases, i.e., as the number of parameters grows. Further, it is observed that for this choice of \(T\), the training error is near-zero for \(m\) values exceeding approximately \(300\) for FFNN, and approximately \(250\) for both LISTA and ADMM-CSNet.
Fig. 8: Training loss vs \(T\) for FFNN.
Fig. 10: Variation of the expected MAE w.r.t. \(m\) for both LISTA and ADMM-CSNet.
Fig. 6: Training loss vs \(T\) for LISTA.
Fig. 7: Training loss vs \(T\) for ADMM-CSNet.
Fig. 9: Comparison between LISTA, ADMM-CSNet, and FFNN in terms of the required number of parameters, \(P\), for training loss convergence.
This finding underscores the importance of zero training error for generalization.
However, it is important to note that the generalization results presented here are preliminary and require a more rigorous analysis for robust conclusions, since considering a smaller value of \(T\) may not yield satisfactory generalization performance. Thus, it is important to find a lower bound on \(T\) to optimize both the training process and the overall generalization capability, which we consider as future work of interest.
## VI Conclusion
In this work, we provided optimization guarantees for finite-layer LISTA and ADMM-CSNet with smooth nonlinear activation. We began by deriving the Hessian spectral norm of these unfolded networks. Based on this, we provided conditions on both the network width and the number of training samples, such that the empirical training loss converges to zero as the number of learning epochs increases using the GD approach. Additionally, we showed that LISTA and ADMM-CSNet outperform the standard FFNN in terms of the threshold on the number of training samples and parameter efficiency. We provided simulations to support the theoretical findings.
The work presented in this paper is an initial step towards understanding the theory behind the performance of unfolded networks. While our analysis relies on certain assumptions, it raises intriguing questions for future research. For instance, we approximated the soft-threshold activation function with a doubly differentiable function formulated using soft-plus; it is important to analyze the optimization guarantees without relying on any such approximation. Additionally, we assumed a constant value for \(\lambda\) in \(\sigma_{\lambda}(\cdot)\), and it is interesting to explore the impact of treating \(\lambda\) as a learnable parameter. Furthermore, extending the analysis to other loss functions presents an intriguing avenue for further research.
## Appendix A

Proof of **Theorem 3**: _The Hessian block \(H_{s}^{l_{1},l_{2}}\) can be decomposed as given in (35), using the following chain rule:_
\[\frac{\partial f_{s}}{\partial\mathbf{w}^{l_{1}}}=\frac{\partial\mathbf{g}^{l_{1}}}{\partial\mathbf{w}^{l_{1}}}\left(\prod_{l^{\prime}=l_{1}+1}^{L}\frac{\partial\mathbf{g}^{l^{\prime}}}{\partial\mathbf{g}^{l^{\prime}-1}}\right)\frac{\partial f_{s}}{\partial\mathbf{g}^{L}}.\]
\[H_{s}^{l_{1},l_{2}}=\frac{\partial^{2}\mathbf{g}^{l_{1}}}{\left(\partial\mathbf{w}^{l_{1}}\right)^{2}}\frac{\partial f_{s}}{\partial\mathbf{g}^{l_{1}}}\mathbb{I}_{l_{1}=l_{2}}+\left(\frac{\partial\mathbf{g}^{l_{1}}}{\partial\mathbf{w}^{l_{1}}}\prod_{l^{\prime}=l_{1}+1}^{l_{2}-1}\frac{\partial\mathbf{g}^{l^{\prime}}}{\partial\mathbf{g}^{l^{\prime}-1}}\right)\frac{\partial^{2}\mathbf{g}^{l_{2}}}{\partial\mathbf{w}^{l_{2}}\partial\mathbf{g}^{l_{2}-1}}\left(\frac{\partial f_{s}}{\partial\mathbf{g}^{l_{2}}}\right)+\sum_{l=l_{2}+1}^{L}\left(\frac{\partial\mathbf{g}^{l_{1}}}{\partial\mathbf{w}^{l_{1}}}\prod_{l^{\prime}=l_{1}+1}^{l-1}\frac{\partial\mathbf{g}^{l^{\prime}}}{\partial\mathbf{g}^{l^{\prime}-1}}\right)\frac{\partial^{2}\mathbf{g}^{l}}{\left(\partial\mathbf{g}^{l-1}\right)^{2}}\left(\frac{\partial\mathbf{g}^{l_{2}}}{\partial\mathbf{w}^{l_{2}}}\prod_{l^{\prime}=l_{2}+1}^{l-1}\frac{\partial\mathbf{g}^{l^{\prime}}}{\partial\mathbf{g}^{l^{\prime}-1}}\right)\left(\frac{\partial f_{s}}{\partial\mathbf{g}^{l}}\right). \tag{35}\]
From (35), the spectral norm of \(H_{s}^{l_{1},l_{2}}\) can be bounded as
\[\left\|H_{s}^{l_{1},l_{2}}\right\|_{2}\leq\left\|\frac{\partial^{2}\mathbf{g} ^{t_{1}}}{\left(\partial\mathbf{w}^{(l_{1})}\right)^{2}}\right\|_{2,2,1}\left\| \frac{\partial f_{s}}{\partial\mathbf{g}^{t_{1}}}\right\|_{\infty}+L_{\sigma} ^{l_{2}-l_{1}-1}\left\|\frac{\partial\mathbf{g}^{t_{1}}}{\partial\mathbf{w}^{ l_{1}}}\right\|_{F}\]
Note that (36) uses the fact that \(\left\|\frac{\partial\mathbf{g}^{t^{\prime}}}{\partial\mathbf{g}^{t^{\prime}- 1}}\right\|_{F}\leq L_{\sigma}\). By using the notations given in (42) and (43), we get
\[\left\|H_{s}^{l_{1},l_{2}}\right\|\leq C_{1}\mathcal{Q}_{2,2,1}\left(f_{s} \right)\mathcal{Q}_{\infty}\left(f_{s}\right),\]
where \(C_{1}\) is a constant depending on \(L\) and \(L_{\sigma}\). \(\Box\)
**Proof of Lemma 4**: _For \(l=0\), \(\|\mathbf{x}^{0}\|\leq\sqrt{m}\|\mathbf{x}^{0}\|_{\infty}\leq\sqrt{m}C_{x}\), \(\|\mathbf{z}^{0}\|\leq\sqrt{m}\|\mathbf{z}^{0}\|_{\infty}\leq\sqrt{m}C_{z}\), and \(\|\mathbf{u}^{0}\|\leq\sqrt{m}\|\mathbf{u}^{0}\|_{\infty}\leq\sqrt{m}C_{u}\). Whereas for \(l=1,2,\ldots,L\), we have_
\[\left\|\mathbf{x}^{l}\right\| =\left\|\sigma\left(\frac{W_{1}^{l}}{\sqrt{n}}\mathbf{y}+\frac{W_{2}^{l}}{\sqrt{m}}\mathbf{x}^{l-1}\right)\right\|\] \[\leq L_{\sigma}\left\|\frac{W_{1}^{l}}{\sqrt{n}}\right\|\left\|\mathbf{y}\right\|+L_{\sigma}\left\|\frac{W_{2}^{l}}{\sqrt{m}}\right\|\left\|\mathbf{x}^{l-1}\right\|+\sigma(0)\] \[\leq L_{\sigma}\left(c_{10}+\frac{R_{1}}{\sqrt{n}}\right)\sqrt{n}C_{y}+L_{\sigma}\left(c_{20}+\frac{R_{2}}{\sqrt{m}}\right)c_{\text{ISTA};\,\mathbf{x}}^{l-1}+\sigma(0)\] \[=c_{\text{ISTA};\,\mathbf{x}}^{l}.\]
_Here, we used Lemma 3 and the \(L_{\sigma}\)-Lipschitz continuity of the activation function \(\sigma(\cdot)\). Similarly,_
\[\left\|\mathbf{z}^{l}\right\|=\left\|\sigma\left(\frac{1}{\sqrt{n}}W_{1}^{l}\mathbf{y}+\frac{1}{\sqrt{m}}W_{2}^{l}\left(\mathbf{z}^{l-1}-\mathbf{u}^{l-1}\right)+\mathbf{u}^{l-1}\right)\right\|\] \[\leq L_{\sigma}\frac{1}{\sqrt{n}}\left\|W_{1}^{l}\right\|\left\|\mathbf{y}\right\|+L_{\sigma}\frac{1}{\sqrt{m}}\left\|W_{2}^{l}\right\|\left\|\mathbf{z}^{l-1}\right\|+\frac{1}{\sqrt{m}}L_{\sigma}\left\|W_{2}^{l}\right\|\left\|\mathbf{u}^{l-1}\right\|+L_{\sigma}\left\|\mathbf{u}^{l-1}\right\|+\sigma(0)\] \[\leq L_{\sigma}\left(c_{10}+\frac{R_{1}}{\sqrt{n}}\right)\sqrt{n}C_{y}+L_{\sigma}\left(c_{20}+\frac{R_{2}}{\sqrt{m}}\right)c_{\text{ADMM};\,\mathbf{z}}^{l-1}+L_{\sigma}\left(1+c_{20}+\frac{R_{2}}{\sqrt{m}}\right)c_{\text{ADMM};\,\mathbf{u}}^{l-1}+\sigma(0)\] \[=c_{\text{ADMM};\,\mathbf{z}}^{l}\]
_and_
\[\left\|\mathbf{u}^{l}\right\|=\left\|\mathbf{u}^{l-1}+\left(\frac{1}{\sqrt{n}}W_{1}^{l}\mathbf{y}+\frac{1}{\sqrt{m}}W_{2}^{l}\left(\mathbf{z}^{l-1}-\mathbf{u}^{l-1}\right)-\mathbf{z}^{l}\right)\right\|\] \[\leq\left\|\mathbf{u}^{l-1}\right\|+\left\|\frac{1}{\sqrt{n}}W_{1}^{l}\mathbf{y}\right\|+\left\|\frac{1}{\sqrt{m}}W_{2}^{l}\mathbf{z}^{l-1}\right\|+\left\|\frac{1}{\sqrt{m}}W_{2}^{l}\mathbf{u}^{l-1}\right\|+\left\|\mathbf{z}^{l}\right\|\] \[\leq\left(c_{10}+\frac{R_{1}}{\sqrt{n}}\right)\sqrt{n}C_{y}+\left(c_{20}+\frac{R_{2}}{\sqrt{m}}\right)c_{\text{ADMM};\,\mathbf{z}}^{l-1}+\left(c_{20}+\frac{R_{2}}{\sqrt{m}}+1\right)c_{\text{ADMM};\,\mathbf{u}}^{l-1}+c_{\text{ADMM};\,\mathbf{z}}^{l}\] \[=c_{\text{ADMM};\,\mathbf{u}}^{l}\]
\(\Box\)
_Proof of **Theorem 6**: _Consider the real symmetric NTK matrix \([K\left(\mathbf{w}_{0}\right)]_{mT\times mT}\). Utilizing the Rayleigh quotient of \(K\left(\mathbf{w}_{0}\right)\), we can write the following for any \(\mathbf{x}\) such that \(\|\mathbf{x}\|_{2}=1\):_
\[\lambda_{\min}\left(K\left(\mathbf{w}_{0}\right)\right)\leq\mathbf{x}^{T}K\left(\mathbf{w}_{0}\right)\mathbf{x}.\]
In particular, choosing \(\mathbf{x}\) as a standard basis vector shows that \(\lambda_{\min}\left(K\left(\mathbf{w}_{0}\right)\right)\) is bounded above by any diagonal entry \(\left\langle\nabla_{\mathbf{w}_{0}}\mathbf{f}_{s},\nabla_{\mathbf{w}_{0}}\mathbf{f}_{s}\right\rangle\) of \(K\left(\mathbf{w}_{0}\right)\).
Consider a one-layer FFNN; then from (34), the \(s^{\text{th}}\) component of \(\mathbf{f}_{\text{FFNN}}\) is \(\mathbf{f}_{s}=\frac{1}{\sqrt{m}}\sigma\left(\frac{1}{\sqrt{n}}W_{0}^{1}(s,:)\mathbf{y}\right)\), where \(W_{0}^{1}(s,:)\) represents the \(s^{\text{th}}\) row of \(W_{0}^{1}\). This implies,
\[\left\langle\nabla_{W_{0}^{1}}\mathbf{f}_{s},\nabla_{W_{0}^{1}}\mathbf{f}_{s}\right\rangle=\left[\frac{\sigma^{\prime}(\tilde{\mathbf{x}}_{s}^{1})}{\sqrt{mn}}\right]^{2}\|\mathbf{y}\|^{2}\leq\hat{L}^{2}\hat{y},\]
where \(\hat{L}=\frac{L_{\sigma}}{\sqrt{m}}\), and \(\hat{y}=\frac{\|\mathbf{y}\|^{2}}{n}.\) Similarly, for a 2-layered FFNN, we have
\[\left\langle\nabla_{\mathbf{W_{0}}}\mathbf{f}_{s},\nabla_{\mathbf{ W_{0}}}\mathbf{f}_{s}\right\rangle =\left\langle\nabla_{W_{0}^{1}}\mathbf{f}_{s},\nabla_{W_{0}^{1}} \mathbf{f}_{s}\right\rangle+\left\langle\nabla_{W_{0}^{2}}\mathbf{f}_{s}, \nabla_{W_{0}^{2}}\mathbf{f}_{s}\right\rangle \tag{38}\] \[\leq(\hat{L}^{2})^{2}\hat{y}\left[\left\|W_{0}^{1}\right\|^{2}+ \left\|W_{0}^{2}(s,:)\right\|^{2}\right].\]
Generalizing the above equations, one can derive the upper bound on \(\lambda_{0,\text{FFNN}}\) for an L-layer FFNN as
\[\lambda_{0,\text{FFNN}} \leq\text{UB}_{\text{FFNN}} \tag{39}\] \[=\hat{L}^{2L}\hat{y}\left[\sum_{i=1}^{L-1}\|\mathbf{v}_{s}^{T}W_ {0}^{L}\|^{2}\prod_{j=1,j\neq i}^{L-1}\|W_{0}^{j}\|^{2}+\prod_{j=1}^{L-1}\|W_{ 0}^{j}\|^{2}\right].\]
Likewise, consider \(L=1\), then from (13), the \(s^{\text{th}}\) component of \(\mathbf{f}_{\text{LISTA}}\) is
\[\mathbf{f}_{s}=\frac{1}{\sqrt{m}}\sigma\left(\frac{1}{\sqrt{n}}W_{10}^{1}(s,: )\mathbf{y}+\frac{1}{\sqrt{m}}W_{20}^{1}(s,:)\mathbf{x}\right).\]
This implies,
\[\left\langle\nabla_{\mathbf{w_{0}}}\mathbf{f}_{s},\nabla_{\mathbf{ w_{0}}}\mathbf{f}_{s}\right\rangle =\left\langle\nabla_{W_{10}^{1}}\mathbf{f}_{s},\nabla_{W_{10}^{1} }\mathbf{f}_{s}\right\rangle+\left\langle\nabla_{W_{20}^{1}}\mathbf{f}_{s}, \nabla_{W_{20}^{1}}\mathbf{f}_{s}\right\rangle\] \[\leq\hat{L}^{2}\left[\hat{y}+\hat{x}\right],\]
where \(\hat{x}=\frac{\|\mathbf{x}\|^{2}}{m}\). If \(L=2\), then a similar computation for \(\mathbf{f}_{\text{LISTA}}\) gives
\[\left\langle\nabla_{\mathbf{w_{0}}}\mathbf{f}_{s},\nabla_{\mathbf{w_{0}}}\mathbf{f}_{s}\right\rangle =\left\langle\nabla_{W_{10}^{2}}\mathbf{f}_{s},\nabla_{W_{10}^{2}}\mathbf{f}_{s}\right\rangle+\left\langle\nabla_{W_{20}^{2}}\mathbf{f}_{s},\nabla_{W_{20}^{2}}\mathbf{f}_{s}\right\rangle\] \[+\left\langle\nabla_{W_{10}^{1}}\mathbf{f}_{s},\nabla_{W_{10}^{1}}\mathbf{f}_{s}\right\rangle+\left\langle\nabla_{W_{20}^{1}}\mathbf{f}_{s},\nabla_{W_{20}^{1}}\mathbf{f}_{s}\right\rangle\] \[\leq\hat{L}^{2}\left[\hat{y}+\hat{L}^{2}\|\bar{\mathbf{x}}^{(1)}\|^{2}\right]+\hat{L}^{4}\left[\hat{y}+\hat{x}\right]\left\|\mathbf{v}_{s}^{\top}W_{20}^{2}\right\|^{2}.\]
By extending the above equations, we obtain the upper bound on \(\lambda_{0,\text{LISTA}}\) for an \(L\)-layer LISTA as
\[\lambda_{0,\text{LISTA}}\leq\text{UB}_{\text{LISTA}}=\hat{L}^{2} \left(\hat{y}+\hat{x}\right),\ \ \text{for}\ \ L=1 \tag{40}\] \[\lambda_{0,\text{LISTA}}\leq\text{UB}_{\text{LISTA}}=\hat{L}^{2L} \left(\hat{y}+\hat{x}\right)\|\mathbf{v}_{s}^{T}W_{20}^{L}\|\prod_{l=2}^{L-1} \|W_{20}^{l}\|^{2}\] \[+\sum_{k=2}^{L-1}\hat{L}^{2L-2k+2}\left[\hat{y}+\hat{L}^{2}\left\| \bar{\mathbf{x}}^{(k-1)}\right\|^{2}\right]\|\mathbf{v}_{s}^{T}W_{20}^{L}\|^{2 }\prod_{l=k+1}^{L-1}\|W_{20}^{l}\|^{2}\] \[+\hat{L}^{2}\left[\hat{y}+\hat{L}^{2}\|\bar{\mathbf{x}}^{(L-1)} \|^{2}\right],\ \text{for}\ L>1,\]
where \(\hat{L}=\frac{L_{\sigma}}{\sqrt{m}},\ \hat{y}=\frac{\|\mathbf{y}\|^{2}}{n},\ \text{ and }\hat{x}=\frac{\|\mathbf{x}\|^{2}}{m}.\) Repeating the same analysis, one can derive the upper bound on \(\lambda_{0,\text{ADMM-CSNet}}\) of an \(L\)-layer ADMM-CSNet as
\[\lambda_{0,\text{ADMM-CSNet}}\leq\text{UB}_{\text{ADMM-CSNet}}= \hat{L}^{2}\left[\hat{y}+\hat{a}^{(L-1)}\right] \tag{41}\] \[+\sum_{k=1}^{L-1}\hat{L}^{2L-2k+2}\left[\hat{y}+\hat{a}^{(k-1)} \right]\|\mathbf{v}_{s}^{T}W_{20}^{L}\|^{2}\prod_{l=k+1}^{L-1}\|W_{20}^{l}\|^{2},\]
where \(\hat{a}^{(l)}=\frac{\|\mathbf{x}^{(l)}-\mathbf{u}^{(l)}\|^{2}}{m},\ \forall l \in[L-1]\cup\{0\}.\)
## References
* [1] Y. C. Eldar and G. Kutyniok, _Compressed sensing: theory and applications_. Cambridge University Press, 2012.
* [2] D. Donoho, "Compressed sensing," _IEEE Trans. Inf. Theory_, vol. 52, no. 4, pp. 1289-1306, 2006.
* [3] N. Shlezinger, J. Whang, Y. C. Eldar, and A. G. Dimakis, "Model-Based Deep Learning," arXiv:2012.08405, 2020.
* [4] N. Shlezinger, J. Whang, Y. C. Eldar, and A. G. Dimakis, "Model-Based Deep Learning: Key Approaches and Design Guidelines," in _Proc. IEEE Data Sci. Learn. Workshop (DSLW)_, pp. 1-6, 2021.
* [5] K. Gregor and Y. LeCun, "Learning fast approximations of sparse coding," in _Proc. Int. Conf. Mach. Learn._, pp. 399-406, 2010.
* [6] V. Monga, Y. Li, and Y. C. Eldar, "Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing," _IEEE Signal Process. Mag._, vol. 38, no. 2, pp. 18-44, 2021.
* [7] Y. Yang, J. Sun, H. Li, and Z. Xu, "ADMM-CSNet: A Deep Learning Approach for Image Compressive Sensing," _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 42, no. 3, pp. 521-538, 2020.
* [8] Y. Li, M. Tofighi, J. Geng, V. Monga, and Y. C. Eldar, "Efficient and Interpretable Deep Blind Image Deblurring Via Algorithm Unrolling," _IEEE Trans. Comput. Imag._, vol. 6, pp. 666-681, 2020.
* [9] Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang, "Deep Networks for Image Super-Resolution With Sparse Prior," in _Proc. IEEE Int. Conf. Comput. Vis._, December 2015.
* [10] G. Dardikman-Joffe and Y. C. Eldar, "Learned SPARCOM: unfolded deep super-resolution microscopy," _Opt. Express_, vol. 28, pp. 27736-27763, Sep 2020.
* [11] O. Solomon, R. Cohen, Y. Zhang, Y. Yang, Q. He, J. Luo, R. J. G. van Sloun, and Y. C. Eldar, "Deep Unfolded Robust PCA With Application to Clutter Suppression in Ultrasound," _IEEE Trans. Med. Imag._, vol. 39, no. 4, pp. 1051-1063, 2020.
* [12] L. Zhang, G. Wang, and G. B. Giannakis, "Real-Time Power System State Estimation and Forecasting via Deep Unrolled Neural Networks," _IEEE Trans. Signal Process._, vol. 67, no. 15, pp. 4069-4077
* [26] C. Liu, L. Zhu, and M. Belkin, "Loss landscapes and optimization in over-parameterized non-linear systems and neural networks," _Appl. Comput. Harmon. Anal._, vol. 59, pp. 85-116, 2022.
* [27] S. S. Du, X. Zhai, B. Poczos, and A. Singh, "Gradient Descent Provably Optimizes Over-parameterized Neural Networks," in _Proc. Int. Conf. Learn. Represent._, 2019.
* [28] S. Du, J. Lee, H. Li, L. Wang, and X. Zhai, "Gradient Descent Finds Global Minima of Deep Neural Networks," in _Int. Conf. Mach. Learn._, vol. 97, pp. 1675-1685, PMLR, 09-15 Jun 2019.
* [29] Z. Allen-Zhu, Y. Li, and Z. Song, "A convergence theory for deep learning via over-parameterization," in _Proc. Int. Conf. Mach. Learn._, pp. 242-252, PMLR, 2019.
* [30] D. Zou, Y. Cao, D. Zhou, and Q. Gu, "Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks," _CoRR_, vol. abs/1811.08888, 2018.
* [31] C. Liu, L. Zhu, and M. Belkin, "On the linearity of large non-linear models: when and why the tangent kernel is constant," _Proc. Adv. Neural Inf. Process. Syst._, vol. 33, pp. 15954-15964, 2020.
* [32] D. Jakubovitz, R. Giryes, and M. R. Rodrigues, "Generalization error in deep learning," in _Compressed Sensing and Its Applications: Third International MATHENON Conference 2017_, pp. 153-193, Springer, 2019.
* [33] R. Tibshirani, "Regression Shrinkage and Selection via the Lasso," _J. Roy. Statist Soc. Ser. B (Methodol.)_, vol. 58, no. 1, pp. 267-288, 1996.
* [34] N. Parikh and S. Boyd, "Proximal Algorithms," _Found. Trends Optim._, vol. 1, no. 3, pp. 127-239, 2014.
* [35] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," _Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences_, vol. 57, no. 11, pp. 1413-1457, 2004.
* [36] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, _Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers_. 2011.
* [37] B. T. Polyak, "Gradient methods for minimizing functionals," _Z. Vycist. Mat. Fiz._, vol. 3, no. 4, pp. 643-653, 1963.
* [38] S. Łojasiewicz, "A topological property of real analytic subsets," _Coll. du CNRS, Les équations aux dérivées partielles_, vol. 117, pp. 87-89, 1963.
* [39] Y. Ben Sabel, J. P. Bryan, B. Cleary, S. L. Farhi, and Y. C. Eldar, "Deep Unrolled Recovery in Sparse Biological Imaging: Achieving fast, accurate results," _IEEE Signal Process. Mag._, vol. 39, no. 2, pp. 45-57, 2022.
* [40] A. M. Atto, D. Pastor, and G. Mercier, "Smooth sigmoid wavelet shrinkage for non-parametric estimation," in _Proc. IEEE Int. Conf. Acoust., Speech, Signal Process._, pp. 3265-3268, 2008.
* [41] X.-P. Zhang, "Thresholding neural network for adaptive noise reduction," _IEEE Trans. Neural Netw._, vol. 12, no. 3, pp. 567-584, 2001.
* [42] X.-P. Zhang, "Space-scale adaptive noise reduction in images based on thresholding neural network," in _Proc. IEEE Int. Conf. Acoust., Speech, Signal Process._, vol. 3, pp. 1889-1892 vol.3, 2001.
* [43] H. Pan, D. Badawi, and A. E. Cetin, "Fast Walsh-Hadamard Transform and Smooth-Thresholding Based Binary Layers in Deep Neural Networks," in _Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR)_, pp. 4650-4659, June 2021.
* [44] J. Youn, S. Ravindran, R. Wu, J. Li, and R. van Sloun, "Circular Convolutional Learned ISTA for Automotive Radar DOA Estimation," in _Proc. 19th Eur. Radar Conf. (EuRAD)_, pp. 273-276, 2022.
* [45] K. Kavukcuoglu, P. Sermanet, Y.-L. Boureau, K. Gregor, M. Mathieu, and Y. LeCun, "Learning Convolutional Feature Hierarchies for Visual Recognition," in _Proc. Adv. Neural Inf. Process. Syst._, vol. 23, Curran Associates, Inc., 2010.
* [46] R. Vershynin, "Introduction to the non-asymptotic analysis of random matrices," _arXiv:1011.3027_, 2010.
* [47] A. Jacot, F. Gabriel, and C. Hongler, "Neural Tangent Kernel: Convergence and Generalization in Neural Networks," in _Proc. Adv. Neural Inf. Process. Syst._, vol. 31, 2018.
* [48] J. Lee, L. Xiao, S. Schoenholz, Y. Bahri, R. Novak, J. Sohl-Dickstein, and J. Pennington, "Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent," in _Proc. Adv. Neural Inf. Process. Syst._, vol. 32, Curran Associates, Inc., 2019.
* [49] S. B. Shah, P. Pradhan, W. Pu, R. Rammaudio, M. R. D. Rodrigues, and Y. C. Eldar, "Supporting Material: Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding," _2023_.
**Supporting Material: Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding**
From Theorem 3, the Hessian spectral norm, \(\|\mathbf{H}\|_{2}\), of an \(L\)-layer unfolded ISTA (ADMM) network is bounded as
\[\begin{split}\|\mathbf{H}\|_{2}&\leq\sum_{s,l_{1},l_{ 2}}C_{1}\mathcal{Q}_{2,2,1}\left(f_{s}\right)\mathcal{Q}_{\infty}\left(f_{s} \right)\\ &\leq\sum_{s=1}^{m}C\mathcal{Q}_{2,2,1}\left(f_{s}\right) \mathcal{Q}_{\infty}\left(f_{s}\right),\end{split} \tag{41}\]
where the constant \(C_{1}\) depends on \(L\) and \(L_{\sigma}\), \(C=L^{2}C_{1}\),
\[\mathcal{Q}_{\infty}\left(f_{s}\right)=\max_{1\leq l\leq L}\left\{\left\|\frac {\partial f_{s}}{\partial\mathbf{g}^{l}}\right\|_{\infty}\right\}\text{ and } \tag{42}\]
\[\mathcal{Q}_{2,2,1}\left(f_{s}\right)=\max_{1\leq l_{1}\leq l_{2}<l_{3}\leq L }\Bigg{\{}\left\|\frac{\partial^{2}\mathbf{g}^{l_{1}}}{\partial\mathbf{w}^{l _{1}}}\right\|_{2,2,1},\left\|\frac{\partial\mathbf{g}^{l_{1}}}{\partial \mathbf{w}^{l_{1}}}\right\|\left\|\frac{\partial^{2}\mathbf{g}^{l_{2}}}{ \partial\mathbf{g}^{(l_{2}-1)}\partial\mathbf{w}^{l_{2}}}\right\|_{2,2,1}, \left\|\frac{\partial\mathbf{g}^{l_{1}}}{\partial\mathbf{w}^{l_{1}}}\right\| \left\|\frac{\partial\mathbf{g}^{l_{2}}}{\partial\mathbf{w}^{l_{2}}}\right\| \left\|\frac{\partial^{2}\mathbf{g}^{l_{3}}}{\left(\partial\mathbf{g}^{l_{3} -1}\right)^{2}}\right\|_{2,2,1}\Bigg{\}}. \tag{43}\]
Note that \(\mathbf{g}^{l}=\mathbf{x}^{l}\) for LISTA and \(\mathbf{g}^{l}=\mathbf{z}^{l}\) for ADMM-CSNet. Theorem 4 aims to provide bounds on \(\mathcal{Q}_{\infty}\left(f_{s}\right)\) and \(\mathcal{Q}_{2,2,1}\left(f_{s}\right)\). The proof of this theorem is divided into two parts: first, we prove the bound on \(Q_{2,2,1}\) for LISTA and ADMM-CSNet in sub-sections A and B, respectively; then, we prove the bound on \(Q_{\infty}\) for the two networks in sub-sections C and D, respectively. Here we denote by \(\|\cdot\|\) the \(l_{2}\)-norm for vectors and the spectral norm for matrices, and by \(\|\cdot\|_{F}\) the Frobenius norm of matrices.
### _Bound on \(Q_{2,2,1}\) For LISTA Network_
Consider an L-layer unfolded ISTA network with output
\[\begin{split}\mathbf{f}&=\frac{1}{\sqrt{m}} \mathbf{x}^{L},\text{ where }\\ \mathbf{x}^{l}&=\sigma(\tilde{\mathbf{x}}^{l})= \sigma\left(\frac{W_{1}^{l}}{\sqrt{n}}\mathbf{y}+\frac{W_{2}^{l}}{\sqrt{m}} \mathbf{x}^{l-1}\right)\ \in\mathbb{R}^{m},\ l\in[L].\end{split} \tag{44}\]
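A minimal NumPy sketch of the recursion (44) is given below; the zero initial estimate \(\mathbf{x}^{0}\), the soft-plus-based surrogate for \(\sigma(\cdot)\), and the layer sizes are illustrative assumptions, not part of the analysis.

```python
# Sketch of the LISTA forward pass (44) with assumed initialization.
import numpy as np

def lista_forward(y, x0, W1, W2, sigma):
    """W1: list of L arrays (m x n); W2: list of L arrays (m x m)."""
    n, m = y.shape[0], x0.shape[0]
    x = x0
    for W1_l, W2_l in zip(W1, W2):
        x = sigma(W1_l @ y / np.sqrt(n) + W2_l @ x / np.sqrt(m))
    return x / np.sqrt(m)                # network output f

rng = np.random.default_rng(0)
L, n, m = 4, 20, 100
W1 = [rng.normal(size=(m, n)) for _ in range(L)]
W2 = [rng.normal(size=(m, m)) for _ in range(L)]
# soft-plus-based smooth surrogate of soft-thresholding (lambda = 1)
sigma = lambda v: np.logaddexp(0, v - 1.0) - np.logaddexp(0, -v - 1.0)
f = lista_forward(rng.normal(size=n), np.zeros(m), W1, W2, sigma)
```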
Now the first derivatives of \(\mathbf{x}^{l}\) are
\[\left(\frac{\partial\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}}\right)_{i,j}= \frac{1}{\sqrt{m}}\sigma^{\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\left( W_{2}\right)_{i,j}^{l},\left(\frac{\partial\mathbf{x}^{l}}{\partial W_{1}^{l}} \right)_{i,jj^{\prime}}=\frac{1}{\sqrt{n}}\sigma^{\prime}\left(\tilde{\mathbf{ x}}_{i}^{l}\right)\mathbf{y}_{j^{\prime}}\mathbb{I}_{i=j},\left(\frac{ \partial\mathbf{x}^{l}}{\partial W_{2}^{l}}\right)_{i,jj^{\prime}}=\frac{1}{ \sqrt{m}}\sigma^{\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\mathbf{x}_{j^{ \prime}}^{l-1}\mathbb{I}_{i=j}.\]
By the definition of the spectral norm, \(\|A\|_{2}=\sup_{\|\mathbf{v}\|_{2}=1}\|A\mathbf{v}\|_{2}\), we have
\[\left\|\frac{\partial\mathbf{x}^{l}}{\partial W_{1}^{l}}\right\|^{2}=\sup_{\|V\|_{F}=1}\frac{1}{n}\sum_{i}\left(\sum_{j,j^{\prime}}\sigma^{\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\mathbf{y}_{j^{\prime}}V_{j,j^{\prime}}\mathbb{I}_{i=j}\right)^{2}=\sup_{\|V\|_{F}=1}\frac{1}{n}\left\|\Sigma^{\prime l}V\mathbf{y}\right\|^{2}\leq\frac{1}{n}\left\|\Sigma^{\prime l}\right\|^{2}\|\mathbf{y}\|^{2}\leq L_{\sigma}^{2}C_{y}^{2}=O(1),\]
where \(\Sigma^{\prime l}\) is a diagonal matrix with the diagonal entry \((\Sigma^{\prime l})_{ii}=\sigma^{\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\). Similarly,
\[\left\|\frac{\partial\mathbf{x}^{l}}{\partial W_{2}^{l}}\right\|^{2} =\sup_{\|V\|_{F}=1}\frac{1}{m}\sum_{i}\left(\sum_{j,j^{\prime}}\sigma^{\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\mathbf{x}_{j^{\prime}}^{l-1}V_{j,j^{\prime}}\mathbb{I}_{i=j}\right)^{2}=\sup_{\|V\|_{F}=1}\frac{1}{m}\left\|\Sigma^{\prime l}V\mathbf{x}^{l-1}\right\|^{2}\] \[\leq\frac{1}{m}L_{\sigma}^{2}\left\|\mathbf{x}^{l-1}\right\|^{2}\leq\frac{1}{m}L_{\sigma}^{2}\left(c_{\text{ISTA};\,\mathbf{x}}^{l-1}\right)^{2}=O(1).\]
Here we used \(c_{\text{ISTA};\,\mathbf{x}}^{l-1}=O(\sqrt{m})\) from Lemma 4.
\[\left\|\frac{\partial\mathbf{x}^{l}}{\partial W^{l}}\right\|=\left\|\left[\frac{ \partial\mathbf{x}^{l}}{\partial W_{1}^{l}}\quad\frac{\partial\mathbf{x}^{l}}{ \partial W_{2}^{l}}\right]\right\|\leq\left\|\frac{\partial\mathbf{x}^{l}}{ \partial W_{1}^{l}}\right\|+\left\|\frac{\partial\mathbf{x}^{l}}{\partial W_{2}^ {l}}\right\|=O(1)+O(1)=O(1). \tag{45}\]
The second-order derivatives of the vector-valued layer function \(\mathbf{x}^{l}\), which are order 3 tensors, have the following expressions:
\[\left(\frac{\partial^{2}\mathbf{x}^{l}}{\left(\partial\mathbf{x}^{l-1}\right)^{2}}\right)_{i,j,k}=\frac{1}{m}\sigma^{\prime\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\left(W_{2}\right)_{i,j}^{l}\left(W_{2}\right)_{i,k}^{l};\qquad\left(\frac{\partial^{2}\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}\partial W_{2}^{l}}\right)_{i,j,kk^{\prime}}=\frac{1}{m}\sigma^{\prime\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\left(W_{2}\right)_{i,j}^{l}\mathbf{x}_{k^{\prime}}^{l-1}\mathbb{I}_{i=k};\] \[\left(\frac{\partial^{2}\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}\partial W_{1}^{l}}\right)_{i,j,kk^{\prime}}=\frac{1}{\sqrt{mn}}\sigma^{\prime\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\left(W_{2}\right)_{i,j}^{l}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k};\qquad\left(\frac{\partial^{2}\mathbf{x}^{l}}{\left(\partial W_{2}^{l}\right)^{2}}\right)_{i,jj^{\prime},kk^{\prime}}=\frac{1}{m}\sigma^{\prime\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\mathbf{x}_{j^{\prime}}^{l-1}\mathbf{x}_{k^{\prime}}^{l-1}\mathbb{I}_{i=k=j};\] \[\left(\frac{\partial^{2}\mathbf{x}^{l}}{\partial W_{2}^{l}\partial W_{1}^{l}}\right)_{i,jj^{\prime},kk^{\prime}}=\frac{1}{\sqrt{mn}}\sigma^{\prime\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\mathbf{x}_{j^{\prime}}^{l-1}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k=j};\qquad\left(\frac{\partial^{2}\mathbf{x}^{l}}{\left(\partial W_{1}^{l}\right)^{2}}\right)_{i,jj^{\prime},kk^{\prime}}=\frac{1}{n}\sigma^{\prime\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\mathbf{y}_{j^{\prime}}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k=j}.\]
\[\left\|\frac{\partial^{2}\mathbf{x}^{l}}{\left(\partial\mathbf{x}^{l-1}\right)^{2}}\right\|_{2,2,1}=\sup_{\left\|\mathbf{v}_{1}\right\|=\left\|\mathbf{v}_{2}\right\|=1}\frac{1}{m}\sum_{i=1}^{m}\left|\sigma^{\prime\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\left(W_{2}^{l}\mathbf{v}_{1}\right)_{i}\left(W_{2}^{l}\mathbf{v}_{2}\right)_{i}\right|\leq\sup_{\left\|\mathbf{v}_{1}\right\|=\left\|\mathbf{v}_{2}\right\|=1}\frac{1}{m}\beta_{\sigma}\sum_{i=1}^{m}\left|\left(W_{2}^{l}\mathbf{v}_{1}\right)_{i}\left(W_{2}^{l}\mathbf{v}_{2}\right)_{i}\right| \tag{46}\] \[\leq\frac{1}{2m}\beta_{\sigma}\left(\left\|W_{2}^{l}\right\|^{2}+\left\|W_{2}^{l}\right\|^{2}\right)\leq\frac{\beta_{\sigma}}{m}\left(c_{20}\sqrt{m}+R_{2}\right)^{2}=O(1),\]
\[\left\|\frac{\partial^{2}\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1} \partial W_{1}^{l}}\right\|_{2,2,1}=\sup_{\left\|\mathbf{v}_{1}\right\|= \left\|\mathbf{v}_{2}\right\|_{p}=1}\frac{1}{\sqrt{mn}}\sum_{i=1}^{m}\left| \sigma^{\prime\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\right)\left(W_{2}^{l} \mathbf{v}_{1}\right)_{i}\left(V_{2}\mathbf{y}\right)_{i}\right|\leq\sup_{ \left\|\mathbf{v}_{1}\right\|=\left\|V_{2}\right\|_{p}=1}\frac{1}{2m}\beta_{ \sigma}\left(\left\|W_{2}^{l}\mathbf{v}_{1}\right\|^{2}+\left\|V_{2}\mathbf{ y}\right\|^{2}\right)\] \[\leq\frac{1}{2\sqrt{mn}}\beta_{\sigma}\left(\left\|W_{2}^{l} \right\|^{2}+\left\|\mathbf{y}\right\|^{2}\right)\leq\sqrt{\frac{m}{4n}}\beta_{ \sigma}\left(c_{20}+\frac{R_{2}}{\sqrt{m}}\right)^{2}+\sqrt{\frac{n}{4m}}\beta_ {\sigma}C_{y}^{2}=O(1),\]
\[\left\|\frac{\partial^{2}\mathbf{x}^{l}}{\left(\partial W_{2} ^{l}\right)^{2}}\right\|_{2,2,1}=\sup_{\left\|V_{1}\right\|_{p}=\left\|V_{2} \right\|_{p}=1}\frac{1}{m}\sum_{i=1}^{m}\left|\sigma^{\prime\prime}\left( \tilde{\mathbf{x}}_{i}^{l}\right)\left(V_{1}\mathbf{x}^{l-1}\right)_{i}\left(V_ {2}\mathbf{x}^{l-1}\right)_{i}\right|\leq\sup_{\left\|V_{1}\right\|_{F}=\left\|V _{2}\right\|_{F}=1}\frac{1}{2m}\beta_{\sigma}\left(\left\|V_{1}\mathbf{x}^{l-1 }\right\|^{2}+\left\|V_{2}\mathbf{x}^{l-1}\right\|^{2}\right)\] \[\leq\frac{1}{2m}\beta_{\sigma}\left(\left\|\mathbf{x}^{l-1} \right\|^{2}+\left\|\mathbf{x}^{l-1}\right\|^{2}\right)\leq\frac{1}{m}\beta_{ \sigma}\left(c_{\text{ISTA};\mathbf{x}}^{l-1}\right)^{2}=O(1),\]
\[\left\|\frac{\partial^{2}\mathbf{x}^{l}}{\partial W_{2}^{l}\partial W _{1}^{l}}\right\|_{2,2,1}=\sup_{\left\|V_{1}\right\|_{F}=\left\|V_{2}\right\|_{F }=1}\frac{1}{\sqrt{mn}}\sum_{i=1}^{m}\left|\sigma^{\prime\prime}\left(\tilde{ \mathbf{x}}_{i}^{l}\right)\left(V_{1}\mathbf{x}_{i}^{l-1}\right)_{i}\left(V_{2} \mathbf{y}\right)_{i}\right|\leq\sup_{\left\|V_{1}\right\|_{F}=\left\|V_{2} \right\|_{F}=1}\frac{1}{2\sqrt{mn}}\beta_{\sigma}\left(\left\|V_{1}\mathbf{x}^{l- 1}\right\|^{2}+\left\|V_{2}\mathbf{y}\right\|^{2}\right)\] \[\leq\frac{1}{2\sqrt{mn}}\beta_{\sigma}\left(\left\|\mathbf{x}^{l- 1}\right\|^{2}+\left\|\mathbf{y}\right\|^{2}\right)\leq\frac{\beta_{\sigma}}{2 \sqrt{mn}}\left(c_{\text{ISTA};\mathbf{x}}^{l-1}\right)^{2}+\sqrt{\frac{n}{4m}} \beta_{\sigma}C_{y}^{2}=O(1),\]
\[\left\|\frac{\partial^{2}\mathbf{x}^{l}}{\left(\partial W_{1} ^{l}\right)^{2}}\right\|_{2,2,1}=\sup_{\left\|V_{1}\right\|_{F}=\left\|V_{2} \right\|_{F}=1}\frac{1}{n}\sum_{i=1}^{m}\left|\sigma^{\prime\prime}\left( \tilde{\mathbf{x}}_{i}^{l}\right)\left(V_{1}\mathbf{y}_{1}\right)_{i}\left(V_ {2}\mathbf{y}\right)_{i}\right|\leq\sup_{\left\|V_{1}\right\|_{F}=\left\|V_{2} \right\|_{F}=1}\frac{1}{2n}\beta_{\sigma}\left(\left\|V_{1}\mathbf{x}^{l-1 }\right\|^{2}+\left\|V_{2}\mathbf{y}\right\|^{2}\right)\] \[\leq\frac{1}{2n}\beta_{\sigma}\left(\left\|\mathbf{y}\right\|^{2}+ \left\|\mathbf{y}\right\|^{2}\right)=\beta_{\sigma}C_{y}^{2}=O(1),\]
\[\left\|\frac{\partial^{2}\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}\partial W^{l}}\right\|_{2,2,1}=\left\|\left[\ \frac{\partial^{2}\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}\partial W_{1}^{l}}\ \ \ \frac{\partial^{2}\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}\partial W_{2}^{l}}\ \right]\right\|_{2,2,1}\leq\left\|\frac{\partial^{2}\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}\partial W_{1}^{l}}\right\|_{2,2,1}+\left\|\frac{\partial^{2}\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}\partial W_{2}^{l}}\right\|_{2,2,1}=O(1).\]
The \((2,2,1)\)-norm of \(\partial^{2}\mathbf{x}^{l}/\left(\partial W^{l}\right)^{2}\) is bounded in the same way by the sum of the \((2,2,1)\)-norms of its blocks, and is therefore also \(O(1)\).
### _Bound on \(Q_{2,2,1}\) For ADMM-CSNet_
Consider an L-layered ADMM-CSNet as
\[\begin{split}\mathbf{f}&=\frac{1}{\sqrt{m}}\mathbf{z}^ {L};\\ \mathbf{z}^{l}&=\sigma\left(\tilde{\mathbf{z}}^{l} \right)=\sigma\left(\mathbf{x}^{l}+\mathbf{u}^{l-1}\right),\\ \mathbf{x}^{l}&=\frac{1}{\sqrt{n}}W_{1}^{l}\mathbf{y }+\frac{1}{\sqrt{m}}W_{2}^{l}\left(\mathbf{z}^{l-1}-\mathbf{u}^{l-1}\right), \\ \mathbf{u}^{l}&=\mathbf{u}^{l-1}+\left(\mathbf{x}^{l }-\mathbf{z}^{l}\right).\end{split} \tag{49}\]
where \(\mathbf{f}\) is the output of the network. Now the first derivatives of \(\mathbf{z}^{l}\) are
\[\begin{split}&\left(\frac{\partial\mathbf{z}^{l}}{\partial \mathbf{z}^{l-1}}\right)_{i,j}=\left(\frac{\partial\mathbf{z}^{l}}{\partial \mathbf{z}^{l-1}}\right)_{i,j}+\left(\frac{\partial\mathbf{z}^{l}}{\partial \mathbf{u}^{l-1}}\frac{\partial\mathbf{u}^{l-1}}{\partial\mathbf{z}^{l-1}} \right)_{i,j}=\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)\left( \frac{2}{\sqrt{m}}W_{2}^{l}-I\right)_{i,j};\\ &\left(\frac{\partial\mathbf{z}^{l}}{\partial W_{1}^{l}}\right)_ {i,jj^{\prime}}=\frac{1}{\sqrt{n}}\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^ {l}\right)\mathbf{y}_{j^{\prime}}\mathbb{I}_{i=j};\left(\frac{\partial\mathbf{ z}^{l}}{\partial W_{2}^{l}}\right)_{i,jj^{\prime}}=\frac{1}{\sqrt{m}}\sigma^{ \prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)(\mathbf{z}^{l-1}-\mathbf{u}^{l- 1})_{j^{\prime}}\mathbb{I}_{i=j}.\end{split}\]
Now, we have
\[\left\|\frac{\partial\mathbf{z}^{l}}{\partial W_{1}^{l}}\right\|_{2}^{2}=\sup_{ \left\|V\right\|_{F}=1}\frac{1}{n}\sum_{i=1}^{m}\left(\sum_{j,j^{\prime}} \sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)\mathbf{y}_{j^{\prime}} \mathbb{I}_{i=j}V_{jj^{\prime}}\right)^{2}=\sup_{\left\|V\right\|_{F}=1}\frac{ 1}{n}\left\|\Sigma^{\prime l}V\mathbf{y}\right\|^{2}\leq\frac{1}{n}\left\| \Sigma^{\prime l}\right\|^{2}\left\|\mathbf{y}\right\|^{2}\leq L_{\sigma}^{2}C _{y}^{2}=O(1),\]
where \(\Sigma^{\prime l}\) is a diagonal matrix with the diagonal entry \((\Sigma^{\prime l})_{ii}=\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)\).
\[\begin{split}\left\|\frac{\partial\mathbf{z}^{l}}{\partial W_{2}^{l}}\right\|_{2}^{2}&=\sup_{\left\|V\right\|_{F}=1}\frac{1}{m}\sum_{i=1}^{m}\left(\sum_{j,j^{\prime}}\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)(\mathbf{z}^{l-1}-\mathbf{u}^{l-1})_{j^{\prime}}\mathbb{I}_{i=j}V_{jj^{\prime}}\right)^{2}=\sup_{\left\|V\right\|_{F}=1}\frac{1}{m}\left\|\Sigma^{\prime l}V(\mathbf{z}^{l-1}-\mathbf{u}^{l-1})\right\|^{2}\\ &\leq\frac{1}{m}\left\|\Sigma^{\prime l}\right\|^{2}\left\|\mathbf{z}^{l-1}\right\|^{2}+\frac{1}{m}\left\|\Sigma^{\prime l}\right\|^{2}\left\|\mathbf{u}^{l-1}\right\|^{2}\leq\frac{1}{m}L_{\sigma}^{2}\left(\left(c_{\mathrm{ADMM};\mathbf{z}}^{l-1}\right)^{2}+\left(c_{\mathrm{ADMM};\mathbf{u}}^{l-1}\right)^{2}\right)=O(1).\end{split}\]
From Lemma 4 we used \(c_{\mathrm{ADMM};\,\mathbf{z}}^{l-1}=O(\sqrt{m})\) and \(c_{\mathrm{ADMM};\,\mathbf{u}}^{l-1}=O(\sqrt{m})\). Therefore
\[\left\|\frac{\partial\mathbf{z}^{l}}{\partial W^{l}}\right\|=\left\|\left[\frac{ \partial\mathbf{z}^{l}}{\partial W_{1}^{l}}\quad\frac{\partial\mathbf{z}^{l}}{ \partial W_{2}^{l}}\right]\right\|\leq\left\|\frac{\partial\mathbf{z}^{l}}{ \partial W_{1}^{l}}\right\|+\left\|\frac{\partial\mathbf{z}^{l}}{\partial W_{2} ^{l}}\right\|=O(1)+O(1)=O(1). \tag{50}\]
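For reference, the recursion (49) on which these derivatives are based can be implemented in a few lines; the sketch below assumes zero initial \(\mathbf{z}^{0}\), \(\mathbf{u}^{0}\), a soft-plus-based surrogate for \(\sigma(\cdot)\), and illustrative layer sizes.

```python
# Sketch of the ADMM-CSNet forward pass (49) with assumed initialization.
import numpy as np

def admm_csnet_forward(y, z0, u0, W1, W2, sigma):
    """W1: list of L arrays (m x n); W2: list of L arrays (m x m)."""
    n, m = y.shape[0], z0.shape[0]
    z, u = z0, u0
    for W1_l, W2_l in zip(W1, W2):
        x = W1_l @ y / np.sqrt(n) + W2_l @ (z - u) / np.sqrt(m)
        z = sigma(x + u)
        u = u + (x - z)
    return z / np.sqrt(m)                # network output f

rng = np.random.default_rng(1)
L, n, m = 4, 20, 100
W1 = [rng.normal(size=(m, n)) for _ in range(L)]
W2 = [rng.normal(size=(m, m)) for _ in range(L)]
sigma = lambda v: np.logaddexp(0, v - 1.0) - np.logaddexp(0, -v - 1.0)
f = admm_csnet_forward(rng.normal(size=n), np.zeros(m), np.zeros(m), W1, W2, sigma)
```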
The second derivatives of the vector-valued layer function \(\mathbf{z}^{l}\), which are order 3 tensors, have the following expressions:
\[\begin{split}&\left(\frac{\partial^{2}\mathbf{z}^{l}}{\left( \partial\mathbf{z}^{l-1}\right)^{2}}\right)_{i,j,k}=\sigma^{\prime\prime}\left( \tilde{\mathbf{z}}_{i}^{l}\right)\left(\frac{2}{\sqrt{m}}W_{2}^{l}-I\right)_{i,j }\left(\frac{2}{\sqrt{m}}W_{2}^{l}-I\right)_{i,k};\\ &\left(\frac{\partial^{2}\mathbf{z}^{l}}{\partial\mathbf{z}^{l-1} \partial W_{2}^{l}}\right)_{i,j,kk^{\prime}}=\frac{1}{\sqrt{m}}\sigma^{\prime \prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)\left(\frac{2}{\sqrt{m}}W_{2}^{l }-I\right)_{ij}(\mathbf{z}^{l-1}-\mathbf{u}^{l-1})_{k^{\prime}}\mathbb{I}_{i=k} +\frac{2}{\sqrt{m}}\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)\mathbb{I}_ {i=k}\mathbb{I}_{j=k^{\prime}};\\ &\left(\frac{\partial^{2}\mathbf{z}^{l}}{\partial\mathbf{z}^{l-1} \partial W_{1}^{l}}\right)_{i,j,kk^{\prime}}=\frac{1}{\sqrt{m}}\sigma^{\prime \prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)\left(\frac{2}{\sqrt{m}}W_{2}^{l}-I \right)_{ij}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k};\\ &\left(\frac{\partial^{2}\mathbf{z}^{l}}{\left(\partial W_{2}^{l} \right)^{2}}\right)_{i,j^{\prime},kk^{\prime}}=\frac{1}{m}\sigma^{\prime\prime} \left(\tilde{\mathbf{z}}_{i}^{l}\right)(\mathbf{z}^{l-1}-\mathbf{u}^{l-1})_{j ^{\prime}}(\mathbf{z}^{l-1}-\mathbf{u}^{l-1})_{k^{\prime}}\mathbb{I}_{i=k=j};\\ &\left(\frac{\partial^{2}\mathbf{z}^{l}}{\partial W_{2}^{l}\partial W _{1}^{l}}\right)_{i,jj^{\prime},kk^{\prime}}=\frac{1}{\sqrt{mn}}\sigma^{\prime \prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)(\mathbf{z}^{l-1}-\mathbf{u}^{l-1 })_{j^{\prime}}\mathbf{y}_{k^{\prime}}\mathbb{I}_{i=k=j};\\ &\left(\frac{\partial^{2}\mathbf{z}^{l}}{\left(\partial W_{1}^{l} \right)^{2}}\right)_{i,jj^{\prime},kk^{\prime}}=\frac{1}{n}\sigma^{\prime \prime}\left(\tilde{\mathbf{z}}_{i}^{l}\right)\mathbf{y}_{j^{\prime}}\mathbf{y}_ {k^{\prime}}\mathbb{I}_{i=k=j};\end{split} \tag{51}\]
\[\left\|\frac{\partial^{2}\mathbf{z}^{l}}{\left(\partial\mathbf{z}^{l- 1}\right)^{2}}\right\|_{2,2,1} =\sup_{\left\|\mathbf{v}_{1}\right\|=\left\|\mathbf{v}_{2}\right\| =1}\sum_{i=1}^{m}\left|\sigma^{\prime\prime}\left(\tilde{\mathbf{z}}_{i}^{l} \right)\left(\left(\frac{2}{\sqrt{m}}W_{2}^{l}-I\right)\mathbf{v}_{1}\right)_ {i}\left(\left(\frac{2}{\sqrt{m}}W_{2}^{l}-I\right)\mathbf{v}_{2}\right)_{i}\right|\] \[\leq\sup_{\left\|\mathbf{v}_{1}\right\|=\left\|\mathbf{v}_{2} \right\|=1}\frac{1}{2}\beta_{\sigma}\sum_{i=1}^{m}\left(\left\|V_{1}(\mathbf{z }^{l-1}-\mathbf{u}^{l-1})\right\|^{2}+\left\|V_{2}(\mathbf{z}^{l-1}-\mathbf{ u}^{l-1})\right\|^{2}\right)\] \[\leq\frac{1}{2m}\beta_{\sigma}\left(\left\|\mathbf{z}^{l-1} \right\|^{2}+\left\|\mathbf{u}^{l-1}\right\|^{2}+\left\|\mathbf{v}\right\|^{2 }\right)\leq\frac{1}{2\sqrt{mn}}\beta_{\sigma}\left(nC_{y}^{2}+\left(c_{ \mathrm{ADMM}\,;\mathbf{z}}^{l-1}\right)^{2}+\left(c_{\mathrm{ADMM}\,; \mathbf{u}}^{l-1}\right)^{2}\right)=O(1),\]
\[\left\|\frac{\partial^{2}\mathbf{z}^{l}}{\left(\partial W_{1}^{ l}\right)^{2}}\right\|_{2,2,1} =\sup_{\left\|V_{1}\right\|_{F}=\left\|V_{2}\right\|_{F}=1}\frac{ 1}{n}\sum_{i=1}^{m}\left|\sigma^{\prime\prime}\left(\tilde{\mathbf{z}}_{i}^{l} \right)\left(V_{1}\mathbf{y}\right)_{i}\left(V_{2}\mathbf{y}\right)_{i}\right|\] \[\leq\frac{1}{2n}\beta_{\sigma}\left(\left\|\mathbf{y}\right\|^{2 }+\left\|\mathbf{y}\right\|^{2}\right)\leq\beta_{\sigma}C_{y}^{2}=O(1),\]
\[\left\|\frac{\partial^{2}\mathbf{z}^{l}}{\left(\partial W^{l} \right)^{2}}\right\|_{2,1} =\left\|\left[\begin{array}{cc}\partial^{2}\mathbf{z}^{l}/\left( \partial W_{1}^{l}\right)^{2}&\partial^{2}\mathbf{z}^{l}/\partial W_{1}^{l} \partial W_{2}^{l}\\ \partial^{2}\mathbf{z}^{l}/\partial W_{1}^{l}\partial W_{2}^{l}&\partial^{2} \mathbf{z}^{l}/\left(\partial W_{2}^{l}\right)^{2}\end{array}\right]\right\|_ {2,2,1}\leq\left\|\frac{\partial^{2}\mathbf{z}^{l}}{\left(\partial W_{1}^{l} \right)^{2}}\right\|_{2,2,1}+2\left\|\frac{\partial^{2}\mathbf{z}^{l}}{\partial W _{1}^{l}\partial W_{2}^{l}}\right\|_{2,2,1}+\left\|\frac{\partial^{2}\mathbf{z }^{l}}{\left(\partial W_{2}^{l}\right)^{2}}\right\|_{2,2,1} \tag{54}\]
Therefore, from (50), (52), (53), and (54), we get that \(\mathcal{Q}_{2,2,1}(f_{s})=O(1)\), for all \(s\in[m]\).
### _Bound on \(Q_{\infty}\) For LISTA Network_
Let \(\mathbf{b}_{s}^{l}=\frac{\partial f_{s}}{\partial\mathbf{x}^{l}}\), then \(\mathcal{Q}_{\infty}\left(f_{s}\right)=\max_{1\leq l\leq L}\left\{\left\| \mathbf{b}_{s}^{l}\right\|_{\infty}\right\}\). We now compute bound on \(\left\|\mathbf{b}_{s}^{l}\right\|_{\infty}\). From triangle inequality, we can write
\[\left\|\mathbf{b}_{s}^{l}\right\|_{\infty}\leq\left\|\mathbf{b}_{s,0}^{l} \right\|_{\infty}+\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|_{ \infty}\leq\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}+\left\|\mathbf{b}_{s} ^{l}-\mathbf{b}_{s,0}^{l}\right\|. \tag{55}\]
where \(\mathbf{b}_{s,0}^{l}\) is \(\mathbf{b}_{s}^{l}\) at initialization. Therefore, one can obtain the bound on \(\left\|\mathbf{b}_{s}^{l}\right\|_{\infty}\) by computing the bounds on \(\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\) and \(\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|\), which are provided in Lemma 7 and Lemma 8, respectively. Moreover, in order to compute the bound on \(\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\), we require several lemmas which are stated below. Specifically, Lemma 5 and Lemma 6 provide the bound on each component of the hidden layer's output at initialization and the bound on the \(l_{2}\)-norm of \(\mathbf{b}_{s}^{l},\ l\in[L]\), respectively.
**Lemma 5**.: _For any \(l\in[L]\) and \(i\in[m]\), we have \(\left|\mathbf{x}_{i}^{l}\right|\leq\ln(m)+\left|\sigma(0)\right|\) at initialization with probability at least \(1-2e^{-c_{\mathbf{x}}^{l}ln^{2}(m)}\) for some constant \(c_{\mathbf{x}}^{l}>0\)._
Proof.: From (44), \(\mathbf{x}_{i}^{l}=\sigma\left(\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}+\frac{1}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l}\right)_{ik}\mathbf{x}_{k}^{l-1}\right)\).
As \(\left(W_{1}^{l}\right)_{ik}\sim\mathcal{N}(0,1)\) and \(\left(W_{2}^{l}\right)_{ik}\sim\mathcal{N}(0,1)\), so that \(\sum_{k=1}^{n}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}\sim\mathcal{N}\left(0, \left\|\mathbf{y}\right\|^{2}\right)\) and \(\sum_{k=1}^{m}\left(W_{2}^{l}\right)_{ik}\mathbf{x}_{k}^{l-1}\sim\mathcal{N} \left(0,\left\|\mathbf{x}^{l-1}\right\|^{2}\right)\). In addition, since \(\left(W_{1}^{l}\right)_{ik}\) and \(\left(W_{2}^{l}\right)_{ik}\) are independent, \(\sum_{k=1}^{n}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}+\sum_{k=1}^{m}\left(W_{ 2}^{l}\right)_{ik}\mathbf{x}_{k}^{l-1}\sim\mathcal{N}\left(0,\left\|\mathbf{y }\right\|^{2}+\left\|\mathbf{x}^{l-1}\right\|^{2}\right)\). Using the concentration inequality of a Gaussian random variable, we obtain
\[\Pr\left[\left|\mathbf{x}_{i}^{l}\right|\geq\ln(m)+\left|\sigma(0)\right|\right]\leq\Pr\left[\left|\frac{L_{\sigma}}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l}\right)_{ik}\mathbf{x}_{k}^{l-1}+\frac{L_{\sigma}}{\sqrt{n}}\sum_{k=1}^{n}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}\right|\geq\ln(m)\right]\leq 2e^{-\frac{m\ln^{2}(m)}{2L_{\sigma}^{2}\left(\left\|\mathbf{y}\right\|^{2}+\left\|\mathbf{x}^{l-1}\right\|^{2}\right)}}.\]
This implies,
\[\Pr\left[\left|\mathbf{x}_{i}^{l}\right|\leq\ln(m)+\left|\sigma(0)\right|\right]\geq 1-2e^{-\frac{m\ln^{2}(m)}{2L_{\sigma}^{2}\left(\left\|\mathbf{y}\right\|^{2}+\left\|\mathbf{x}^{l-1}\right\|^{2}\right)}}=1-2e^{-c_{\mathbf{x}}^{l}\ln^{2}(m)},\ \forall l\in[L], \tag{56}\]
where \(c_{\mathbf{x}}^{l}=\frac{m}{2L_{\sigma}^{2}\left(\left\|\mathbf{y}\right\|^{2}+\left\|\mathbf{x}^{l-1}\right\|^{2}\right)}>0\).
**Lemma 6**.: _Consider an \(L\)-layer LISTA network with \(\left(W_{10}^{l}\right)_{i,j}\sim\mathcal{N}(0,1)\) and \(\left(W_{20}^{l}\right)_{i,j}\sim\mathcal{N}(0,1)\), \(\forall l\in[L]\), then, for any \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) such that \(\left\|\mathbf{W}_{1}-\mathbf{W}_{10}\right\|\leq R_{1}\) and \(\left\|\mathbf{W}_{2}-\mathbf{W}_{20}\right\|\leq R_{2}\), we have,_
\[\left\|\mathbf{b}_{s}^{l}\right\|\leq L_{\sigma}^{L-l}\left(c_{20}+R_{2}/\sqrt{m}\right)^{L-l},\ l\in[L]. \tag{57}\]
_From this at initialization, i.e., for \(R_{2}=0\), we get_
\[\left\|\mathbf{b}_{s,0}^{l}\right\|\leq L_{\sigma}^{L-l}c_{20}^{L-l}. \tag{58}\]
Proof.: We prove this lemma by using induction on \(l\). Initially, for \(l=L\), we have
\[\left\|\mathbf{b}_{s}^{L}\right\|=\left\|\frac{\partial f_{s}}{\partial \mathbf{x}^{L}}\right\|=(1/\sqrt{m})\left\|\mathbf{v}_{s}\right\|=1/\sqrt{m}<1.\]
That is, the inequality in (57) holds true for \(l=L\). Assume that at the \(l^{th}\) layer the inequality holds, i.e., \(\left\|\mathbf{b}_{s}^{l}\right\|\leq L_{\sigma}^{L-l}\left(c_{20}+R_{2}/\sqrt{m}\right)^{L-l}\); then below we prove that (57) holds true even for the \((l-1)^{th}\) layer:
\[\left\|\mathbf{b}_{s}^{l-1}\right\| =\left\|\frac{\partial f_{s}}{\partial\mathbf{x}^{l-1}}\right\| =\left\|\frac{\partial\mathbf{x}^{l}}{\partial\mathbf{x}^{l-1}}\frac{ \partial f_{s}}{\partial\mathbf{x}^{l}}\right\|=\left\|\frac{1}{\sqrt{m}} \left(W_{2}^{l}\right)^{T}\Sigma^{\prime l}\mathbf{b}_{s}^{l}\right\|\leq \frac{1}{\sqrt{m}}\left\|W_{2}^{l}\right\|\left\|\Sigma^{\prime l}\right\| \left\|\mathbf{b}_{s}^{l}\right\|\] \[\leq\left(c_{20}+R_{2}/\sqrt{m}\right)L_{\sigma}\left\|\mathbf{b }_{s}^{l}\right\|\leq\left(c_{20}+R_{2}/\sqrt{m}\right)^{L-l+1}L_{\sigma}^{L-l+ 1}.\]
So, from the above analysis, we claim that the inequality in (57) holds true for any \(l\in[L]\). Now, at initialization, i.e., substituting \(R_{2}=0\) in (57) directly leads to (58).
As mentioned earlier, we now use Lemma 5 and Lemma 6 to provide bound on \(\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\).
**Lemma 7**.: _At initialization, the \(\infty\)-norm of \(\mathbf{b}_{s}^{l}\) is in \(\tilde{O}(1/\sqrt{m})\) with probability \(1-me^{-c_{bs}^{l}\ln^{2}(m)}\) for some constant \(c_{bs}^{l}>0,\) i.e.,_
\[\|\mathbf{b}_{s,0}^{l}\|_{\infty}=\tilde{O}\left(\frac{1}{\sqrt{m}}\right). \tag{59}\]
Proof.: We prove this lemma by induction. Before proceeding, let us denote \(\mathbf{s}^{l}=\mathbf{b}_{s,0}^{l}\). Initially, for \(l=L\), we have
\[\left\|\mathbf{s}^{L}\right\|_{\infty}=1/\sqrt{m}\left\|\mathbf{v}_{s}\right\| _{\infty}=O(1/\sqrt{m}).\]
This implies that (59) holds true for \(l=L\). Suppose that, at the \(l^{th}\) layer, \(\left\|\mathbf{s}^{l}\right\|_{\infty}=\tilde{O}(\frac{1}{\sqrt{m}})\) holds with probability at least \(1-me^{-c_{bs}^{l}\ln^{2}(m)}\) for some constant \(c_{bs}^{l}>0\). We now prove that (59) is valid for the \((l-1)^{th}\) layer as well, with probability at least \(1-me^{-c_{bs}^{l-1}\ln^{2}(m)}\) for some constant \(c_{bs}^{l-1}>0\). In particular, the absolute value of the \(i^{th}\) component \(\mathbf{s}_{i}^{l-1}\) is bounded as
\[\left|\mathbf{s}_{i}^{l-1}\right|=\left|\frac{1}{\sqrt{m}}\sum_{ k=1}^{m}\left(W_{2}^{l-1}\right)_{ki}\sigma^{\prime}\left(\frac{1}{\sqrt{m}} \sum_{j=1}^{m}\left(W_{2}^{l-1}\right)_{kj}\mathbf{x}_{j}^{l-2}+\frac{1}{\sqrt {n}}\sum_{j=1}^{n}\left(W_{1}^{l-1}\right)_{kj}\mathbf{y}_{j}\right)\mathbf{s }_{k}^{l}\right|\] \[\leq\left|\frac{1}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l-1} \right)_{ki}\sigma^{\prime}\left(\frac{1}{\sqrt{m}}\sum_{j\neq i}^{m}\left(W_ {2}^{l-1}\right)_{kj}\mathbf{x}_{j}^{l-2}+\frac{1}{\sqrt{n}}\sum_{j\neq i}^{n} \left(W_{1}^{l-1}\right)_{kj}\mathbf{y}_{j}\right)\mathbf{s}_{k}^{l}\right|\] \[+\left|\frac{1}{m}\beta_{\sigma}\mathbf{x}_{i}^{l-2}\sum_{k=1}^{ m}\left(\left(W_{2}^{l-1}\right)_{ki}\right)^{2}\mathbf{s}_{k}^{l}\right|+\left| \frac{1}{\sqrt{m}\sqrt{n}}\beta_{\sigma}\mathbf{y}_{i}\sum_{k=1}^{m}\left(W_{ 1}^{l-1}\right)_{ki}\left(W_{2}^{l-1}\right)_{ki}\mathbf{s}_{k}^{l}\right|\] \[=\left|T_{1}\right|+\left|T_{2}\right|+\left|T_{3}\right|.\]
Now, we provide bounds on the terms (\(T_{1},T_{2}\), and \(T_{3}\)) individually:
\[T_{1}= \frac{1}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l-1}\right)_{ki} \sigma^{\prime}\left(\frac{1}{\sqrt{m}}\sum_{j\neq i}^{m}\left(W_{2}^{l-1} \right)_{kj}\mathbf{x}_{j}^{l-2}+\frac{1}{\sqrt{n}}\sum_{j\neq i}^{m}\left(W_ {1}^{l-1}\right)_{kj}\mathbf{y}_{j}\right)\mathbf{s}_{k}^{l}\] \[\leq \frac{L_{\sigma}}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l-1}\right) _{ki}\mathbf{s}_{k}^{l}\sim\mathcal{N}\left(0,\frac{L_{\sigma}^{2}}{m}\left\| \mathbf{s}^{l}\right\|^{2}\right),\]
\[T_{2}\leq\frac{1}{m}\beta_{\sigma}\left|\mathbf{x}_{i}^{l-2}\right|\left\|\mathbf{s}^{l}\right\|_{\infty}\sum_{k=1}^{m}\left(\left(W_{2}^{l-1}\right)_{ki}\right)^{2},\qquad T_{3}\leq\frac{1}{\sqrt{mn}}\beta_{\sigma}\left|\mathbf{y}_{i}\right|\left\|\mathbf{s}^{l}\right\|_{\infty}\left|\sum_{k=1}^{m}\left(W_{1}^{l-1}\right)_{ki}\left(W_{2}^{l-1}\right)_{ki}\right|,\]
where \(\sum_{k=1}^{m}\left(\left(W_{2}^{l-1}\right)_{ki}\right)^{2}\sim\chi^{2}(m)\), \(\sum_{k=1}^{m}\left(W_{1}^{l-1}\right)_{ki}\left(W_{2}^{l-1}\right)_{ki}\sim \chi^{2}(m)\), and \(\chi^{2}(m)\) denotes the chi-square distribution with degree \(m\). By using the concentration inequality on the derived \(T_{1}\) bound, we obtain
\[\Pr\left[\left|\frac{L_{\sigma}}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l-1} \right)_{ki}\mathbf{s}_{k}^{l}\right|\geq\frac{\ln(m)}{\sqrt{m}}\right]\leq 2e^{- \frac{\ln^{2}(m)}{2L_{\sigma}^{2}\left\|\mathbf{s}^{l}\right\|^{2}}}\leq 2e^{-c_{ \sigma}^{l}\ln^{2}(m)}. \tag{60}\]
Substituting the bound of \(\left\|\mathbf{s}^{l}\right\|\), obtained from Lemma (6), in the above inequality leads to \(c_{\sigma}^{l}=1/\left(2L_{\sigma}^{2}\left\|\mathbf{s}^{l}\right\|^{2}\right) \geq 1/\left(2L_{\sigma}^{2L-2l+2}c_{20}^{2L-2l}\right)\). From Lemma 1 in [50], there exist constants \(\tilde{c}_{1},\tilde{c}_{2}\), and \(\tilde{c}_{3}>0\), such that
\[\Pr\left[\left|\frac{1}{m}\beta_{\sigma}|\mathbf{x}_{i}^{l-2}|\left\|\mathbf{s}^{l}\right\|_{\infty}\sum_{k=1}^{m}\left(\left(W_{2}^{l-1}\right)_{ki}\right)^{2}\right|\geq\tilde{c}_{1}\frac{\ln^{2}(m)}{\sqrt{m}}\right]\leq e^{-\tilde{c}_{2}m}. \tag{61}\]
Here, by using Lemma 5, we can write \(\left|\mathbf{x}_{i}^{l-2}\right|\leq\ln(m)+\left|\sigma(0)\right|\) with probability at least \(1-2e^{-c_{\mathbf{x}}^{l-2}\ln^{2}(m)}\), and by the induction hypothesis we have \(\left\|\mathbf{s}^{l}\right\|_{\infty}=\tilde{O}(1/\sqrt{m})\) with probability \(1-me^{-c_{bs}^{l}\ln^{2}(m)}\). Similarly, there exist constants \(\hat{c}_{1}\), \(\hat{c}_{2}\), and \(\hat{c}_{3}>0\), such that
\[\Pr\left[\left|\frac{1}{\sqrt{mn}}\beta_{\sigma}|\mathbf{y}_{i}|\left\|\mathbf{s}^{l}\right\|_{\infty}\sum_{k=1}^{m}\left(W_{1}^{l-1}\right)_{ki}\left(W_{2}^{l-1}\right)_{ki}\right|\geq\hat{c}_{1}\frac{\ln^{2}(m)}{\sqrt{m}}\right]\leq e^{-\hat{c}_{2}\sqrt{mn}}. \tag{62}\]
Combining probabilities in (60), (61), and (62), there exists a constant \(c_{bs}^{l-1}\) such that
\[e^{-c_{bs}^{l-1}\ln^{2}(m)}\leq me^{-c_{bs}^{l}\ln^{2}(m)}+2e^{-c_{bs}^{l}\ln^{2 }(m)}+2e^{-c_{bs}^{l}\ln^{2}(m)}+e^{-\tilde{c}_{2}m}+e^{-\hat{c}_{2}\sqrt{mn}},\]
and with probability at least \(1-e^{-c_{bs}^{l-1}\ln^{2}(m)}\), we have \(\left|s_{i}^{l-1}\right|=\tilde{O}\left(\frac{1}{\sqrt{m}}\right)\). This implies,
\[\left\|\mathbf{s}^{l-1}\right\|_{\infty}=\tilde{O}\left(\frac{1}{\sqrt{m}} \right), \tag{63}\]
with probability at least \(1-me^{-c_{bs}^{l-1}\ln^{2}(m)}\), i.e., by induction we proved (59) for any \(l\in[L]\).
**Lemma 8**.: _The \(l_{2}\)-norm of difference between \(\mathbf{b}_{s}^{l}\) and \(\mathbf{b}_{s,0}^{l}\) is in \(\tilde{O}(1/\sqrt{m})\) for any \(l\in[L-1]\), i.e.,_
\[\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|=\tilde{O}(1/\sqrt{m}) \quad\forall l\in[L-1]. \tag{64}\]
Proof.: We prove (64) by induction. For \(l=L\), we have \(\left\|\mathbf{b}_{s}^{(L)}-\mathbf{b}_{s,0}^{(L)}\right\|=0\). Assume that (64) is valid for some \(l\in[L]\); we now prove that it is also valid for \(l-1\).
\[\left\|\mathbf{b}_{s}^{l-1}-\mathbf{b}_{s,0}^{l-1}\right\| =\frac{1}{\sqrt{m}}\left\|\left(W_{2}^{l}\right)^{T}\Sigma^{\prime l}\mathbf{b}_{s}^{l}-\left(W_{20}^{l}\right)^{T}\Sigma_{0}^{\prime l}\mathbf{b}_{s,0}^{l}\right\|\] \[=\frac{1}{\sqrt{m}}\left\|\left(W_{2}^{l}\right)^{T}\Sigma^{\prime l}\mathbf{b}_{s}^{l}-\left(W_{20}^{l}\right)^{T}\Sigma_{0}^{\prime l}\mathbf{b}_{s,0}^{l}+\left(W_{20}^{l}\right)^{T}\Sigma^{\prime l}\mathbf{b}_{s,0}^{l}+\left(W_{20}^{l}\right)^{T}\Sigma^{\prime l}\mathbf{b}_{s}^{l}-\left(W_{20}^{l}\right)^{T}\Sigma^{\prime l}\mathbf{b}_{s,0}^{l}-\left(W_{20}^{l}\right)^{T}\Sigma^{\prime l}\mathbf{b}_{s}^{l}\right\|\] \[=\frac{1}{\sqrt{m}}\left\|\left(\left(W_{2}^{l}\right)^{T}-\left(W_{20}^{l}\right)^{T}\right)\Sigma^{\prime l}\mathbf{b}_{s}^{l}+\left(W_{20}^{l}\right)^{T}\left(\Sigma^{\prime l}-\Sigma_{0}^{\prime l}\right)\mathbf{b}_{s,0}^{l}+\left(W_{20}^{l}\right)^{T}\Sigma^{\prime l}\left(\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right)\right\|\] \[\leq\frac{1}{\sqrt{m}}\left\|\left(\left(W_{2}^{l}\right)^{T}-\left(W_{20}^{l}\right)^{T}\right)\Sigma^{\prime l}\mathbf{b}_{s}^{l}\right\|+\frac{1}{\sqrt{m}}\left\|\left(W_{20}^{l}\right)^{T}\left(\Sigma^{\prime l}-\Sigma_{0}^{\prime l}\right)\mathbf{b}_{s,0}^{l}\right\|+\frac{1}{\sqrt{m}}\left\|\left(W_{20}^{l}\right)^{T}\Sigma^{\prime l}\left(\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right)\right\|\] \[=T_{1}+T_{2}+T_{3}.\]
We now provide bounds on \(T_{1},T_{2}\), and \(T_{3}\):
\[T_{1}=\frac{1}{\sqrt{m}}\left\|\left(\left(W_{2}^{l}\right)^{T}-\left(W_{20}^{l }\right)^{T}\right)\Sigma^{\prime l}\mathbf{b}_{s}^{l}\right\|\leq\frac{1}{ \sqrt{m}}\left\|W_{2}^{l}-W_{20}^{l}\right\|\left\|\Sigma^{\prime l}\right\| \left\|\mathbf{b}_{s}^{l}\right\|\leq\frac{R_{2}L_{\sigma}^{L-l+1}\left(c_{20} +R_{2}/\sqrt{m}\right)^{L-l}}{\sqrt{m}}=O(1/\sqrt{m}).\]
To obtain bound on \(T_{2}\), we need the following inequality,
\[\left\|\tilde{\mathbf{x}}^{l}(\mathbf{W})-\tilde{\mathbf{x}}^{l} \left(\mathbf{W}_{0}\right)\right\|= \left\|\frac{1}{\sqrt{m}}W_{2}^{l}\mathbf{x}^{l-1}(\mathbf{W})- \frac{1}{\sqrt{m}}W_{20}^{l}\mathbf{x}^{l-1}\left(\mathbf{W}_{0}\right)+ \frac{1}{\sqrt{n}}W_{1}^{l}\mathbf{y}-\frac{1}{\sqrt{n}}W_{10}^{l}\mathbf{y}\right\|\] \[\leq c_{20}L_{\sigma}\left\|\tilde{\mathbf{x}}^{l-1}(\mathbf{W})- \tilde{\mathbf{x}}^{l-1}\left(\mathbf{W}_{0}\right)\right\|+\frac{R_{2}}{\sqrt{ m}}\left\|\mathbf{x}^{l-1}(\mathbf{W})\right\|+R_{1}C_{y}\] \[\leq c_{20}L_{\sigma}\left\|\tilde{\mathbf{x}}^{l-1}(\mathbf{W})- \tilde{\mathbf{x}}^{l-1}\left(\mathbf{W}_{0}\right)\right\|+\frac{R_{2}L_{1 \mathrm{STA},\mathbf{x}}^{l-1}}{\sqrt{m}}+R_{1}C_{y}.\]
Since
\[\left\|\tilde{\mathbf{x}}^{(1)}(\mathbf{W})-\tilde{\mathbf{x}}^{(1)}\left( \mathbf{W}_{0}\right)\right\|\leq\frac{1}{\sqrt{m}}\left\|W_{2}^{(1)}-W_{20}^{ (1)}\right\|\left\|\mathbf{x}^{(0)}\right\|+\frac{1}{\sqrt{m}}\left\|W_{1}^{( 1)}-W_{10}^{(1)}\right\|\left\|\mathbf{y}\right\|\leq R_{2}C_{\mathbf{x}}+R_{1 }C_{y}=O(1).\]
Recursively applying the previous equation, we get
\[\left\|\tilde{\mathbf{x}}^{l}(\mathbf{W})-\tilde{\mathbf{x}}^{l}\left(\mathbf{W}_{0}\right)\right\|\leq c_{20}^{l-1}L_{\sigma}^{l-1}\left(R_{2}C_{\mathbf{x}}+R_{1}C_{y}\right)+\left(\frac{R_{2}c_{\mathrm{LISTA},\mathbf{x}}^{l-1}}{\sqrt{m}}+R_{1}C_{y}\right)\sum_{i=1}^{l-2}c_{20}^{i}L_{\sigma}^{i}=O(1).\]
Using the above bound and Lemma 7, we can write the following with probability at least \(1-me^{-c_{bs}^{l}\ln^{2}(m)}\):
\[\left\|\left[\Sigma^{\prime l}-\Sigma_{0}^{\prime l}\right]\mathbf{b}_{s,0}^{l}\right\| =\sqrt{\sum_{i=1}^{m}\left(\mathbf{b}_{s,0}^{l}\right)_{i}^{2}\left[\sigma^{\prime}\left(\tilde{\mathbf{x}}_{i}^{l}(\mathbf{W})\right)-\sigma^{\prime}\left(\tilde{\mathbf{x}}_{i}^{l}\left(\mathbf{W}_{0}\right)\right)\right]^{2}}\leq\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\beta_{\sigma}\left\|\tilde{\mathbf{x}}^{l}(\mathbf{W})-\tilde{\mathbf{x}}^{l}\left(\mathbf{W}_{0}\right)\right\|=\tilde{O}\left(\frac{1}{\sqrt{m}}\right).\]
This leads to,
\[T_{2}=\frac{1}{\sqrt{m}}\left\|\left(W_{20}^{l}\right)^{T}\left(\Sigma^{\prime l }-\Sigma_{0}^{\prime l}\right)\mathbf{b}_{s,0}^{l}\right\|\leq\frac{1}{\sqrt{m}} \|W_{20}^{l}\|\left\|\left[\Sigma^{\prime l}-\Sigma_{0}^{\prime l}\right] \mathbf{b}_{s,0}^{l}\right\|=\tilde{O}\left(\frac{1}{\sqrt{m}}\right).\]
Besides, by using the induction hypothesis on \(l\), the term \(T_{3}\) is bounded as
\[T_{3}= \frac{1}{\sqrt{m}}\left\|\left(W_{20}^{l}\right)^{T}\Sigma^{\prime l}\left(\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right)\right\|\leq\frac{1}{\sqrt{m}}\left\|W_{20}^{l}\right\|\left\|\Sigma^{\prime l}\right\|\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|=\tilde{O}(1/\sqrt{m}).\]
Now combining the bounds on the terms \(T_{1},\ T_{2}\), and \(T_{3}\), we can write
\[\left\|\mathbf{b}_{s}^{l-1}-\mathbf{b}_{s,0}^{l-1}\right\|\leq T_{1}+T_{2}+T_{ 3}=\tilde{O}\left(\frac{1}{\sqrt{m}}\right). \tag{65}\]
Therefore, (64) is true for \(l-1\). Hence, by induction (64) is true for all \(l\in[L]\).
By using Lemmas 7 and 8 in equation (55), we get
\[\left\|\mathbf{b}_{s}^{l}\right\|_{\infty}\leq\left\|\mathbf{b}_{s,0}^{l} \right\|_{\infty}+\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|= \tilde{O}\left(\frac{1}{\sqrt{m}}\right). \tag{66}\]
This implies,
\[\mathcal{Q}_{\infty}\left(f_{s}\right)=\max_{1\leq l\leq L}\left\{\left\| \mathbf{b}_{s}^{l}\right\|_{\infty}\right\}=\tilde{O}\left(\frac{1}{\sqrt{m}} \right). \tag{67}\]
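As a quick numerical illustration (ours, not part of the original analysis or experiments), the following NumPy sketch simulates the backward recursion \(\mathbf{b}_{s}^{l-1}=\frac{1}{\sqrt{m}}\left(W_{2}^{l}\right)^{T}\Sigma^{\prime l}\mathbf{b}_{s}^{l}\) of a width-\(m\) LISTA-style network at random Gaussian initialization and checks that \(\mathcal{Q}_{\infty}\) shrinks roughly like \(1/\sqrt{m}\). The \(\tanh\) activation is used as a smooth surrogate for the soft threshold, the output weights are random signs, and all function and variable names are illustrative.

```python
import numpy as np

def backward_infty_norm(m, L=5, n=20, seed=0):
    """Simulate b^{l-1} = (1/sqrt(m)) W2^T diag(sigma'(x_tilde^l)) b^l for a random
    width-m, L-layer LISTA-style network and return Q_inf = max_l ||b^l||_inf."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n) / np.sqrt(n)          # fixed measurement vector
    x = np.zeros(m)                                  # initial estimate x^0
    pre_acts = []                                    # (W2^l, x_tilde^l) per layer
    for _ in range(L):                               # forward pass
        W1 = rng.standard_normal((m, n))
        W2 = rng.standard_normal((m, m))
        x_tilde = (W2 @ x) / np.sqrt(m) + (W1 @ y) / np.sqrt(n)
        pre_acts.append((W2, x_tilde))
        x = np.tanh(x_tilde)                         # smooth surrogate activation
    v = rng.choice([-1.0, 1.0], size=m)              # output weights with |v_i| = 1
    b = v / np.sqrt(m)                               # b^L = (1/sqrt(m)) v
    q_inf = np.max(np.abs(b))
    for W2, x_tilde in reversed(pre_acts[1:]):       # backward recursion, layers L..2
        sigma_prime = 1.0 - np.tanh(x_tilde) ** 2    # derivative of tanh
        b = (W2.T @ (sigma_prime * b)) / np.sqrt(m)
        q_inf = max(q_inf, np.max(np.abs(b)))
    return q_inf

for m in [100, 400, 1600, 6400]:
    # sqrt(m) * Q_inf stays roughly constant (up to logarithmic factors)
    print(m, np.sqrt(m) * backward_infty_norm(m))
```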
### _Bound on \(Q_{\infty}\) For ADMM-CSNet_
Let \(\mathbf{b}_{s}^{l}=\frac{\partial f_{s}}{\partial\mathbf{z}^{l}}\); then \(\mathcal{Q}_{\infty}\left(f_{s}\right)=\max_{1\leq l\leq L}\left\{\left\|\mathbf{b}_{s}^{l}\right\|_{\infty}\right\}\). We now compute the bound on \(\left\|\mathbf{b}_{s}^{l}\right\|_{\infty}\) by using (55). Similar to the previous LISTA network analysis, one can obtain the bound on \(\left\|\mathbf{b}_{s}^{l}\right\|_{\infty}\) by computing the bounds on \(\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\) and \(\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|\), which are provided in Lemma 11 and Lemma 12, respectively. Moreover, in order to compute the bound on \(\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\), we require several lemmas, which are stated below. Specifically, Lemma 9 and Lemma 10 provide the bound on each component of the hidden layer's output at initialization and the bound on the \(l_{2}\)-norm of \(\mathbf{b}_{s}^{l},\ l\in[L]\), respectively.
**Lemma 9**.: _For any \(l\in[L]\) and \(i\in[m]\), we have \(\left|\mathbf{z}_{i}^{l}\right|\leq\ln(m)+L_{\sigma}\left|\mathbf{u}_{i}^{l-1}\right|+\left|\sigma(0)\right|\) at initialization with probability at least \(1-2e^{-c_{\mathbf{z}}^{l}\ln^{2}(m)}\) for some constant \(c_{\mathbf{z}}^{l}>0\), and \(\left|\mathbf{u}_{i}^{l}\right|\leq\ln(m)+\left|\mathbf{u}_{i}^{l-1}\right|+\left|\mathbf{z}_{i}^{l}\right|\) at initialization with probability at least \(1-2e^{-c_{\mathbf{u}}^{l}\ln^{2}(m)}\) for some constant \(c_{\mathbf{u}}^{l}>0\)._
Proof.: From (49),
\[\left|\mathbf{z}_{i}^{l}\right| =\left|\sigma\left(\mathbf{u}_{i}^{l-1}+\sum_{k=1}^{m}\frac{1}{ \sqrt{m}}\left(W_{2}^{l}\right)_{ik}\left(\mathbf{z}_{k}^{l-1}-\mathbf{u}_{k}^ {l-1}\right)+\sum_{k=1}^{n}\frac{1}{\sqrt{n}}\left(W_{1}^{l}\right)_{ik} \mathbf{y}_{k}\right)\right|\] \[\leq\left|\frac{L_{\sigma}}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l }\right)_{ik}\left(\mathbf{z}_{k}^{l-1}-\mathbf{u}_{k}^{l-1}\right)+\frac{L_{ \sigma}}{\sqrt{n}}\sum_{k=1}^{n}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k} \right|+L_{\sigma}\left|\mathbf{u}_{i}^{l-1}\right|+\left|\sigma(0)\right|.\]
As \(\left(W_{1}^{l}\right)_{ik}\sim\mathcal{N}(0,1)\) and \(\left(W_{2}^{l}\right)_{ik}\sim\mathcal{N}(0,1)\), so that \(\sum_{k=1}^{n}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}\sim\mathcal{N}\left(0, \left\|\mathbf{y}\right\|^{2}\right)\) and \(\sum_{k=1}^{m}\left(W_{2}^{l}\right)_{ik}\left(\mathbf{z}_{k}^{l-1}-\mathbf{u} _{k}^{l-1}\right)\sim\mathcal{N}\left(0,\left\|\mathbf{z}^{l-1}-\mathbf{u}_{k} ^{l-1}\right\|^{2}\right)\). In addition, since \(\left(W_{1}^{l}\right)_{ik}\) and \(\left(W_{2}^{l}\right)_{ik}\) are independent,
\(\sum_{k=1}^{m}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}+\sum_{k=1}^{m}\left(W_{2 }^{l}\right)_{ik}\left(\mathbf{z}_{k}^{l-1}-\mathbf{u}_{k}^{l-1}\right)\sim \mathcal{N}\left(0,\left\|\mathbf{y}\right\|^{2}+\left\|\mathbf{z}^{l-1}- \mathbf{u}^{l-1}\right\|^{2}\right)\). Using the concentration inequality of a Gaussian random variable, we obtain
\[\Pr\left[\left|\mathbf{z}_{i}^{l}\right|\geq\ln(m)+L_{\sigma}\left|\mathbf{u}_{i}^{l-1}\right|+\left|\sigma(0)\right|\right] \leq\Pr\left[\left|\frac{L_{\sigma}}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l}\right)_{ik}\left(\mathbf{z}_{k}^{l-1}-\mathbf{u}_{k}^{l-1}\right)+\frac{L_{\sigma}}{\sqrt{n}}\sum_{k=1}^{n}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}\right|\geq\ln(m)\right]\] \[\leq 2e^{-\frac{m\ln^{2}(m)}{2L_{\sigma}^{2}\left(\left\|\mathbf{y}\right\|^{2}+\left\|\mathbf{z}^{l-1}-\mathbf{u}^{l-1}\right\|^{2}\right)}}=2e^{-c_{\mathbf{z}}^{l}\ln^{2}(m)},\]
where \(c_{\mathbf{z}}^{l}=\frac{m}{2L_{\sigma}^{2}\left(\left\|\mathbf{y}\right\|^{2}+\left\|\mathbf{z}^{l-1}-\mathbf{u}^{l-1}\right\|^{2}\right)}\). Therefore,
\[\Pr\left[\left|\mathbf{z}_{i}^{l}\right|\leq\ln(m)+L_{\sigma}\left|\mathbf{u}_{i}^{l-1}\right|+\left|\sigma(0)\right|\right]\geq 1-2e^{-c_{\mathbf{z}}^{l}\ln^{2}(m)}.\]
Since the bound on \(\left|\mathbf{z}_{i}^{l}\right|\) depends on \(\left|\mathbf{u}_{i}^{l-1}\right|\) (as shown in the above equation), we now bound \(\left|\mathbf{u}_{i}^{l}\right|\):
\[\left|\mathbf{u}_{i}^{l}\right|\leq\left|\mathbf{u}_{i}^{l-1}\right|+\left|\mathbf{z}_{i}^{l}\right|+\left|\sum_{k=1}^{n}\frac{1}{\sqrt{n}}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}+\sum_{k=1}^{m}\frac{1}{\sqrt{m}}\left(W_{2}^{l}\right)_{ik}\left(\mathbf{z}_{k}^{l-1}-\mathbf{u}_{k}^{l-1}\right)\right|.\]
By the concentration inequality for the Gaussian random variable, we have
\[\Pr\left[\left|\mathbf{u}_{i}^{l}\right|\geq\ln(m)+\left|\mathbf{u}_{i}^{l-1}\right|+\left|\mathbf{z}_{i}^{l}\right|\right]\leq\Pr\left[\left|\frac{L_{\sigma}}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l}\right)_{ik}\left(\mathbf{z}_{k}^{l-1}-\mathbf{u}_{k}^{l-1}\right)+\frac{L_{\sigma}}{\sqrt{n}}\sum_{k=1}^{n}\left(W_{1}^{l}\right)_{ik}\mathbf{y}_{k}\right|\geq\ln(m)\right]\] \[\leq 2e^{-\frac{m\ln^{2}(m)}{2L_{\sigma}^{2}\left(\left\|\mathbf{y}\right\|^{2}+\left\|\mathbf{z}^{l-1}-\mathbf{u}^{l-1}\right\|^{2}\right)}}=2e^{-c_{\mathbf{u}}^{l}\ln^{2}(m)}.\]
Therefore, we have
\[\Pr\left[\left|\mathbf{u}_{i}^{l}\right|\leq\ln(m)+\left|\mathbf{u}_{i}^{l-1} \right|+\left|\mathbf{z}_{i}^{l}\right|\right]\geq 1-2e^{-c_{\mathbf{u}}^{l}\ln^{2}(m)}.\]
In a recursive manner, we get
\[\left|\mathbf{z}_{i}^{l}\right|\leq\ln(m)+\left|\sigma(0)\right|+ \sum_{i=0}^{l-2}\left(1+L_{\sigma}\right)^{i}L_{\sigma}(2\ln(m)+\left|\sigma(0 )\right|)+\left(1+L_{\sigma}\right)^{l-1}L_{\sigma}C_{\mathbf{u}},\] \[\left|\mathbf{u}_{i}^{l}\right|\leq\sum_{i=0}^{l-1}\left(1+L_{ \sigma}\right)^{i}\left(2\ln(m)+\left|\sigma(0)\right|\right)+\left(1+L_{ \sigma}\right)^{l}C_{\mathbf{u}},\]
with probability at least \(1-2e^{-c\ln^{2}(m)}\) for some constant \(c>0\).
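The proofs in this section repeatedly invoke the Gaussian tail bound \(\Pr[|Z|\geq t]\leq 2e^{-t^{2}/(2\varsigma^{2})}\) for \(Z\sim\mathcal{N}(0,\varsigma^{2})\) with \(t=\ln(m)\). A minimal Monte Carlo sanity check of this inequality (ours, purely illustrative; all names are ours) is given below.

```python
import numpy as np

rng = np.random.default_rng(0)
m, trials = 1000, 200_000
a = rng.standard_normal(m)                     # plays the role of (z^{l-1} - u^{l-1})
var = np.dot(a, a) / m                         # variance of (1/sqrt(m)) * sum_k w_k a_k
samples = (rng.standard_normal((trials, m)) @ a) / np.sqrt(m)

for t in [1.0, 2.0, 3.0, np.log(m)]:
    empirical = np.mean(np.abs(samples) >= t)  # empirical tail probability
    bound = 2.0 * np.exp(-t**2 / (2.0 * var))  # sub-Gaussian bound used in the proofs
    print(f"t={t:5.2f}  empirical={empirical:.2e}  bound={bound:.2e}")
```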
**Lemma 10**.: _Consider an \(L\)-layer ADMM-CSNet with \(\left(W_{10}^{l}\right)_{i,j}\sim\mathcal{N}(0,1)\) and \(\left(W_{20}^{l}\right)_{i,j}\sim\mathcal{N}(0,1)\), \(\forall l\in[L]\), then, for any \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) such that \(\left\|\mathbf{W}_{1}-\mathbf{W}_{10}\right\|\leq R_{1}\) and \(\left\|\mathbf{W}_{2}-\mathbf{W}_{20}\right\|\leq R_{2}\), we have,_
\[\left\|\mathbf{b}_{s}^{l}\right\|\leq L_{\sigma}^{L-l}\left(2\left(c_{20}+R_{ 2}/\sqrt{m}\right)+1\right)^{L-l}. \tag{68}\]
_From this at initialization, i.e., for \(R_{2}=0\), we get_
\[\left\|\mathbf{b}_{s,0}^{l}\right\|\leq L_{\sigma}^{L-l}\left(2c_{20}+1\right) ^{L-l}. \tag{69}\]
Proof.: We prove this lemma by using induction on \(l\). Initially, for \(l=L\), we have
\[\left\|\mathbf{b}_{s}^{L}\right\|=\left\|\frac{\partial f_{s}}{\partial \mathbf{z}^{L}}\right\|=\left(1/\sqrt{m}\right)\left\|\mathbf{v}_{s}\right\|=1 /\sqrt{m}<1.\]
That is, (68) holds for \(l=L\). Assume that the inequality holds at the \(l^{th}\) layer, i.e., \(\left\|\mathbf{b}_{s}^{l}\right\|\leq L_{\sigma}^{L-l}\left(2\left(c_{20}+R_{2}/\sqrt{m}\right)+1\right)^{L-l}\); below we prove that (68) also holds for the \((l-1)^{th}\) layer:
\[\left\|\mathbf{b}_{s}^{l-1}\right\| =\left\|\frac{\partial f_{s}}{\partial\mathbf{z}^{l-1}}\right\|= \left\|\frac{\partial\mathbf{z}^{l}}{\partial\mathbf{z}^{l-1}}\frac{\partial f _{s}}{\partial\mathbf{z}^{l}}\right\|=\left\|\left(\frac{2}{\sqrt{m}}\left(W_{ 2}^{l}\right)^{T}\Sigma^{\prime l}-\Sigma^{\prime l}\right)\mathbf{b}_{s}^{l} \right\|\leq\frac{2}{\sqrt{m}}\left\|\left(W_{2}^{l}\right)\right\|\left\| \Sigma^{\prime l}\right\|\left\|\mathbf{b}_{s}^{l}\right\|+\left\|\Sigma^{ \prime l}\right\|\left\|\mathbf{b}_{s}^{l}\right\|\] \[\leq\left(2\left(c_{20}+R_{2}/\sqrt{m}\right)+1\right)L_{\sigma} \left\|\mathbf{b}_{s}^{l}\right\|\leq\left(2\left(c_{20}+R_{2}/\sqrt{m}\right)+ 1\right)^{L-l+1}L_{\sigma}^{L-l+1}.\]
Hence, by induction, the inequality in (68) holds for any \(l\in[L]\). At initialization, substituting \(R_{2}=0\) in (68) directly leads to (69).
We now use the two lemmas that are mentioned above to provide the bound on \(\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\).
**Lemma 11**.: _At initialization, the \(\infty\)-norm of \(\mathbf{b}_{s,0}^{l}\) is in \(\tilde{O}(1/\sqrt{m})\) with probability at least \(1-me^{-c_{\mathbf{s}}^{l}\ln^{2}(m)}\) for some constant \(c_{\mathbf{s}}^{l}>0\), i.e.,_
\[\|\mathbf{b}_{s,0}^{l}\|_{\infty}=\tilde{O}\left(\frac{1}{\sqrt{m}}\right). \tag{70}\]
Proof.: We prove this lemma by induction. Before proceeding, let us denote \(\mathbf{s}^{l}=\mathbf{b}_{s,0}^{l}\). Initially, for \(l=L\), we have
\[\left\|\mathbf{s}^{L}\right\|_{\infty}=1/\sqrt{m}\left\|\mathbf{v}_{s}\right\|_ {\infty}=O(1/\sqrt{m}).\]
This implies that (70) holds for \(l=L\). Suppose that at the \(l^{th}\) layer, with probability at least \(1-me^{-c_{\mathbf{s}}^{l}\ln^{2}(m)}\) for some constant \(c_{\mathbf{s}}^{l}>0\), we have \(\left\|\mathbf{s}^{l}\right\|_{\infty}=\tilde{O}(\frac{1}{\sqrt{m}})\). We now prove that (70) is valid for the \((l-1)^{th}\) layer with probability at least \(1-me^{-c_{\mathbf{s}}^{l-1}\ln^{2}(m)}\) for some constant \(c_{\mathbf{s}}^{l-1}>0\). In particular, the absolute value of the \(i^{th}\) component of \(\mathbf{s}^{l-1}\) is bounded as
\[\left|s_{i}^{l-1}\right| =\left|\sum_{k=1}^{m}\left(\frac{2}{\sqrt{m}}W_{2}^{l-1}-I\right)_{ ki}\sigma^{\prime}\left(\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\left(W_{2}^{l-1}\right)_{ kj}\left(\mathbf{z}^{(l-2)}-\mathbf{u}^{(l-2)}\right)_{j}+\frac{1}{\sqrt{n}}\sum_{j=1}^{n} \left(W_{1}^{l-1}\right)_{kj}\mathbf{y}_{j}+\mathbf{u}_{k}^{(l-2)}\right) \mathbf{s}_{k}^{l}\right|\] \[\leq\left|\sum_{k=1}^{m}\left(\frac{2}{\sqrt{m}}W_{2}^{l-1}-I \right)_{ki}\sigma^{\prime}\left(\frac{1}{\sqrt{m}}\sum_{j\neq i}^{m}\left(W_{ 2}^{l-1}\right)_{kj}\left(\mathbf{z}^{(l-2)}-\mathbf{u}^{(l-2)}\right)_{j}+ \frac{1}{\sqrt{n}}\sum_{j\neq i}^{n}\left(W_{1}^{l-1}\right)_{kj}\mathbf{y}_{j }+\mathbf{u}_{k}^{(l-2)}\right)\mathbf{s}_{k}^{l}\right|\] \[+\left|\frac{2}{m}\beta_{\sigma}\left(\mathbf{z}_{i}^{(l-2)}- \mathbf{u}_{i}^{(l-2)}\right)\sum_{k=1}^{m}\left(W_{2}^{l-1}\right)_{ki} \mathbf{s}_{k}^{l}\right|+\left|\frac{2}{\sqrt{mn}}\beta_{\sigma}\mathbf{y}_{i }\sum_{k=1}^{m}\left(W_{2}^{l-1}\right)_{ki}\left(W_{1}^{l-1}\right)_{ki} \mathbf{s}_{k}^{l}\right|\] \[+\left|\frac{1}{\sqrt{m}}\beta_{\sigma}\left(\mathbf{z}_{i}^{(l-2) }-\mathbf{u}_{i}^{(l-2)}\right)\sum_{k=1}^{m}\left(W_{2}^{l-1}\right)_{ki} \mathbf{s}_{k}^{l}\right|+\left|\frac{1}{\sqrt{n}}\beta_{\sigma}\mathbf{y}_{i }\sum_{k=1}^{m}\left(W_{1}^{l-1}\right)_{ki}\mathbf{s}_{k}^{l}\right|+\left| L_{\sigma}\sum_{k=1}^{m}\left(\frac{2}{\sqrt{m}}W_{2}^{l-1}-I\right)_{ki}\mathbf{s}_{k}^{l}\right|\] \[=\left|T_{1}\right|+\left|T_{2}\right|+\left|T_{3}\right|+\left| T_{4}\right|+\left|T_{5}\right|+\left|T_{6}\right|\text{.}\]
Now, we provide bounds on the terms (\(T_{1},T_{2},T_{3},T_{4},T_{5}\), and \(T_{6}\)) individually:
\[\left|T_{1}\right|= \left|\sum_{k=1}^{m}\left(\frac{2}{\sqrt{m}}W_{2}^{l-1}-I\right) _{ki}\sigma^{\prime}\left(\frac{1}{\sqrt{m}}\sum_{j\neq i}^{m}\left(W_{2}^{l-1 }\right)_{kj}\left(\mathbf{z}^{(l-2)}-\mathbf{u}^{(l-2)}\right)_{j}+\frac{1}{ \sqrt{n}}\sum_{j\neq i}^{n}\left(W_{1}^{l-1}\right)_{kj}\mathbf{y}_{j}+ \mathbf{u}_{k}^{(l-2)}\right)\mathbf{s}_{k}^{l}\right|\] \[\leq\left|L_{\sigma}\sum_{k=1}^{m}\left(\frac{2}{\sqrt{m}}W_{2}^{ l-1}-I\right)_{ki}\mathbf{s}_{k}^{l}\right|\text{ }\leq\left|\text{ }L_{\sigma}\sum_{k=1}^{m}\frac{2}{\sqrt{m}}\left(W_{2}^{l-1}\right)_{ki} \mathbf{s}_{k}^{l}\right|\text{ }+\left|\text{ }L_{\sigma}s_{k}^{l}\right|,\] \[\left|T_{2}\right|= \left|\frac{2}{m}\beta_{\sigma}\left(\mathbf{z}_{i}^{(l-2)}- \mathbf{u}_{i}^{(l-2)}\right)\sum_{k=1}^{m}\left(\left(W_{2}^{l-1}\right)_{ki} \right)^{2}\mathbf{s}_{k}^{l}\right|\leq\frac{2}{m}\beta_{\sigma}\left| \mathbf{z}_{i}^{(l-2)}-\mathbf{u}_{i}^{(l-2)}\right|\left\|\mathbf{s}^{l} \right\|_{\infty}\sum_{k=1}^{m}\left(\left(W_{2}^{l-1}\right)_{ki}\right)^{2} \text{,}\] \[\left|T_{3}\right|= \left|\frac{2}{\sqrt{mn}}\beta_{\sigma}\mathbf{y}_{i}\sum_{k=1}^{ m}\left(W_{2}^{l-1}\right)_{ki}\left(W_{1}^{l-1}\right)_{ki}\mathbf{s}_{k}^{l}\right| \leq\frac{2}{\sqrt{mn}}\beta_{\sigma}\left|\mathbf{y}_{i}\right|\left\| \mathbf{s}^{l}\right\|_{\infty}\left|\sum_{k=1}^{m}\left(W_{2}^{l-1}\right)_{ki }\left(W_{1}^{l-1}\right)_{ki}\right|\text{,}\] \[\left|T_{4}\right|= \left|\frac{1}{\sqrt{m}}\beta_{\sigma}\left(\mathbf{z}_{i}^{(l-2) }-\mathbf{u}_{i}^{(l-2)}\right)\sum_{k=1}^{m}\left(W_{2}^{l-1}\right)_{ki} \mathbf{s}_{k}^{l}\right|\text{, }\text{ }\text{ }\text{ }\left|T_{5}\right|=\left|\frac{1}{\sqrt{n}}\beta_{\sigma}\mathbf{y}_{i} \sum_{k=1}^{m}\left(W_{1}^{l-1}\right)_{ki}\mathbf{s}_{k}^{l}\right|\text{, }\] \[\left|T_{6}\right|= \left|L_{\sigma}\sum_{k=1}^{m}\left(\frac{2}{\sqrt{m}}W_{2}^{l-1}-I \right)_{ki}\mathbf{s}_{k}^{l}\right|\text{ }\leq\left|\text{ }L_{\sigma}\sum_{k=1}^{m}\frac{2}{\sqrt{m}}\left(W_{2}^{l-1}\right)_{ki} \mathbf{s}_{k}^{l}\right|\text{ }+\left|\text{ }L_{\sigma}\mathbf{s}_{k}^{l}\right|.\]
By using the concentration inequality on the derived \(T_{1}\) and \(T_{6}\) bounds, we obtain
\[\Pr\left[\left|\frac{2L_{\sigma}}{\sqrt{m}}\sum_{k=1}^{m}\left(W_{2}^{l-1}\right)_{ki}\mathbf{s}_{k}^{l}\right|\geq\frac{\ln(m)}{\sqrt{m}}\right]\leq 2e^{-\frac{\ln^{2}(m)}{2(2)^{2}L_{\sigma}^{2}\left\|\mathbf{s}^{l}\right\|^{2}}}\leq 2e^{-c_{\sigma}^{l}\ln^{2}(m)}. \tag{71}\]
Substituting the bound on \(\left\|\mathbf{s}^{l}\right\|\), obtained from Lemma 10, in the above inequality leads to \(c_{\sigma}^{l}=1/(8L_{\sigma}^{2}\left\|\mathbf{s}^{l}\right\|^{2})\geq 1/\left(8L_{\sigma}^{2L-2l+2}(2c_{20}+1)^{2L-2l}\right)\). Also, using the induction hypothesis, we get
\[\left|L_{\sigma}s_{i}^{l}\right|\leq L_{\sigma}\left\|\mathbf{s}^{l}\right\|_{ \infty}=\tilde{O}(1/\sqrt{m})\text{.} \tag{72}\]
Therefore, from (71) and (72), we get that both \(T_{1}\) and \(T_{6}\) are \(\tilde{O}(1/\sqrt{m})\) with probability at least \(1-2e^{-c_{\sigma}^{l}\ln^{2}(m)}\). Note that \(\sum_{k=1}^{m}\left(\left(W_{2}^{l-1}\right)_{ki}\right)^{2}\sim\chi^{2}(m)\) and \(\sum_{k=1}^{m}\left(W_{1}^{l-1}\right)_{ki}\left(W_{2}^{l-1}\right)_{ki}\sim\chi^{2}(m)\). Hence, to derive bounds on \(T_{2}\) and \(T_{3}\), using Lemma 1 in [50], there exist constants \(\hat{c}_{1},\hat{c}_{2}\), and \(\hat{c}_{3}>0\) such that
\[\Pr\left[\left|\frac{2}{m}\beta_{\sigma}|\mathbf{z}_{i}^{(l-2)}|\left\| \mathbf{s}^{l}\right\|_{\infty}\sum_{k=1}^{m}\left(\left(W_{2}^{l-1}\right)_{ki }\right)^{2}\right|\geq\hat{c}_{1}e^{-\frac{\ln^{2}(m)}{\sqrt{m}}}\right] \leq e^{-\hat{c}_{2}m}\text{.} \tag{73}\]
Here, by using Lemma 9, we can write \(\left|\mathbf{z}_{i}^{(l-2)}-\mathbf{u}_{i}^{(l-2)}\right|=\tilde{O}(1)\), so (73) implies \(T_{2}=\tilde{O}\left(\frac{1}{\sqrt{m}}\right)\) with probability at least \(1-e^{-\hat{c}_{2}m}\). By the same argument, \(T_{3}=\tilde{O}\left(\frac{1}{\sqrt{m}}\right)\) with probability at least \(1-e^{-\hat{c}_{3}\sqrt{mn}}\); we refer to this bound as (74).
Again by using concentration inequality, we obtain the bound for \(T_{4}\) and \(T_{5}\) as follows.
\[\Pr\left[\left|\frac{\beta_{\sigma}}{\sqrt{m}}\left(\mathbf{z}_{i}^{(l-2)}-\mathbf{u}_{i}^{(l-2)}\right)\sum_{k=1}^{m}\left(W_{2}^{l-1}\right)_{ki}\mathbf{s}_{k}^{l}\right|\geq\frac{\ln(m)}{\sqrt{m}}\right]\leq 2e^{-\frac{\ln^{2}(m)}{2\beta_{\sigma}^{2}\left(\mathbf{z}_{i}^{(l-2)}-\mathbf{u}_{i}^{(l-2)}\right)^{2}\left\|\mathbf{s}^{l}\right\|^{2}}}\leq 2e^{-c_{\text{ax}}\ln^{2}(m)}, \tag{75}\]
\[\Pr\left[\left|\frac{\beta_{\sigma}}{\sqrt{n}}\mathbf{y}_{i}\sum_{k=1}^{m}\left(W_{1}^{l-1}\right)_{ki}\mathbf{s}_{k}^{l}\right|\geq\frac{\ln(m)}{\sqrt{m}}\right]\leq 2e^{-\frac{\ln^{2}(m)}{2\beta_{\sigma}^{2}\left(\mathbf{y}_{i}\right)^{2}\left\|\mathbf{s}^{l}\right\|^{2}}}\leq 2e^{-c_{\text{ay}}\ln^{2}(m)}, \tag{76}\]
for some constants \(c_{\text{ax}}=\frac{1}{2\beta_{\sigma}^{2}\left(\mathbf{z}_{i}^{(l-2)}-\mathbf{u}_{i}^{(l-2)}\right)^{2}\left\|\mathbf{s}^{l}\right\|^{2}}\geq\frac{1}{2\beta_{\sigma}^{2}\left(\mathbf{z}_{i}^{(l-2)}-\mathbf{u}_{i}^{(l-2)}\right)^{2}L_{\sigma}^{2L-2l}\left(2c_{20}+1\right)^{2L-2l}}\) and \(c_{\text{ay}}=\frac{1}{2\beta_{\sigma}^{2}\left(\mathbf{y}_{i}\right)^{2}\left\|\mathbf{s}^{l}\right\|^{2}}\geq\frac{1}{2\beta_{\sigma}^{2}\left(\mathbf{y}_{i}\right)^{2}L_{\sigma}^{2L-2l}\left(2c_{20}+1\right)^{2L-2l}}\). Combining probabilities in (71), (72), (73), (74), (75) and (76), there exists a constant \(c_{bs}^{l-1}\) such that
\[e^{-c_{bs}^{l-1}\ln^{2}(m)}\leq 2me^{-c_{bs}^{l}\ln^{2}(m)}+4e^{-c_{\sigma}^{l}\ln^{2}(m)}+2e^{-c_{\sigma}^{l}\ln^{2}(m)}+e^{-\hat{c}_{2}m}+e^{-\hat{c}_{3}\sqrt{mn}}+2e^{-c_{\text{ax}}\ln^{2}(m)}+2e^{-c_{\text{ay}}\ln^{2}(m)}\]
and with probability at least \(1-e^{-c_{bs}^{l-1}\ln^{2}(m)}\), we have \(\left|s_{i}^{l-1}\right|=\tilde{O}(1/\sqrt{m})\). This implies
\[\left\|\mathbf{s}^{l-1}\right\|_{\infty}=\tilde{O}(1/\sqrt{m}), \tag{77}\]
with probability at least \(1-me^{-c_{bs}^{l-1}\ln^{2}(m)}\), i.e., by induction we have proved (70) for any \(l\in[L]\).
**Lemma 12**.: _The \(l_{2}\)-norm of difference between \(\mathbf{b}_{s}^{l}\) and \(\mathbf{b}_{s,0}^{l}\) is in \(\tilde{O}(1/\sqrt{m})\) for any \(l\in[L-1]\), i.e.,_
\[\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|=\tilde{O}(1/\sqrt{m}) \quad\forall l\in[L-1]. \tag{78}\]
Proof.: We prove (78) by induction. For \(l=L\), we have \(\left\|\mathbf{b}_{s}^{(L)}-\mathbf{b}_{s,0}^{(L)}\right\|=0\). Assume that (78) holds for some \(l\in[L]\); we now prove that it also holds for \(l-1\).
\[\left\|\mathbf{b}_{s}^{l-1}-\mathbf{b}_{s,0}^{l-1}\right\| =\left\|\left(\frac{2}{\sqrt{m}}\left(W_{2}^{l}\right)^{T}\Sigma^{ l}-\Sigma^{l}\right)\mathbf{b}_{s}^{l}-\left(\frac{2}{\sqrt{m}}\left(W_{20}^{l }\right)^{T}\Sigma^{l}_{0}-\Sigma^{l}_{0}\right)\mathbf{b}_{s,0}^{l}\right\|\] \[=\left\|\left(\frac{2}{\sqrt{m}}\left(W_{2}^{l}\right)^{T}\Sigma^{ l}-\Sigma^{l}\right)\mathbf{b}_{s}^{l}-\left(\frac{2}{\sqrt{m}}\left(W_{20}^{l }\right)^{T}\Sigma^{l}_{0}-\Sigma^{l}_{0}\right)\mathbf{b}_{s,0}^{l}\right.\] \[\quad+\left(\frac{2}{\sqrt{m}}\left(W_{20}^{l}\right)^{T}\Sigma^{ l}-\Sigma^{l}\right)\mathbf{b}_{s,0}^{l}+\left(\frac{2}{\sqrt{m}}\left(W_{20}^{l }\right)^{T}\Sigma^{l}-\Sigma^{l}\right)\mathbf{b}_{s}^{l}\] \[\quad-\left(\frac{2}{\sqrt{m}}\left(W_{20}^{l}\right)^{T}\Sigma^{ l}-\Sigma^{l}\right)\mathbf{b}_{s,0}^{l}-\left(\frac{2}{\sqrt{m}}\left(W_{20}^{l }\right)^{T}\Sigma^{l}-\Sigma^{l}\right)\mathbf{b}_{s}^{l}\|\] \[=\left\|\frac{2}{\sqrt{m}}\left(\left(W_{2}^{l}\right)^{T}-\left(W_ {20}^{l}\right)^{T}\right)\Sigma^{l}\mathbf{b}_{s}^{l}+\left(\frac{2}{\sqrt{m}} \left(W_{20}^{l}\right)^{T}\left(\Sigma^{l}-\Sigma^{l}_{0}\right)-\left(\Sigma^{ l}-\Sigma^{l}_{0}\right)\right)\mathbf{b}_{s,0}^{l}\right.\] \[\quad+\left(\frac{2}{\sqrt{m}}\left(W_{20}^{l}\right)^{T}\Sigma^{ l}-\Sigma^{l}\right)\left(\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right)\|\] \[\leq\left\|\frac{2}{\sqrt{m}}\left(\left(W_{2}^{l}\right)^{T}- \left(W_{20}^{l}\right)^{T}\right)\Sigma^{l}\mathbf{b}_{s}^{l}\right\|+\frac{1}{ \sqrt{m}}\left\|\left(\left(2\left(W_{20}^{l}\right)^{T}-\sqrt{m}I\right)\left( \Sigma^{l}-\Sigma^{l}_{0}\right)\right)\mathbf{b}_{s,0}^{l}\right\|\] \[\quad+\frac{1}{\sqrt{m}}\left\|\left(\left(2\left(W_{20}^{l} \right)^{T}-\sqrt{m}I\right)\Sigma^{l}\right)\left(\mathbf{b}_{s}^{l}- \mathbf{b}_{s,0}^{l}\right)\right\|\] \[=T_{1}+T_{2}+T_{3}.\]
We now provide bounds on \(T_{1},T_{2},\) and \(T_{3}\):
\[T_{1} =\left\|\frac{2}{\sqrt{m}}\left(\left(W_{2}^{l}\right)^{T}-\left(W_{20}^{l}\right)^{T}\right)\Sigma^{\prime l}\mathbf{b}_{s}^{l}\right\|\leq\frac{2}{\sqrt{m}}\left\|W_{2}^{l}-W_{20}^{l}\right\|\left\|\Sigma^{\prime l}\right\|\left\|\mathbf{b}_{s}^{l}\right\|\leq\frac{2R_{2}L_{\sigma}^{L-l+1}\left(2\left(c_{20}+R_{2}/\sqrt{m}\right)+1\right)^{L-l}}{\sqrt{m}}=O(1/\sqrt{m}).\]
To obtain bound on \(T_{2}\), we need the following inequality,
\[\left\|\tilde{\mathbf{z}}^{l}(\mathbf{W})-\tilde{\mathbf{z}}^{l}\left( \mathbf{W}_{0}\right)\right\| =\left\|\frac{1}{\sqrt{m}}W_{2}^{l}\mathbf{z}^{l-1}(\mathbf{W})- \frac{1}{\sqrt{m}}W_{20}^{l}\mathbf{z}^{l-1}\left(\mathbf{W}_{0}\right)-\frac{ 1}{\sqrt{m}}W_{2}^{l}\mathbf{u}^{l-1}(\mathbf{W})+\frac{1}{\sqrt{m}}W_{20}^{l} \mathbf{u}^{l-1}\left(\mathbf{W}_{0}\right)\right.\] \[\quad+\frac{1}{\sqrt{n}}W_{1}^{l}\mathbf{y}-\frac{1}{\sqrt{n}}W_ {10}^{l}\mathbf{y}\|\] \[\leq\frac{1}{\sqrt{m}}\left\|W_{20}^{l}\right\|L_{\sigma}\left\| \tilde{\mathbf{z}}^{l-1}(\mathbf{W})-\tilde{\mathbf{z}}^{l-1}\left(\mathbf{W }_{0}\right)\right\|+\frac{1}{\sqrt{m}}\left\|W_{2}^{l}-W_{20}^{l}\right\| \left\|\mathbf{z}^{l-1}(\mathbf{W})\right\|\] \[\leq c_{20}L_{\sigma}\left\|\tilde{\mathbf{z}}^{l-1}(\mathbf{W})- \tilde{\mathbf{z}}^{l-1}\left(\mathbf{W}_{0}\right)\right\|+\frac{1}{\sqrt{m }}\left\|W_{2}^{l}-W_{20}^{l}\right\|\left\|\mathbf{u}^{l-1}(\mathbf{W})\right\| +\frac{1}{\sqrt{n}}\left\|W_{1}^{l}-W_{10}^{l}\right\|\left\|\mathbf{y}\right\|\] \[\leq c_{20}L_{\sigma}\left\|\tilde{\mathbf{z}}^{l-1}(\mathbf{W})- \tilde{\mathbf{z}}^{l-1}\left(\mathbf{W}_{0}\right)\right\|+c_{20}\left\| \mathbf{u}^{l-1}(\mathbf{W})-\mathbf{u}^{l-1}\left(\mathbf{W}_{0}\right)\right\|\] \[\quad+\frac{R_{2}}{\sqrt{m}}\left(c_{\mathrm{ADMM;\mathbf{z}}}^{ l-1}(\mathbf{m})+c_{\mathrm{ADMM;\mathbf{u}}}^{l-1}(\mathbf{m})\right)+R_{1}C_{y}.\]
Since
\[\left\|\mathbf{u}^{l}(\mathbf{W})-\mathbf{u}^{l}\left(\mathbf{W}_{0}\right) \right\|\leq\left(L_{\sigma}+1\right)\left\|\tilde{\mathbf{z}}^{l}(\mathbf{W}) -\tilde{\mathbf{z}}^{l}\left(\mathbf{W}_{0}\right)\right\|,\]
we have
\[\left\|\tilde{\mathbf{z}}^{l}(\mathbf{W})-\tilde{\mathbf{z}}^{l}\left(\mathbf{ W}_{0}\right)\right\|\leq c_{20}\left(2L_{\sigma}+1\right)\left\|\tilde{ \mathbf{z}}^{l-1}(\mathbf{W})-\tilde{\mathbf{z}}^{l-1}\left(\mathbf{W}_{0} \right)\right\|+\frac{R_{2}}{\sqrt{m}}\left(c_{\mathrm{ADMM;\mathbf{z}}}^{l-1} (m)+c_{\mathrm{ADMM;\mathbf{u}}}^{l-1}(m)\right)+R_{1}C_{y}.\]
Since
\[\left\|\tilde{\mathbf{z}}^{(1)}(\mathbf{W})-\tilde{\mathbf{z}}^ {(1)}\left(\mathbf{W}_{0}\right)\right\| \leq\frac{1}{\sqrt{m}}\left\|W_{2}^{(1)}-W_{20}^{(1)}\right\| \left\|\tilde{\mathbf{z}}^{(0)}-\mathbf{u}^{(0)}\right\|+\frac{1}{\sqrt{n}} \left\|W_{1}^{(1)}-W_{10}^{(1)}\right\|\left\|\mathbf{y}\right\|\] \[\leq R_{2}\left(C_{\mathbf{x}}+C_{\mathbf{u}}\right)+R_{1}C_{y}.\]
Recursively applying the previous equation, we get
\[\left\|\tilde{\mathbf{z}}^{l}(\mathbf{W})-\tilde{\mathbf{z}}^{l }\left(\mathbf{W}_{0}\right)\right\|\] \[\leq\left(\frac{R_{2}}{\sqrt{m}}\left(c_{\mathrm{ADMM;\mathbf{z}} }^{l-1}(m)+c_{\mathrm{ADMM;\mathbf{u}}}^{l-1}(m)\right)+R_{1}C_{y}\right)\sum_{ i=0}^{l-2}c_{20}^{i}\left(L_{\sigma}+1\right)^{i}+c_{20}^{l-1}\left(L_{\sigma}+1 \right)^{l-1}\left(R_{2}\left(C_{\mathbf{z}}+C_{\mathbf{u}}\right)+R_{1}C_{y}\right)\] \[=O(1).\]
Using the above bound and Lemma 11, we can write the following with probability at least \(1-me^{-c_{bs}^{l}\ln^{2}(m)}\):
\[\left\|\left[\Sigma^{\prime l}-\Sigma_{0}^{\prime l}\right]\mathbf{b}_{s,0}^{l}\right\| =\sqrt{\sum_{i=1}^{m}\left(\mathbf{b}_{s,0}^{l}\right)_{i}^{2}\left[\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}(\mathbf{W})\right)-\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}\left(\mathbf{W}_{0}\right)\right)\right]^{2}}\leq\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\sqrt{\sum_{i=1}^{m}\left[\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}(\mathbf{W})\right)-\sigma^{\prime}\left(\tilde{\mathbf{z}}_{i}^{l}\left(\mathbf{W}_{0}\right)\right)\right]^{2}}\] \[\leq\left\|\mathbf{b}_{s,0}^{l}\right\|_{\infty}\beta_{\sigma}\left\|\tilde{\mathbf{z}}^{l}(\mathbf{W})-\tilde{\mathbf{z}}^{l}\left(\mathbf{W}_{0}\right)\right\|=\tilde{O}(1/\sqrt{m}).\]
This leads to,
\[T_{2}=\frac{1}{\sqrt{m}}\left\|\left(\left(2\left(W_{20}^{l}\right)^{T}-\sqrt{m}I\right)\left(\Sigma^{\prime l}-\Sigma_{0}^{\prime l}\right)\right)\mathbf{b}_{s,0}^{l}\right\|\leq\frac{1}{\sqrt{m}}\left\|2\left(W_{20}^{l}\right)^{T}-\sqrt{m}I\right\|\left\|\left[\Sigma^{\prime l}-\Sigma_{0}^{\prime l}\right]\mathbf{b}_{s,0}^{l}\right\|=\tilde{O}(1/\sqrt{m}).\]
Besides, by using the induction hypothesis on \(l\), the term \(T_{3}\) is bounded as
\[T_{3}=\frac{1}{\sqrt{m}}\left\|\left(\left(2\left(W_{20}^{l}\right)^{T}-\sqrt{m}I\right)\Sigma^{\prime l}\right)\left(\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right)\right\|\leq\frac{1}{\sqrt{m}}\left\|2\left(W_{20}^{l}\right)^{T}-\sqrt{m}I\right\|\left\|\Sigma^{\prime l}\right\|\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|=\tilde{O}(1/\sqrt{m}).\]
Now combining the bounds on the terms \(T_{1},~{}T_{2}\) and \(T_{3}\), we can write
\[\left\|\mathbf{b}_{s}^{l-1}-\mathbf{b}_{s,0}^{l-1}\right\|\leq T_{1}+T_{2}+T_{3}= \tilde{O}\left(\frac{1}{\sqrt{m}}\right). \tag{79}\]
Therefore, (78) is true for \(l-1\). Hence, by induction (78) is true for all \(l\in[L]\).
By using Lemmas 11 and 12 in equation (55), we get
\[\left\|\mathbf{b}_{s}^{l}\right\|_{\infty}\leq\left\|\mathbf{b}_{s,0}^{l} \right\|_{\infty}+\left\|\mathbf{b}_{s}^{l}-\mathbf{b}_{s,0}^{l}\right\|=\tilde{O} \left(\frac{1}{\sqrt{m}}\right). \tag{80}\]
This implies,
\[\mathcal{Q}_{\infty}\left(f_{s}\right)=\max_{1\leq l\leq L}\left\{\left\|\mathbf{b }_{s}^{l}\right\|_{\infty}\right\}=\tilde{O}\left(\frac{1}{\sqrt{m}}\right). \tag{81}\]
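For intuition, the scaling in (81) can also be checked numerically with the same kind of sketch used for the LISTA network above (ours, illustrative only). Here the backward recursion \(\mathbf{b}_{s}^{l-1}=\left(\frac{2}{\sqrt{m}}\left(W_{2}^{l}\right)^{T}-I\right)\Sigma^{\prime l}\mathbf{b}_{s}^{l}\) is simulated at random initialization; the forward model (a \(\tanh\) surrogate for the proximal step and the update \(\mathbf{u}^{l}=\mathbf{u}^{l-1}+\mathbf{x}^{l}-\mathbf{z}^{l}\)) is our assumption and only serves to generate plausible pre-activations.

```python
import numpy as np

def admm_backward_infty_norm(m, L=5, n=20, seed=0):
    """Simulate b^{l-1} = ((2/sqrt(m)) W2^T - I) diag(sigma'(z_tilde^l)) b^l for a
    random width-m ADMM-CSNet-style network and return Q_inf = max_l ||b^l||_inf."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n) / np.sqrt(n)
    z, u = np.zeros(m), np.zeros(m)
    layers = []                                   # (W2^l, z_tilde^l) per layer
    for _ in range(L):                            # assumed unrolled forward pass
        W1 = rng.standard_normal((m, n))
        W2 = rng.standard_normal((m, m))
        x = (W2 @ (z - u)) / np.sqrt(m) + (W1 @ y) / np.sqrt(n)
        z_tilde = u + x                           # pre-activation of the z-update
        z = np.tanh(z_tilde)                      # smooth surrogate for the proximal step
        u = u + x - z                             # assumed auxiliary/dual update
        layers.append((W2, z_tilde))
    v = rng.choice([-1.0, 1.0], size=m)
    b = v / np.sqrt(m)                            # b^L = (1/sqrt(m)) v
    q_inf = np.max(np.abs(b))
    for W2, z_tilde in reversed(layers[1:]):      # backward recursion, layers L..2
        sp = 1.0 - np.tanh(z_tilde) ** 2
        b = (2.0 / np.sqrt(m)) * (W2.T @ (sp * b)) - sp * b
        q_inf = max(q_inf, np.max(np.abs(b)))
    return q_inf

for m in [100, 400, 1600, 6400]:
    print(m, np.sqrt(m) * admm_backward_infty_norm(m))   # roughly constant up to logs
```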
|
2309.12692 | Enhancing Graph Representation of the Environment through Local and
Cloud Computation | Enriching the robot representation of the operational environment is a
challenging task that aims at bridging the gap between low-level sensor
readings and high-level semantic understanding. Having a rich representation
often requires computationally demanding architectures and pure point cloud
based detection systems that struggle when dealing with everyday objects that
have to be handled by the robot. To overcome these issues, we propose a
graph-based representation that addresses this gap by providing a semantic
representation of robot environments from multiple sources. In fact, to acquire
information from the environment, the framework combines classical computer
vision tools with modern computer vision cloud services, ensuring computational
feasibility on onboard hardware. By incorporating an ontology hierarchy with
over 800 object classes, the framework achieves cross-domain adaptability,
eliminating the need for environment-specific tools. The proposed approach
allows us to handle also small objects and integrate them into the semantic
representation of the environment. The approach is implemented in the Robot
Operating System (ROS) using the RViz visualizer for environment
representation. This work is a first step towards the development of a
general-purpose framework, to facilitate intuitive interaction and navigation
across different domains. | Francesco Argenziano, Vincenzo Suriani, Daniele Nardi | 2023-09-22T08:05:32Z | http://arxiv.org/abs/2309.12692v1 | # Enhancing Graph Representation of the Environment through Local and Cloud Computation
###### Abstract
Enriching the robot representation of the operational environment is a challenging task that aims at bridging the gap between low-level sensor readings and high-level semantic understanding. Having a rich representation often requires computationally demanding architectures and pure point cloud based detection systems that struggle when dealing with everyday objects that have to be handled by the robot. To overcome these issues, we propose a graph-based representation that addresses this gap by providing a semantic representation of robot environments from multiple sources. In fact, to acquire information from the environment, the framework combines classical computer vision tools with modern computer vision cloud services, ensuring computational feasibility on onboard hardware. By incorporating an ontology hierarchy with over 800 object classes, the framework achieves cross-domain adaptability, eliminating the need for environment-specific tools. The proposed approach allows us to handle also small objects and integrate them into the semantic representation of the environment. The approach is implemented in the Robot Operating System (ROS) using the RViz visualizer for environment representation. This work is a first step towards the development of a general-purpose framework, to facilitate intuitive interaction and navigation across different domains.
## I Introduction
In recent years, the field of robotics has witnessed significant advancements in perception capabilities, thanks to the proliferation of sensors and computer vision techniques. However, bridging the gap between low-level sensor readings and high-level semantic understanding remains a challenge. To this end, we propose a framework, in its early stages, that tackles this gap using a graph representation to connect sensor data with a semantic representation of the environment.
To achieve good precision in object detection while keeping the computational load acceptable on resource-constrained robotic hardware, our approach combines classical computer vision tools with modern computer vision cloud services. By leveraging the power of cloud computing, we can offload intensive processing tasks and ensure real-time responsiveness even on limited onboard hardware. Since robots often need to deal with small objects, we adopted the cloud vision system to obtain high precision on individual objects in the environment.
One of the major advantages of our framework lies in its ability to be cross-domain and adaptable to different environments without requiring the development of specific customizations for each scenario. This is achieved through the incorporation of an ontology hierarchy, encompassing more than 800 object classes. We integrated the remote hierarchy with the local ontology to obtain a unified graph representation that can be used in the robot's tasks. By using such a comprehensive ontology, our framework can handle diverse environments and objects, facilitating seamless navigation and interaction across various contexts.
To semantically represent the robot's environment, we employ a set of entity classes to define objects and their attributes, while a graph representation is utilized to establish connections between the environment's entities. Such a representation can then also be exploited to perform many Human-Robot Interaction tasks, for example in unexplored environments. Another application scenario is exploring and storing the information of a scene at different time frames to capture the temporal evolution of an environment (ideally, humans could ask the robotic agent for information related to a specific time instance, such as when the room was tidied, rather than the current version of the environment), and this can also be studied within the
Fig. 1: The obtained 3D scene representation in RViz with the exploration of a small portion of the environment. The objects are delimited by the bounding boxes (obtained from the cloud computation) fused with the point cloud information. After this, the corresponding graph representation is generated.
Continual Learning framework.
We implement the proposed architecture on the Robot Operating System (ROS) and use the RViz visualizer, allowing for intuitive visualization and interaction with the environment. An example of this visualization can be seen in Fig. 1. The platform used is the TIAGo robot, manufactured by PAL Robotics1.
Footnote 1: [https://pal-robotics.com/](https://pal-robotics.com/)
The rest of this paper is organized as follows: Section 2 provides a brief overview of related work in semantic representation in robotics. Section 3 elaborates on our proposed framework, highlighting the graph-based representation and ontology hierarchy, focusing on the integration with ROS and the chosen platform. Section 4 presents concluding remarks.
## II Related Work
Semantic representation in robotics is a vital area of research, enabling robots to comprehend and interact with their environment effectively. Various approaches have been proposed to bridge the gap between sensor data and semantic understanding. Object recognition algorithms and scene understanding techniques [3] are commonly employed to extract high-level semantic information from sensor data, facilitating intelligent decision-making by robots.
In recent years, graph-based approaches have gained popularity in robotics due to their ability to capture complex relationships and dependencies within the environment. By representing the environment as a graph, these frameworks provide a structured representation that facilitates semantic understanding and reasoning. Graph-based frameworks have been successfully applied to object detection [11] and scene parsing [7], enabling robots to perceive and interpret their surroundings effectively. 3D scene graphs have been presented and used in [1, 6] to represent 3D environments. In those, nodes represent spatial concepts at multiple levels of abstraction and edges represent relations between concepts. One of the limitations in building such a representation automatically has been the computational cost. To overcome such issues, the authors of [5] recently introduced Hydra, which is capable of incrementally building a 3D scene graph from sensor data in real time thanks to the combination of novel online algorithms and a highly parallelized perception architecture. Another approach, capable of incrementally building the scene graph while also aggregating PointNet [9] features from primitive scene components using a graph neural network, has been proposed in [10]; there, an attention mechanism is also proposed to deal with missing graph data in incremental reconstruction scenarios. With purely local approaches, the set of detectable objects is quite limited, since onboard hardware can rarely host large neural network models. To this end, in order to guarantee cross-domain adaptability through a large set of detectable objects while remaining computationally sustainable on robot CPUs, cloud services have been adopted as an addition to the local perception pipeline. Cloud services for computer vision, such as Google Cloud Vision2, have emerged as viable solutions in robotic applications [4]. By leveraging cloud services, robots can offload processing tasks, enabling real-time perception even on resource-constrained hardware [8]. By relying on this platform
Fig. 2: On the left: population in the RViz visualizer of the objects detected thanks to Google Cloud’s API. On the right, the graph extracted from the underlying relations between objects, and between objects and their properties. Different colors mean different semantic groups of the nodes: materials, shapes, objects and colors.
we fused the remote hierarchy with the local ontology to map the environment, obtaining a unified graph representation that can be used for robot navigation and object localization tasks. Moreover, with respect to the state of the art, we work at a different level of detail. In this way, we are able to include a wider variety of objects in the scene representation, thus expanding the set of possible future human-robot interaction scenarios.
## III Methodology
### _Entity Representation and Hierarchy_
Hierarchies of concepts with too many entries can be very difficult to handle, especially when cross-domain applications are involved. Hierarchical ontologies that include very deep levels of detail can be composed of thousands of classes, and thus carry a higher risk of misclassifying the real-world objects that the robotic agent can come across. Choosing an adequate level of detail is therefore a key aspect for robots that are set to explore different domains. These agents should be able to recognize objects and their properties (for example, shapes and materials) that an average human participant in the interaction would recognize. To this end, the proposed approach exploits Google Cloud Vision APIs and their underlying taxonomy. The main idea behind this choice is that a robotic agent that will eventually come into contact with humans is not required to deal with very detailed concepts that the human agent may not be aware of; therefore, an ontology hierarchy based on everyday-life objects and entities (even if cross-domain) is enough. This taxonomy can be observed in Fig. 3. Such hierarchy is composed as follows:
* the root is labelled as _Entity_, from which every other concept is derived;
* the first level of descendants consists on some basic general concepts that group several categories like _Animal_, _Vehicle_, _Building_ and so on;
* from this point, the hierarchy is refined in a non-homogeneous fashion, with some classes refined more times than others before reaching the leaf concepts of the tree;
* at the end of the hierarchy there are the most basic concepts, like _Hammer_, _Dishwasher_ and _Bee_.
For the purposes of this research, classes and concepts concerning animals and people were removed. Despite this, the resulting taxonomy is still composed of more than 800 classes, which we considered a suitable amount for this application.
In addition to the main class, Google Vision APIs are able to provide additional information about the detected object. The challenge with this information is that it comes all together and in an unstructured manner. It is important that concepts with different semantics (e.g. the object class and the color, or the material) are treated differently. To this end, we extended the labels provided by Google by reorganizing them into an ad-hoc augmented ontology: in this ontology, every concept is represented in the format '_name.e_', in which the 'e' letter changes depending on the semantics of the information being represented. We have distinguished 4 main groups of concepts: '_o_' stands for _objects_ (i.e. 'hat.o'); '_m_' stands for material ('plastic.m'); '_s_' stands for shape ('cube.s'); '_c_' stands for color ('red.c'). With this organization, we are also able to express relations between concepts: a detected blue chair in the environment is translated as the triple "chair.o ObjHasColor blue.c"; and since the same object can hold multiple relations ("chair.o ObjHasColor blue.c" but also "chair.o ObjHasMaterial plastic.m"), even a few images of the environment can produce numerous triples, which are then stored and represented within a knowledge graph. All the possible features that are assigned to the objects of the world can come from different sources. The ones that are hard to perceive locally by the robot (like the materials of objects) are extracted entirely in the cloud, while others are obtained by generalizing and abstracting from other objects using an approach derived from [2]. An example of the output of the system can be seen in Fig. 2, while the architecture can be observed in Fig. 4.
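A minimal sketch of how such detections could be turned into suffixed concepts and relation triples and stored in a knowledge graph is given below (ours, for illustration only; the detection dictionaries, helper names, and the ObjHasShape relation are assumptions, not the actual implementation).

```python
import networkx as nx

# Hypothetical detections, e.g. parsed from the cloud vision response:
detections = [
    {"object": "chair", "color": "blue", "material": "plastic"},
    {"object": "bottle", "color": "green", "material": "glass", "shape": "cylinder"},
]

SUFFIX = {"object": "o", "color": "c", "material": "m", "shape": "s"}
RELATION = {"color": "ObjHasColor", "material": "ObjHasMaterial",
            "shape": "ObjHasShape"}  # ObjHasShape assumed by analogy

def add_detection(graph, det):
    """Turn one detection into 'name.e' concepts and relation triples."""
    obj = f'{det["object"]}.{SUFFIX["object"]}'            # e.g. "chair.o"
    graph.add_node(obj, group="object")
    for attr, rel in RELATION.items():
        if attr in det:
            concept = f'{det[attr]}.{SUFFIX[attr]}'        # e.g. "blue.c"
            graph.add_node(concept, group=attr)
            graph.add_edge(obj, concept, relation=rel)     # triple: obj rel concept

g = nx.DiGraph()
for det in detections:
    add_detection(g, det)

for s, o, data in g.edges(data=True):                      # print the stored triples
    print(s, data["relation"], o)
```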
### _ROS Architecture_
In the lower portion of Fig. 4, it is possible to observe the pipeline of operations that involves the depth images coming from the TIAGo robot's RGB-D camera. Initially, from each image, the most relevant point clouds are extracted and referenced. These point clouds, however, are expressed in the
Fig. 3: Part of the hierarchy class from Google Cloud Vision. Starting from _Entity_, the root of all concepts, more than 800 classes are provided. The full set of classes is available at [https://storage.googleapis.com/openimages/2018_04/bbox_labels_600_hierarchy_visualizer/circle.html](https://storage.googleapis.com/openimages/2018_04/bbox_labels_600_hierarchy_visualizer/circle.html) |
2306.09351 | BN-DRISHTI: Bangla Document Recognition through Instance-level
Segmentation of Handwritten Text Images | Handwriting recognition remains challenging for some of the most spoken
languages, like Bangla, due to the complexity of line and word segmentation
brought by the curvilinear nature of writing and lack of quality datasets. This
paper solves the segmentation problem by introducing a state-of-the-art method
(BN-DRISHTI) that combines a deep learning-based object detection framework
(YOLO) with Hough and Affine transformation for skew correction. However,
training deep learning models requires a massive amount of data. Thus, we also
present an extended version of the BN-HTRd dataset comprising 786 full-page
handwritten Bangla document images, line and word-level annotation for
segmentation, and corresponding ground truths for word recognition. Evaluation
on the test portion of our dataset resulted in an F-score of 99.97% for line
and 98% for word segmentation. For comparative analysis, we used three external
Bangla handwritten datasets, namely BanglaWriting, WBSUBNdb_text, and ICDAR
2013, where our system outperformed by a significant margin, further justifying
the performance of our approach on completely unseen samples. | Sheikh Mohammad Jubaer, Nazifa Tabassum, Md. Ataur Rahman, Mohammad Khairul Islam | 2023-05-31T04:08:57Z | http://arxiv.org/abs/2306.09351v1 | BN-DRISHTI: Bangla Document Recognition through Instance-level Segmentation of Handwritten Text Images
###### Abstract
Handwriting recognition remains challenging for some of the most spoken languages, like Bangla, due to the complexity of line and word segmentation brought by the curvilinear nature of writing and lack of quality datasets. This paper solves the segmentation problem by introducing a state-of-the-art method (BN-DRISHTI1) that combines a deep learning-based object detection framework (YOLO) with Hough and Affine transformation for skew correction. However, training deep learning models requires a massive amount of data. Thus, we also present an extended version of the BN-HTRd dataset comprising 786 full-page handwritten Bangla document images, line and word-level annotation for segmentation, and corresponding ground truths for word recognition. Evaluation on the test portion of our dataset resulted in an F-score of 99.97% for line and 98% for word segmentation. For comparative analysis, we used three external Bangla handwritten datasets, namely BanglaWriting, WBSUBNdb_text, and ICDAR 2013, where our system outperformed by a significant margin, further justifying the performance of our approach on completely unseen samples.
Footnote 1: **Code and Demo:**[https://github.com/crusnic-corp/BN-DRISHTI](https://github.com/crusnic-corp/BN-DRISHTI)
Keywords: Handwritten Text Recognition (HTR) · Data Annotation · Image Segmentation · Computer Vision · Deep Learning.
## 1 Introduction
Line and word segmentation are among the most fundamental parts of handwritten document image recognition. As the field of deep learning is maturing at an unprecedented speed, solving this sort of task with off-the-shelf deep learning frameworks is becoming popular for its efficiency. However, few attempts have been made to utilize this approach for the Bangla handwriting recognition task due to the scarcity of datasets in this domain. Our previous endeavors involved an initial dataset-making process named BN-HTRd (v1.0), comprising Bangla handwritten document images and only line-level annotations and ground truths for word recognition. However, that dataset was incomplete due to the missing word-level annotations. Therefore, to have a more comprehensive and usable handwriting recognition dataset, we have extended the BN-HTRd (v4.0) dataset4 by integrating word-level annotations and necessary improvements in the ground truths for the word recognition task.
Footnote 4: **Extended Dataset:** [https://data.mendeley.com/datasets/743k6dm543](https://data.mendeley.com/datasets/743k6dm543)
As segmentation plays a vital role in recognizing handwritten documents, another pivotal _contribution_ of this paper is the conglomeration of a state-of-the-art method for segmenting lines and words from transcribed images. Our approach treats the segmentation task as an object detection problem by identifying the distinct instances of similar objects (i.e., lines, words) and demarcating their boundaries. Thus in a way, we are performing instance-level segmentation as it is particularly useful when homogeneous objects are required to be considered separately. To do so, we partially rely on the YOLO (You Only Look Once) framework. However, the success of our method is more than just the training of the YOLO algorithm. In order to get the perfect words segmented from possibly complex curvilinear text lines, we had to improvise our approach to retrieve the main handwritten text lines correctly by removing other unnecessary elements. For that, we used a combination of the Hough and Affine transform methods. The Hough transform predicts the skew angles of the main handwritten text lines, and the Affine transform rotates them according to the expected gradients, making them straight horizontally. Therefore, the word segmentation approach provides much better results compared to the segmentation on skewed lines. Thus, the main contributions of this paper are threefold:
1. Introducing a straightforward novel hybrid approach for instance-level handwritten document segmentation into corresponding lines and words.
2. Achieved _state-of-the-art_ (SOTA) scores on three different prominent Bangla handwriting datasets for line/word segmentation tasks.
3. Set a new benchmark for the BN-HTRd dataset. Also, extended5 it to be one of the largest and the most comprehensive Bangla handwritten document image segmentation and recognition dataset by adding 200k+ annotations.
Footnote 5: **Changes:** [https://data.mendeley.com/v1/datasets/compare/743k6dm543/4/1](https://data.mendeley.com/v1/datasets/compare/743k6dm543/4/1)
## 2 Related Work
**CMATERdb**[21] is one of the oldest character-level datasets consisting of 150 Bangla handwritten document images distributed among two versions. Another prominent character-level dataset having 2000 handwritten samples named **BanglaLekha-Isolated**[5] contains 166105 handwritten characters written by an age group of 6 to 28. **Ekush**[15], which is a multipurpose dataset, contains 367,018 isolated handwritten characters written by 3086 individual writers. The authors also benchmarked the dataset using a multilayer CNN model (**EkushNet**) for character classification, achieving an accuracy of 97.73% on their dataset while scoring 95.01% in the external **CMATERdb** dataset.
A paragraph-level dataset that resembles our dataset in terms of word-level annotation is the **BanglaWriting**[12] dataset, which includes single-page handwriting comprising 32,787 characters, 21,234 words, and 5,470 unique words produced by 260 writers of different ages and personalities. Another paragraph-level unannotated dataset, **WBSUBNdb_text**[10], consisting of 1383 handwritten Bangla scripts having around 100k words, was collected from 190 transcribers for the writer identification task. In terms of a document-level dataset most closely resembling our own, the **ICDAR 2013**[22] handwriting segmentation contest dataset comes with 2649 lines and 23525 word-level annotations for 50 handwritten document images in Bangla.
Segmenting handwritten document images in terms of lines and words is the most crucial part when it comes to end-to-end handwritten document image recognition. In _Projection-based_ methods [9][14][13][8], the handwritten lines are obtained by computing the average distance between the peaks of the projected histogram. A method based on the skew normalization process is proposed in [3]. _Hough-based_ methods [9] represent geometric shapes such as straight lines, circles, and ellipses in terms of parameters to determine geometric locations that suggest the existence of the desired shape. The author of [8] presented a skew correction technique for handwritten Arabic document images using their optimized randomized Hough transform, followed by resolving the primary line for segmentation. For layout analysis, _Morphology-based_ approaches [9][7] have been used along with piece-wise painting (PPA) algorithms [2], to segment script independent handwritten text lines. In contrast, _Graphbased_ approaches [9][23][11] compactly represent the image structure by keeping the relevant information on the arrangement of text lines. _Learning-based_ techniques recently became popular for segmenting handwritten text instances. The authors of [19][24][20][4] used a Fully Convolutional Network (FCN) for this purpose. A model based on the modified multidimensional long short-term memory recurrent neural networks (**MDLSTM RNNs**) was proposed in [6]. An unsupervised _clustering_ approach [16] was utilized for line segmentation which achieved an F-score of 81.57% on the BN-HTRd dataset.
A series of consistent recent works on **Bangla handwriting segmentation**[17][1][18] is carried out by a common research team that also developed the WBSUBNdb_text dataset. Their technique predominantly relies on the projection profile method and connected component analysis. They initially worked on tri-level (line/word/character) segmentation [17], while their latest works focus solely on word [1] and line segmentation [18]. Moreover, the method in [18] performs line segmentation on multi-script handwritten documents, while the other two works only address Bangla scripts.
Our work can be categorized as a **Hybrid Approach** for segmenting lines and words. Our supervised models employ the YOLO deep learning framework to predict lines and words from handwritten document images. We use the Hough Line Transform to measure each segmented line's skew angle and then correct it with the Affine Transform. This combination has not previously been used in the literature for Bangla handwritten recognition tasks.
## 3 Dataset
Data annotation is one of the most crucial parts of the dataset curation process where supervised learning is concerned. As a primary text source, we considered the BBC Bangla News platform since it does not require any restrictions and has an open access policy. Hence, we downloaded various categories of news content as files in TEXT and PDF format, renamed files according to the sequence of 1 to 237, and put them in separate folders. We distributed those 237 folders among 237 writers of different ages, disciplines, and genders. They were instructed to write down the text file's contents in their natural writing style and to take pictures of the pages afterward. This resulted in 1,591 handwritten images in total. Due to the complexity of the task, we were only able to recruit a total of 75 individuals to annotate lines of assigned handwritten images using an annotation tool called LabelImg. As a result, we were only able to annotate a maximum of 150 folders. The resultant annotation produced YOLO and PASCAL VOC formatted ground truth for line segmentation. These 150 folders of handwritten images and their line annotations were included in the first version of the BN-HTRd dataset [16]. For the purpose of word segmentation, we have extended the dataset (v4.0) by adding bounding-box annotations of individual words for all the annotated lines. We also organized each word of the text file into separate rows in Excel in order to create the ground truth Unicode representation of the corresponding word's images for recognition purposes in the future.
We used this extended BN-HTRd dataset containing annotations in 150 folders to develop and test our system. It contains a total of 786 handwritten images comprising 14,383 lines and 1,08,181 words. The rest of the unannotated 87 folders were automatically annotated using our system, resulting in an additional 14,836 lines and 1,06,135 words, which we denoted as Automatic Annotations. For the purpose of experimental evaluation, we split the 150 folders into two subsets and took one image from each of the folders for either validation or testing (resulting in 75 images for each subset). The rest of the 636 images were used for training purposes. Table 1 below shows this subdivision.
## 4 Proposed Methodology
We have broken down our overall system architecture in Fig. 1, which consists of six parts. Those six parts cover the overall process of how our system functions. Before dissecting those parts in detail in the later sections (4.1 - 4.5), we will provide a brief overview in the following:
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Type** & **Purpose** & **Train** & **Valid** & **Test** & **Total** \\ \hline \hline Doc. Images & Line Segmentation & 636 & 75 & 75 & 786 \\ \hline Line Images & Word Segmentation & 11,471 & 1,515 & 1,397 & 14,383 \\ \hline Word Images & Word Recognition & 86,055 & 11,712 & 10,414 & 1,08,181 \\ \hline \end{tabular}
\end{table}
Table 1: Distribution of extended BN-HTRd (v4.0) dataset for experimentation.6
* Our efforts in making and extending the BN-HTRd dataset involved various development processes such as distributing the data to the writers, manual annotations, and making it compatible with supervised learning methods such as ours (details in section 3).
* Although training the models is a crucial part of any supervised system, it was not enough in our case despite YOLO being one of the best frameworks. It was predicting redundant lines, which we had to eliminate in order to get better segmentation scores (details in section 4.1 and 4.2).
* As we were also getting some unnecessary lines along with the target line, a better line segmentation method is essential to segment the words correctly. To remove them, we rotated the curvilinear lines using the Hough-Affine transformation and corrected their skewness (details in section 4.3).
* We applied the final/second YOLO line prediction on the skew-corrected lines, followed by some post-processing in order to extract the main hand-written line (details in section 4.4).
* Finally, word prediction and segmentation are performed on skew-corrected final segmented line images using the word model (details in section 4.5).
Figure 1: Overall System Architecture for BN-DRISHTI.
### Training Models
The YOLOv5x (XLarge) model architecture with the default SGD optimizer was used to train both our Line and Word models for 300 epochs. We used document images with line annotations to train the initial line segmentation model. In contrast, line images and their word annotations were used to train the Word model. The training was done using an NVIDIA RTX 3060 Laptop GPU containing 6 GB GDDR6 memory and 3840 CUDA cores.
### First-Line Prediction and Segmentation
The line detection is performed on document images without pre-processing or resizing; some output samples are shown in Fig. 2a. YOLO generates a TEXT file for each document image, representing each predicted line as \(<\)\(class\_id\), x, y, width, height, confidence\(>\) in no particular order. The confidence threshold during prediction is set to 0.3 to include lines with few words or a single word that would otherwise be missed. However, this approach resulted in both unnecessary line predictions and correct ones with confidence below 0.5. To address this, the output is sorted based on the y-axis attribute, and unnecessary bounding boxes having unusual heights but lower confidence that encompass or overlap with one or more boxes are filtered out, resulting in filtered first-line predictions (Fig. 2b). The filtered predicted lines are then extracted using their YOLO attributes: \(<\)_x, y, width, height\(>\)_. Fig. 2c illustrates the process of first-line detection, filtering, and corresponding segmentation.
Figure 2: Representation of First-line prediction and segmentation, where a) sample image with first-line prediction containing multiple unnecessary predictions, b) filtered first-line prediction, and c) another sample image with filtered first-line prediction and segmentation for curvilinear handwriting.
### Rotation (skew estimation and correction)
After analyzing our first segmented line images, we found that, along with the main handwritten line, we also get some unwanted lines at the top or bottom due to the skewness of the lines and the rectangular shape of the predicted bounding box. Therefore, skew correction over the first line prediction is important in order to retrieve the main handwritten line. We denote this process as _Rotation_; it is performed by applying the Hough line and Affine transforms. The overall rotation process is represented in Fig. 3.
#### 4.3.1 Skew Estimation:
We categorized handwritten lines' skew into two types: Positive and Negative (shown in Fig. 4). The skew angle estimation is performed in two phases:
1. Line Skew (LSkew) Estimation: where we applied the Standard Hough Transform (SHT).
2. Dimension Skew (DSkew) Estimation: where we applied the Probabilistic Hough Transform (PHT).
_LSkew:_ In Bangla writing, each word consists of letters, and the letters are often connected by a horizontal line called 'matra'. By connecting those horizontal lines above the words using SHT, we construct straight lines, which we denote as Hough lines. Using those Hough lines, we estimate the skew angle of the main handwritten line. In terms of the representation of LSkew (Fig. 4), if the detected Hough lines have positive skew, the estimated skew angle will be negative; otherwise positive. We illustrate this LSkew estimation process in Fig. 5 with two samples of segmented line images, one with positive skew and the other with negative skew.
Figure 3: Flowchart of skew estimation and correction over the first predicted lines.
The SHT is applied to get the Hough lines by connecting the adjacent edge points of the main handwritten line's words, represented in Fig. 5 (top). Consequently, we calculated the average of all the detected Hough lines' parameters and considered this value to be the best detected Hough line. Fig. 5 (bottom) represents the average skew angle (\(\theta_{avg}\)), which is the optimal skew angle of our best detected Hough line.
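A minimal OpenCV sketch of this LSkew estimate is given below (our own illustration; the Canny and Hough thresholds are placeholders, and the conversion of OpenCV's normal angle \(\theta\) into a line inclination is our choice):

```python
import cv2
import numpy as np

def lskew_angle(line_img):
    """Estimate the skew of a segmented line image with the Standard Hough Transform.

    Returns the average inclination of the detected Hough lines in degrees,
    or None when no Hough line is found (the DSkew fallback is used then).
    """
    gray = cv2.cvtColor(line_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=100)  # (rho, theta) pairs
    if lines is None:
        return None
    # theta is the angle of the line's normal; theta - 90 deg is the line's own inclination
    angles = [np.degrees(theta) - 90.0 for rho, theta in lines[:, 0]]
    return float(np.mean(angles))                                 # theta_avg
```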
_DSkew:_ In some cases, SHT fails to detect the Hough lines, even though the main handwritten line in those segmented images is clearly skewed. We identified that the dimensions of those failed images are too small compared to the standard dimensions of the line images where SHT works. Moreover, in most cases those line images contain only a few words and, in such cases, do not require any skew correction. Therefore, we opt for the DSkew process by applying PHT. We up-scale those failed images while preserving the aspect ratio before applying PHT (shown in Fig. 6).
Figure 4: Representation of Hough lines using the equation \(\rho=x\cos\theta+y\sin\theta\), where \(\theta\) is the angle of the detected line and \(\rho\) is the distance from the x-axis.
Figure 5: Detected (top) and the Best Detected (bottom) Hough Lines, where the main handwritten line contains, a) Positive Skew, and b) Negative Skew.
Figure 6: Changing the dimension of nonstandard line image before applying PHT.
We apply some preprocessing steps, such as image binarization and a morphological operation with a 3x3 kernel, to make the objects' strokes and overall shape thicker and sharper. Finally, the Canny edge detection method is applied before we can use the PHT. The output of these preprocessing steps can be seen in Fig. 7.
The PHT not only joins the 'matra' of words but also connects subsets of the edge points of each word individually wherever there is a potential Hough line. We named this dimension skew, or DSkew, as each word component in the image takes part in the skew estimation process. Like SHT, we also get the typical Hough line parameters \((x_{1},y_{1})\), \((x_{2},y_{2})\), and \((\rho,\theta)\) from PHT. Hence, we applied the PHT to the edge-detected image (of Fig. 7c) and obtained the detected Hough lines shown in Fig. 7d. As the process detects multiple Hough lines for almost every word, each line has many \(\theta\) values, which we denote as _Degree_. To obtain the optimal skew angle of that image, we perform a voting process by dividing the \(xy\) space into six cases to determine where the majority of the detected Hough lines fall. We then take the average of those lines' parameters to find the average Degree (\(Degree_{avg}\)) and consider this as the skew angle of the Hough lines detected by PHT. The six cases of the voting process and their outcomes are given in Table 2:
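The DSkew estimate can be sketched as follows (our own illustration, to be read together with Table 2 below; kernel sizes and Hough parameters are placeholders, and the sign convention depends on how the image y-axis is oriented):

```python
import cv2
import numpy as np

def dskew_degree(line_img):
    """DSkew estimate: PHT segment angles plus the voting scheme of Table 2."""
    gray = cv2.cvtColor(line_img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    dilated = cv2.dilate(binary, np.ones((3, 3), np.uint8))
    edges = cv2.Canny(dilated, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=5)
    if segs is None:
        return 0.0
    degrees = []
    for x1, y1, x2, y2 in segs[:, 0]:
        if x1 == x2:
            degrees.append(90.0)                 # vertical segment
        elif y1 == y2:
            degrees.append(0.0)                  # straight segment
        else:
            degrees.append(np.degrees(np.arctan((y2 - y1) / (x2 - x1))))
    # voting: the quadrant of Table 2 that collects the most segments wins
    quadrants = [[], [], [], []]
    for d in degrees:
        if -45.0 <= d <= 0.0:
            quadrants[0].append(d)               # positive skew
        elif -90.0 <= d < -45.0:
            quadrants[1].append(d)               # negative skew
        elif 0.0 < d <= 45.0:
            quadrants[2].append(d)               # negative skew
        else:
            quadrants[3].append(d)               # positive skew
    winner = max(quadrants, key=len)
    return float(np.mean(winner))                # Degree_avg
```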
#### 4.3.2 Skew Correction:
In order to correct the estimated skew of our segmented lines, we rotate them using the Affine Transform (AT) relative to the center of the image. The rotation processes for LSkew and DSkew are described below.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Based On** & **Voting Categories** & **Detected Hough** & **Final outcome as an** \\ & & **Line Types** & **average of degrees** \\ \hline \multirow{2}{*}{Coordinates} & \(x_{1}\) equals \(x_{2}\) & Vertical & Return \(Degree_{avg}\) as \(90^{\circ}\) \\ \cline{2-4} & \(y_{1}\) equals \(y_{2}\) & Straight & Return \(Degree_{avg}\) as \(0^{\circ}\) \\ \hline \multirow{4}{*}{Quadrants} & \(-45^{\circ}\leq\) Degree \(\leq 0^{\circ}\) & Positive Skew & Return \(Degree_{avg}\) \\ \cline{2-4} & \(-90^{\circ}\leq\) Degree \(<-45^{\circ}\) & Negative Skew & Return \(Degree_{avg}\) \\ \cline{2-4} & \(0^{\circ}<\) Degree \(\leq 45^{\circ}\) & Negative Skew & Return \(Degree_{avg}\) \\ \cline{2-4} & \(45^{\circ}<\) Degree \(\leq 90^{\circ}\) & Positive Skew & Return \(Degree_{avg}\) \\ \hline \end{tabular}
\end{table}
Table 2: Voting process of DSkew with their categories and outcomes.
Figure 7: Preprocessing and Hough line detection of sample resized line image represented in Fig. 6; where a) Binarization, b) Morphological Dilation, c) Canny edge detection, and d) Detected Hough lines using PHT.
_LSkew:_ After estimating the optimal skew angle (\(\theta_{avg}\)) using LSkew, we rotate the image by that skew angle through AT using the following two conditions:
1. If the value of \(\theta_{avg}\) is Negative, we rotate the image Clockwise.
2. If the value of \(\theta_{avg}\) is Positive, we rotate the image Anti-Clockwise.
Fig. 8 illustrates the skew-correction for the segmented lines of Fig. 5.
_DSkew:_ The rotation for DSkew correction is similar to the rotation for LSkew correction, but the process of finding the optimal degree for rotation is different. Here, we calculate the optimal skew angle (\(\theta_{avg}\)) based on the estimated \(Degree_{avg}\) from DSkew. Then, according to \(\theta_{avg}\), we rotate the image using AT by following the four conditions listed in Table 3:
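The rotation itself can be sketched with OpenCV's affine rotation about the image centre (our own illustration; it assumes the positive-angle-is-anti-clockwise convention of `cv2.getRotationMatrix2D`, under which both the LSkew conditions above and the Table 3 mapping reduce to passing the signed angle directly):

```python
import cv2

def rotate_about_center(img, angle_deg):
    """Rotate `img` by `angle_deg` about its centre (positive = anti-clockwise)."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h), borderValue=(255, 255, 255))

def correct_dskew(img, degree_avg):
    """Map Degree_avg to the optimal skew angle theta_avg following Table 3, then rotate."""
    if -45.0 <= degree_avg <= 0.0:
        theta_avg = degree_avg            # negative value -> clockwise rotation
    elif -90.0 <= degree_avg < -45.0:
        theta_avg = degree_avg + 90.0     # positive value -> anti-clockwise rotation
    elif 0.0 < degree_avg <= 45.0:
        theta_avg = degree_avg            # anti-clockwise
    else:                                 # 45 < degree_avg <= 90
        theta_avg = degree_avg - 90.0     # clockwise
    return rotate_about_center(img, theta_avg)
```

For LSkew correction the same helper is used with the estimated \(\theta_{avg}\) passed in directly, since a negative angle already produces a clockwise rotation.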
### Final/Second Line Prediction and Segmentation
Final or second line prediction is applied on the skew-corrected lines to retrieve the main handwritten lines by eliminating the unwanted lines. Before that, we trim down each side of the DSkewed line image by a little portion to avoid unnecessary word prediction. Here, we consider a confidence threshold of 0.5. We also follow a selection process when we have multiple lines even after the second line prediction, as described below:
1. **The number of line predictions is one:** In this case, we segment the line with the given bounding box attributes, like in Fig. 9. If the width of the predicted line is less than 40% of the image width, we keep it as it is.
Figure 8: Skew correction of segmented lines using AT where the original line was, a) Negative Skewed (rotated anti-clockwise); and b) Positive Skewed (rotated clockwise).
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**No.** & **Conditions** & **Optimal Skew (\(\theta_{avg}\))** & **Rotation** \\ \hline
1. & -45\({}^{*}\)\(\leq\)\(Degree_{avg}\)\(\leq\) 0\({}^{*}\) & \(\theta_{avg}\)\(=Degree_{avg}\) & Clockwise \\ \hline
2. & -90\({}^{*}\)\(\leq\)\(Degree_{avg}\)\(<\) -45\({}^{*}\) & \(\theta_{avg}\)\(=Degree_{avg}\) + 90\({}^{*}\) & Anti-clockwise \\ \hline
3. & 0\({}^{*}\)\(<\)\(Degree_{avg}\)\(\leq\) 45\({}^{*}\) & \(\theta_{avg}\)\(=Degree_{avg}\) & Anti-clockwise \\ \hline
4. & 45\({}^{*}\)\(<\)\(Degree_{avg}\)\(\leq\) 90\({}^{*}\) & \(\theta_{avg}\)\(=Degree_{avg}\) - 90\({}^{*}\) & Clockwise \\ \hline \end{tabular}
\end{table}
Table 3: Conditions for skew correction for the process of DSkew.
2. **The number of line predictions is two:** In this case, we normally segment the line prediction with the maximum width, like in Fig. 10. But if both predicted lines' widths are less than 50% of the image width, then we check their confidence and segment the line with the maximum confidence. Otherwise, we keep the image as it is.
3. **The number of line predictions is three:** In this case, we segment the line which stays in the middle like in Fig. 11.
4. **The number of line predictions is more than three:** Unseen cases, where we select and segment the line having the maximum width (a compact sketch of these selection rules follows this list).
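The selection rules above can be summarised as follows; a minimal sketch (ours), assuming each prediction is an (x, y, width, height, confidence) tuple in normalised coordinates, and resolving the ambiguous "keep the image as it is" branch of case two in favour of the more confident box:

```python
def select_final_line(preds):
    """Pick the main handwritten line from the second-pass YOLO predictions.

    preds: list of (x, y, w, h, conf) tuples in normalised coordinates, sorted
    top-to-bottom. Returns the chosen box, or None to keep the whole image.
    """
    if len(preds) == 1:
        box = preds[0]
        return None if box[2] < 0.40 else box         # too narrow: keep image as is
    if len(preds) == 2:
        if all(p[2] < 0.50 for p in preds):
            return max(preds, key=lambda p: p[4])     # both narrow: most confident
        return max(preds, key=lambda p: p[2])         # usual case: widest
    if len(preds) == 3:
        return preds[1]                               # the middle line
    return max(preds, key=lambda p: p[2])             # more than three: widest
```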
As the segmented lines have passed through the pre-processing, rotation, and final line segmentation process, we now have our final lines segmented from the handwritten document images. Note that, we also keep track of the predicted _line numbers_ within the document for future recognition purposes. Fig. 12 illustrates the resultant final line segmentation of the lines represented in Fig. 5.
Figure 11: Line image with three line predictions and segmentation.
Figure 12: a) Initial line segmentation by YOLO containing mostly curvilinear or skewed handwritten lines with noises, and b) Final segmented lines by our line segmentation approach, which are straight and without any unnecessary lines.
Figure 10: Line image with two line prediction and segmentation (usual case).
### Word Prediction and Segmentation
We perform word prediction on the Final segmented lines by directly employing our custom YOLO word model, where we set the confidence threshold to be 0.4. We also sort the predictions based on the horizontal axis of the lines in order to get the position of a particular word in that line for future recognition purposes. Fig. 13 illustrates word prediction and segmentation from the running example.
## 5 Experimental Results
In this section, we evaluate the efficiency of our line and word segmentation approach on the BN-HTRd dataset. We will also compare our results with an unsupervised line segmentation approach of BN-HTR_LS system [16].
### Evaluation Metrics
Two bounding boxes (lines) are considered a one-to-one match if the number of matching pixels exceeds or equals the evaluator's acceptance threshold (\(T_{a}\)). Let \(N\) be the number of ground-truth elements, \(M\) the count of detected components, and _o2o_ the number of one-to-one matches between them; the Detection Rate (DR) and Recognition Accuracy (RA) are the equivalents of Recall and Precision. Combining these, we get the final performance metric FM (similar to the F-score) using the equation below:
\[DR=\frac{o2o}{N},\quad RA=\frac{o2o}{M},\quad FM=\frac{2DR*RA}{DR+RA} \tag{1}\]
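These scores are computed directly from the match counts; a small helper (ours) mirroring Eq. (1):

```python
def segmentation_scores(n_ground_truth, n_detected, n_one_to_one):
    """Detection Rate, Recognition Accuracy and FM as in Eq. (1)."""
    dr = n_one_to_one / n_ground_truth
    ra = n_one_to_one / n_detected
    fm = 2 * dr * ra / (dr + ra) if (dr + ra) > 0 else 0.0
    return dr, ra, fm

# e.g. the quantitative line-segmentation figures of Table 4:
print(segmentation_scores(1397, 1396, 1314))   # ~ (0.9406, 0.9413, 0.9409)
```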
### Line Segmentation
For the evaluation of our BN-DRISHTI line segmentation approach, we first performed a **Quantitative analysis** on the test set of 75 handwritten document images from the BN-HTRd dataset, containing 1397 (_N_) manually annotated ground truth lines. Our segmentation approach's final line prediction was 1396 (_M_), among which the number of _o2o_ matches was 1314. However, by using only the trained YOLO model, we got 1433 (_M_), which implies that YOLO predicted 37 more redundant lines compared to our approach, demonstrating the benefit of our filtering. These results are listed in rows 2-3 of Table 4.
Figure 13: Word prediction and segmentation on skew corrected final segmented lines; where \(W_{i}\) is the \(i^{th}\) word within the line.
After visually analyzing the lines' ground truth and prediction bounding boxes (see Fig. 14), we came to the conclusion that the overlap between them for each line is not quite accurate, since we performed skew correction before segmenting the line images. Thus, in the automatic or quantitative evaluation, the results we get are not as significant as we were expecting, even though almost every line of the document images was segmented perfectly. Hence, we decided to do a **Qualitative evaluation** by going through all the ground truths and predictions manually to find the _o2o_ for each handwritten document. The overall _o2o_ match was 1396, which is equal to the number of final line predictions we obtained. In Table 4 we put together the relative performance of our line segmentation approach (BN-DRISHTI), in both its quantitative and qualitative analysis, as compared to the unsupervised approach of the BN-HTR_LS system7 [16], which only performed line segmentation on the same dataset.
Footnote 7: **BN-HTR_LS Codebase:**[https://github.com/shaoncscu/BN-HTR_LS](https://github.com/shaoncscu/BN-HTR_LS)
### Word Segmentation
For this experiment, we used 10,414 manually annotated ground truth words within the line images of the test set's 75 handwritten documents. Our word model predicted 10,348 words. Table 5 shows the score of **Quantitative** analysis.
Again, for the same aforementioned reason, the quantitative evaluation does not do justice to our approach's true word segmentation capabilities. Hence, we visually compared the ground truths against our predictions and found that the positions of the word bounding boxes had changed drastically due to the changes in image dimensions during our line segmentation approach, as illustrated in Fig. 14. This occurred because the original ground truth annotation was on the
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Approaches** & **N** & **M** & **o2o** & **DR(\%)** & **RA(\%)** & **FM(\%)** \\ \hline BN-HTR\_LS [16] & 2915 & 3437 & 2591 & 88.88 & 75.38 & 81.57 \\ \hline YOLO line model & 1397 & 1433 & 1314 & 94.06 & 91.7 & 92.86 \\ \hline
**BN-DRISHTI** (Quantitative) & 1397 & 1396 & 1314 & 94.06 & 94.13 & 94.09 \\ \hline
**BN-DRISHTI** (Qualitative) & 1397 & 1396 & 1396 & **99.93** & **100** & **99.97** \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of line segmentation results on BN-HTRd test sets.
Figure 14: a) Ground truth annotation on skewed line; Vs. b) Prediction on straight line.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Ground Truths** & **Prediction** & **DR (\%)** & **RA (\%)** & **FM (\%)** \\ \hline
10,414 & 10,348 & 15.2 & 17.7 & 16.0 \\ \hline \end{tabular}
\end{table}
Table 5: Quantitative evaluation of our word segmentation on BN-HTRd test sets.
skewed lines, and our word prediction was done on the skew-corrected straight lines. Thus, after analyzing the ground truth and prediction bounding boxes, we came to the conclusion that the evaluation would not be fair if done automatically. Therefore, we again opted for a manual **Qualitative** analysis. We show both the quantitative and qualitative results in Table 6.
In Table 6, the qualitative analysis results perfectly justify our system's word segmentation capabilities. We also emphasize that word segmentation is far more precise when combined with our skew correction strategy.
### Automatic Annotation
We used the extra 87 folders containing 805 document images without ground truths to get an idea about the efficiency of our system's automatic line and word annotation capability. We randomly picked 151 document images and their predictions and compared them manually. The obtained manual evaluation scores are given in Table 7, and the results far exceed our expectations.
### Comparative Analysis
**ICDAR 2013 Dataset [22]**: This handwriting segmentation contest's dataset contains 50 images for Bangla. As ground truth (\(N\)), we got 879 lines and 6,711 words, against which our system segmented 874 lines and 6,667 words (\(M\)). We chose teams Golestan-a, Golestan-b, and INMC for performance comparison, as the Golestan method outperformed all other contestants with an overall score (SM) of 94.17%, and for line segmentation, the INMC method was on top with a 98.66% FM score. The comparison in Table 8 indicates that our system outperforms the Golestan and INMC teams' SM scores by a good margin. While our word segmentation results far surpassed the competitors', the line segmentation score was second only to INMC's by a narrow margin.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Analysis** & **Word prediction on** & **N** & **M** & **DR** & **RA** & **FM** \\ \hline Quantitative & First segmented (Skewed) line & 10,414 & 10,383 & 0.39 & 0.45 & 0.42 \\ \hline Quantitative & Final segmented (Straight) line & 10,414 & 10,348 & 0.15 & 0.17 & 0.16 \\ \hline Qualitative & Final segmented (Straight) line & 10,414 & 10,348 & 0.98 & 0.98 & **0.98** \\ \hline \end{tabular}
\end{table}
Table 6: Results of our word segmentation approach on the original ground truth (skewed) vs. skew-corrected (straight) lines from the BN-HTRd test sets.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Class** & **Ground Truth** & **Prediction** & **o2o** & **DR (\%)** & **RA (\%)** & **FM (\%)** \\ \hline Line & 2829 & 2840 & 2829 & 100 & 99.61 & 99.80 \\ \hline Word & 19,402 & 19,393 & 19,299 & 99.4 & 99.51 & 99.49 \\ \hline \end{tabular}
\end{table}
Table 7: Results of automatic annotation on the unannotated portion of BN-HTRd.
**BanglaWriting Dataset [12]**: It comprises 260 full-page Bangla handwritten documents with word-level ground truth only. We manually evaluated the word segmentation results using 50 randomly selected document images from this dataset, as the word annotation was done directly over the document without any intermediate line annotation. Those selected 50 images contain 4409 words, and our system correctly segmented 4186 words against them. Table 9 indicates how our system performed on the BanglaWriting dataset.
**WBSUBNdb_text Dataset [10]**: This publicly available dataset has been used for evaluation by two of the most prominent line [18] and word [1] segmentation methods. As it contains 1352 Bangla handwritten documents without any ground truth, we only performed a qualitative analysis, similar to the settings mentioned in those papers. We position our approach against these systems in Table 10.
## 6 Conclusions
The main contribution of this research is the significant improvement in line and word segmentation for Bangla handwritten scripts, which lays the foundation of our envisioned Bangla Handwritten Text Recognition (HTR) system. To alleviate the shortage of Bangla document-level handwritten datasets for future researchers, we have extended our BN-HTRd dataset. Currently, it is the largest dataset of its type with line and word-level annotation. Moreover, keeping the recognition task in mind, we have stored the words' Unicode representations against their positions in the ground truth text. The main recipe behind our approach's success is a two-layer line segmentation technique combined with an
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Systems** & **Class** & **DR (\%)** & **RA (\%)** & **FM (\%)** \\ \hline \multirow{2}{*}{WBSUBNdb} & Lines [18] & 96.99 & 97.07 & 97.02 \\ \cline{2-5} & Words [1] & 86.96 & 93.25 & 90.0 \\ \hline \multirow{2}{*}{**BN-DRISHTI**} & Lines & 99.27 & 99.44 & **99.35** \\ \cline{2-5} & Words & 96.85 & 97.18 & **97.01** \\ \hline \end{tabular}
\end{table}
Table 10: Comparison of segmentation results based on WBSUBNdb_text dataset.
\begin{table}
\begin{tabular}{|c|l|l|l|l|l|l|l|l|} \hline
**Systems** & **Class** & **N** & **M** & **o2o** & **DR (\%)** & **RA (\%)** & **FM (\%)** & **SM (\%)** \\ \hline \multirow{2}{*}{Golestan-a} & Lines & 2649 & 2646 & 2602 & 98.23 & 98.34 & 98.28 & \multirow{2}{*}{94.17} \\ \cline{2-5} & Words & 23525 & 23322 & 21093 & 89.66 & 90.44 & 90.05 & \\ \hline \multirow{2}{*}{Golestan-b} & Lines & 2649 & 2646 & 2602 & 98.23 & 98.34 & 98.23 & \multirow{2}{*}{90.06} \\ \cline{2-5} & Words & 23525 & 23400 & 21077 & 89.59 & 90.07 & 89.83 & \\ \hline \multirow{2}{*}{INMC} & Lines & 2649 & 2650 & 2614 & 98.68 & 98.64 & **98.66** & \multirow{2}{*}{93.96} \\ \cline{2-5} & Words & 23525 & 22957 & 20745 & 88.18 & 90.36 & 89.26 & \\ \hline \multirow{2}{*}{**BN-DRISHTI**} & Lines & 879 & 874 & 863 & 98.18 & 98.74 & 98.46 & \multirow{2}{*}{**96.65**} \\ \cline{2-5} & Words & 6711 & 6677 & 6348 & 98.74 & 95.07 & **94.83** & \\ \hline \end{tabular}
\end{table}
Table 8: Comparison among top teams of ICDAR 2013 and our BN-DRISHTI system.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Task** & **N** & **M** & **o2o** & **DR (\%)** & **RA (\%)** & **FM (\%)** \\ \hline Word Segmentation & 4409 & 4219 & 4186 & 94.9 & 99.2 & 97.0 \\ \hline \end{tabular}
\end{table}
Table 9: Word segmentation results on fifty images of BanglaWriting dataset.
intricate skew correction in the middle. Our proposed line segmentation approach has achieved a near-perfect benchmark evaluation score in terms of F-measure (99.97%), compared to the unsupervised approach (81.57%) of BN-HTR_LS [16]. The word segmentation technique also achieved an impressive score (98%) on the lines skew-corrected by our system, compared to the skewed lines. Furthermore, we have compared our method against previous SOTA systems on three of the most prominent Bangla handwriting datasets. Our approach outperformed all those methods by a significant margin, making our BN-DRISHTI system a new state of the art for the Bangla handwritten segmentation task. We aim to expand our work by integrating supervised word recognition to build an "End-To-End Bangla Handwritten Image Recognition system".
|
2309.16392 | Algebraic Multiplicity and the Poincaré Problem | In this paper we derive an upper bound for the degree of the strict invariant
algebraic curve of a polynomial system in the complex projective plane under
generic condition. The results are obtained through the algebraic
multiplicities of the system at the singular points. A method for computing the
algebraic multiplicity using Newton polygon is also presented. | Jinzhi Lei, Lijun Yang | 2023-09-28T12:39:56Z | http://arxiv.org/abs/2309.16392v1 | [
###### Abstract
In this paper we derive an upper bound for the degree of the strict invariant algebraic curve of a polynomial system in the complex project plane under generic condition. The results are obtained through the algebraic multiplicities of the system at the singular points. A method for computing the algebraic multiplicity using Newton polygon is also presented.
Keywords: polynomial system, invariant algebraic curve, Poincaré problem.
Algebraic Multiplicity and the Poincaré Problem
Jinzhi Lei and Lijun Yang
MSC: 34A05; 34M99.
## 1 Introduction
In this paper, we will present an approach to establish the upper bound of the degree of the strict invariant algebraic curve of a polynomial system in the complex projective plane \(\mathbb{P}^{2}_{\mathbb{C}}\). A polynomial system in \(\mathbb{P}^{2}_{\mathbb{C}}\) is defined by the vector field
\[\dot{z}=P(z,w),\ \ \dot{w}=Q(z,w), \tag{1.1}\]
where \(P\) and \(Q\) are relatively prime polynomials with complex coefficients.
A polynomial \(f(z,w)\) is said to be a Darboux Polynomial of (1.1) if there exists a polynomial \(R_{f}(z,w)\) such that
\[P(z,w)\frac{\partial f}{\partial z}+Q(z,w)\frac{\partial f}{\partial w}=R_{f}( z,w)f(z,w). \tag{1.2}\]
We call the zero-set \(C(f)=\{(z,w)\in\hat{\mathbb{C}}^{2}|\,f(z,w)=0\}\) an invariant algebraic curve, and \(R_{f}\) the cofactor of \(f\). In particular, if \(C(f)\) contains no constant irreducible component (i.e., the line \(z=z_{0}\) or \(w=w_{0}\)), then \(f\) is a strict Darboux polynomial, and \(C(f)\) is a strict invariant algebraic curve.
The study of invariant algebraic curves of a polynomial system goes back to Darboux and Poincaré (see Schlomiuk [11]). In general, the Darboux polynomials of the system (1.1) can be found by solving the equation (1.2) for \(f\) and \(R_{f}\).
Equation (1.2) is easy to solve if the degree of \(f\) is known in advance (for example, see Pereira [10, Proposition 1]). However, it is still an open problem, for a given system, to establish the upper bound for the degree of the invariant algebraic curve effectively. This is known as the Poincaré problem. It is known that such an upper bound does exist for a given polynomial system, see Schlomiuk [11, Corollary 3.1]. However, a uniform upper bound that depends merely on the degree of the system does not exist; for a non-trivial example, see Ollagnier [8]. As a consequence, a practical procedure for finding the bound from the coefficients is significant for the general problem of finding the invariant algebraic curves of a polynomial system. For more remarks and results on the Poincaré problem, see Carnicer [2], Campillo and Carnicer [4], Schlomiuk [11], Walcher [12]. The first result addressing the Poincaré problem was presented by Carnicer [2] as follows.
Theorem 1.2 ( Carnicer's theorem[2]): _Let \(\mathcal{F}\) be a foliation of \(\mathbb{P}^{2}_{\mathbb{C}}\) and let \(C\) be an algebraic curve in \(\mathbb{P}^{2}_{\mathbb{C}}\). Suppose that \(C\) is invariant by \(\mathcal{F}\) and there are no dicritical singularities of \(\mathcal{F}\) in \(C\). Then_
\[\partial^{o}C\leq\partial^{o}\mathcal{F}+2\]
In the proof of Carnicer's theorem, the relationships between the sum of the multiplicities of a foliation along the branches of a curve, the degree of the curve, the degree of the foliation and the Euler characteristic of the curve are used systematically. This idea is also used in the present paper. However, [2] does not provide an effective method to determine whether a singular point is dicritical or not. The same inequality had been shown by Cerveau and Lins Neto [3] for systems for which all the singularities of the invariant algebraic curve are nodal. A more straightforward result was presented by Walcher using elementary methods [12]. Walcher's result states:
Theorem 1.3: [12, Theorem 3.4] _Assume that a vector field \(X\) of degree \(M\) on \(\mathbb{P}^{2}_{\mathbb{C}}\) admits an irreducible invariant algebraic curve, and if all the stationary points of \(X\) at infinity are nondegenerate and non-dicritical, then the degree of the curve cannot exceed \(M+1\)._
In Walcher's proof, the Poincare-Dulac normal forms of the nondegenerate stationary points of a vector field were discussed. In particular, when the stationary point is non-dicritical, the precise information of the number of irreducible semi-invariants of the vector field \(X\) was obtained, from which the upper bound of the degree of an invariant algebraic curve is derived. It was also pointed out in [12] that if there are dicritical ones among the nondegenerate stationary points, then the vector field can admit infinitely many (pairwise relatively prime) semi-invariants. Moreover, the condition of non-dicritical can be verified through the investigation of the linear approximation of the vector field at the stationary points. Thus, Walcher's result provided a practical approach for the Poincare problem.
In this paper, we will present an alternative approach for the Poincare problem by considering the algebraic multiplicities (see Definition 2.1) of the singular
points of the system, and obtain an approximate inequality for the upper bound for the degrees under some generic conditions. The main results of this paper are:
**Theorem 1.4**.: _Consider the differential equation_
\[\frac{\mathrm{d}w}{\mathrm{d}z}=\frac{P(z,w)}{z\,Q(z,w)}, \tag{1.3}\]
_of degree \(M=\max\{\deg P(z,w),\deg z\,Q(z,w)\}\). Suppose that (1.3) admits an irreducible strict Darboux polynomial \(f(z,w)\). Let \(a_{1},\cdots,a_{k}\in\mathbb{C}\) be all the roots of \(P(0,w)=0\), set \(a_{0}=\infty\), and let \(\mathrm{Mul}(0,a_{i})\) be the algebraic multiplicity of \((0,a_{i})\). Then_
\[\deg_{w}f(z,w)\leq\sum_{i=0}^{k}\mathrm{Mul}(0,a_{i}). \tag{1.4}\]
_In particular, if the singularities \((0,a_{i})\) are not algebraic critical, then_
\[\deg_{w}f(z,w)\leq M\,(k+1). \tag{1.5}\]
**Theorem 1.5**.: _Consider the polynomial system (1.1) of degree \(M=\max\{\deg P(z,w),\)\(\deg Q(z,w)\}\), if (1.1) has an invariant straight line \(L\), and the singular points at \(L\) are not algebraic critical, and if (1.1) admits an irreducible strict Darboux polynomial \(f(z,w)\), then_
\[\deg f(z,w)\leq M(M+1).\]
Note that, in Theorem 1.5, we do not need the singularities to be non-degenerate, and we will see in the next section that being not algebraic critical is a weaker condition than being non-dicritical. In Theorem 1.5, we require that (1.1) has an invariant straight line. In fact, it is generic that the line at infinity is invariant. Hence, the condition in Theorem 1.5 is generic.
The rest of this paper is arranged as follows. In Section 2, we introduce the concept of algebraic multiplicity and a method for computing it; the main theorems are then proved. In Section 3, as an application, the 2D Lotka-Volterra system is studied.
## 2 Algebraic Multiplicity and the Poincare Problem
Let \(f(z,w)\) be a Darboux polynomial of (1.1). In general, the upper bound of the degree of \(f(z,w)\) cannot be determined merely from the equation (1.2); the assumption that \(f(z,w)\) is irreducible must be taken into account. If \(f(z,w)\) is irreducible then, without loss of generality (performing the transformation \((z,w)\mapsto(z+c\,w,w)\)\((c\in\mathbb{R})\) if necessary), we may assume that \(\deg_{w}f(z,w)=\deg f(z,w)\). Let \(m=\deg_{w}f(z,w)\); then there are \(m\) algebraic functions \(w_{i}(z)\) satisfying \(f(z,w_{i}(z))=0\)\((i=1,2,\cdots,m)\). If these \(m\) algebraic functions pass through some common singular points, then \(m\) can be bounded by the possible number of algebraic solutions that pass through these singular points. To this end, we define the algebraic multiplicity as the number of local algebraic solutions as follows.
**Definition 2.1**.: Consider a differential equation
\[\frac{\mathrm{d}w}{\mathrm{d}z}=F(z,w), \tag{2.1}\]
and \((z_{0},w_{0})\in\mathbb{C}^{2}.\) A formal series
\[w(z)=w_{0}+\sum_{i\geq 0}\alpha_{i}\,(z-z_{0})^{\mu_{i}}, \tag{2.2}\]
is said to be a local algebraic solution of (2.1) at \((z_{0},w_{0})\) if \(w(z)\) is a formal series solution of (2.1) with \(\alpha_{i}\neq 0,\)\(\mu_{i}\in\mathbb{Q}^{+},\) and \(\mu_{i}<\mu_{i+1}\)\((\forall i).\) The algebraic multiplicity of (2.1) at \((z_{0},w_{0}),\) denoted by \(\mathrm{Mul}(z_{0},w_{0};F)\) or simply by \(\mathrm{Mul}(z_{0},w_{0})\) while the context is clear, is defined as the number of distinct local non-constant algebraic solutions of (2.1) at \((z_{0},w_{0}).\) If \(\mathrm{Mul}(z_{0},w_{0})=\infty,\) then \((z_{0},w_{0})\) is said algebraic critical.
It is evident that algebraic critical implies dicritical (i.e., there are infinitely many invariant curves passing through the same point).
When \(w_{0}=\infty,\) let \(\bar{w}=1/w,\) then \(\bar{w}(z)\) satisfies
\[\frac{\mathrm{d}\bar{w}}{\mathrm{d}z}=-\bar{w}^{2}\,F(z,1/\bar{w}):=\bar{F}(z,\bar{w}), \tag{2.3}\]
and the algebraic multiplicity \(\mathrm{Mul}(z_{0},\infty;F)\) is simply defined as \(\mathrm{Mul}(z_{0},0;\bar{F}).\)
Let \(a,b,c\in\mathbb{C}\) with \(a,c\neq 0,\) and let \(W=a\,(w-w_{0})+b\,(z-z_{0}),\)\(Z=c\,(z-z_{0}),\) then \(W(Z)\) satisfies an equation of form
\[\frac{\mathrm{d}W}{\mathrm{d}Z}=\tilde{F}(Z,W). \tag{2.4}\]
It is easy to show that a local algebraic solution of (2.1) at \((z_{0},w_{0})\) corresponds to a local algebraic solution of (2.4) at \((0,0).\) Hence we have
\[\mathrm{Mul}(z_{0},w_{0};F)=\left\{\begin{array}{ll}\mathrm{Mul}(0,0;\tilde {F}),&\mbox{if }\tilde{F}(Z,0)\not\equiv 0\\ \mathrm{Mul}(0,0;\tilde{F})+1,&\mbox{if }\tilde{F}(Z,0)\equiv 0\end{array}\right. \tag{2.5}\]
It is evident that, if \((z_{0},w_{0})\) is a regular point and \(F(z,w_{0})\not\equiv 0,\) then \(\mathrm{Mul}(z_{0},w_{0})=1.\) To estimate the algebraic multiplicity at singular point \((z_{0},w_{0}),\) we can substitute (2.2) into (2.1) to find out all possible formal series solutions. A method for finding the formal series solution of a polynomial system at a singular point is given in Lei and Guan [7] using Newton polygon (Bruno[1], Chebotarev[6]). The result and proof are restated below.
**Lemma 2.2**.: _Consider the polynomial system_
\[\frac{\mathrm{d}w}{\mathrm{d}z}=\frac{P(z,w)}{Q(z,w)} \tag{2.6}\]
_where_
\[P(z,w)=\sum_{i\geq 0}P_{i}(z)\,w^{i},\quad Q(z,w)=\sum_{i\geq 0}Q_{i}(z)\,w^{i},\]
_and_
\[P_{i}(z)=p_{i,0}\,z^{k_{i}}+p_{i,1}\,z^{k_{i}+1}+\cdots,\ \ Q_{i}(z)=q_{i,0}\,z^{l_{i}}+q_{i,1}\,z^{l_{i}+1}+ \cdots,\ \ (i\geq 0)\]
_If \((0,0)\) is a singular point of (2.6), and there exists \(j\), satisfying_
(1)_. \(k_{j}=l_{j-1}-1\);_
(2)_. For any \(i\neq j\),_
\[\min\{k_{i},l_{i-1}-1\}>k_{j}+(j-i)\,(p_{j,0}/q_{j-1,0})\]
(3)_. \(p_{j,0}/q_{j-1,0}\in\mathbb{Q}^{+}\),_
_then \((0,0)\) is algebraic critical for the system (2.6)._
Proof.: Let \(\lambda=p_{j,0}/q_{j-1,0}\), and \(u(z)=w(z)\,z^{-\lambda}\), then \(u(z)\) satisfies
\[\frac{\mathrm{d}u}{\mathrm{d}z} = \frac{\sum_{i\geq 0}(p_{i,0}\,z^{k_{i}+i\,\lambda}-q_{i-1,0}\, \lambda\,z^{l_{i-1}+i\,\lambda-1}+h.o.t.)\,u^{i}}{\sum_{i\geq 0}(q_{i,0}\,z^{l_{i}+(i+ 1)\,\lambda}+h.o.t.)u^{i}}\] \[= \frac{z^{l_{j-1}+j\,\lambda-1}\,\sum_{i\geq 0}(p_{i,0}\,z^{k_{ i}-k_{j}+(i-j)\,\lambda}-q_{i-1,0}\,\lambda\,z^{l_{i-1}-l_{j-1}+(i-j)\, \lambda}+h.o.t.)\,u^{i}}{z^{l_{j-1}+j\,\lambda}\,\sum_{i\geq 0}(q_{i,0}\,z^{l_{i}-l_{j-1}+(i -j)\,\lambda}+h.o.t.)u^{i}}\]
Taking the conditions of \(j\) into account, we can rewrite above equation as
\[\frac{\mathrm{d}u}{\mathrm{d}z}=\frac{z^{s}\,\hat{P}(z,u)}{z\,\hat{Q}(z,u)}\]
where \(\hat{P}(0,u),\hat{Q}(0,u)\not\equiv 0\), and \(s=\min_{i\geq 0}\{k_{i}-k_{j}+(i-j)\,\lambda,l_{i-1}-l_{j-1}+(i-j)\,\lambda\}\in \mathbb{Q}^{+}\). Let \(z=\bar{z}^{q_{j-1,0}}\), then
\[\frac{\mathrm{d}u}{\mathrm{d}\bar{z}}=\frac{q_{j-1,0}\,\bar{z}^{s\,q_{j-1,0}-1 }\,\hat{P}(\bar{z}^{q_{j-1,0}},u)}{\hat{Q}(\bar{z}^{q_{j-1,0}},u)} \tag{2.7}\]
It's easy to have \(s\,q_{j-1,0}\in\mathbb{N}\) and \(\hat{P}(\bar{z}^{q_{j-1,0}},u),\hat{Q}(\bar{z}^{q_{j-1,0}},u)\) are polynomials of \(\bar{z}\) and \(u\). Thus, for any \(\alpha\) such that \(\hat{Q}(0,\alpha)\neq 0\), (2.7) has a unique solution \(u(\bar{z};\alpha)\) which is analytic at \(\bar{z}=0\) and satisfies \(u(0;\alpha)=\alpha\). Thus,
\[w(z;\alpha)=z^{\lambda}\,u(z^{1/q_{j-1,0}};\alpha)=z^{\lambda}(\alpha+\sum_{i \geq 1}\frac{1}{i!}u^{(i)}_{\bar{z}}(0;\alpha)z^{i/q_{j-1,0}})\]
is a solution of (2.6), i.e., \(w(z;\alpha)\) is a local algebraic solution of (2.6) for any \(\alpha\) such that \(\hat{Q}(0;\alpha)\neq 0\). Hence, \((0,0)\) is algebraic critical for (2.6).
_Remark 2.3_.: 1. The Lemma 2.2 is also valid for equations of which \(P\) and \(Q\) are Puiseux series of \(z\) and \(w\) (with slight change in the proof):
\[P(z,w)=\sum_{i,j\geq 0}\,p_{i,j}\,z^{i/\mu}\,w^{j/\nu},\quad Q(z,w)=\sum_{i,j \geq 0}\,q_{i,j}\,z^{i/\mu}\,w^{j/\nu}\quad(\mu,\nu\in\mathbb{N})\]
2. From the proof of Lemma 2.2, if the index \(j\) satisfies conditions (1), (2), but \(p_{j,0}/q_{j-1,0}\in\mathbb{R}^{+}\backslash\mathbb{Q}^{+}\), let \(\lambda=p_{j,0}/q_{j-1,0}\); then (2.6) admits infinitely many solutions of the form \(w(z;\alpha)=z^{\lambda}\,u(z^{1/s};\alpha)\), where \(u(\bar{z};\alpha)\) is the solution of \[\frac{\mathrm{d}u}{\mathrm{d}\bar{z}}=\frac{\hat{P}(\bar{z}^{1/s},u)}{s\,\hat {Q}(\bar{z}^{1/s},u)}\] such that \(u(0;\alpha)=\alpha\). Thus, (2.6) is dicritical at \((0,0)\), but not necessarily algebraic critical.
**Lemma 2.4**.: _Let \((0,0)\) be a singular point of (2.6), then either \((0,0)\) is algebraic critical, or_
\[\mathrm{Mul}(0,0)\leq\max\{\deg_{w}P(z,w),\deg_{w}Q(z,w)+1\}. \tag{2.8}\]
Proof.: Let \(N=\deg_{w}P(z,w),M=\deg_{w}Q(z,w)\), and
\[P(z,w)=\sum_{i=0}^{N}P_{i}(z)\,w^{i},\quad Q(z,w)=\sum_{i=0}^{M}Q_{i}(z)\,w^{ i},\]
where
\[P_{i}(z)=p_{i,0}\,z^{k_{i}}+p_{i,1}\,z^{k_{i}+1}+\cdots,\ \ Q_{i}(z)=q_{i,0}\,z^{l_{i}}+q_{i,1}\,z^{l_{i}+1}+\cdots\]
Substitute
\[w(z)=\alpha_{0}\,z^{\lambda_{0}}+h.o.t.\ \ (\alpha_{0}\neq 0,\lambda_{0}\in \mathbb{Q}^{+}) \tag{2.9}\]
into (2.6), then
\[0 = \sum_{i=0}^{M}Q_{i}(z)(\alpha_{0}\,z^{\lambda_{0}}+h.o.t.)^{i}\, (\alpha_{0}\lambda_{0}\,z^{\lambda_{0}-1}+h.o.t.)-\sum_{i=0}^{N}P_{i}(z)\,( \alpha_{0}\,z^{\lambda_{0}}+h.o.t.)^{i}\] \[= \sum_{i=0}^{M}q_{i,0}\,\lambda_{0}\,\alpha_{0}^{i+1}\,z^{l_{i}+( i+1)\,\lambda_{0}-1}-\sum_{i=0}^{N}p_{i,0}\,\alpha_{0}^{i}\,z^{k_{i}+i\,\lambda_{0} }+h.o.t.\]
Thus, at least two of the exponents:
\[l_{i}+(i+1)\,\lambda_{0}-1,\ \ k_{j}+j\,\lambda_{0},\ \ (0\leq i\leq M,\ \ 0\leq j\leq N)\]
are equal to each other and not larger than any other exponent, and \(\alpha_{0}\neq 0\) must annihilate the coefficient of the lowest-degree term. When this is the case, \((\lambda_{0},\alpha_{0})\) is said to be acceptable to (2.6). Assume that \((0,0)\) is not algebraic critical (i.e., Lemma 2.2 is not satisfied); then the values \(\lambda_{0}\) and \(\alpha_{0}\) can be obtained using the Newton polygon [1, 6] as follows. Let \(\Gamma\) be the Newton open polygon of all the points (see Figure 1)
\[(i+1,l_{i}-1),\quad(j,k_{j}),\ \ (0\leq i\leq M,\ \ 0\leq j\leq N) \tag{2.10}\]
Let \(\Gamma_{i_{1}}^{i_{2}}\) be an edge of \(\Gamma\), with \(i_{1}<i_{2}\) the horizontal coordinates of its extreme vertices, and let \(-\lambda_{0}\) be the slope of \(\Gamma_{i_{1}}^{i_{2}}\); then \(\alpha_{0}\) must satisfy a polynomial equation of degree \(i_{2}-i_{1}\). In particular, \((\lambda_{0},\alpha_{0})\) is said to be \(d\)-folded if \(\alpha_{0}\) is a \(d\)-folded root of the above polynomial. Thus, for the edge \(\Gamma_{i_{1}}^{i_{2}}\), there are at most \(i_{2}-i_{1}\) pairs \((\lambda_{0},\alpha_{0})\)
that are acceptable to (2.6). Hence, in total there are at most \(\max\{M+1,N\}\) pairs \((\lambda_{0},\alpha_{0})\) that are acceptable to (2.6).
For each \((\lambda_{0},\alpha_{0})\) in the first step, let \(w(z)=\alpha_{0}\,z^{\lambda_{0}}+w_{1}(z)\), then \(w_{1}(z)\) satisfies the equation
\[Q(z,\alpha_{0}\,z^{\lambda_{0}}+w_{1})(\alpha_{0}\,\lambda_{0}\,z^{\lambda_{0} -1}+w_{1}^{\prime})-P(z,\alpha_{0}\,z^{\lambda_{0}}+w_{1})=0. \tag{2.11}\]
Repeat the foregoing argument, if \((0,0)\) is not algebraic critical point of (2.6), then there are finite solutions of (2.11) of form
\[w_{1}(z)=\alpha_{1}\,z^{\lambda_{1}}+h.o.t.,\ \ (\lambda_{1}\in\mathbb{Q}^{+},\ \ \lambda_{1}>\lambda_{0}). \tag{2.12}\]
To complete the proof, it's sufficient to show that if \((\lambda_{0},\alpha_{0})\) is \(d-\)folded, then there are at most \(d\) pairs of \((\lambda_{1},\alpha_{1})\) with \(\lambda_{1}>\lambda_{0}\) which are acceptable to (2.11).
Let
\[Q_{1}(z,w_{1}) = Q(z,\alpha_{0}\,z^{\lambda_{0}}+w_{1}),\] \[P_{1}(z,w_{1}) = P(z,\alpha_{0}\,z^{\lambda_{0}}+w_{1})-\alpha_{0}\,\lambda_{0}\, z^{\lambda_{0}-1}\,Q(z,\alpha_{0}\,z^{\lambda_{0}}+w_{1})\]
then \(w_{1}(z)\) satisfies
\[Q_{1}(z,w_{1})\,w_{1}^{\prime}-P_{1}(z,w_{1})=0 \tag{2.13}\]
Write
\[Q_{1}(z,w_{1})=\sum_{i\geq 0}Q_{1,i}(z)\,w_{1}^{i},\ \ \ P_{1}(z,w_{1})=\sum_{i \geq 0}P_{1,i}(z)\,w_{1}^{i}\]
Figure 1: Newton Polygon
and let \(l_{1,i}\) and \(k_{1,i}\) be the lowest degrees of \(Q_{1,i}(z)\) and \(P_{1,i}(z)\) respectively, and \(r_{1,i}=\min\{k_{1,i},l_{1,i-1}-1\}.\) We will prove that if \((\lambda_{0},\alpha_{0})\) is \(d\)-folded, then for any \(i>d,\)
\[r_{1,d}\leq r_{1,i}+(i-d)\,\lambda_{0} \tag{2.14}\]
When (2.14) is satisfied, then there are at most \(d\)-pairs of \((\lambda_{1},\alpha_{1})\) which are acceptable to (2.13) and \(\lambda_{1}>\lambda_{0}.\) In fact, let \((\lambda_{1},\alpha_{1})\) to be acceptable to (2.13), then there exist \(j_{1}<j_{2},\) such that
\[\lambda_{1}=\frac{r_{1,j_{1}}-r_{1,j_{2}}}{j_{2}-j_{1}}>\lambda_{0}\]
and
\[r_{1,d}\geq r_{1,j_{1}}+(j_{1}-d)\,\lambda_{1},\ \ r_{1,d}\geq r_{1,j_{2}}+(j_{2 }-d)\,\lambda_{1}\]
If \(j_{1}>d\) (or \(j_{2}>d\)), then
\[r_{1,d}>r_{1,j_{1}}+(j_{1}-d)\,\lambda_{0}\ \ \ (\mbox{or}\ \ r_{1,d}>r_{1,j_{2}}+(j_{2 }-d)\,\lambda_{0})\]
which contradicts (2.14). Hence, \(j_{1}<j_{2}\leq d\), and there are at most \(d\) pairs of \((\lambda_{1},\alpha_{1})\) (taking into account that \((0,0)\) is not algebraic critical).
To prove (2.14), let
\[Q(z,\alpha\,z^{\lambda_{0}})=\sum_{i\geq 0}\xi_{i}(\alpha)\,z^{s_{i}} \ \ \ \ \ \ \ \ (s_{0}<s_{1}<\cdots)\] \[P(z,\alpha\,z^{\lambda_{0}})=\sum_{i\geq 0}\eta_{i}(\alpha)\,z^{ \tau_{i}} \ \ \ \ \ \ \ \ (\tau_{0}<\tau_{1}<\cdots)\]
then
\[Q_{1,i}(z) = \frac{1}{i!}\,z^{-i\,\lambda_{0}}\,\sum_{j\geq 0}\xi_{j}^{(i)}( \alpha_{0})\,z^{s_{j}}\] \[P_{1,i}(z) = \frac{1}{i!}\,z^{-i\,\lambda_{0}}\,\left(\sum_{j\geq 0}\eta_{j}^{( i)}(\alpha_{0})\,z^{\tau_{j}}-\alpha_{0}\,\lambda_{0}\,z^{\lambda_{0}-1}\, \sum_{j\geq 0}\xi_{j}^{(i)}(\alpha_{0})\,z^{s_{j}}\right)\]
and hence
\[r_{1,i}\geq\min\{\tau_{0},s_{0}+\lambda_{0}-1\}-i\,\lambda_{0}. \tag{2.15}\]
Thus, it is sufficient to show that
\[\min\{k_{1,d},l_{1,d-1}-1\}=\min\{\tau_{0},s_{0}+\lambda_{0}-1\}-d\,\lambda_{ 0}. \tag{2.16}\]
To this end, write
\[Q_{1,d-1}(z) = \frac{1}{d!}\xi_{0}^{(d-1)}(\alpha_{0})\,z^{s_{0}+\lambda_{0}-d_ {0}\,\lambda_{0}}+h.o.t.\] \[P_{1,d}(z) = \frac{1}{d!}\left(\eta_{0}^{(d)}(\alpha_{0})\,z^{\tau_{0}}-\alpha _{0}\,\lambda_{0}\,\xi_{0}^{(d)}(\alpha_{0})\,z^{s_{0}+\lambda_{0}-1}\right) \cdot z^{-d\,\lambda_{0}}+h.o.t.\]
and let
\[P(z,\alpha\,z^{\lambda_{0}})-\alpha\,\lambda_{0}\,z^{\lambda_{0}-1}\,Q(z, \alpha\,z^{\lambda_{0}})=\varphi(\alpha)\,z^{v_{0}}+h.o.t.\]
Because \((\lambda_{0},\alpha_{0})\) is acceptable to (2.6) and \(d\)-folded, we have
\[\varphi(\alpha_{0})=\cdots=\varphi^{(d-1)}(\alpha_{0})=0,\ \varphi^{(d)}(\alpha_{0}) \neq 0. \tag{2.17}\]
Therefore, we have the following:
1. If \(\tau_{0}<s_{0}+\lambda_{0}-1\), then \(\varphi(\alpha)=\eta_{0}(\alpha)\) and \(\eta_{0}^{(d)}(\alpha_{0})\neq 0\).
2. If \(s_{0}+\lambda_{0}-1<\tau_{0}\), then \(\varphi(\alpha)=-\lambda_{0}\,\alpha\,\xi_{0}(\alpha)\), and hence \(\xi_{0}^{(d)}(\alpha_{0})\neq 0\).
3. If \(s_{0}+\lambda_{0}-1=\tau_{0}\), then \(\varphi_{0}(\alpha)=\eta_{0}(\alpha)-\alpha\lambda_{0}\xi_{0}(\alpha)\), and hence \[\varphi_{0}^{(d)}(\alpha_{0})=-\lambda_{0}\xi_{0}^{(d-1)}(\alpha_{0})+(\eta_{ 0}^{(d)}(\alpha_{0})-\alpha_{0}\lambda_{0}\xi_{0}^{(d)}(\alpha_{0}))\neq 0.\] Thus, we have \(\xi_{0}^{(d-1)}(\alpha_{0})\neq 0\) or \(\eta_{0}^{(d)}(\alpha_{0})-\alpha_{0}\lambda_{0}\xi_{0}^{(d)}(\alpha_{0})\neq 0\).
It is not difficult to verify that (2.16) holds in each of the above cases, and thus the lemma is proved.
From the proof of Lemma 2.4, the local algebraic solutions of (2.6) at \((0,0)\) can be obtained by repeating the Newton polygon construction. Moreover, following this procedure, we either stop in the case that \((0,0)\) is algebraic critical (Lemma 2.2), or encounter a local algebraic solution of the form
\[w(z)=\sum_{i=0}^{k-1}\alpha_{i}\,z^{\lambda_{i}}+u(z)\]
where \((\lambda_{k-1},\alpha_{k-1})\) is 1-folded, and \(u(z)\) satisfies an equation
\[\frac{\mathrm{d}u}{\mathrm{d}z}=\frac{\hat{P}(z,u)}{\hat{Q}(z,u)} \tag{2.18}\]
where \(\hat{P},\hat{Q}\) are Puiseux series. Whenever this is the case, we have the following.
**Lemma 2.5**.: _In the equation (2.18) that derived from (2.6) through above procedure, let_
\[\hat{P}(z,u)=\hat{p}_{0,0}z^{k_{0}}+\hat{p}_{1,0}z^{k_{1}}\,u+h.o.t.,\ \ \hat{Q}(z,u)=\hat{q}_{0,0}z^{l_{0}}+h.o.t.\]
_If \((\lambda_{k-1},\alpha_{k-1})\) is 1-folded, and one of the following is satisfied:_
1. \(k_{1}\neq l_{0}-1\)_; or_
2. \(k_{1}=l_{0}-1\)_, and_ \(\hat{p}_{1,0}/\hat{q}_{0,0}\not\in(\lambda_{k-1},\infty)\cap\mathbb{Q}^{+}\)_,_
_then \((0,0)\) is not algebraic critical of (2.6)._
Proof.: Let \(u(z)\) be a local algebraic solution of (2.18), expressed as
\[u(z)=\sum_{i\geq k}\alpha_{i}\,z^{\lambda_{i}} \tag{2.19}\]
where \(\lambda_{i}>\lambda_{i-1},\ (\forall i\geq k)\). We will show that \((\lambda_{i},\alpha_{i})\) are determined by (2.18) uniquely.
From the proof of Lemma 2.4, we have
\[k_{0}-\min\{k_{1},l_{0}-1\}>\lambda_{k-1}\]
Hence, substitute (2.19) into (2.18), and taking account that \((\lambda_{k-1},\alpha_{k-1})\) is 1-folded, and either \(k_{1}\neq l_{0}-1\) or \(k_{1}=l_{0}-1\), \(p_{1,0}/q_{0,0}\not\in(\lambda_{k-1},\infty)\cap\mathbb{Q}^{+}\), we have \(\lambda_{k}=k_{0}-\min\{k_{1},l_{0}-1\}\), and \(\alpha_{k}\) is determined uniquely by \(p_{0,0},q_{0,0},p_{1,0},k_{1},l_{0}\). Therefore, \((\lambda_{k},\alpha_{k})\) is also 1-folded. Let \(u(z)=\alpha_{k}\,z^{\lambda_{k}}+v(z)\), then \(v(z)\) satisfies
\[\frac{\mathrm{d}v}{\mathrm{d}z}=\frac{\hat{p}^{\prime}_{0,0}\,z^{k^{\prime}_{0 }}+\hat{p}_{1,0}\,z^{k_{1}}\,v+h.o.t.}{\hat{q}_{0,0}\,z^{l_{0}}+h.o.t.} \tag{2.20}\]
where \(k^{\prime}_{0}>k_{0}\). In particular, conditions in the Lemma are also valid for (2.20). Thus, we can repeat the procedure, and hence there is unique solution \(u(z)\) of form (2.19), and \((0,0)\) is not algebraic critical for (2.6).
_Remark 2.6_.: In the Lemma 2.5, we might also find the solution of form (2.19) when \(k_{1}=l_{0}-1\) and \(\hat{p}_{1,0}/\hat{q}_{0,0}\in(\lambda_{k-1},\infty)\cap\mathbb{Q}^{+}\). However, when this is the case, we can identify two cases:
1. If \(\hat{p}_{1,0}/\hat{q}_{0,0}\in(\lambda_{i},\lambda_{i+1})\cap\mathbb{Q}\) for some \(i\geq k-1\), then the condition in Lemma 2.2 is satisfied at the \(i\)'th step, and \((0,0)\) is algebraic critical.
2. If \(\hat{p}_{1,0}/\hat{q}_{0,0}=\lambda_{i}\) for some \(i\), then \((0,0)\) is not algebraic critical.
In any case, we can stop the procedure after finitely many steps. Thus, the Newton polygon provides an effective way to find the algebraic multiplicities of (2.6).
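To make the procedure concrete, the following short Python sketch (ours, not part of the original paper) reads off the candidate exponents \(\lambda_{0}\) from the lower boundary of the Newton polygon built from the points (2.10); the algebraic-critical situation of Lemma 2.2, where a \(P\)-point and a \(Q\)-point coincide at a vertex, must still be checked separately.

```python
from fractions import Fraction

def candidate_exponents(points):
    """Candidate lambda_0 = -(edge slope) from the lower boundary of the Newton polygon.

    points: the pairs (j, k_j) coming from P and (i+1, l_i - 1) coming from Q, cf. (2.10).
    """
    best = {}
    for x, y in points:                      # keep the lowest ordinate per abscissa
        if x not in best or y < best[x]:
            best[x] = y
    pts = sorted(best.items())
    hull = []                                # lower convex hull (monotone chain)
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    slopes = (Fraction(b[1] - a[1], b[0] - a[0]) for a, b in zip(hull, hull[1:]))
    return [-s for s in slopes if s < 0]     # negative-slope edges give lambda_0 > 0

# Equation (2.21) below: P = z^2 + mu*w gives (0, 2), (1, 0); Q = z + w^2 gives (1, 0), (3, -1)
print(candidate_exponents([(0, 2), (1, 0), (1, 0), (3, -1)]))
# -> [Fraction(2, 1), Fraction(1, 2)], i.e. lambda_0 = 2 and lambda_0 = 1/2
```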
**Example** Consider the equation
\[(z+w^{2})\,w^{\prime}-(z^{2}+\mu w)=0 \tag{2.21}\]
The Newton polygon of (2.21) is shown at Figure 2. From the Newton polygon, if \(\mu\in(1/2,2)\cap\mathbb{Q}\), then \((0,0)\) is algebraic critical, with local algebraic solutions
\[w(z)=\alpha_{0}z^{\mu}+h.o.t.\quad(\alpha_{0}\neq 0)\]
Figure 2. Newton Polygon of (2.21)
Meanwhile, if \(\mu\not\in(1/2,2)\cap\mathbb{Q}\), the possible local algebraic solutions are
\[w(z) = \frac{1}{2-\mu}\,z^{2}+h.o.t.\ \ (\mbox{if}\ \ \mu\neq 2)\] \[w(z) = \pm\sqrt{2\mu-1}\,z^{1/2}+h.o.t.\ \ (\mbox{if}\ \mu\neq 1/2)\]
When \(\mu\neq 2\), let
\[w(z)=\frac{1}{2-\mu}\,z^{2}+w_{1,1}(z)\]
then \(w_{1,1}(z)\) satisfies
\[w^{\prime}_{1,1}=\frac{2\,z^{5}-(2-\mu)^{3}\,\mu w_{1,1}+h.o.t.}{-(2-\mu)^{3} \,z+h.o.t.}\]
Thus, we conclude the following. If \(\mu\in(2,5)\cap\mathbb{Q}\), then \((0,0)\) is algebraic critical, with local algebraic solutions
\[w(z)=\frac{1}{2-\mu}\,z^{2}+\alpha_{1}\,z^{\mu}+h.o.t,\ \ \ (\alpha_{1}\neq 0).\]
If \(\mu\neq 2,5\), we have the local algebraic solution
\[w(z)=\frac{1}{2-\mu}\,z^{2}-\frac{2}{(5-\mu)\,(2-\mu)^{3}}\,z^{5}+h.o.t.\]
When \(\mu\not\in[1/2,2)\cap\mathbb{Q}\), let
\[w(z)=\sqrt{2\mu-1}\,z^{1/2}+w_{1,2}(z)\]
then \(w_{1,2}(z)\) satisfies
\[w^{\prime}_{1,2}=\frac{2\,z^{5/2}+(2-2\mu)\,z^{1/2}\,w_{1,2}+h.o.t.}{4\,\mu\,z ^{3/2}+h.o.t.}\]
Thus, if \(\mu\neq 1/5\), we have the local algebraic solution
\[w(z)=\sqrt{2\mu-1}\,z^{1/2}+\frac{1}{5\,\mu-1}\,z^{2}+h.o.t.\]
Thus, repeating the above procedure, we can determine, for a given \(\mu\), the algebraic multiplicity \(\mbox{Mul}(0,0)\) of (2.21). In particular, if \(\mu\not\in(1/2,\infty)\cap\mathbb{Q}\), then \(\mbox{Mul}(0,0)\leq 3\).
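The computations in this example are easy to verify with a computer algebra system; the following SymPy sketch (ours) substitutes the trial series into (2.21):

```python
import sympy as sp

z, mu, alpha = sp.symbols('z mu alpha', positive=True)

def residual(w):
    # left-hand side of (2.21) after substituting a trial expression for w(z)
    return (z + w**2) * sp.diff(w, z) - (z**2 + mu * w)

# one-parameter family w = alpha*z^mu: the two z^mu terms cancel for every alpha,
# which is the source of algebraic criticality when 1/2 < mu < 2
print(sp.expand(residual(alpha * z**mu)))      # alpha**3*mu*z**(3*mu - 1) - z**2

# branch w = z^2/(2-mu) - 2 z^5/((5-mu)(2-mu)^3): the z^2 and z^5 coefficients vanish,
# so the residual only starts at order z^8
w_trial = z**2/(2 - mu) - 2*z**5/((5 - mu)*(2 - mu)**3)
print(sp.series(sp.cancel(residual(w_trial)), z, 0, 8))   # O(z**8)
```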
In the rest of this section, we will prove the main results.
Proof of Theorem 1.4.: Let \(W\) be the set of all non-constant local algebraic solutions of (1.3) at \((0,a_{i})\) for some \(0\leq i\leq k\). Then
\[|W|=\sum_{i=0}^{k}\mbox{Mul}(0,a_{i})\]
Let \(f(z,w)\) be an irreducible strict Darboux polynomial of (1.3), and let \(m=\deg_{w}f(z,w)\); then there are \(m\) algebraic functions \(w_{i}(z)\) defined by \(f(z,w)=0.\) It is sufficient to show that every algebraic function \(w_{i}(z)\) belongs to \(W.\) To this end, we only need to show that
\[\lim_{z\to 0}w_{i}(z)\in\{a_{0},a_{1},\cdots,a_{k}\} \tag{2.22}\]
Consider the equation
\[z\,Q(z,w)\,\frac{\partial f}{\partial z}+P(z,w)\,\frac{\partial f}{\partial w} =R_{f}(z,w)\,f(z,w)\]
Let \(z=0,\) then \(f(0,w)\) satisfies
\[P(0,w)\,f_{w}^{\prime}(0,w)=R_{f}(0,w)\,f(0,w).\]
Thus \(f(0,w)\) is a non-zero constant multiple of \(\prod_{i=1}^{k}(w-a_{i})^{l_{i}},\ \ (l_{i}\geq 0),\) from which (2.22) is easy to conclude.
It is easy to see that \(\operatorname{Mul}(0,\infty)\leq M.\) Hence, if the singularities \((0,a_{i})\) are not algebraic critical, then, from Lemma 2.4,
\[\deg_{w}f(z,w)\leq M\,(k+1)\]
Proof of Theorem 1.5.: If (1.1) has an invariant straight line \(L\), then, performing a suitable linear transformation, we may assume that \(L\) is given by
\[a\,z+b\,w+c=0,\ \ \ \ (a\neq 0)\]
and \(\deg f(z,w)=\deg_{w}f(\frac{z-b\,w-c}{a},w).\) It is easy to see that the degree of the system does not increase under a linear transformation. Let
\[\bar{w}=w,\bar{z}=a\,z+b\,w+c,\]
then \(\bar{w}(\bar{z})\) satisfies the equation of form
\[\frac{d\bar{w}}{d\bar{z}}=\frac{\bar{P}(\bar{z},\bar{w})}{\bar{z}\,\bar{Q}( \bar{z},\bar{w})}, \tag{2.23}\]
where \(\bar{P}(\bar{z},\bar{w}),\bar{Q}(\bar{z},\bar{w})\) are polynomials. Moreover, \(\bar{f}(\bar{z},\bar{w})=f(\frac{\bar{z}-b\,\bar{w}-c}{a},\bar{w})\) is an irreducible Darboux polynomial of (2.23), and \(\deg f(z,w)=\deg_{\bar{w}}\bar{f}(\bar{z},\bar{w}).\) Let \((a_{i},b_{i}),(1\leq i\leq M)\) be the singular points of (1.1) on \(L\); then the \((0,b_{i})\) are singular points of (2.23) at \(\bar{z}=0\), and they are not algebraic critical. Hence, applying Theorem 1.4 to (2.23), we have
\[\deg f(z,w)=\deg_{\bar{w}}\bar{f}(\bar{z},\bar{w})\leq M\,(M+1).\]
## 3 Application to 2D Lotka-Volterra system
In this section, we will apply Theorem 1.4 to 2D Lotka-Volterra system:
\[\dot{z}=z\,(z+c\,w-1),\quad\dot{w}=w\,(b\,z+w-a). \tag{3.1}\]
Invariant algebraic curves of the Lotka-Volterra system have been studied by many authors; for recent results on this topic, see Ollagnier [9], Cairo _et al._ [5] and the references therein. In Ollagnier [9], the complete list of parameters for which the system has a strict invariant algebraic curve is presented. We will recover part of these results through the algebraic multiplicity.
Note that (3.1) is invariant under following transformations:
\[(z,w,a,b,c) \to (\frac{w}{a},\frac{z}{a},\frac{1}{a},c,b),\ \mbox{if}\ a\neq 0; \tag{3.2}\] \[(z,w,a,b,c) \to (\frac{1}{z},(1-c)\frac{w}{z},1-b,1-a,\frac{c}{c-1}),\ \mbox{if}\ c\neq 1. \tag{3.3}\]
Results in this section are also valid under above transformations.
Since \(z=0\) and \(w=0\) are invariant straight lines of (3.1), Theorem 1.4 is applicable.
**Proposition 3.1**.: _If the 2D L-V system_
\[\frac{\mathrm{d}w}{\mathrm{d}z}=\frac{w\,(b\,z+w-a)}{z\,(z+c\,w-1)} \tag{3.4}\]
_has a strict Darboux polynomial \(f\), then_
\[\begin{array}{rcll}\deg_{w}f(z,w)&\leq&\mathrm{Mul}(0,\infty)+\mathrm{Mul}( 0,a)+\mathrm{Mul}(0,0),&\mbox{if}\ \ \ a\neq 0\\ \deg_{w}f(z,w)&\leq&\mathrm{Mul}(0,\infty)+\mathrm{Mul}(0,0).&\mbox{if}\ \ \ a=0\end{array}\]
In particular, we have.
**Proposition 3.2**.: _If in (3.4),_
\[a\not\in\mathbb{Q}^{+},c\not\in\mathbb{Q}^{-},c-\frac{1}{a}\not\in\mathbb{Q}^ {+}\backslash\{1\} \tag{3.5}\]
_then (3.4) has strict invariant algebraic curve if and only if_
\[a(1-c)+(1-b)=0,\]
_and the invariant algebraic curve is given by_
\[a(z-1)+w=0.\]
Proof.: When (3.5) is satisfied, the singularities \((0,0),(0,a),(0,\infty)\) are not algebraic critical, and
\[\mathrm{Mul}(0,0)=0,\mathrm{Mul}(0,a)\leq 1,\mathrm{Mul}(0,\infty)=0\]
Hence, if \(f(z,w)\) is a strict irreducible Darboux polynomial, then \(\deg_{w}f=1\), from which the result is easy to conclude.
Proposition 3.2 shows that the algebraic multiplicities may give an exact bound for the degree of the Darboux polynomial in particular cases. However, if there are algebraic critical points among the singularities, (1.4) does not provide a finite bound. In this case, as we have seen from Lemma 2.2, there are infinitely many local algebraic solutions. On the other hand, this does not automatically imply that all these local algebraic solutions are algebraic solutions. Hence, we come to the following concrete problem: if a singular point of a system is algebraic critical, how many of the local algebraic solutions are actually algebraic functions? It requires additional work to discuss this problem, and one may hope that its solution would lead to the final resolution of the Poincaré problem.
|
2309.05488 | Eigenstate thermalisation at the edge for Wigner matrices | We prove the Eigenstate Thermalisation Hypothesis for Wigner matrices
uniformly in the entire spectrum, in particular near the spectral edges, with a
bound on the fluctuation that is optimal for any observable. This complements
earlier works of Cipolloni et. al. (Comm. Math. Phys. 388, 2021; Forum Math.,
Sigma 10, 2022) and Benigni et. al. (Comm. Math. Phys. 391, 2022; arXiv:
2303.11142) that were restricted either to the bulk of the spectrum or to
special observables. As a main ingredient, we prove a new multi-resolvent local
law that optimally accounts for the edge scaling. | Giorgio Cipolloni, László Erdős, Joscha Henheik | 2023-09-11T14:29:51Z | http://arxiv.org/abs/2309.05488v3 | # Eigenstate Thermalisation at the Edge for Wigner Matrices
###### Abstract.
We prove the Eigenstate Thermalisation Hypothesis for Wigner matrices uniformly in the entire spectrum, in particular near the spectral edges, with a bound on the fluctuation that is optimal for any observable. This complements earlier works of Cipolloni et. al. [14, 19] and Benigni et. al. [8, 9] that were restricted either to the bulk of the spectrum or to special observables. As a main ingredient, we prove a new multi-resolvent local law that optimally accounts for the edge scaling.
Key words and phrases: Eigenstate Thermalisation Hypothesis, Quantum Unique Ergodicity, Local Law, Method of Characteristics.
2020 Mathematics Subject Classification: 60B20, 82B10, 58J51.
\({}^{*}\)Supported by ERC Advanced Grant "RMTBeyond" No. 101020331.
## 1. Introduction
In the physics literature, the _Eigenstate Thermalisation Hypothesis (ETH)_ asserts that each eigenfunction of a sufficiently chaotic quantum system is uniformly distributed in the phase space. This concept was coined by Srednicki [47] after similar ideas appeared in the seminal paper of Deutsch [26]. While the original physics setup concerns genuine many-body systems, especially a small system in a heat bath described by standard statistical mechanics, Deutsch has also formulated a phenomenological version of ETH for the simplest chaotic quantum system, the Wigner ensemble. In this form, ETH asserts that for any deterministic observable (matrix) \(A\) and for any normalised eigenvector \(\mathbf{u}\) of a large \(N\times N\) Wigner matrix, the quadratic form \(\langle\mathbf{u},A\mathbf{u}\rangle\) is very close to its statistical average, which, in the Wigner case, is the normalized trace \(\langle A\rangle:=\frac{1}{N}\mathrm{Tr}A\):
\[|\langle\mathbf{u},A\mathbf{u}\rangle-\langle A\rangle|\lesssim\frac{\|A\|}{\sqrt{N}}. \tag{1.1}\]
The \(1/\sqrt{N}\) speed of convergence is optimal and it is in agreement with the earlier predictions of Feingold and Peres [34], see also [27]. For more physics background and references, see the introduction of [14].
In the mathematics literature the same phenomenon is known as the _Quantum Unique Ergodicity (QUE)_. In precise mathematical terms it was formulated by Rudnick and Sarnak [44] for standard quantisations of ergodic classical dynamical systems and proved only in some special cases [43, 46, 35, 13], often as a purely limiting statement without optimizing the speed of convergence. The key point is to control the behaviour of _all_ eigenfunctions; a similar result for _most_ eigenfunctions (called _Quantum Ergodicity_) is much easier and has been earlier discovered by Snirel'man [45], see also [23, 49].
Motivated by the paper of Deutsch [26] and the novel technical developments in random matrix theory, the ETH for Wigner matrices in the form (1.1) has been the object of intense study in recent years. An important question is the precise dependence of the error term in the right hand side on \(A\). The first proof of (1.1) given in [14] involved the operator norm \(\|\hat{A}\|\) of the traceless part \(\hat{A}:=A-\langle A\rangle\) of \(A\), but this estimate is far from optimal for low rank observables. For example, if \(A=|\mathbf{q}\rangle\langle\mathbf{q}|\) is a rank-one projection onto a normalised vector \(\mathbf{q}\in\mathbb{C}^{N}\), then \(\langle\mathbf{u},A\mathbf{u}\rangle=|\langle\mathbf{u},\mathbf{q}\rangle|^{2}\) which is known to be essentially of order \(1/N\) by the _complete delocalisation of eigenvectors_, see [28, 29, 38, 10, 7]. However the result in [14] gives only the suboptimal estimate \(|\langle\mathbf{u},\mathbf{q}\rangle|^{2}\lesssim 1/\sqrt{N}\) for this special observable.
In the Gaussian (GUE/GOE) case, the eigenvectors are uniformly Haar distributed, hence explicit moment calculations for \(\langle\mathbf{u},A\mathbf{u}\rangle\) are possible by Weingarten calculus. The result indicates the following optimal form of (1.1):
\[|\langle\mathbf{u}_{i},A\mathbf{u}_{j}\rangle-\delta_{ij}\langle A\rangle|\lesssim \frac{\langle|\hat{A}|^{2}\rangle^{1/2}}{\sqrt{N}}. \tag{1.2}\]
Note that this estimate involves the (normalised) Hilbert-Schmidt norm \(\langle|\hat{A}|^{2}\rangle^{1/2}\) instead of the operator norm1, and it can also be extended to different eigenvectors \(\mathbf{u}_{i},\mathbf{u}_{j}\). In particular, (1.2) recovers the optimal delocalisation bound for eigenvectors as a special case.
Footnote 1: Note that \(\langle|\hat{A}|^{2}\rangle^{1/2}\) is substantially smaller than \(\|\hat{A}\|\) for matrices \(\hat{A}\) of low rank; in fact, if \(\operatorname{rank}(\hat{A})=1\), then \(\|\hat{A}\|=\sqrt{N}\langle|\hat{A}|^{2}\rangle^{1/2}\), losing the entire \(\sqrt{N}\) factor in (1.1) compared with the optimal (1.2).
The optimal form of ETH (1.2) for any Wigner matrix was proved for the special case when \(A\) is a projection in [8, 9], and for arbitrary \(A\) but only in the bulk of the spectrum2 in [19]. In fact, \(\sqrt{N}[\langle\mathbf{u}_{i},A\mathbf{u}_{j}\rangle-\delta_{ij}\langle A\rangle]\) is asymptotically normal in the bulk with variance proportional to \(\langle|\hat{A}|^{2}\rangle\) (see [16, 19]), showing that the Hilbert-Schmidt norm \(\langle|\hat{A}|^{2}\rangle^{1/2}\) is indeed the optimal one. In the main theorem of the current paper (see Theorem 2.2 below) we prove (1.2) for all observables and all eigenfunctions, giving the optimal ETH for Wigner matrices in all spectral regimes.
Footnote 2: We point out that the end of the proof of Theorem 2.2 in the published version of [19] contained a small error; a correct and in fact simpler argument was given in the updated arXiv:2203.01861 version of the paper.
We remark that ETH is expected to hold for much more general random matrix ensembles. For example the approach in [14] could be directly generalized to a special class of generalized Wigner matrices in [3]. Furthermore, ETH in the bulk has recently been extended to deformed random matrix ensembles [21, 22], where both the leading term \(\delta_{ij}\langle A\rangle\) and the correct replacement for the traceless part of \(A\) in the right hand side of (1.2) became considerably more involved, in particular they are energy dependent. The edge regime and the optimal norm of \(A\) in the error term are still open questions for these ensembles, but we leave them to future work and for simplicity we focus on the Wigner case here.
The key tool to achieve our ETH is a new _multi-resolvent local law_ with traceless observables that is optimal at the spectral edges. Multi-resolvent local laws in general refer to concentration results for alternating products of resolvents of a random matrix and deterministic matrices (observables). Their proofs are typically more difficult at the spectral edges since, besides correctly accounting for the traceless observables, their optimal form also requires exploiting a delicate cancellation mechanism; the smallness of the local density of eigenvalues must accurately compensate for the linear instability of a nonlinear equation that governs the fluctuation. In contrast to the previous proofs of local laws behind ETH results, here we apply a dynamical approach, the _method of characteristic flow_, complemented with a Green function comparison argument. While this method has already been extensively tested for single resolvent local laws [11, 36, 1, 41, 2, 42, 4], only two papers concern the multi-resolvent situation [20, 12], and neither of them focuses on the critical edge behaviour. Besides the edge behaviour, we need to track another key aspect of the local law: in order to obtain the Hilbert-Schmidt norm in (1.2), the same norm must appear in the local law as well. Typically, errors in the local laws involve the operator norm of the deterministic matrices between the resolvents; the control in the much harder Hilbert-Schmidt sense was considered only very recently in [19]. However, this work did not track the optimal edge behaviour. Our new local law is simultaneously optimal in both aspects. We will explain the strength of this new local law in the context of previous works in Section 2.1.
**Notations.** By \(\lceil x\rceil:=\min\{m\in\mathbb{Z}\colon m\geq x\}\) and \(\lfloor x\rfloor:=\max\{m\in\mathbb{Z}\colon m\leq x\}\) we denote the upper and lower integer part of a real number \(x\in\mathbb{R}\). We set \([k]:=\{1,...,k\}\) for \(k\in\mathbb{N}\) and \(\langle A\rangle:=d^{-1}\mathrm{Tr}(A)\), \(d\in\mathbb{N}\), for the normalised trace of a \(d\times d\)-matrix \(A\). For positive quantities \(A,B\) we write \(A\lesssim B\) resp. \(A\gtrsim B\) and mean that \(A\leq CB\) resp. \(A\geq cB\) for some \(N\)-independent constants \(c,C>0\) that depend only on the basic control parameters of the model in Assumption 2.1 below. Moreover, for \(N\)-dependent positive quantities \(A,B\), we write \(A\ll B\) whenever \(A/B\to 0\) as \(N\to\infty\).
We denote vectors by bold-faced lower case Roman letters \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{N}\), for some \(N\in\mathbb{N}\), and define
\[\langle\mathbf{x},\mathbf{y}\rangle:=\sum_{i}\bar{x}_{i}y_{i}\,,\qquad A_{\mathbf{x}\mathbf{y} }:=\langle\mathbf{x},A\mathbf{y}\rangle\,.\]
Matrix entries are indexed by lower case Roman letters \(a,b,c,...,i,j,k,...\) from the beginning or the middle of the alphabet and unrestricted sums over those are always understood to be over \(\{1,...,N\}\).
Finally, we will use the concept _'with very high probability'_, meaning that for any fixed \(D>0\), the probability of an \(N\)-dependent event is bigger than \(1-N^{-D}\) for all \(N\geq N_{0}(D)\). Also, we will use the convention that \(\xi>0\) denotes an arbitrarily small positive exponent, independent of \(N\). Moreover, we introduce the common notion of _stochastic domination_ (see, e.g., [30]): For two families
\[X=\left(X^{(N)}(u)\mid N\in\mathbb{N},u\in U^{(N)}\right)\quad\text{and}\quad Y =\left(Y^{(N)}(u)\mid N\in\mathbb{N},u\in U^{(N)}\right)\]
of non-negative random variables indexed by \(N\), and possibly a parameter \(u\), we say that \(X\) is stochastically dominated by \(Y\), if for all \(\epsilon,D>0\) we have
\[\sup_{u\in U^{(N)}}\boldsymbol{P}\left[X^{(N)}(u)>N^{\epsilon}Y^{(N)}(u)\right] \leq N^{-D}\]
for large enough \(N\geq N_{0}(\epsilon,D)\). In this case we write \(X\prec Y\). If for some complex family of random variables we have \(|X|\prec Y\), we also write \(X=O_{\prec}(Y)\).
**Acknowledgment**.: We thank Volodymyr Riabov for his help with creating Figure 1.
## 2. Main results
We consider \(N\times N\) Wigner matrices \(W\), i.e. \(W\) is a random real symmetric or complex Hermitian matrix \(W=W^{*}\) with independent entries (up to the Hermitian symmetry) and with single entry distributions \(w_{aa}\stackrel{{\mathrm{d}}}{{=}}N^{-1/2}\chi_{\mathrm{d}}\), and \(w_{ab}\stackrel{{\mathrm{d}}}{{=}}N^{-1/2}\chi_{\mathrm{od}}\), for \(a>b\). The random variables \(\chi_{\mathrm{d}},\chi_{\mathrm{od}}\) satisfy the following assumptions.3
Footnote 3: By inspecting our proof, it is easy to see that actually we do not need to assume that the off-diagonal entries of \(W\) are identically distributed. We only need that they all have the same second moments, but higher moments can be different.
**Assumption 2.1**.: _The off-diagonal distribution \(\chi_{\mathrm{od}}\) is a real or complex centered random variable, \(\mathbb{E}\chi_{\mathrm{od}}=0\), satisfying \(\mathbb{E}|\chi_{\mathrm{od}}|^{2}=1\). The diagonal distribution is a real centered random variable, \(\mathbb{E}\chi_{\mathrm{d}}=0\). Furthermore, we assume the existence of high moments, i.e. for any \(p\in\mathbb{N}\) there exists \(C_{p}>0\) such that_
\[\mathbb{E}\big{[}|\chi_{\mathrm{d}}|^{p}+|\chi_{\mathrm{od}}|^{p}\big{]}\leq C _{p}\,.\]
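For orientation only, here is a minimal Python sketch (ours, not part of the paper; the helper name `sample_wigner` is hypothetical) of a real symmetric Wigner matrix with Gaussian single-entry distributions, which clearly satisfy Assumption 2.1; we reuse it in the illustrative snippets below.

```python
import numpy as np

def sample_wigner(N, rng=None):
    """Illustrative sketch: real symmetric N x N Wigner matrix with independent
    centred Gaussian entries (up to symmetry) and off-diagonal variance 1/N,
    i.e. a simple instance of the single-entry laws allowed by Assumption 2.1."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((N, N)) / np.sqrt(N)
    return (X + X.T) / np.sqrt(2.0)
```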
Our main result is the optimal form of the eigenstate thermalization hypothesis (ETH) for Wigner matrices uniformly in the spectrum, in particular, including the spectral edges. Its proof is given in Section 2.2 and it is based on a new _multi-resolvent local law_, Theorem 2.4 below.
**Theorem 2.2** (Eigenstate Thermalization Hypothesis).: _Let \(W\) be a Wigner matrix satisfying Assumption 2.1 with orthonormalized eigenvectors \(\boldsymbol{u}_{1},...,\boldsymbol{u}_{N}\) and let \(A\in\mathbb{C}^{N\times N}\) be deterministic. Then_
\[\max_{i,j\in[N]}|\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{j}\rangle-\delta_ {ij}\langle A\rangle|\prec\frac{\langle|\hat{A}|^{2}\rangle^{1/2}}{\sqrt{N}} \tag{2.1}\]
_where \(\hat{A}:=A-\langle A\rangle\) denotes the traceless part of \(A\)._
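Purely as a numerical illustration of (2.1) (our own sketch, not part of the proof), one can compare the largest overlap deviation with the predicted scale \(\langle|\hat{A}|^{2}\rangle^{1/2}/\sqrt{N}\) for a rank-one observable; the sampler `sample_wigner` is the hypothetical helper from the sketch above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
W = sample_wigner(N, rng)                    # hypothetical helper from the sketch above
evals, U = np.linalg.eigh(W)                 # columns of U are the eigenvectors u_i

q = np.zeros(N); q[0] = 1.0                  # rank-one observable A = |q><q|
A = np.outer(q, q)
A_hat = A - (np.trace(A) / N) * np.eye(N)    # traceless part of A

overlaps = U.T @ A @ U                       # overlaps[i, j] = <u_i, A u_j>
deviation = np.abs(overlaps - (np.trace(A) / N) * np.eye(N)).max()
scale = np.sqrt(np.trace(A_hat @ A_hat) / N) / np.sqrt(N)   # <|A_hat|^2>^{1/2} / sqrt(N)

print(deviation, scale)   # comparable, up to the N^xi factor hidden in the relation "prec"
```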
### Multi-resolvent local laws
Consider the resolvent \(G(z):=(W-z)^{-1}\), with \(z\in\mathbb{C}\setminus\mathbb{R}\). It is well known that in the limit \(N\to\infty\) the resolvent becomes approximately deterministic; its deterministic approximation is \(m_{\mathrm{sc}}(z)\cdot I\), where \(m_{\mathrm{sc}}\) is the Stieltjes transform of the semicircular law:
\[m(z):=m_{\mathrm{sc}}(z)=\int_{\mathbb{R}}\frac{1}{x-z}\rho_{\mathrm{sc}}(x) \,\mathrm{d}x,\qquad\quad\rho_{\mathrm{sc}}(x):=\frac{1}{2\pi}\sqrt{[4-x^{2}]_ {+}}. \tag{2.2}\]
This holds even in the local regime as long as \(|\mathrm{Im}\,z|\gg N^{-1}\); such concentration results are commonly called _local laws_.
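For later use in our illustrative sketches we note that \(m_{\mathrm{sc}}\) is the root of \(m^{2}+zm+1=0\) with \(|m|\leq 1\); a minimal helper (ours; the names `m_sc` and `rho` are hypothetical):

```python
import numpy as np

def m_sc(z):
    """Stieltjes transform of the semicircle law: the root of m^2 + z*m + 1 = 0
    with |m| <= 1, i.e. the branch with m(z) -> 0 as z -> infinity."""
    z = np.asarray(z, dtype=complex)
    s = np.sqrt(z * z - 4.0)
    m1, m2 = (-z + s) / 2.0, (-z - s) / 2.0
    return np.where(np.abs(m1) <= np.abs(m2), m1, m2)

def rho(z):
    """Harmonic extension of the semicircle density, rho(z) = |Im m_sc(z)| / pi."""
    return np.abs(m_sc(z).imag) / np.pi
```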
The single resolvent local law, in its simplest form4, asserts that
Footnote 4: Traditionally [29, 38, 10], local laws did not consider arbitrary test matrix \(A\), but only \(A=I\) or special rank one projections leading the _isotropic local laws_. General \(A\) was included later, e.g. in [32].
\[\big{|}\langle(G(z)-m(z))A\rangle\big{|}\prec\frac{\|A\|}{N\eta},\qquad\eta:=| \mathrm{Im}\,z|, \tag{2.3}\]
holds for any deterministic matrix (_observable_) \(A\). The \(1/N\eta\) error is optimal for \(A=I\) in the relevant \(\eta\lesssim 1\) regime and \(N\eta\langle G(z)-m(z)\rangle\) is approximately Gaussian with variance of order one [37]. However, for traceless observables, i.e. \(\langle A\rangle=0\), hence \(A=\hat{A}\), the bound in (2.3) improves to the optimal form,
\[\big{|}\langle(G(z)-m(z))A\rangle\big{|}=\big{|}\langle G(z)A\rangle\big{|} \prec\frac{\sqrt{\rho(z)}}{N\sqrt{\eta}}\langle|A|^{2}\rangle^{1/2},\qquad \rho(z):=\frac{1}{\pi}|\mathrm{Im}\,m(z)|.\]
The improvement in the \(\eta\)-power, together with the additional density factor \(\rho(z)\) relevant near the spectral edges, was first observed in [14], while the optimal dependence on the Hilbert-Schmidt norm of \(A\) was
proved in [19]. Single resolvent local laws, however, are not sufficient to control the eigenfunction overlaps as in (2.1). While the local law, via the spectral decomposition of \(\operatorname{Im}G=\frac{1}{2{\rm i}}(G-G^{*})\),
\[\langle\operatorname{Im}G(z)A\rangle=\frac{1}{N}\sum_{i}\frac{\eta}{(\lambda_{ i}-E)^{2}+\eta^{2}}\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{i}\rangle,\qquad z =E+{\rm i}\eta, \tag{2.4}\]
gives an effectively local average of approximately \(N\eta\) diagonal overlaps \(\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{i}\rangle\), inferring the size of a single overlap is not possible just from this average since \(\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{i}\rangle\) may change sign as \(i\) varies.
Two-resolvent local laws are much more powerful. In particular, using
\[\langle\operatorname{Im}G(z_{1})A\operatorname{Im}G(z_{2})A^{*}\rangle=\frac{ 1}{N}\sum_{i,j}\frac{\eta}{(\lambda_{i}-E_{1})^{2}+\eta^{2}}\frac{\eta}{( \lambda_{j}-E_{2})^{2}+\eta^{2}}|\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{ j}\rangle|^{2},\quad z_{l}=E_{l}+{\rm i}\eta,\;\;l=1,2, \tag{2.5}\]
we see that for a traceless observable, \(\langle A\rangle=0\), a bound of the form
\[\langle\operatorname{Im}G(z_{1})A\operatorname{Im}G(z_{2})A^{*}\rangle\prec \|A\|^{2} \tag{2.6}\]
at \(\eta\sim N^{-1+\xi}\), \(\xi>0\), would imply that a local average (in both indices) of \(|\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{j}\rangle|^{2}\) is bounded by \(N^{-1+2\xi}\|A\|^{2}\). Since \(|\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{j}\rangle|^{2}\) is positive (unlike \(\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{i}\rangle\) in (2.4)), we can deduce the optimal bound \(|\langle\boldsymbol{u}_{i},A\boldsymbol{u}_{j}\rangle|^{2}\prec\frac{1}{N}\|A \|^{2}\) for each overlap. This argument in this form is valid only in the bulk; near the spectral edges the right hand side of (2.6) needs to be improved to \(\rho(z_{1})\rho(z_{2})\|A\|^{2}\); this was already achieved in [14]. However, to obtain the optimal Hilbert-Schmidt norm of the observable in (2.1) a second improvement to the form
\[\langle\operatorname{Im}G(z_{1})A\operatorname{Im}G(z_{2})A^{*}\rangle\prec \rho(z_{1})\rho(z_{2})\langle|A|^{2}\rangle,\qquad\langle A\rangle=0, \tag{2.7}\]
is necessary. The main achievement of the current paper is to extract both types of improvement _simultaneously_.
While Theorem 2.2 requires only the upper bound (2.7) for \(\operatorname{Im}G\,A\operatorname{Im}G\,A^{*}\), along its proof other alternating products of resolvents (with or without \(\operatorname{Im}\)) and deterministic matrices emerge. More precisely, setting \(G_{i}:=G(z_{i})\) and considering deterministic matrices \(B_{i}\), the main object of interest is
\[G_{1}B_{1}G_{2}B_{2}G_{3}\ldots B_{k-1}G_{k} \tag{2.8}\]
for some fixed \(k\). We will call expressions of the form (2.8) _(resolvent) chains_. We will show a _multi-resolvent local law_, i.e. that any chain (2.8) concentrates around a deterministic object and give an upper bound on the fluctuation. The interesting regime is the local one, i.e. when \(|\operatorname{Im}z_{i}|\ll 1\). We will also consider the case when some of the \(G_{i}\)'s are replaced by their imaginary part \(\operatorname{Im}G_{i}\), and we will show that in this case the fluctuations are reduced close to the edge of the spectrum by some factor of \(|\operatorname{Im}m(z_{i})|\) which is essentially the density \(\rho_{\rm sc}\) at \(\operatorname{Re}z_{i}\).
It turns out [14] that the sizes of both the deterministic limit of (2.8) and its fluctuation are substantially reduced if some of the matrices \(B_{i}\) are traceless. Therefore, in the main part of the paper we study (2.8) when all the matrices \(B_{i}\) are traceless, \(\langle B_{i}\rangle=0\), this will also imply a local law for (2.8) for generic \(B_{i}\)'s using that any matrix \(B\) can be decomposed into a constant and a traceless part as \(B=\langle B\rangle\cdot I+\hat{B}\).
We will prove local laws that are optimal simultaneously in the two different aspects mentioned above, in addition to accounting for the improvement owing to the traceless observables. The first aspect is to trace the improvement near the spectral edges in terms of additional \(\rho\)-powers; in general the presence of each \(\operatorname{Im}G\) provides an additional \(\rho\) factor. Second, instead of the technically much easier Euclidean matrix norm (operator norm) of the \(B_{i}\)'s, we need to use the more sophisticated Hilbert-Schmidt norm. One additional advantage of using the Hilbert-Schmidt norm is that it enables us to test the chain in (2.8) against rank one matrices and still get optimal bounds. In particular, testing it against the projection \(|\boldsymbol{x}\rangle\langle\boldsymbol{y}|\) immediately gives the so-called _isotropic local laws_, i.e. concentration for the individual matrix elements \(\langle\boldsymbol{x},G_{1}B_{1}\ldots B_{k-1}G_{k}\boldsymbol{y}\rangle\), for any deterministic vectors \(\boldsymbol{x},\boldsymbol{y}\).
Our results also hold for the case when the spectral parameters \(z_{i}\)'s are different, but we will not explore the additional potential improvements from this fact since it is not needed for ETH. While in some part of the argument we track the different values of \(|\operatorname{Im}z_{i}|\) precisely (instead of overestimating them by the worst one), we will not exploit the additional gain from possibly different real parts \(\operatorname{Re}z_{i}\); this study is left for future investigations.
Multi-resolvent local laws for chains (2.8) with traceless deterministic matrices have been the object of interest in several recent papers, however in each of these works only one aspect of the fluctuations of
(2.8) was taken into consideration: either the bound was optimal only in the bulk of the spectrum [19], hence missing \(\rho\) factors were ignored, or the error term was estimated using the crude operator norm of the \(B_{i}\)'s [14, 18], or only chains of length one (\(k=1\)) had an optimal error term in both aspects [15]. Our new result (Theorem 2.4 below) does not have any of these restrictions: we give a bound on the fluctuation of (2.8) uniformly in the spectrum with optimal \(N\)- and \(\rho\)-powers and with the Hilbert-Schmidt norm on the traceless \(B_{i}\)'s.
#### 2.1.1. Preliminaries on the deterministic approximation
Before stating our main technical result we introduce some additional notation. Given a non-crossing partition \(\pi\) of the set \([k]:=\{\,1,\ldots,k\,\}\) arranged in cyclic order, the partial trace \(\mathrm{pTr}_{\pi}\) of an ordered set of matrices \(B_{1},\ldots,B_{k-1}\) is defined as
\[\mathrm{pTr}_{\pi}(B_{1},\ldots,B_{k-1}):=\prod_{S\in\pi\setminus\mathfrak{B}( k)}\left\langle\prod_{j\in S}B_{j}\right\rangle\prod_{j\in\mathfrak{B}(k) \setminus\{\,k\,\}}B_{j}, \tag{2.9}\]
with \(\mathfrak{B}(k)\in\pi\) denoting the unique block containing \(k\). Then, for generic \(B_{i}\)'s, the deterministic approximation of (2.8) is given by [17, Theorem 3.4]:
\[M_{[1,k]}=M(z_{1},B_{1},\ldots,B_{k-1},z_{k}):=\sum_{\pi\in\mathrm{NC}([k])} \mathrm{pTr}_{K(\pi)}(B_{1},\ldots,B_{k-1})\prod_{S\in\pi}m_{\circ}[S], \tag{2.10}\]
where \(\mathrm{NC}([k])\) denotes the non-crossing partitions of the set \([k]\), and \(K(\pi)\) denotes the Kreweras complement of \(\pi\) (see [17, Definition 2.4] and [40]). Furthermore, for any subset \(S\subset[k]\) we define \(m[S]:=m_{\mathrm{sc}}[\boldsymbol{z}_{S}]\) as the iterated divided difference of \(m_{\mathrm{sc}}\) evaluated in \(\boldsymbol{z}_{S}:=\{z_{i}:i\in S\}\) which can also be written as
\[m[S]=m_{\mathrm{sc}}[\boldsymbol{z}_{S}]=m_{\mathrm{sc}}[\{\,z_{i}:i\in S\,\}] =\int_{-2}^{2}\rho_{\mathrm{sc}}(x)\prod_{i\in S}\frac{1}{x-z_{i}}\mathrm{d}x. \tag{2.11}\]
We denote by \(m_{\circ}[\cdot]\) the free-cumulant transform of \(m[\cdot]\) which is uniquely defined implicitly by the relation
\[m[S]=\sum_{\pi\in\mathrm{NC}(S)}\prod_{S^{\prime}\in\pi}m_{\circ}[S^{\prime} ],\qquad\forall S\subset[k], \tag{2.12}\]
e.g. \(m_{\circ}[i,j]=m[\{\,i,j\,\}]-m[\{\,i\,\}]m[\{\,j\,\}]\). For example, for \(k=2\) we have
\[M(z_{1},B_{1},z_{2}) =\langle B_{1}\rangle(m_{\mathrm{sc}}[z_{1},z_{2}]-m_{\mathrm{sc} }(z_{1})m_{\mathrm{sc}}(z_{2}))+B_{1}m_{\mathrm{sc}}(z_{1})m_{\mathrm{sc}}(z_{ 2}) \tag{2.13}\] \[=\frac{\langle B_{1}\rangle}{2\pi}\int_{-2}^{2}\frac{\sqrt{4-x^{ 2}}}{(x-z_{1})(x-z_{2})}\mathrm{d}x+(B_{1}-\langle B_{1}\rangle)m_{\mathrm{sc} }(z_{1})m_{\mathrm{sc}}(z_{2}).\]
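As a small sanity check on (2.11) and (2.13) (ours, not from the paper), the defining integral for \(m_{\mathrm{sc}}[z_{1},z_{2}]\) can be compared with the first divided difference \((m_{\mathrm{sc}}(z_{1})-m_{\mathrm{sc}}(z_{2}))/(z_{1}-z_{2})\), assuming the hypothetical helper `m_sc` above:

```python
import numpy as np

def m_divided_difference(z1, z2):
    """m_sc[z1, z2] from the defining integral (2.11), by a simple Riemann sum."""
    x = np.linspace(-2.0, 2.0, 200001)
    rho_sc = np.sqrt(np.maximum(4.0 - x * x, 0.0)) / (2.0 * np.pi)
    return np.sum(rho_sc / ((x - z1) * (x - z2))) * (x[1] - x[0])

z1, z2 = 0.3 + 0.2j, -1.1 + 0.5j
print(m_divided_difference(z1, z2))            # defining integral (2.11)
print((m_sc(z1) - m_sc(z2)) / (z1 - z2))       # first divided difference of m_sc; same value
```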
The main objects of interest within this section are general resolvent chains
\[\mathcal{G}_{1}B_{1}\mathcal{G}_{2}B_{2}\ldots B_{k-1}\mathcal{G}_{k} \tag{2.14}\]
where \(\mathcal{G}_{i}\in\{G_{i},\mathrm{Im}\,G_{i}\}\), and we denote by \(\mathfrak{I}_{k}\subset[k]\) the set of the indices for which \(\mathcal{G}_{i}=\mathrm{Im}\,G_{i}\). Note that some resolvents may be replaced with their imaginary parts. In order to generalize (2.10), for any subset \(\mathfrak{I}_{k}\subset[k]\) we define5
Footnote 5: Calligraphic letters like \(\mathcal{G},\mathcal{M}\) indicate that we may consider \(\mathrm{Im}\,G\) instead of some resolvents \(G\) in the chain.
\[\mathcal{M}_{[1,k]}=\mathcal{M}(z_{1},B_{1},\ldots,B_{k-1},z_{k};\mathfrak{I}_ {k}):=\sum_{\pi\in\mathrm{NC}([k])}\mathrm{pTr}_{K(\pi)}(B_{1},\ldots,B_{k-1} )\prod_{S\in\pi}m_{\circ}^{(\mathfrak{I}_{k})}[S], \tag{2.15}\]
with \(m_{\circ}^{(\mathfrak{I}_{k})}[S]\) implicitly defined as in (2.12) with \(m[S]\) replaced with \(m^{(\mathfrak{I}_{k})}[S]\), where
\[m^{(\mathfrak{I}_{k})}[S]=m^{(\mathfrak{I}_{k})}[\{\,z_{i}:i\in S\,\}]:=\int_{- 2}^{2}\rho_{\mathrm{sc}}(x)\left(\prod_{i\in\mathfrak{I}_{k}\cap S}\mathrm{ Im}\,\frac{1}{x-z_{i}}\right)\left(\prod_{i\in S\setminus\mathfrak{I}_{k}}\frac{1}{x-z_{i} }\right)\mathrm{d}x. \tag{2.16}\]
We now give some bounds on the deterministic approximations in the case where all matrices in (2.15) are traceless, \(\langle B_{i}\rangle=0\).6 The proof of the following lemma is presented in Appendix A.
**Lemma 2.3** (\(M\)-bounds).: _Fix \(k\geq 1\). Consider spectral parameters \(z_{1},...,z_{k+1}\in\mathbb{C}\setminus\mathbb{R}\) and traceless matrices \(A_{1},...,A_{k}\in\mathbb{C}^{N\times N}\). Moreover, let_
\[\eta_{j}:=\left|\operatorname{Im}z_{j}\right|,\qquad m_{j}:=m_{\text{sc}}(z_{j} )\,,\qquad\rho_{j}:=\frac{1}{\pi}|\text{Im}\,m_{j}|\,.\]
1. _Denoting_ \(\ell:=\min_{j\in[k]}\big{[}\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathfrak{I}_{ k}))\big{]}\) _and assuming_ \(N\ell\geq 1\)_, we have the average bound_ \[|\langle\mathcal{M}(z_{1},A_{1},...,A_{k-1},z_{k};\mathfrak{I}_{k})A_{k} \rangle|\lesssim\left(\prod_{i\in\mathfrak{I}_{k}}\rho_{i}\right)N^{k/2-1} \prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\,.\] (2.17)
2. _Denoting_ \(\ell:=\min_{j\in[k+1]}\big{[}\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathfrak{I}_{k+1}))\big{]}\) _and assuming_ \(N\ell\geq 1\)_, we have the isotropic bound_ \[|\langle\mathbf{x},\mathcal{M}(z_{1},A_{1},...,A_{k},z_{k+1};\mathfrak{I}_{k+1})\mathbf{y}\rangle|\lesssim\left(\prod_{i\in\mathfrak{I}_{k+1}}\rho_{i}\right)N^{k/2}\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\] (2.18) _for arbitrary bounded deterministic vectors_ \(\left\|\mathbf{x}\right\|,\left\|\mathbf{y}\right\|\lesssim 1\)_._
Note that (2.17) already reflects the different aspects of our local law: it correctly accounts for the \(\rho\)-powers for each \(\operatorname{Im}G\), it involves the Hilbert-Schmidt norm of the observables and it is not hard to see that the \(N\)-power is also optimal. Note that the isotropic bound (2.18) is stated separately for convenience, although it will be a straightforward consequence of the average bound (2.17).
#### 2.1.2. Multi-resolvent local law
As our main input for Theorem 2.2, we will prove the following multi-resolvent local law, optimally accounting for the decay of the density at the edge.
**Theorem 2.4** (Multi-resolvent local law with optimal edge dependence).: _Let \(W\) be a Wigner matrix satisfying Assumption 2.1, and fix \(k\in\mathbb{N}\). Consider spectral parameters \(z_{1},\ldots,z_{k+1}\in\mathbb{C}\setminus\mathbb{R}\), the associated resolvents \(G_{j}=G(z_{j}):=(W-z_{j})^{-1}\) with \(\mathcal{G}_{j}\in\{G_{j},\operatorname{Im}G_{j}\}\), and traceless matrices \(A_{1},\ldots,A_{k}\in\mathbb{C}^{N\times N}\). Finally, let_
\[\eta_{j}:=\left|\operatorname{Im}z_{j}\right|,\qquad m_{j}:=m_{\text{sc}}(z_{j })\,,\qquad\rho_{j}:=\frac{1}{\pi}|\text{Im}\,m_{j}|\,,\qquad j\in[k+1]. \tag{2.19}\]
1. _Denote by_ \(\mathfrak{I}_{k}\) _the set of indices_ \(j\in[k]\) _where_ \(\mathcal{G}_{j}=\operatorname{Im}G_{j}\)_. Then, setting_ \[\ell:=\min_{j\in[k]}\big{[}\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathfrak{I} _{k}))\big{]},\] _we have the_ average law \[\big{|}\langle\mathcal{G}_{1}A_{1}\mathcal{G}_{2}\ldots\mathcal{G}_{k}A_{k} \rangle-\langle\mathcal{M}_{[1,k]}A_{k}\rangle\big{|}\prec\left[\left(\prod_{i \in\mathfrak{I}_{k}}\rho_{i}\right)\wedge\max_{i\in[k]}\sqrt{\rho_{i}}\right] \,\frac{N^{k/2-1}}{\sqrt{N\ell}}\,\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2 }\,,\] (2.20) _uniformly in spectral parameters satisfying_ \(\min_{j}N\eta_{j}\rho_{j}\geq N^{\epsilon}\) _and_ \(\max_{j}|z_{j}|\leq N^{1/\epsilon}\) _for some_ \(\epsilon>0\)_._
2. _Denote by_ \(\mathfrak{I}_{k+1}\) _the set of indices_ \(j\in[k+1]\) _where_ \(\mathcal{G}_{j}=\operatorname{Im}G_{j}\)_. Then, setting_ \[\ell:=\min_{j\in[k+1]}\big{[}\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathfrak{I} _{k+1}))\big{]},\] _we have the_ isotropic law \[\big{|}\langle\mathbf{x},\mathcal{G}_{1}A_{1}\mathcal{G}_{2}\ldots A_{k}\mathcal{G} _{k+1}\mathbf{y}\rangle-\langle\mathbf{x},\mathcal{M}_{[1,k+1]}\mathbf{y}\rangle\big{|} \prec\left[\left(\prod_{i\in\mathfrak{I}_{k+1}}\rho_{i}\right)\wedge\max_{i \in[k+1]}\sqrt{\rho_{i}}\right]\,\frac{N^{k/2}}{\sqrt{N\ell}}\,\prod_{j\in[k]} \langle|A_{j}|^{2}\rangle^{1/2}\,,\] (2.21) _uniformly in bounded deterministic vectors_ \(\left\|\mathbf{x}\right\|,\left\|\mathbf{y}\right\|\lesssim 1\) _and spectral parameters satisfying_ \(\min_{j}N\eta_{j}\rho_{j}\geq N^{\epsilon}\) _and_ \(\max_{j}|z_{j}|\leq N^{1/\epsilon}\) _for some_ \(\epsilon>0\)_._
Observe that, in the regime \(N\ell\gg 1\), the error terms in (2.20) and (2.21) are smaller by an additional small \((N\ell)^{-1/2}\)-factor compared to the size of the leading terms in (2.17) and (2.18), respectively.
**Remark 2.5** (Optimality).: _The bounds (2.20) and (2.21) are optimal (up to the \(N^{\xi}\) factor hidden in the \(\prec\)-relation) in the class of bounds that involve only the parameters \(N\), \(\eta_{i}\), \(\rho_{i}\) and the Hilbert-Schmidt norm of \(A_{i}\)'s. This fact can be seen by computing the variance of the left hand sides in the case when \(W\) is a GUE matrix. The resolvents can be written out by spectral theorem, similarly to (2.5), and the variance with respect to the eigenvectors can be explicitly computed by Weingarten calculus, while the variance with respect to the eigenvalues (that are independent of the eigenvectors) can be identified from well-known central limit theorems for linear statistics of eigenvalues. For example, for \(k=2\), \(A_{1}=A_{2}=A\), \(z_{1}=z_{2}=z\) and \(\mathfrak{I}_{k}=\emptyset\), in this way we obtain_
\[\sqrt{\mathbb{E}\big{|}\langle GAGA\rangle-m^{2}\langle A^{2}\rangle\big{|}^{ 2}}\sim\frac{1}{N\eta}\langle A^{2}\rangle+\frac{\sqrt{\rho}}{N\sqrt{\eta}} \langle A^{4}\rangle^{1/2}. \tag{2.22}\]
_After estimating \(\langle A^{4}\rangle\leq N\langle A^{2}\rangle^{2}\), which may saturate for certain \(A\), we see the optimality of (2.20) for this case. The general case is a similar, albeit somewhat tedious calculation._
**Remark 2.6** (Interpretations).: _We have two further comments on Theorem 2.4._
* _For_ \(\mathfrak{I}_{k}=\emptyset\) _and_ \(\mathfrak{I}_{k+1}=\emptyset\) _both bounds, (_2.20_) and (_2.21_), have already been proven in_ _[_19_, Theorem 2.2 and Corollary 2.4]__. In the complementary cases_ \(\mathfrak{I}_{k}\neq\emptyset\) _and_ \(\mathfrak{I}_{k+1}\neq\emptyset\)_, we point out that the minimum_ \(\big{[}...\wedge...\big{]}\) _in (_2.20_) and (_2.21_) is realized by the product_ \(\prod_{i\in\mathfrak{I}}\rho_{i}\) _since_ \(\rho_{i}\lesssim 1\)_. In particular, as a rule of thumb, every index_ \(j\) _for which_ \(\mathcal{G}_{j}=\operatorname{Im}G_{j}\)_, decreases both the size of the deterministic approximation (_2.17_)-(_2.18_) and the size of the error (_2.20_)-(_2.21_) by a factor_ \(\rho_{j}\)_, with_ \(\rho_{j}\leq 1\)_, compared to the case when_ \(\mathcal{G}_{j}=G_{j}\)_. An exception to this rule is (_2.20_) for_ \(k=1\)_; here the bounds for_ \(\langle GA\rangle\) _and_ \(\langle\operatorname{Im}GA\rangle\) _are identical._
* _The estimates in Theorem_ 2.4 _remain valid if we replace_ \[\begin{split}\langle\mathcal{M}_{[1,k]}A_{k}\rangle& \longrightarrow\left(\prod_{i\in\mathfrak{I}_{k}}\operatorname{Im}m_{i} \right)\left(\prod_{i\notin\mathfrak{I}_{k}}m_{i}\right)\langle A_{1}...A_{k }\rangle\\ \langle\mathbf{x},\mathcal{M}_{[1,k+1]}\mathbf{y}\rangle& \longrightarrow\left(\prod_{i\in\mathfrak{I}_{k+1}}\operatorname{Im}m_{i} \right)\left(\prod_{i\notin\mathfrak{I}_{k+1}}m_{i}\right)\langle\mathbf{x},A_{1}...A_{k}\mathbf{y}\rangle\end{split}\] (2.23) _in (_2.20_) and (_2.21_), respectively, i.e., if we consider only the trivial partition into singletons_ \(\pi\) _in the definition (_2.15_) of_ \(\mathcal{M}_{[1,k]}\)_. This is simply due to the fact that all other summands in (_2.15_) are explicitly smaller than the error terms in (_2.20_)-(_2.21_). A proof of this fact is given in Appendix_ A_._
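Purely for illustration (not part of the argument), the \(k=2\) average law (2.20) with the singleton approximation (2.23) can be probed numerically in the bulk. The sketch below assumes the hypothetical helpers `sample_wigner` and `m_sc` introduced earlier.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta = 2000, 0.15                                 # bulk spectral parameters, N*eta*rho >> 1
W = sample_wigner(N, rng)

A = rng.standard_normal((N, N)); A = (A + A.T) / 2.0
A -= (np.trace(A) / N) * np.eye(N)                  # traceless observable

z1, z2 = 0.2 + 1j * eta, -0.6 + 1j * eta
G1 = np.linalg.inv(W - z1 * np.eye(N))
G2 = np.linalg.inv(W - z2 * np.eye(N))

# <Im G_1 A Im G_2 A^*> with A real symmetric, so A^* = A; for real symmetric W
# the imaginary part of the resolvent is the entrywise imaginary part.
lhs = np.trace(G1.imag @ A @ G2.imag @ A) / N
rhs = m_sc(z1).imag * m_sc(z2).imag * np.trace(A @ A) / N   # singleton term from (2.23)
print(lhs, rhs)   # relative difference of order (N*eta*rho)^(-1/2) up to N^xi, cf. (2.20)
```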
**Remark 2.7** (Generalisations).: _We mention a few direct generalisations of Theorem 2.4 whose proofs are omitted as they are straightforward._
* _In Theorem_ 2.4 _each_ \(\mathcal{G}\) _can be replaced by a product of_ \(\mathcal{G}\)_'s and an individual_ \(\mathcal{G}\) _may also stand for_ \(|G|\)_, not only for_ \(G\) _or_ \(\operatorname{Im}G\) _(see_ _[_18_, Lemma 3.2]__,_ _[_19_, Lemma 3.1]__, and also Lemma_ 4.6 _below). We refrain from stating these results explicitly as they are easily obtained using appropriate integral representations of general products of such_ \(\mathcal{G}\)_'s in terms of a single_ \(\operatorname{Im}G\)_._
* _We stated the multi-resolvent local laws in Theorem_ 2.4 _only for_ \(\mathcal{G}_{j}\in\{G_{j},\operatorname{Im}G_{j}\}\)_, however, inspecting the proof, one can easily see that it also leads to a local law for_ \(\mathcal{G}_{j}\in\{G_{j},\operatorname{Im}G_{j},G_{j}^{\mathrm{t}}, \operatorname{Im}G_{j}^{\mathrm{t}}\}\)_, where_ \(G^{\mathrm{t}}\) _stands for the transpose of_ \(G\)_. In particular, this implies that the ETH in Theorem_ 2.2 _can also be extended to_ \[\max_{i,j\in[N]}|\langle\overline{\mathbf{u}_{i}},A\mathbf{u}_{j}\rangle-\langle A \rangle\langle\overline{\mathbf{u}_{i}},\mathbf{u}_{j}\rangle|\prec\frac{\langle|\hat {A}|^{2}\rangle^{1/2}}{\sqrt{N}}.\] (2.24) _Furthermore, setting_ \(\sigma:=\mathbb{E}\chi_{\mathrm{od}}^{2}\)_, for_ \(|\sigma|<1\) _we have (see_ _[_14_, Theorem 2.3]__)_ \[\big{|}\langle\overline{\mathbf{u}_{i}},\mathbf{u}_{j}\rangle\big{|}\prec\frac{C_{ \sigma}}{\sqrt{N}}.\] _In two extreme cases_ \(\sigma=\pm 1\)_, we have_ \(|\langle\overline{\mathbf{u}_{i}},\mathbf{u}_{j}\rangle|=\delta_{i,j}\) _if_ \(\sigma=1\) _and_ \(|\langle\overline{\mathbf{u}_{i}},\mathbf{u}_{j}\rangle|=\delta_{i,N-j+1}\) _if_ \(\sigma=-1\) _and_ \(\mathbb{E}(W_{aa}^{2})=0\) _(see_ _[_14_, Remark 2.4]__). We remark that here_ \(\mathbf{u}_{i}\) _denotes the eigenvector corresponding to the eigenvalue_ \(\lambda_{i}\)_, with the_ \(\lambda_{i}\)_'s labeled in increasing order._
### Proof of Theorem 2.2
Fix \(\epsilon>0\), pick \(E\in[-2,2]\) and define \(\eta(E)\) implicitly by
\[N\eta(E)\rho(E+\mathrm{i}\eta(E))=N^{\epsilon}.\]
Let \(A\) be a traceless matrix \(\langle A\rangle=0\), then by spectral decomposition (2.5) and the well-known eigenvalue rigidity8 (see, e.g., [29]) it is easy to see that (see [14, Lemma 1] for more details)
Footnote 8: Rigidity asserts that the increasingly ordered eigenvalues \(\lambda_{i}\) are very close to the \(i\)-th \(N\)-quantile \(\gamma_{i}\) of the semicircle density \(\rho_{\mathrm{sc}}\) in the sense \(|\lambda_{i}-\gamma_{i}|\prec N^{-2/3}[i\wedge(N+1-i)]^{-1/3}\), i.e. each eigenvalue is strongly concentrated around the corresponding quantile essentially on the scale of the local eigenvalue spacing.
\[\max_{i,j\in[N]}N\left|\langle\mathbf{u}_{i},A\mathbf{u}_{j}\rangle\right|^{2}\prec N^ {2\epsilon}\sup_{E_{1},E_{2}\in[-2,2]}\frac{|\langle\operatorname{Im}G(E_{1} +\mathrm{i}\eta(E_{1}))A\operatorname{Im}G(E_{2}+\mathrm{i}\eta(E_{2}))A^{*} \rangle|}{\rho(E_{1}+\mathrm{i}\eta(E_{1}))\rho(E_{2}+\mathrm{i}\eta(E_{2}))} \prec N^{2\epsilon}\langle|A|^{2}\rangle\,.\]
We point out that in the last inequality we used (2.20) for \(k=2\) and \(\mathfrak{I}_{2}=[2]\):
\[\left|\langle\operatorname{Im}G_{1}A\!\operatorname{Im}G_{2}A^{*}\rangle- \operatorname{Im}m_{1}\!\operatorname{Im}m_{2}\langle|A|^{2}\rangle\right| \prec\frac{\rho_{1}\rho_{2}}{\sqrt{N\ell}}\langle|A|^{2}\rangle\,.\]
The fact that this bound holds simultaneously for all \(E_{1}=\operatorname{Re}z_{1}\in[-2,2]\) and \(E_{2}=\operatorname{Re}z_{2}\in[-2,2]\) follows by a simple grid argument together with the Lipschitz regularity of the resolvent (with Lipschitz constant of order \(N\) at spectral parameters with imaginary part bigger than \(1/N\)). This completes the proof of Theorem 2.2.
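For concreteness, the implicit definition of \(\eta(E)\) used above can be resolved numerically by bisection, since \(N\eta\,\rho(E+\mathrm{i}\eta)\) is increasing in \(\eta\); a minimal sketch (ours, assuming the hypothetical helper `rho` from Section 2.1):

```python
import numpy as np

def eta_of_E(E, N, eps=0.1, lo=1e-12, hi=10.0):
    """Solve N * eta * rho(E + i*eta) = N^eps for eta by bisection; the left-hand
    side is increasing in eta, so the solution is unique."""
    target = N ** eps
    f = lambda eta: N * eta * rho(E + 1j * eta) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# In the bulk eta(E) is of order N^(eps-1); at the edge E = 2 it is much larger,
# of order N^(-2/3) up to N^(O(eps)) corrections.
print(eta_of_E(0.0, 10**6), eta_of_E(2.0, 10**6))
```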
The rest of the paper is devoted to the proof of the multi-resolvent local law, Theorem 2.4.
## 3. Multi-resolvent local law: Proof of Theorem 2.4
In this section we prove the _multi-resolvent local laws_ in Theorem 2.4 via the following three steps:
1. **Global law.** Prove a multi-resolvent _global law_, i.e. for spectral parameters "far away" from the spectrum, \(\min_{j}\operatorname{dist}(z_{j},[-2,2])\geq\delta\) for some small \(\delta>0\) (see Proposition 3.1).
2. **Characteristic flow.** Propagate the global law to a _local law_ by considering the evolution of the Wigner matrix \(W\) along the Ornstein-Uhlenbeck flow, thereby introducing an almost order one Gaussian component (see Proposition 3.3). The spectral parameters evolve from the global regime to the local regime according to the _characteristic (semicircular) flow_. The simultaneous effect of these two evolutions is a key cancellation of two large terms.
3. **Green function comparison.** Remove the Gaussian component by a Green function comparison (GFT) argument (see Proposition 3.4).
As the first step, we have the following global law. Its proof, which is analogous to the proofs presented in [18, Appendix B] and [19, Appendix A], is given in Appendix A.2 for completeness. We point out that none of these proofs uses the system of master inequalities and the bootstrapped error analysis that form the technical backbone of [18, 19]; they use only simple norm bounds on the resolvents. In particular, Proposition 3.1 holds for general deterministic matrices since the traceless condition plays no role in this case.
**Proposition 3.1** (Step 1: Global law).: _Let \(W\) be a Wigner matrix satisfying Assumption 2.1, and fix any \(k\in\mathbb{N}\) and \(\delta>0\). Consider spectral parameters \(z_{1},...,z_{k+1}\in\mathbb{C}\setminus\mathbb{R}\), the associated resolvents \(G_{j}=G(z_{j}):=(W-z_{j})^{-1}\), with \(\mathcal{G}_{j}\in\{G_{j},\operatorname{Im}G_{j}\}\), and deterministic matrices \(B_{1},...,B_{k}\in\mathbb{C}^{N\times N}\). Denote \(\eta_{i}:=|\operatorname{Im}z_{i}|\) and \(\rho_{i}:=\pi^{-1}|\operatorname{Im}m_{\mathrm{sc}}(z_{i})|\). Then, uniformly in deterministic matrices \(B_{i}\) and in spectral parameters satisfying \(\operatorname{dist}(z_{j},[-2,2])\geq\delta\), the following holds._
1. _Let_ \(\mathfrak{I}_{k}\) _be the set of indices_ \(j\in[k]\) _where_ \(\mathcal{G}_{j}=\operatorname{Im}G_{j}\)_, and define_ \(\ell:=\min_{j\in[k]}\big{[}\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathfrak{I}_{ k}))\big{]}\)_. Then we have the averaged bound_ \[\big{|}\langle\mathcal{G}_{1}B_{1}...\mathcal{G}_{k}B_{k}\rangle-\langle \mathcal{M}_{[1,k]}B_{k}\rangle\big{|}\prec\left[\left(\prod_{i\in\mathfrak{I} _{k}}\rho_{i}\right)\wedge\max_{i\in[k]}\sqrt{\rho_{i}}\right]\frac{N^{k/2-1} }{\sqrt{N\ell}}\prod_{j\in[k]}\langle|B_{j}|^{2}\rangle^{\frac{1}{2}}\,.\] (3.1)
2. _Let_ \(\mathfrak{I}_{k+1}\) _be the set of indices_ \(j\in[k+1]\) _where_ \(\mathcal{G}_{j}=\operatorname{Im}G_{j}\)_, and define_ \(\ell:=\min_{j\in[k+1]}\big{[}\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathfrak{I}_ {k+1}))\big{]}\)_. Then, for deterministic unit vectors_ \(\mathbf{x},\mathbf{y}\)_, we have the isotropic bound_ \[\big{|}\langle\mathcal{G}_{1}B_{1}...B_{k}\mathcal{G}_{k+1}\rangle_{\mathbf{x}\mathbf{ y}}-(\mathcal{M}_{[1,k+1]})_{\mathbf{x}\mathbf{y}}\big{|}\prec\left[\left(\prod_{i\in \mathfrak{I}_{k+1}}\rho_{i}\right)\wedge\max_{i\in[k+1]}\sqrt{\rho_{i}} \right]\frac{N^{k/2}}{\sqrt{N\ell}}\prod_{j\in[k]}\langle|B_{j}|^{2}\rangle^{ \frac{1}{2}}\,.\] (3.2)
In the next Proposition 3.3, using Proposition 3.1 as an input, we derive Theorem 2.4 for Wigner matrices which have an order one Gaussian component. For this purpose we consider the evolution of the Wigner matrix \(W\) along the Ornstein-Uhlenbeck flow
\[\mathrm{d}W_{t}=-\frac{1}{2}W_{t}\mathrm{d}t+\frac{\mathrm{d}B_{t}}{\sqrt{N}}, \qquad W_{0}=W, \tag{3.3}\]
with \(B_{t}\) being a real symmetric or complex Hermitian Brownian motion9 whose entries have \(t\) times the same first two moments as the corresponding entries of \(W\), and define its resolvent \(G_{t}(z):=(W_{t}-z)^{-1}\) with \(z\in\mathbb{C}\setminus\mathbb{R}\). Even if not stated explicitly, we will always consider this flow only for short times, i.e. for \(0\leq t\leq T\), where the maximal time \(T\) is smaller than \(\gamma\), for some small constant \(\gamma>0\). Note that along the flow (3.3) the first two moments of \(W_{t}\) are preserved, and so the self-consistent density of states of \(W_{t}\) is unchanged; it remains the standard semicircle law. We now want to compute the deterministic approximation of products of resolvents and deterministic matrices with trace zero,
Footnote 9: Strictly speaking, we use this Brownian motion only when \(\sigma:=\mathbb{E}\chi_{\mathrm{od}}^{2}\) is real and \(\mathbb{E}\chi_{\mathrm{d}}^{2}=1+\sigma\), otherwise we need a small modification, see later in Section 4.
\[\mathcal{G}_{t}(z_{1})A_{1}\mathcal{G}_{t}(z_{2})A_{2}\mathcal{G}_{t}(z_{3})A_ {3}\dots,\qquad\quad\langle A_{i}\rangle=0, \tag{3.4}\]
and have a very precise estimate of the error term.
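As an aside, a crude Euler-Maruyama discretisation of (3.3) in the real symmetric case (our own sketch, assuming the hypothetical sampler `sample_wigner` above and ignoring the \(\sigma\)-adjustment of footnote 9) illustrates that the flow preserves the first two moments of the entries, hence the semicircular self-consistent density:

```python
import numpy as np

def ou_step(W, dt, rng):
    """One Euler-Maruyama step of dW_t = -(1/2) W_t dt + dB_t / sqrt(N); the
    symmetric Gaussian increment has entry variances dt times those of W."""
    N = W.shape[0]
    X = rng.standard_normal((N, N)) * np.sqrt(dt / N)
    dB = (X + X.T) / np.sqrt(2.0)
    return W - 0.5 * W * dt + dB

rng = np.random.default_rng(2)
W = sample_wigner(400, rng)
for _ in range(100):
    W = ou_step(W, 1e-2, rng)
# Along (3.3) the entry variances, and hence the semicircle law, are preserved.
```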
In fact, we also let the spectral parameters evolve with time with a carefully chosen equation that conveniently cancels some leading error terms in the time evolution of (3.4). The corresponding equation is the characteristic equation for the semicircular flow, i.e. given by the first order ODE (see Figure 1):
\[\partial_{t}z_{i,t}=-m(z_{i,t})-\frac{z_{i,t}}{2}\,. \tag{3.5}\]
Define \(\eta_{i,t}:=|\mathrm{Im}\,z_{i,t}|\) and \(\rho_{i,t}:=\pi^{-1}|\mathrm{Im}\,m(z_{i,t})|\). Note that along the characteristics we have
\[\partial_{t}m(z_{i,t})=-\partial_{z}m(z_{i,t})\left(m(z_{i,t})+\frac{z_{i,t}}{ 2}\right)=-\partial_{z}m(z_{i,t})\left(-\frac{1}{2m(z_{i,t})}+\frac{m(z_{i,t} )}{2}\right)=\frac{m(z_{i,t})}{2}, \tag{3.6}\]
where in the last two equalities we used the defining equation \(m(z)^{2}+zm(z)+1=0\) of the Stieltjes transform of the semicircular law. In particular, taking the imaginary part of (3.6) we get \(\rho_{i,s}\sim\rho_{i,t}\) for any \(0\leq s\leq t\), while the behavior of the \(\eta_{i,t}\) depends on the regime: in the bulk \(\eta_{i,t}\) decreases linearly in time with a speed of order one, close to the edge \(\eta_{i,t}\) decreases still linearly, but with a speed depending on \(\rho\), i.e. it is slower near the edges. By standard ODE theory we obtain the following lemma:
**Lemma 3.2**.: _Fix an \(N\)-independent \(\gamma>0\), fix \(0<T<\gamma\), and pick \(z\in\mathbb{C}\setminus\mathbb{R}\). Then there exists an initial condition \(z_{0}\) such that the solution \(z_{t}\) of (3.5) with this initial condition \(z_{0}\) satisfies \(z_{T}=z\). Furthermore, there exists a constant \(C>0\) such that \(\mathrm{dist}(z_{0},[-2,2])\geq CT\)._
Figure 1. Several trajectories for solutions of (3.5) are depicted. We chose ten reference times, indicated by dots, showing that the rate of change along the flow strongly depends on \(\rho\). The solid black line is the graph of \(E\mapsto\eta(E)\) with \(\eta(E)\) implicitly defined via \(\eta(E)\rho(E+\mathrm{i}\eta(E))=\mathrm{const.}\) for a small positive constant. A similar picture also appeared in [11, Figure 1].
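The characteristics (3.5) form a scalar complex ODE and can be integrated directly; the following sketch (ours, assuming the hypothetical helper `m_sc` above) produces trajectories of the type shown in Figure 1:

```python
import numpy as np

def run_characteristic(z0, T, n_steps=2000):
    """Integrate the characteristic ODE (3.5), dz/dt = -m_sc(z) - z/2, by explicit Euler."""
    dt = T / n_steps
    z, traj = complex(z0), [complex(z0)]
    for _ in range(n_steps):
        z = z + dt * (-complex(m_sc(z)) - z / 2.0)
        traj.append(z)
    return np.array(traj)

# Im z_t shrinks with order-one speed in the bulk but more slowly near the edge,
# in line with the discussion below (3.6) and the trajectories in Figure 1.
print(run_characteristic(0.0 + 0.5j, 0.4)[-1])   # bulk starting point
print(run_characteristic(2.0 + 0.5j, 0.4)[-1])   # starting point above the edge
```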
The spectral parameters evolving by the characteristics (3.5) will have the property that
\[\mathcal{G}_{t}(z_{1,t})A_{1}\ldots A_{k-1}\mathcal{G}_{t}(z_{k,t})-\mathcal{M}_{ [1,k],t}\approx\mathcal{G}_{0}(z_{1,0})A_{1}\ldots A_{k-1}\mathcal{G}_{0}(z_{k,0 })-\mathcal{M}_{[1,k],0}, \tag{3.7}\]
with \(\mathcal{M}_{[1,k],t}:=\mathcal{M}(z_{1,t},A_{1},\ldots,A_{k-1},z_{k,t})\), for any \(0\leq t\leq T\). Note that the deterministic approximation \(\mathcal{M}_{[1,k],t}\) depends on time only through the time dependence of the spectral parameters. The deterministic approximation of (3.4) with fixed spectral parameters is unchanged along the whole flow (3.3) since the Wigner semicircular density is preserved under the OU flow (3.3).
**Proposition 3.3** (Step 2: Characteristic flow).: _Fix \(\epsilon,\gamma>0\), \(0\leq T\leq\gamma\), \(K\in\mathbb{N}\). Consider \(z_{1,0},\ldots,z_{K+1,0}\in\mathbb{C}\setminus\mathbb{R}\) as initial conditions of the solution \(z_{j,t}\) of (3.5) for \(0\leq t\leq T\), define \(G_{j,t}:=G_{t}(z_{j,t})\) and let \(\mathcal{G}_{j,t}\in\{G_{j,t},\operatorname{Im}G_{j,t}\}\). Let \(\left\|\mathbf{x}\right\|,\left\|\mathbf{y}\right\|\lesssim 1\) be bounded deterministic vectors._
1. _For any_ \(k\leq K\) _let_ \(\mathfrak{I}_{k}\) _be the set of indices_ \(j\in[k]\) _where_ \(\mathcal{G}_{j,t}=\operatorname{Im}G_{j,t}\)_, and define_ \(\ell_{t}:=\min_{j\in[k]}\left[\eta_{j,t}(\rho_{j,t}+\mathbf{1}(j\notin\mathfrak{I}_{k}))\right]\)_, the time dependent analogue_10 _of_ \(\ell\)_. Then, assuming that_ Footnote 10: We point out that the index \(j\) realizing the minimum may change along the time evolution. Additionally, by (3.6) and the text below it, we note that if \(\min_{i}N\eta_{i}\rho_{i}\geq N^{\epsilon}\) then \(\min_{i}N\eta_{i,t}\rho_{i,t}\geq N^{\epsilon}\) for any \(0\leq t\leq T\). \[\left|\langle\mathcal{G}_{1,0}A_{1}...\mathcal{G}_{k,0}A_{k}\rangle-\langle\mathcal{M}_{[1,k],0}A_{k}\rangle\right|\prec\left[\left(\prod_{i\in\mathfrak{I}_{k}}\rho_{i,0}\right)\wedge\max_{i\in[k]}\sqrt{\rho_{i,0}}\right]\frac{N^{k/2-1}}{\sqrt{N\ell_{0}}}\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\] (3.8) _holds uniformly for any_ \(k\leq K\)_, any choice of_ \(A_{1},\ldots,A_{k}\) _traceless deterministic matrices and any choice of_ \(z_{i,0}\)_'s such that_ \(N\eta_{i,0}\rho_{i,0}\geq N^{\epsilon}\) _and_ \(|z_{i,0}|\leq N^{1/\epsilon}\)_, then we have_ \[\left|\langle\mathcal{G}_{1,T}A_{1}...\mathcal{G}_{k,T}A_{k}\rangle-\langle\mathcal{M}_{[1,k],T}A_{k}\rangle\right|\prec\left[\left(\prod_{i\in\mathfrak{I}_{k}}\rho_{i,T}\right)\wedge\max_{i\in[k]}\sqrt{\rho_{i,T}}\right]\frac{N^{k/2-1}}{\sqrt{N\ell_{T}}}\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\,,\] (3.9) _for any_ \(k\leq K\)_, again uniformly in traceless matrices_ \(A_{i}\)_, and in spectral parameters satisfying_ \(N\eta_{i,T}\rho_{i,T}\geq N^{\epsilon}\)_,_ \(|z_{i,T}|\leq N^{1/\epsilon}\)_._
2. _Let_ \(\mathfrak{I}_{k+1}\) _be the set of indices_ \(j\in[k+1]\) _where_ \(\mathcal{G}_{j,t}=\operatorname{Im}G_{j,t}\)_, and define_ \(\ell_{t}:=\min_{j\in[k+1]}\left[\eta_{j,t}(\rho_{j,t}+\mathbf{1}(j\notin\mathfrak{I}_{k+1}))\right]\)_. Then, assuming that_ \[\left|\langle\mathbf{x},\mathcal{G}_{1,0}A_{1}...A_{k}\mathcal{G}_{k+1,0}\mathbf{y}\rangle-\langle\mathbf{x},\mathcal{M}_{[1,k+1],0}\mathbf{y}\rangle\right|\prec\left[\left(\prod_{i\in\mathfrak{I}_{k+1}}\rho_{i,0}\right)\wedge\max_{i\in[k+1]}\sqrt{\rho_{i,0}}\right]\frac{N^{k/2}}{\sqrt{N\ell_{0}}}\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\] (3.10) _holds for any_ \(k\leq K\)_, uniformly in_ \(A\)_'s and in the spectral parameters as in part (a), and in deterministic vectors, then we have_ \[\left|\langle\mathbf{x},\mathcal{G}_{1,T}A_{1}...A_{k}\mathcal{G}_{k+1,T}\mathbf{y}\rangle-\langle\mathbf{x},\mathcal{M}_{[1,k+1],T}\mathbf{y}\rangle\right|\prec\left[\left(\prod_{i\in\mathfrak{I}_{k+1}}\rho_{i,T}\right)\wedge\max_{i\in[k+1]}\sqrt{\rho_{i,T}}\right]\frac{N^{k/2}}{\sqrt{N\ell_{T}}}\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\,,\] (3.11) _for any_ \(k\leq K\)_, again uniformly in_ \(A\)_'s and in spectral parameters as in part (a), and in deterministic vectors_ \(\mathbf{x},\mathbf{y}\)_._
Proposition 3.3 is proven in Section 4. As the third and final step, we show that the additional Gaussian component introduced in Proposition 3.3 can be removed using a Green function comparison (GFT) argument. The proof of this proposition is presented in Section 5.
**Proposition 3.4** (Step 3: Green function comparison).: _Let \(H^{(\mathbf{v})}\) and \(H^{(\mathbf{w})}\) be two \(N\times N\) Wigner matrices with matrix elements given by the random variables \(v_{ab}\) and \(w_{ab}\), respectively, both satisfying Assumption 2.1 and having matching moments up to third order,11 i.e._ Footnote 11: This condition can easily be relaxed to being matching up to an error of size \(N^{-2}\) as done, e.g., in [31, Theorem 16.1]._
\[\mathbb{E}\bar{w}_{ab}^{u}v_{ab}^{s-u}=\mathbb{E}\bar{w}_{ab}^{u}w_{ab}^{s-u}\,, \quad s\in\left\{0,1,2,3\right\},\quad u\in\left\{0,...,s\right\}. \tag{3.12}\]
_Fix \(K\in\mathbb{N}\) and consider spectral parameters \(z_{1},...,z_{K+1}\in\mathbb{C}\setminus\mathbb{R}\) satisfying \(\min_{j}N\eta_{j}\rho_{j}\geq N^{\epsilon}\) and \(\max_{j}|z_{j}|\leq N^{1/\epsilon}\) for some \(\epsilon>0\) and the associated resolvents \(G_{j}^{(\#)}=G^{(\#)}(z_{j}):=(H^{(\#)}-z_{j})^{-1}\) with \(\mathcal{G}_{j}^{(\#)}\in\{G_{j}^{(\#)},\operatorname{Im}G_{j}^{(\#)}\}\) and \(\#=\mathbf{v},\mathbf{w}\). Pick traceless matrices \(A_{1},...,A_{K}\in\mathbb{C}^{N\times N}\)._
_Assume that, for \(H^{(\mathbf{v})}\), we have the following bounds (writing \(\mathcal{G}_{j}\equiv\mathcal{G}_{j}^{(\mathbf{v})}\) for brevity)._
* _For any_ \(k\leq K\)_, consider any subset of cardinality_ \(k\) _of the_ \(K+1\) _spectral parameters and, similarly, consider any subset of cardinality_ \(k\) _of the_ \(K\) _traceless matrices. Relabel both of them by_ \([k]\)_, and denote the set of indices_ \(j\in[k]\) _by_ \(\mathcal{I}_{k}\) _where_ \(\mathcal{G}_{j}=\operatorname{Im}G_{j}\)_. Setting_ \(\ell:=\min_{j\in[k]}\left[\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathcal{I}_{k }))\right]\) _we have that_ \[\left|\langle\mathcal{G}_{1}A_{1}...\mathcal{G}_{k}A_{k}\rangle-\langle \mathcal{M}_{[1,k]}A_{k}\rangle\right|\prec\left[\left(\prod_{i\in\mathcal{I}_{ k}}\rho_{i}\right)\wedge\max_{i\in[k]}\sqrt{\rho_{i}}\right]\,\frac{N^{k/2-1}}{ \sqrt{N\ell}}\,\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\,,\] (3.13) _uniformly in all choices of subsets of_ \(z\)_'s and_ \(A\)_'s._
* _For any_ \(k\leq K\)_, consider any subset of cardinality_ \(k+1\) _of the_ \(K+1\) _spectral parameters and, similarly, consider any subset of cardinality_ \(k\) _of the_ \(K\) _traceless matrices. Relabel them by_ \([k+1]\) _and_ \([k]\)_, respectively, and denote the set of indices_ \(j\in[k+1]\) _by_ \(\mathcal{I}_{k+1}\) _where_ \(\mathcal{G}_{j}=\operatorname{Im}G_{j}\)_. Setting_ \(\ell:=\min_{j\in[k+1]}\left[\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathcal{I}_{ k+1}))\right]\) _we have that_ \[\left|\langle\boldsymbol{x},\mathcal{G}_{1}A_{1}...A_{k}\mathcal{G}_{k+1} \boldsymbol{y}\rangle-\langle\boldsymbol{x},\mathcal{M}_{[1,k+1]}\boldsymbol {y}\rangle\right|\prec\left[\left(\prod_{i\in\mathcal{I}_{k+1}}\rho_{i} \right)\wedge\max_{i\in[k+1]}\sqrt{\rho_{i}}\right]\,\frac{N^{k/2}}{\sqrt{N \ell}}\,\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\,,\] (3.14) _uniformly in all choices of subsets of_ \(z\)_'s and_ \(A\)_'s as in part (a) and in bounded deterministic vectors_ \(\left\|\boldsymbol{x}\right\|,\left\|\boldsymbol{y}\right\|\lesssim 1\)_._
_Then, (3.13)-(3.14) also hold for the ensemble \(H^{(\boldsymbol{w})}\), uniformly in all choices of subsets of \(z\)'s and \(A\)'s and in bounded deterministic vectors._
We are now ready to finally conclude the proof of Theorem 2.4. Fix \(T>0\), and fix \(z_{1},\ldots,z_{k+1}\in\mathbb{C}\setminus\mathbb{R}\) such that \(\min_{i}N\eta_{i}\rho_{i}\geq N^{\epsilon}\), and let \(z_{i,0}\) be the initial conditions of the characteristics (3.5) chosen so that \(z_{i,T}=z_{i}\) (this is possible thanks to Lemma 3.2). Then, the assumption (3.8) of Proposition 3.3 is satisfied for those \(z_{i,0}\) by Proposition 3.1 with \(\delta=CT\), where \(C>0\) is the constant from Lemma 3.2. We can thus use Proposition 3.3 to show that (3.9) and (3.11) hold. Finally, the Gaussian component added in Proposition 3.3 is removed using Proposition 3.4 with the aid of a complex version of the standard moment-matching lemma [31, Lemma 16.2], see Lemma A.2 in Appendix A.3 for more details.
## 4. Characteristic flow: Proof of Proposition 3.3
In this section we present the proof of Proposition 3.3. The argument is structured as follows:
* In Section 4.1 we begin by proving the average part, Proposition 3.3 (a), in the case when \(\mathcal{G}_{j,t}=\operatorname{Im}G_{j,t}\) for each \(j\in[k]\), i.e., we prove (3.9) for chains containing only \(\operatorname{Im}G\)'s. Along the flow (3.3) new resolvents without imaginary part arise, so the pure \(\operatorname{Im}G\) structure cannot be directly maintained. However, we can use the integral representation (see, e.g. [18, Eq. (3.14)]), \[\prod_{j=1}^{m}G(z_{j})=\frac{1}{\pi}\int_{\mathbb{R}}\operatorname{Im}G(x+\mathrm{i}\eta)\prod_{j=1}^{m}\frac{1}{x-z_{j}+\operatorname{sgn}(\operatorname{Im}z_{j})\mathrm{i}\eta}\mathrm{d}x,\tag{4.1}\] (that is valid for any \(0<\eta<\min_{j}\operatorname{Im}z_{j}\) or \(\max_{j}\operatorname{Im}z_{j}<-\eta<0\)) to express each \(G\) in terms of \(\operatorname{Im}G\), thus the flow for purely \(\operatorname{Im}G\) chains will be self-contained. (A small numerical check of (4.1) in the simplest case \(m=1\) is sketched after this list.)
* Afterwards, in the very short Section 4.2, we prove the isotropic part, Proposition 3.3 (b) again first in the case when \(\mathcal{G}_{j,t}=\operatorname{Im}G_{j,t}\) for each \(j\in[k+1]\). Due to the Hilbert-Schmidt error terms, the isotropic bound (3.11) will directly follow from (3.9) proven in Section 4.1.
* Finally, using the integral representation (4.1) in the special case \(m=1\), we derive the general case of mixed chains from the purely \(\operatorname{Im}G\)'s case in Section 4.3.
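As a minimal numerical check of the representation (4.1) in the simplest case \(m=1\) (our own sketch, not used in the proof; the sampler `sample_wigner` is the hypothetical helper introduced earlier):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 300
W = sample_wigner(N, rng)
evals, U = np.linalg.eigh(W)

a, z, eta = 0, 0.4 + 0.2j, 0.05                 # (4.1) requires 0 < eta < Im z
weights = U[a, :] ** 2                          # |u_i(a)|^2
lhs = np.sum(weights / (evals - z))             # G_{aa}(z) by spectral decomposition

x = np.linspace(-60.0, 60.0, 240001)
im_g_aa = np.zeros_like(x)
for lam, w in zip(evals, weights):              # Im G_{aa}(x + i*eta)
    im_g_aa += w * eta / ((lam - x) ** 2 + eta ** 2)
rhs = np.sum(im_g_aa / (x - z + 1j * eta)) * (x[1] - x[0]) / np.pi

print(lhs, rhs)                                 # the two sides of (4.1) for m = 1 agree
```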
Without loss of generality, to keep the presentation simpler, throughout this section we will assume that \(\sigma:=\mathbb{E}\chi_{\mathrm{od}}^{2}\) is real and \(\mathbb{E}\chi_{\mathrm{d}}^{2}=1+\sigma\) (recall that \(\chi_{\mathrm{d}},\chi_{\mathrm{od}}\) are the distributions of the diagonal and off-diagonal matrix elements of \(W\), respectively). At the end, in Section 4.4, we will explain how to lift these two restrictions.
We recall our choice of the characteristics
\[\partial_{t}z_{i,t}=-m(z_{i,t})-\frac{z_{i,t}}{2}. \tag{4.2}\]
Additionally, we record the following trivially checkable integration rules for any \(\alpha\geq 1\):
\[\int_{0}^{t}\frac{1}{\eta_{i,s}^{\alpha}}\,\mathrm{d}s\lesssim\frac{\log N}{\eta _{i,t}^{\alpha-1}\rho_{i,t}}\qquad\text{and}\qquad\int_{0}^{t}\frac{1}{\eta_{s} ^{\alpha}}\,\mathrm{d}s\lesssim\frac{\log N}{\eta_{t}^{\alpha-2}\hat{\ell}_{t} }\quad\text{with}\quad\eta_{t}:=\min_{i}\eta_{i,t}\,,\quad\hat{\ell}_{t}:= \min_{i}\eta_{i,t}\rho_{i,t}\,. \tag{4.3}\]
Note that, in general, \(\hat{\ell}\) differs from \(\ell\), introduced in Theorem 2.4. However, in case that every resolvent \(\mathcal{G}\) in a given chain is an \(\mathrm{Im}\,G\), i.e. \(\mathfrak{I}\) is the full set of indices, then it holds that \(\hat{\ell}=\ell\). The notation 'hat' will be consistently used to indicate that a chain contains only \(\mathrm{Im}\,G\)'s (see (4.6)-(4.7) below).
Using the short-hand notation \(G_{i,t}:=(W_{t}-z_{i,t})^{-1}\) with \(W_{t}\) being the solution of (3.3), we now compute the derivative (recall (2.15))
\[\mathrm{d}\langle(\mathrm{Im}\,G_{1,t}A_{1}...\mathrm{Im}\,G_{k,t}-\mathcal{ M}(z_{1,t},A_{1},...,z_{k,t};[k]))A_{k}\rangle=... \tag{4.4}\]
along the characteristics with the aid of Ito's formula. We point out that the following derivation of the flow holds for any deterministic matrices \(A_{i}\), i.e. in this derivation we do not assume that \(\langle A_{i}\rangle=0\). We will assume again that \(A_{i}\) are traceless only later starting from the beginning of Section 4.1.
The evolution for (4.4) (see (4.9) below) is obtained by multilinearity from the analogous formula for the time derivative of a resolvent chain without any imaginary parts. So first we compute
\[\begin{split}\mathrm{d}\langle(G_{[1,k],t}-M_{[1,k],t})A_{k} \rangle&=\frac{1}{\sqrt{N}}\sum_{a,b=1}^{N}\partial_{ab}\langle G _{[1,k],t}A_{k}\rangle\mathrm{d}B_{ab,t}+\frac{k}{2}\langle G_{[1,k],t}A_{k} \rangle\mathrm{d}t+\sum_{i,j=1\atop i<j}^{k}\langle G_{[i,j],t}\rangle\langle G _{[j,i],t}\rangle\mathrm{d}t\\ &\quad+\sum_{i=1}^{k}\langle G_{i,t}-m_{i,t}\rangle\langle G_{[1,k],t}^{(i)}A_{k}\rangle\mathrm{d}t-\partial_{t}\langle M_{[1,k],t}A_{k} \rangle\mathrm{d}t+\frac{\sigma}{N}\sum_{i,j=1\atop i\leq j}^{k}\langle G_{[i,j],t}G_{[j,i],t}^{\mathrm{t}}\rangle\mathrm{d}t\,,\end{split} \tag{4.5}\]
where \(\partial_{ab}:=\partial_{w_{ab}}\) denotes the direction derivative in the direction \(w_{ab}=w_{ab}(t):=(W_{t})_{ab}\). Here we introduced the notation
\[G_{[i,j],t}:=\begin{cases}G_{i,t}A_{i}\ldots A_{j-1}G_{j,t}&\text{if}\quad i<j\\ G_{i,t}&\text{if}\quad i=j\\ G_{i,t}A_{i}\ldots A_{k-1}G_{k,t}A_{k}G_{1,t}A_{1}\ldots A_{j-1}G_{j,t}&\text{if}\quad i>j\,,\end{cases}\]
and analogously for the deterministic approximation \(M_{[i,j],t}\) (cf. (2.10)). Furthermore, we define \(G_{[i,j],t}^{(l)}\) exactly as \(G_{[i,j],t}\) but with the \(l\)-th factor \(G_{l,t}\) being substituted by \(G_{l,t}^{2}\). For the last term in (4.5) we used the convention that \(\langle G_{[i,j],t}^{\mathrm{t}}G_{[j,i],t}\rangle=\langle G_{i,t}^{\mathrm{t}}G_{i,t}A_{i+1}G_{[i+1,i],t}\rangle\) for \(j=i\).
In order to write the derivative (4.4) in a manageable form, we need to introduce some further short-hand notations. Set
\[\widehat{G}_{[\hat{i},\hat{j}],t}:=\begin{cases}\mathrm{Im}\,G_{i,t}A_{i}\ldots A_{j-1}\mathrm{Im}\,G_{j,t}&\text{if}\quad i<j\\ \mathrm{Im}\,G_{i,t}&\text{if}\quad i=j\\ \mathrm{Im}\,G_{i,t}A_{i}\ldots A_{k-1}\mathrm{Im}\,G_{k,t}A_{k}\mathrm{Im}\,G_{1,t}A_{1}\ldots A_{j-1}\mathrm{Im}\,G_{j,t}&\text{if}\quad i>j,\end{cases} \tag{4.6}\]
and define \(\widehat{G}_{[\hat{i},\hat{j}],t}^{(l)}\) exactly as \(\widehat{G}_{[\hat{i},\hat{j}],t}\) except the \(l\)-th factor \(\mathrm{Im}\,G_{l,t}\) is substituted with \(G_{l,t}\mathrm{Im}\,G_{l,t}\). Similarly, \(\widehat{G}_{[\hat{i},\hat{j}],t}^{(l^{\ast})}\) is defined as \(\widehat{G}_{[\hat{i},\hat{j}],t}\) but with the \(l\)-th \(\mathrm{Im}\,G_{l,t}\) substituted by \(G_{l,t}^{\ast}\mathrm{Im}\,G_{l,t}\). Furthermore, we also define
\[\widehat{G}_{[\hat{i},j],t}:=\begin{cases}\mathrm{Im}\,G_{i,t}A_{i}\ldots A_{j-1}G_{j,t}&\text{if}\quad i<j\\ G_{i,t}&\text{if}\quad i=j\\ \mathrm{Im}\,G_{i,t}A_{i}\ldots A_{k-1}\mathrm{Im}\,G_{k,t}A_{k}\mathrm{Im}\,G_{1,t}A_{1}\ldots A_{j-1}G_{j,t}&\text{if}\quad i>j;\end{cases} \tag{4.7}\]
note the absent hat on the \(j\) index indicates that the last resolvent \(G_{j,t}\) is without imaginary part. We also define \(\widehat{G}_{[i^{\ast},\hat{j}],t}\) replacing \(\mathrm{Im}\,G_{i,t}\) with \(G_{i,t}^{\ast}\) in (4.6) and similarly \(\widehat{G}_{[i^{\ast},j],t}\) is defined by replacing \(\mathrm{Im}\,G_{i,t}\) with \(G_{i,t}^{\ast}\) and \(\mathrm{Im}\,G_{j,t}\) with \(G_{j,t}\) in (4.6). In particular, the 'decorations' of \(i\) and \(j\) indicate whether \(G_{i,t}\) and \(G_{j,t}\) are really taken as plain resolvents (no decoration), as adjoints (star) or with imaginary part (hat). We point out that throughout this entire section 'hat' on \(G\) indicates that the chain contains only \(\mathrm{Im}\,G_{i}\) unless specified as in (4.7). Finally, we use similar notations for the corresponding deterministic approximations \(\widehat{M}_{[i^{\#},j^{\#}],t}\), whose 'undecorated' version was defined in (2.10). Here \(\#\) indicates one of the possible 'decorations', i.e. star, hat or no decoration, and the corresponding change entails modifying the factor \((x-z_{i})^{-1}\) in (2.11) to \((x-\bar{z}_{i})^{-1}\) in case of star, and to \(\operatorname{Im}\,(x-z_{i})^{-1}\) in case of hat (as in (2.15)-(2.16)).
The time derivative of the deterministic term in (4.4) is obtained by the following lemma, the proof of which is given in Appendix A.
**Lemma 4.1**.: _For any \(k\geq 1\) we have_
\[\partial_{t}\langle\widehat{M}_{[\hat{1},\hat{k}],t}A_{k}\rangle =\frac{k}{2}\langle\widehat{M}_{[\hat{1},\hat{k}],t}A_{k}\rangle+\sum_{i,j=1\atop i<j}^{k}\langle\widehat{M}_{[\hat{i},\hat{j}],t}\rangle\langle\widehat{M}_{[\hat{j},\hat{i}],t}\rangle+\sum_{i,j=1\atop i<j}^{k}\langle\widehat{M}_{[i^{*},\hat{j}],t}\rangle\langle\widehat{M}_{[\hat{j}^{*},\hat{i}],t}\rangle \tag{4.8}\] \[+\sum_{i,j=1\atop i<j}^{k}\langle\widehat{M}_{[\hat{i},\hat{j}],t}\rangle\langle\widehat{M}_{[\hat{j}^{*},\hat{i}],t}\rangle+\sum_{i,j=1\atop i<j}^{k}\langle\widehat{M}_{[i^{*},\hat{j}],t}\rangle\langle\widehat{M}_{[\hat{j},\hat{i}],t}\rangle.\]
Hence, by Ito's formula, for any \(k\geq 1\), the evolution of \(\widehat{G}_{[\hat{1},\hat{k}],t}\) is given by (for brevity we omit the \(\mathrm{d}t\) differentials)
\[\mathrm{d}\langle(\widehat{G}_{[\hat{1},\hat{k}],t}-\widehat{M}_ {[\hat{1},\hat{k}],t})A_{k}\rangle\] \[=\frac{1}{\sqrt{N}}\sum_{a,b=1}^{N}\partial_{ab}\langle\widehat{ G}_{[\hat{1},\hat{k}],t}A_{k}\rangle\mathrm{d}B_{ab}+\frac{k}{2}\langle( \widehat{G}_{[\hat{1},\hat{k}],t}-\widehat{M}_{[\hat{1},\hat{k}],t})A_{k} \rangle+\Omega_{1}+\Omega_{2}+\Omega_{3}+\Omega_{4}+\Omega_{\sigma}\] \[\quad+\sum_{i=1}^{k}\langle G_{i,t}-m_{i,t}\rangle\langle\widehat {G}_{[\hat{1},\hat{k}],t}^{(i)}A_{k}\rangle+\sum_{i=1}^{k}\langle G_{i,t}^{* }-\overline{m_{i,t}}\rangle\langle\widehat{G}_{[\hat{1},\hat{k}],t}^{(i^{*}) }A_{k}\rangle+\langle\widehat{G}_{[\hat{1},\hat{k}],t}A_{k}\rangle\sum_{i=1}^ {k}\frac{\langle\operatorname{Im}G_{i,t}-\operatorname{Im}m_{i,t}\rangle}{ \operatorname{Im}z_{i,t}}\,, \tag{4.9}\]
where
\[\Omega_{1}: =\sum_{i,j=1\atop i<j}^{k}\left[\langle\widehat{G}_{[i,j],t}- \widehat{M}_{[\hat{i},j],t}\rangle\langle\widehat{M}_{[\hat{j},\hat{j}],t} \rangle+\langle\widehat{M}_{[\hat{i},\hat{j}],t}\rangle\langle\widehat{G}_{[ \hat{j},\hat{i}],t}-\widehat{M}_{[\hat{j},\hat{i}],t}\rangle+\langle\widehat{G }_{[\hat{i},\hat{j}],t}-\widehat{M}_{[\hat{i},\hat{j}],t}\rangle\langle \widehat{G}_{[\hat{j},\hat{i}],t}-\widehat{M}_{[\hat{j},\hat{i}],t}\rangle \right],\] \[\Omega_{2}: =\sum_{i,j=1\atop i<j}^{k}\left[\langle\widehat{G}_{[i^{*},\hat{ j}],t}-\widehat{M}_{[\hat{i}^{*},\hat{j}],t}\rangle\langle\widehat{M}_{[\hat{j}^{*}, \hat{j}],t}\rangle+\langle\widehat{M}_{[\hat{i}^{*},\hat{j}],t}\rangle\langle \widehat{G}_{[\hat{j}^{*},\hat{j}],t}-\widehat{M}_{[\hat{j}^{*},\hat{j}],t} \rangle+\langle\widehat{G}_{[\hat{i}^{*},\hat{j}],t}-\widehat{M}_{[\hat{i}^{*},\hat{j}],t}\rangle\langle\widehat{G}_{[\hat{j}^{*},\hat{i}],t}-\widehat{M}_{[ \hat{j}^{*},\hat{i}],t}\rangle\right],\] \[\Omega_{3}: =\sum_{i,j=1\atop i<j}^{k}\left[\langle\widehat{G}_{[\hat{i}, \hat{j}],t}-\widehat{M}_{[\hat{i},\hat{j}],t}\rangle\langle\widehat{M}_{[\hat{j} ^{*},\hat{i}],t}\rangle+\langle\widehat{M}_{[\hat{i},\hat{j}],t}\rangle\langle \widehat{G}_{[\hat{j}^{*},\hat{i}],t}-\widehat{M}_{[\hat{j}^{*},\hat{i}],t} \rangle+\langle\widehat{G}_{[\hat{i},\hat{j}],t}-\widehat{M}_{[\hat{j},\hat{j} ],t}\rangle\langle\widehat{G}_{[\hat{j}^{*},\hat{i}],t}-\widehat{M}_{[\hat{j}^{*},\hat{j}],t}\rangle+\langle\widehat{G}_{[\hat{i},\hat{j}],t}-\widehat{M}_{[ \hat{j},\hat{j}],t}\rangle\langle\widehat{G}_{[\hat{j}^{*},\hat{i}],t}-\widehat{ M}_{[\hat{j}^{*},\hat{j}],t}\rangle\right],\] \[\Omega_{4}: =\sum_{i,j=1\atop i<j}^{k}\left[\langle\widehat{G}_{[i^{*},\hat{ j}],t}-\widehat{M}_{[\hat{i}^{*},\hat{j}],t}\rangle\langle\widehat{M}_{[\hat{j},\hat{i}],t} \rangle+\langle\widehat{M}_{[\hat{i}^{*},\hat{j}],t}\rangle\langle\widehat{G}_{[ \hat{j},\hat{i}],t}-\widehat{M}_{[\hat{j},\hat{i}],t}\rangle+\langle\widehat{G }_{[\hat{i}^{*},\hat{j}],t}-\widehat{M}_{[\hat{i}^{*},\hat{j}],t}\rangle\langle \widehat{G}_{[\hat{j},\hat{i}],t}-\widehat{M}_{[\hat{j},\hat{i}],t}\rangle\right],\] \[\Omega_{\sigma}: =\frac{\sigma}{N}\sum_{i,j=1\atop i\leq j}^{k}\left[\langle( \widehat{G}_{[\hat{i},\hat{j}],t}G_{[\hat{j}],t,t}^{\dagger})+\langle G_{[i^{*},\hat{j}],t}G_{[\hat{j}^{*},\hat{j}],t}^{\dagger}\rangle+\langle G_{[\hat{i},\hat{j}],t}G_{[\hat{j}^{*},\hat{i}],t}^{\dagger}\rangle+\langle G_{[\hat{i}^{*},\hat{j}],t}G_{[\hat{j}^{*},\hat{i}],t}^{\dagger}\rangle+\langle G_{[i^{*}, \hat{j}],t}G_{[\hat{j}^{*},\hat{i}],t}^{\dagger}\rangle\right]\,.\]
Observe that the flow (4.9) for imaginary parts \(\operatorname{Im}\,G\) contains many more terms compared to a flow for plain resolvents \(G\) (see (4.5)). This is a simple consequence of the fact that each time an \(\operatorname{Im}\,G\) is differentiated it creates two terms, i.e. \(\partial_{ab}\operatorname{Im}\,G=G\Delta^{ab}\operatorname{Im}\,G+\operatorname{Im}\,G\Delta^{ab}G^{*}\), with \(\Delta^{ab}\) being the matrix consisting of all zeroes except for the \((a,b)\)-entry, which is equal to one. Furthermore, the novel last term in (4.9) comes from applying a Ward identity, \(GG^{*}=\operatorname{Im}\,G/\operatorname{Im}\,z\). We now write out the flow of the random part \(\mathrm{d}\langle\widehat{G}_{[\hat{1},\hat{k}],t}A_{k}\rangle\) in (4.9) for the simpler cases \(k=1\) and \(k=2\) to show its main structure. Here we use that \(\widehat{M}_{[\hat{1}],t}=\operatorname{Im}\,m_{1,t}\) with \(m_{i}:=m(z_{i,t})\).
**Example 4.2**.: For \(k=1\) we have the evolution
\[\begin{split}\mathrm{d}\langle\mathrm{Im}\,GA\rangle&= \sum_{a,b=1}^{N}\partial_{ab}\langle\mathrm{Im}\,GA\rangle\frac{\mathrm{d}B_{ab }}{\sqrt{N}}+\left(\frac{1}{2}+\frac{\langle\mathrm{Im}\,G-\mathrm{Im}\,m \rangle}{\mathrm{Im}\,z_{t}}\right)\langle\mathrm{Im}\,GA\rangle+\langle G-m \rangle\langle\mathrm{Im}\,GAG\rangle\\ &\quad+\overline{\langle G-m\rangle}\langle\mathrm{Im}\,GAG^{*} \rangle+\frac{\sigma}{N}\langle\mathrm{Im}\,GAGG^{\mathrm{t}}\rangle+\frac{ \sigma}{N}\langle(G^{*})^{\mathrm{t}}G^{*}A\mathrm{Im}\,GA\rangle+\frac{ \sigma}{N}\langle\mathrm{Im}\,G^{\mathrm{t}}G^{*}AG\rangle\,,\end{split} \tag{4.11}\]
and for \(k=2\) we get (to keep the formula somewhat short, we assume \(\sigma=0\))
\[\begin{split}\mathrm{d}\langle\mathrm{Im}\,G_{1}A_{1}\mathrm{ Im}\,G_{2}A_{2}\rangle&=\sum_{a,b=1}^{N}\partial_{ab}\langle \mathrm{Im}\,G_{1}A_{1}\mathrm{Im}\,G_{2}A_{2}\rangle\frac{\mathrm{d}B_{ab}}{ \sqrt{N}}+\langle\mathrm{Im}\,G_{1}A_{1}\mathrm{Im}\,G_{2}A_{2}\rangle\\ &\quad+\left(\frac{\langle\mathrm{Im}\,G_{1}-\mathrm{Im}\,m_{1} \rangle}{\mathrm{Im}\,z_{1,t}}+\frac{\langle\mathrm{Im}\,G_{2}-\mathrm{Im}\,m _{2}\rangle}{\mathrm{Im}\,z_{2,t}}\right)\langle\mathrm{Im}\,G_{1}A_{1} \mathrm{Im}\,G_{2}A_{2}\rangle+\langle G_{2}^{*}A_{2}G_{1}\rangle\langle \mathrm{Im}\,G_{1}A_{1}\mathrm{Im}\,G_{2}\rangle\\ &\quad+\langle G_{1}^{*}A_{1}G_{2}\rangle\langle\mathrm{Im}\,G_{2 }A_{2}\mathrm{Im}\,G_{1}\rangle+\langle\mathrm{Im}\,G_{1}A_{1}G_{2}\rangle \langle\mathrm{Im}\,G_{2}A_{2}G_{1}\rangle+\langle G_{2}^{*}A_{2}\mathrm{Im} \,G_{1}\rangle\langle G_{1}^{*}A_{1}\mathrm{Im}\,G_{2}\rangle\\ &\quad+\langle G_{1}-m_{1}\rangle\langle\mathrm{Im}\,G_{1}A_{1} \mathrm{Im}\,G_{2}A_{2}G_{1}\rangle+\langle G_{2}-m_{2}\rangle\langle\mathrm{Im }\,G_{2}A_{2}\mathrm{Im}\,G_{1}A_{1}G_{2}\rangle\\ &\quad+\langle G_{1}^{*}-\overline{m_{1}}\rangle\langle\mathrm{Im }\,G_{1}A_{1}\mathrm{Im}\,G_{2}A_{2}G_{1}^{*}\rangle+\langle G_{2}^{*}- \overline{m_{2}}\rangle\langle\mathrm{Im}\,G_{2}A_{2}\mathrm{Im}\,G_{1}A_{1}G _{2}^{*}\rangle.\end{split} \tag{4.12}\]
Note that (4.11)-(4.12) combined with (4.8) give (4.9) for the special cases \(k=1,2\).
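The two algebraic identities used repeatedly above, the differentiation rule for \(\operatorname{Im}G\) and the Ward identity \(GG^{*}=\operatorname{Im}G/\operatorname{Im}z\), can be checked directly on a small random matrix. The following Python sketch is purely illustrative, with our own choice of matrix size and spectral parameter; note the overall sign in the derivative, which is irrelevant for the counting of terms above.

```python
import numpy as np

# Illustrative numerical check (not part of the proof): Ward identity and the
# differentiation rule for Im G entering the derivation of the flow (4.9).
rng = np.random.default_rng(0)
N = 60
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (H + H.conj().T) / np.sqrt(2 * N)          # Wigner-type normalization

def resolvent(M, z):
    return np.linalg.inv(M - z * np.eye(M.shape[0]))

z = 0.3 + 0.1j
G = resolvent(H, z)
ImG = (G - G.conj().T) / 2j

# Ward identity: G G^* = Im G / Im z
print(np.max(np.abs(G @ G.conj().T - ImG / z.imag)))          # ~ machine precision

# Directional derivative of Im G along a Hermitian perturbation D:
# d(Im G) = -(G D Im G + Im G D G^*), i.e. the two-term structure quoted above.
a, b, eps = 3, 7, 1e-6
D = np.zeros((N, N), dtype=complex)
D[a, b] = D[b, a] = 1.0
Gp = resolvent(H + eps * D, z)
num_deriv = ((Gp - Gp.conj().T) / 2j - ImG) / eps
analytic = -(G @ D @ ImG + ImG @ D @ G.conj().T)
print(np.max(np.abs(num_deriv - analytic)))                   # small finite-difference error
```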
### Proof of Proposition 3.3 (a) for pure \(\mathrm{Im}\,G\)-chains
The goal of this section is to prove
\[\langle\widehat{G}_{[\widehat{1},\widehat{k}],T}A_{k}\rangle-\langle\widehat{M}_{[\widehat{1},\widehat{k}],T}A_{k}\rangle=\langle\widehat{G}_{[\widehat{1},\widehat{k}],0}A_{k}\rangle-\langle\widehat{M}_{[\widehat{1},\widehat{k}],0}A_{k}\rangle+\mathcal{O}_{\prec}\left(\Big{(}\prod_{i\in[k]}\rho_{i,T}\Big{)}\frac{N^{k/2-1}}{\sqrt{N\hat{\ell}_{T}}}\,\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\right), \tag{4.13}\]
uniformly in the spectrum and in the choice of traceless matrices \(A_{i}\). We may assume that all the \(A_{i}\)'s are Hermitian; the general case follows by multilinearity.
#### 4.1.1. Master inequalities
For the purpose of proving (4.13), recall the notation \(\hat{\ell}_{t}=\min_{i\in[k]}\eta_{i,t}\rho_{i,t}\) from (4.3) and define
\[\Phi_{1}(t):=\frac{N\sqrt{\hat{\ell}_{t}}}{\rho_{t}\langle|A|^{2}\rangle^{1/2}}\big{|}\langle G_{t}A\rangle\big{|}; \tag{4.14a}\]
and for \(k\geq 2\)
\[\Phi_{k}(t):=\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\Big{(}\prod_{i\in[k]}\rho_{i,t}\Big{)}\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}}\big{|}\langle(\widehat{G}_{[\hat{1},\hat{k}],t}-\widehat{M}_{[\hat{1},\hat{k}],t})A_{k}\rangle\big{|}. \tag{4.14b}\]
Note that we defined \(\Phi_{1}(t)\) in a slightly different way than \(\Phi_{k}(t)\) for \(k\geq 2\); this is a consequence of the fact that for \(k=1\) we have \(|\langle GA\rangle|\sim|\langle\mathrm{Im}\,GA\rangle|\), i.e. for this special case the imaginary part does not reduce the fluctuation, unlike for longer chains (see also Remark 2.6 (ii)). The prefactors in (4.14) are chosen such that we expect \(\Phi_{k}(t)\) to be an essentially order one quantity, see (4.13). The goal is to show exactly this, i.e. that \(\Phi_{k}(t)\prec 1\), uniformly in time \(t\leq T\), for any \(k\geq 1\). Note that by (3.8) it follows that
\[\Phi_{k}(0)\prec 1, \tag{4.15}\]
for any \(k\geq 1\).
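As an illustration of the normalization in (4.14a), the following sketch (with hypothetical parameter choices of our own, e.g. a diagonal traceless \(A\) and an \(\eta\) well above \(1/N\)) computes \(\Phi_{1}\) for a single sampled GUE-like matrix; the printed value is typically of order one or smaller, consistent with the target \(\Phi_{1}\prec 1\).

```python
import numpy as np

# Sketch: empirical size of Phi_1 = N*sqrt(eta*rho)/(rho*<|A|^2>^{1/2}) * |<G A>|
# for one sample; the parameter choices here are illustrative only.
rng = np.random.default_rng(1)
N = 500
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (H + H.conj().T) / np.sqrt(2 * N)

A = np.diag(rng.standard_normal(N))
A -= (np.trace(A) / N) * np.eye(N)                 # make A traceless

E, eta = 0.2, N ** (-0.5)                          # N*eta*rho >> 1 in this regime
z = E + 1j * eta
G = np.linalg.inv(H - z * np.eye(N))

m = np.trace(G) / N                                # ~ Stieltjes transform m(z)
rho = abs(m.imag) / np.pi
GA = np.trace(G @ A) / N                           # <G A>, normalized trace
A2 = (np.trace(A.conj().T @ A) / N).real           # <|A|^2>

Phi1 = N * np.sqrt(eta * rho) / (rho * np.sqrt(A2)) * abs(GA)
print(Phi1)                                        # typically O(1) or smaller
```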
To prove \(\Phi_{k}(t)\prec 1\), we will derive a series of _master inequalities_ for these quantities with the following structure. We assume that
\[\Phi_{k}(t)\prec\phi_{k} \tag{4.16}\]
holds for some deterministic control parameter \(\phi_{k}\), _uniformly_ in \(0\leq t\leq T\), in spectral parameters satisfying \(N\hat{\ell}_{t}\geq N^{\epsilon}\) and in traceless deterministic matrices \(A_{j}\) (we stress that the \(\phi_{k}\)'s depend neither on time, nor on the spectral parameters \(z_{i,t}\), nor on the matrices \(A_{j}\)). Given this input, we will then show that the \(\Phi_{k}(t)\)'s also satisfy a better upper bound in terms of the \(\phi\)'s. Iterating this procedure, we will arrive at the final bound \(\Phi_{k}(t)\prec 1\).
**Proposition 4.3** (Master inequalities).: _Fix \(k\in\mathbb{N}\) and \(t\in[0,T]\). Assume that \(\Phi_{l}(s)\prec\phi_{l}\) for any \(1\leq l\leq 2k\) uniformly in \(s\in[0,t]\), in the spectral parameters with \(N\hat{\ell}_{s}\geq N^{\epsilon}\) and in the traceless deterministic matrices \(A_{j}\). Set \(\phi_{0}:=1\). Then we have the master inequalities_
\[\Phi_{k}(t)\prec 1+\frac{\sqrt{\phi_{2k}}}{(N\hat{\ell}_{t})^{1/4}}+\frac{1} {N\hat{\ell}_{t}}\sum_{l=1}^{k}\tilde{\phi}_{l}+\frac{1}{(N\hat{\ell}_{t})^{3/ 2}}\sum_{l=1}^{k-1}\tilde{\phi}_{l}\tilde{\phi}_{k-l}+\frac{|\sigma|}{(N\hat{ \ell}_{t})^{1/4}}\sum_{l=1}^{k}\sqrt{\phi_{2l}}+\frac{|\sigma|}{N\hat{\ell}_{t }}\sum_{l=0}^{k}\sqrt{\phi_{2l}\phi_{2(k-l)}}\,, \tag{4.17}\]
_where we introduced the shorthand notation_
\[\tilde{\phi}_{l}:=\phi_{l}+\mathbf{1}(l\ \text{is\ \ odd})\sqrt{\phi_{l+1}\phi _{l-1}},\quad\text{for}\quad l\in[k-1]\,. \tag{4.18}\]
Using the master inequalities, we conclude this section with the proof of (3.9) for pure \(\operatorname{Im}G\) chains.
Proof of Proposition 3.3 (a) for pure \(\operatorname{Im}G\) chains.: We now consider the master inequalities (4.17) for \(t=T\), with \(T\) the time defined in the statement of Proposition 3.3.
We use a two-step induction. The base case consists of the cases \(k=1,2\) (using \(|\sigma|\leq 1\)):
\[\begin{split}\Phi_{1}(T)\prec 1+\frac{\sqrt{\phi_{2}}}{(N \hat{\ell}_{T})^{1/4}}+\frac{\phi_{1}}{N\hat{\ell}_{T}},\\ \Phi_{2}(T)\prec 1+\frac{\sqrt{\phi_{4}}+\sqrt{\phi_{2}}}{(N \hat{\ell}_{T})^{1/4}}+\frac{\phi_{2}+\phi_{1}+\sqrt{\phi_{2}}}{N\hat{\ell}_{T }}+\frac{\phi_{1}^{2}+\phi_{1}\sqrt{\phi_{2}}+\phi_{2}}{(N\hat{\ell}_{T})^{3/2 }}.\end{split} \tag{4.19}\]
To estimate \(\phi_{4}\) in (4.19) we rely on the following _reduction inequality_; its proof is given in Appendix A.
**Lemma 4.4** (Reduction inequality).: _Fix \(k\geq 2\), and assume that \(\Phi_{l}(t)\prec\phi_{l}\) holds uniformly12 in \(t\in[0,T]\) for \(0\leq l\leq 2k\). Then_
Footnote 12: Here and in the sequel, when we say that such a relation involving \(\Phi(t)\) "holds uniformly", we mean uniformly in traceless deterministic matrices \(A_{i}\)'s and in all spectral parameters satisfying \(N\hat{\ell}_{t}\geq N^{\epsilon}\).
\[\Phi_{2k}(T)\prec\begin{cases}(N\hat{\ell}_{T})^{1/2}+\frac{1}{(N\hat{\ell}_{T })^{1/2}}\phi_{k}^{2}&\text{$k$ even}\\ (N\hat{\ell}_{T})^{1/2}+\phi_{k-1}+\phi_{k+1}+\frac{1}{(N\hat{\ell}_{T})^{1/2} }\phi_{k+1}\phi_{k-1}&\text{$k$ odd}.\end{cases} \tag{4.20}\]
The following abstract iteration lemma shows how to use the master inequalities for improving the bound on \(\Phi\).
**Lemma 4.5** (Iteration).: _Let \(X=X_{N}(\hat{\ell})\) be an \(N\)-dependent random variable depending also on the parameter \(\hat{\ell}\). Fix \(\epsilon,\delta>0\). Suppose that for any \(l\in\mathbb{N}\) and any \(x>0\) the fact that \(X\prec x\) uniformly for \(\hat{\ell}\geq N^{-1+l\epsilon}\) implies_
\[X\prec A+\frac{x}{B}+x^{1-\alpha}C^{\alpha}, \tag{4.21}\]
_uniformly for \(\hat{\ell}\geq N^{-1+(l+l^{\prime})\epsilon}\), for some constants \(l^{\prime}\in\mathbb{N}\), \(B\geq N^{\delta}>0\), \(A,C>0\), and \(\alpha\in(0,1)\), and suppose we also know that \(X\prec N^{D}\) uniformly13 in \(\hat{\ell}\geq N^{-1+\epsilon}\). Then_
Footnote 13: We remark that \(D,\delta,\alpha\) are \(N\)–independent constants, all the other quantities may depend on \(N\).
\[X\prec A+C,\]
_uniformly for \(\hat{\ell}\geq N^{-1+(1+\kappa l^{\prime})\epsilon}\), for some \(\kappa=\kappa(\alpha,D,\delta)\)._
Proof.: The proof is a simple iteration of (4.21) \(\kappa\) times; it is immediate to see that \(\kappa\) depends only on \(\alpha,D,\delta\).
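The mechanism of Lemma 4.5 can be seen on a toy example: starting from a crude a priori bound of size \(N^{D}\), each application of (4.21) shrinks the bound until it stabilizes at the order of \(A+C\). The following sketch, with arbitrary illustrative values of \(A\), \(B\), \(C\), \(\alpha\) chosen by us, shows the contraction numerically.

```python
# Toy illustration of the iteration in Lemma 4.5 (values chosen for illustration only):
# repeatedly apply  x -> A + x/B + x^(1-alpha) * C^alpha  starting from a crude bound.
A, B, C, alpha = 2.0, 100.0, 5.0, 0.5      # B plays the role of the N^delta gain
x = 1.0e8                                   # crude a priori bound, x ~ N^D
for step in range(30):
    x = A + x / B + x ** (1 - alpha) * C ** alpha
    if step % 5 == 0 or step == 29:
        print(step, round(x, 2))
# after a few steps x stabilizes at a value comparable to A + C
```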
Notice that using Lemma 4.5 reduces the domain of parameters \(\eta_{i},\rho_{i}\) for which the master inequalities (4.17) hold, e.g. from \(\hat{\ell}_{T}\geq N^{-1+l\epsilon}\) to \(\hat{\ell}_{T}\geq N^{-1+(l+l^{\prime})\epsilon}\), and so on. However, this can happen only finitely many times, and so it does not affect the estimates in the sense of stochastic domination, which always allows for a small \(N\)-power tolerance; this tolerance can be achieved by adjusting \(\epsilon\) small enough. For simplicity, we ignore this subtlety here; see [21, Sections 4.1-4.3] for a more detailed explanation.
Using iteration from Lemma 4.5 and the reduction inequality (4.20) for \(k=2\) we obtain
\[\Phi_{1}(T)\prec 1+\frac{\sqrt{\phi_{2}}}{(N\hat{\ell}_{T})^{1/4}}\quad\text{and} \quad\Phi_{2}(T)\prec 1+\frac{\phi_{1}}{N\hat{\ell}_{T}}+\frac{\phi_{1}^{2}}{(N\hat{\ell}_{T})^{3/2 }}.\]
Then, plugging the first relation into the second, and using iteration again we conclude
\[\Phi_{1}(T)\prec 1\quad\text{and}\quad\Phi_{2}(T)\prec 1\,.\]
To prove the same relation for \(\Phi_{l}(T)\) with \(l\geq 3\), we use a step-two induction. Fix an even \(k\geq 4\) and assume as our induction hypothesis that \(\Phi_{l}(T)\prec 1\) for any \(1\leq l\leq k-2\). We now prove that \(\Phi_{l}(T)\prec 1\) also holds for \(l=k-1,k\). From (4.17), using \(N\hat{\ell}_{T}\geq 1\) and the induction hypothesis \(\Phi_{l}(T)\prec\phi_{l}:=1\) for \(1\leq l\leq k-2\), we have
\[\Phi_{k-1}(T)\prec 1+\frac{\sqrt{\phi_{2(k-1)}}}{(N\hat{\ell}_{T})^{1/4}}+ \frac{\phi_{k-1}+\sqrt{\phi_{k}}}{N\hat{\ell}_{T}}+\frac{1}{(N\hat{\ell}_{T})^ {1/4}}\sum_{l=k/2}^{k-1}\sqrt{\phi_{2l}}\,,\] \[\Phi_{k}(T)\prec 1+\frac{\sqrt{\phi_{2k}}}{(N\hat{\ell}_{T})^{1/4}}+ \frac{\phi_{k}+\phi_{k-1}+\sqrt{\phi_{k}}}{N\hat{\ell}_{T}}+\frac{1}{(N\hat{ \ell}_{T})^{1/4}}\sum_{l=k/2}^{k}\sqrt{\phi_{2l}}\,.\]
Then using (4.20) and iteration from Lemma 4.5 together with \(\phi_{l}=1\) for any \(1\leq l\leq k-2\) and \(N\hat{\ell}_{T}\geq 1\), we obtain
\[\Phi_{k-1}(T)\prec 1+\frac{\sqrt{\phi_{k}}}{(N\hat{\ell}_{T})^{1/4}}\quad\text{ and}\quad\Phi_{k}(T)\prec 1+\frac{\phi_{k-1}}{N\hat{\ell}_{T}}\,.\]
Plugging the first relation into the second, we obtain by iteration that
\[\Phi_{k-1}(T)\prec 1\qquad\text{and}\qquad\Phi_{k}(T)\prec 1\,.\]
This concludes the induction step and hence the proof of Proposition 3.3 (a) modulo the proof of the master inequalities, Proposition 4.3, that will be done next.
#### 4.1.2. Proof of Proposition 4.3
As a preparation for the proof of the master inequalities (Proposition 4.3), we recall that \(t\mapsto\eta_{i,t}\) is decreasing and \(\rho_{i,s}\sim\rho_{i,t}\) for any \(0\leq s\leq t\lesssim 1\) (see (3.6), (4.2), and the paragraphs around).
Proof of Proposition 4.3.: We begin with the case \(k=1\). Hence, for \(A_{1}=A\), we start by rewriting the flow (4.11) with \(\operatorname{Im}G\) replaced by \(G=G_{t}(z_{t})\) (recall (4.14)):
\[\mathrm{d}\langle GA\rangle=\sum_{a,b=1}^{N}\partial_{ab}\langle GA\rangle\frac{\mathrm{d}B_{ab}}{\sqrt{N}}+\frac{1}{2}\langle GA\rangle\mathrm{d}t+\langle G-m\rangle\langle G^{2}A\rangle\mathrm{d}t+\frac{\sigma}{N}\langle GAGG^{\mathrm{t}}\rangle\mathrm{d}t\,. \tag{4.22}\]
We point out that the additional term \(\frac{1}{2}\langle GA\rangle\) in the rhs. of (4.22) can be incorporated into the lhs. by differentiating \(e^{-t/2}\langle GA\rangle\); the extra exponential factor is irrelevant since \(e^{t/2}\sim 1\) for our times \(t\lesssim 1\). Note that the same argument applies to the term
\[\frac{k}{2}\langle(\widehat{G}_{[\hat{1},\hat{k}],t}-\widehat{M}_{[\hat{1}, \hat{k}],t})A_{k}\rangle\]
appearing in (4.9) for general \(k\). We are now ready to obtain the master inequality for \(\Phi_{1}(t)\).
Assume \(\Phi_{k}(t)\prec\phi_{k}\) for \(k=1,2\), in the sense of uniformity explained after (4.16) (recall that \(\Phi_{1}(0)\prec 1\) by (4.15)), and we will prove improved bounds on \(\Phi_{1}(t)\). We first consider the third summand in (4.22). Here, we use the integral representation (see also [21, Lemma 5.1])
\[G^{2}(z)=\frac{1}{2\pi\mathrm{i}}\oint_{\Gamma}\frac{G(w)}{(w-z)^{2}}\mathrm{d}w\,, \tag{4.23}\]
which simply follows from residue calculus. Here, \(\Gamma\) is a tiny circle of radius \(|\mathrm{Im}\,z|/2\) around \(z\in\mathbb{C}\setminus\mathbb{R}\), which ensures that \(|\mathrm{Im}\,w||\mathrm{Im}\,m(w)|\sim|\mathrm{Im}\,z||\mathrm{Im}\,m(z)|\) as follows by elementary continuity properties of \(m(w)\). In this way, applying (4.23) for every fixed time \(s\leq t\) and using the fact that the deterministic approximation of \(\langle G^{2}A\rangle\) vanishes as \(\langle A\rangle=0\), we obtain (with the \(G_{s}:=G_{s}(z_{s})\), \(m_{s}:=m(z_{s})\) notation)
\[\big{|}\langle G_{s}^{2}A\rangle\big{|}\prec\frac{1}{\eta_{s}}\frac{\rho_{s} \langle|A|^{2}\rangle^{1/2}}{N\sqrt{\hat{\ell}_{s}}}\phi_{1}\,.\]
Hence, in combination with the single resolvent local law \(|\langle G_{s}-m_{s}\rangle|\prec 1/(N\eta_{s})\), we find
\[\frac{N\sqrt{\hat{\ell}_{t}}}{\rho_{t}\langle|A|^{2}\rangle^{1/2}}\int_{0}^{t} \langle G_{s}-m_{s}\rangle\langle G_{s}^{2}A\rangle\,\mathrm{d}s\prec\frac{N \sqrt{\hat{\ell}_{t}}}{\rho_{t}\langle|A|^{2}\rangle^{1/2}}\int_{0}^{t}\phi_{1 }\,\,\frac{\rho_{s}\langle|A|^{2}\rangle^{1/2}}{N^{2}\eta_{s}^{2}\hat{\ell}_{s }^{1/2}}\,\mathrm{d}s\lesssim\frac{\phi_{1}}{N\hat{\ell}_{t}}\log N. \tag{4.24}\]
In the last step we used the integration estimate (4.3) and the fact that along the characteristics \(\hat{\ell}_{s}\gtrsim\hat{\ell}_{t}\) for \(0\leq s\leq t\). The prefactor \(N\sqrt{\hat{\ell}_{t}}/(\rho_{t}\langle|A|^{2}\rangle^{1/2})\) is included in anticipation of the same prefactor in the definition of \(\Phi_{1}\) in (4.14).
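The contour representation (4.23), which drives the estimate (4.24), can be verified numerically on a small Hermitian matrix. The sketch below, with an illustrative matrix and spectral parameter of our own choosing, discretizes the circle \(\Gamma\) of radius \(|\operatorname{Im}z|/2\) around \(z\) and compares the integral with \(G(z)^{2}\).

```python
import numpy as np

# Sketch: numerical check of G(z)^2 = (2*pi*i)^{-1} * contour integral of G(w)/(w-z)^2,
# cf. (4.23); the contour is a circle of radius |Im z|/2 around z, away from the real axis.
rng = np.random.default_rng(2)
N = 40
H = rng.standard_normal((N, N))
H = (H + H.T) / np.sqrt(2 * N)

def G(w):
    return np.linalg.inv(H - w * np.eye(N))

z = 0.1 + 0.2j
r = abs(z.imag) / 2
M = 1000                                           # quadrature points on the circle
total = np.zeros((N, N), dtype=complex)
for k in range(M):
    w = z + r * np.exp(2j * np.pi * k / M)
    dw = 1j * r * np.exp(2j * np.pi * k / M) * (2 * np.pi / M)
    total += G(w) / (w - z) ** 2 * dw
total /= 2j * np.pi
print(np.max(np.abs(total - G(z) @ G(z))))         # essentially zero
```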
Then we proceed with the estimate of the quadratic variation of the martingale term in (4.22):
\[\frac{1}{N}\sum_{a,b=1}^{N}\big{[}|\partial_{ab}\langle G_{s}A\rangle|^{2}+\sigma\partial_{ab}\langle G_{s}A\rangle\overline{\partial_{ba}\langle G_{s}A\rangle}\big{]}\mathrm{d}t \lesssim\frac{1}{N^{3}}\sum_{a,b=1}^{N}|(G_{s}AG_{s})_{ab}|^{2}\mathrm{d}t\] \[=\frac{1}{N^{2}}\langle G_{s}AG_{s}G_{s}^{*}AG_{s}^{*}\rangle\mathrm{d}t=\frac{1}{N^{2}\eta_{s}^{2}}\langle\mathrm{Im}\,G_{s}A\mathrm{Im}\,G_{s}A\rangle\mathrm{d}t,\]
where we used that \(\mathrm{d}[B_{ab},\overline{B_{cd}}]=\delta_{ac}\delta_{bd}+\sigma\delta_{ad} \delta_{bc}\) and the Ward identity \(GG^{*}=\frac{\mathrm{Im}\,G}{\mathrm{Im}\,z}\). Then, we write
\[\langle\mathrm{Im}\,G_{s}A\mathrm{Im}\,G_{s}A\rangle=\langle\widehat{M}_{[ \widehat{1},\widehat{2}],s}A\rangle+\left(\langle\mathrm{Im}\,G_{s}A\mathrm{ Im}\,G_{s}A\rangle-\langle\widehat{M}_{[\widehat{1},\widehat{2}],s}A\rangle \right)\prec\rho_{s}^{2}\langle|A|^{2}\rangle+\frac{\rho_{s}^{2}\langle|A|^{2 }\rangle}{\sqrt{N\hat{\ell}_{s}}}\phi_{2}\,.\]
Here we used that the deterministic approximation \(\langle\widehat{M}_{[\widehat{1},\widehat{2}],s}A\rangle\) is bounded by \(\rho_{s}^{2}\langle|A|^{2}\rangle\) and we used (4.14) together with \(\Phi_{2}(s)\prec\phi_{2}\). For the time integration of the quadratic variation term, with the appropriate prefactor, we obtain
\[\begin{split}\frac{N\sqrt{\hat{\ell}_{t}}}{\rho_{t}\langle|A|^{2 }\rangle^{1/2}}&\left(\int_{0}^{t}\frac{\langle\mathrm{Im}\,G_{ s}A\mathrm{Im}\,G_{s}A\rangle}{N^{2}\eta_{s}^{2}}\,\mathrm{d}s\right)^{1/2} \\ &\sim\frac{N\sqrt{\hat{\ell}_{t}}}{\rho_{t}\langle|A|^{2}\rangle^ {1/2}}\left(\int_{0}^{t}\frac{\rho_{s}^{2}\langle|A|^{2}\rangle}{N^{2}\eta_{s} ^{2}}\left(1+\frac{\phi_{2}}{(N\hat{\ell}_{s})^{1/2}}\right)\,\mathrm{d}s \right)^{1/2}\lesssim 1+\frac{\sqrt{\phi_{2}}}{(N\hat{\ell}_{t})^{1/4}}\,.\end{split} \tag{4.25}\]
Here, in the last inequality, we used that along the characteristics \(\hat{\ell}_{s}\gtrsim\hat{\ell}_{t}\) for \(0\leq s\leq t\), together with the integration rule (4.3). Using the Burkholder-Davis-Gundy (BDG) inequality, we conclude that the stochastic term in (4.22) satisfies, with high probability, the same estimate (4.25) as its quadratic variation.
Next, we estimate the last term in the rhs. of (4.22):
\[\begin{split}\frac{N\sqrt{\hat{\ell}_{t}}}{\rho_{t}\langle|A|^{2 }\rangle^{1/2}}\int_{0}^{t}\frac{|\sigma|}{N}\big{|}\langle G_{s}AG_{s}G_{s}^{ *}\rangle\big{|}\,\mathrm{d}s&\leq\frac{N\sqrt{\hat{\ell}_{t}}} {\rho_{t}\langle|A|^{2}\rangle^{1/2}}\int_{0}^{t}\frac{1}{N\eta_{s}^{3/2}} \langle\mathrm{Im}\,G_{s}A\mathrm{Im}\,G_{s}A\rangle^{1/2}\langle\mathrm{Im}\, G_{s}\rangle^{1/2}\,\mathrm{d}s\\ &\prec\frac{N\sqrt{\hat{\ell}_{t}}}{\rho_{t}\langle|A|^{2}\rangle ^{1/2}}\int_{0}^{t}\frac{\rho_{s}^{1/2}}{N\eta_{s}^{3/2}}\left(\langle|A|^{2} \rangle\rho_{s}^{2}+\frac{\langle|A|^{2}\rangle\rho_{s}^{2}\phi_{2}}{\sqrt{N \hat{\ell}_{s}}}\right)^{1/2}\,\mathrm{d}s\\ &\lesssim 1+\frac{\sqrt{\phi_{2}}}{(N\hat{\ell}_{t})^{1/4}},\end{split} \tag{4.26}\]
where in the first inequality we used the Schwarz inequality together with several Ward identities, and in the second inequality the single resolvent local law \(|\langle G_{s}-m_{s}\rangle|\prec 1/(N\eta_{s})\) to show that \(\langle\mathrm{Im}\,G_{s}\rangle\prec\rho_{s}\) (recall that we consider the regime \(N\eta_{s}\rho_{s}\geq N^{\epsilon}\), so \(1/(N\eta_{s})\leq\rho_{s}\)).
Putting all these estimates together, and using that \(\Phi_{1}(0)\prec 1\) by (4.15) to bound the initial condition after integration, we obtain the first master inequality
\[\Phi_{1}(t)\prec 1+\frac{\phi_{1}}{N\hat{\ell}_{t}}+\frac{\sqrt{\phi_{2}}}{(N \hat{\ell}_{t})^{1/4}}, \tag{4.27}\]
again in the sense of uniformity explained after (4.16).
For the proof of the master inequalities (4.17) with \(k\geq 2\), a fundamental input for the estimates of the various terms in (4.9) is the following _\(G^{2}\)-Lemma_. Recall that even if we are interested only in pure \(\mathrm{Im}\,G\) chains, their evolution equation (4.9) necessarily contains mixed chains as well. The \(G^{2}\)-Lemma turns them back to pure \(\mathrm{Im}\,G\) chains. It expresses how to estimate _not_ strictly alternating purely \(\mathrm{Im}\,G\) chains in terms of strictly alternating purely \(\mathrm{Im}\,G\) chains based upon the integral representation (4.1). Note that this formula involves the non-analytic function \(\mathrm{Im}\,G\) hence simple and flexible contour deformations are prohibited, contrary to the \(k=1\) case, where we did not care about preserving \(\mathrm{Im}\,G\)'s and the contour integral (4.23) with the analytic \(G(z)\) was applicable.
For brevity we will state the \(G^{2}\)-Lemma for spectral parameters \(z_{1},...,z_{k}\) without time dependence, but eventually we will use them for \(z_{1,t},...,z_{k,t}\) at any fixed time along the flow. The proof is given in Section 4.1.3 below.
**Lemma 4.6** (\(G^{2}\)-Lemma).: _Fix \(k\geq 2\). Let \(i,j\in[k]\) with \(j-i\geq 1\) and assume that \(\Phi_{l}\prec\phi_{l}\) holds uniformly (in the sense explained after (4.16)) for some control parameters \(\phi_{l}\geq 1\) for \(l=1,2,\ldots,k\). Then, for all versions of \(\widehat{G}_{[i^{\#},j^{\#}]}\) and \(\widehat{M}_{[i^{\#},j^{\#}]}\), i.e. for any choice of \(\#\) indicating star (adjoint), hat (imaginary part) or simply no 'decoration', we have the following:14_
Footnote 14: Note that we use the \(\prec\)-notation also for purely deterministic quantities. The reason is that it conveniently absorbs irrelevant \(|\log\eta|\lesssim(\log N)\)-factors coming from slightly singular integrals, see Footnote 15.
\[\left|\langle\widehat{M}_{[i^{\#},j^{\#}]}\rangle\right|\prec\left(\frac{\rho_ {i}\rho_{j}}{\eta_{i}\eta_{j}}\right)^{1/2}\Big{(}\prod_{n=i+1}^{j-1}\rho_{n} \Big{)}\,N^{\frac{j-i}{2}-1}\,\Big{(}\prod_{m=i}^{j-1}\langle|A_{m}|^{2} \rangle^{1/2}\Big{)} \tag{4.28}\]
_and (the decorations at the indices \(i\) and \(j\) on \(\widehat{G}\) and on \(\widehat{M}\) have to be matching)_
\[\left|\langle\widehat{G}_{[i^{\#},j^{\#}]}-\widehat{M}_{[i^{\#},j^{\#}]} \rangle\right|\prec\left(\frac{\rho_{i}\rho_{j}}{\eta_{i}\eta_{j}}\right)^{1/ 2}\,\Big{(}\prod_{n=i+1}^{j-1}\rho_{n}\Big{)}\,\frac{N^{\frac{j-i}{2}-1}}{ \sqrt{N\hat{\ell}}}\,\Big{(}\prod_{m=i}^{j-1}\langle|A_{m}|^{2}\rangle^{1/2} \Big{)}\tilde{\phi}_{j-i}\,, \tag{4.29}\]
_where we used the notation \(\tilde{\phi}_{j-i}=\phi_{j-i}+\mathbf{1}(j-i\ \mathrm{odd})\sqrt{\phi_{j-i-1} \phi_{j-i+1}}\) (as in (4.18))._
_Moreover, it holds that (now \(\#\) indicates star (adjoint) or no 'decoration')_
\[\left|\langle\widehat{G}_{[\hat{1},\hat{k}]}^{(i^{\#})}A_{k}\rangle\right| \prec\left(\frac{\rho_{i}}{\eta_{i}}\right)^{1/2}\left|\langle\mathrm{Im}\,G _{i}\big{(}A_{i}\mathrm{Im}\,G_{i+1}...A_{i-1}\big{)}\mathrm{Im}\,G_{i}\big{(} A_{i}\mathrm{Im}\,G_{i+1}...A_{i-1}\big{)}^{*}\rangle\right|^{1/2}. \tag{4.30}\]
Since all resolvent chains and their \(M\)-approximations are multi-linear in the \(A\)'s, by a simple scaling we may assume, without loss of generality, that \(\langle|A_{j}|^{2}\rangle=1\) for all \(j\in[k]\). This shortens some formulas.
We start our estimates on \(\Phi_{k}(t)\) with bounding the quadratic variation of the martingale term in (4.9):
\[\frac{1}{N}\sum_{a,b=1}^{N}\left[\big{|}\partial_{ab}\langle\widehat{G}_{[\hat{1},\hat{k}]}A_{k}\rangle\big{|}^{2}+\sigma\partial_{ab}\langle\widehat{G}_{[\hat{1},\hat{k}]}A_{k}\rangle\overline{\partial_{ba}\langle\widehat{G}_{[\hat{1},\hat{k}]}A_{k}\rangle}\right]\]
where we used the bound in (4.30) together with the usual single resolvent local law \(|\langle G_{i,s}-m_{i,s}\rangle|\prec(N\eta_{i,s})^{-1}\) and applied reasoning similar to that for (4.31).
Then, we estimate the terms in \(\Omega_{\sigma}\) of (4.9). For \(j\neq i\) we have
\[\begin{split}&\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\left(\prod_{i \in[k]}\rho_{i,t}\right)}\int_{0}^{t}\frac{1}{N}\big{|}\langle G_{[\widehat{i},j],s}G^{\mathrm{t}}_{[\widehat{j},i],s}\rangle\big{|}\,\mathrm{d}s\\ \leq&\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\left( \prod_{i\in[k]}\rho_{i,t}\right)}\int_{0}^{t}\frac{1}{N}\langle G_{[\widehat{i },j],s}G^{*}_{[\widehat{j},i],s}\rangle^{1/2}\langle G^{*}_{[\widehat{j},i],s}G _{[\widehat{j},i],s}\rangle^{1/2}\,\mathrm{d}s\\ \prec&\sqrt{N\hat{\ell}_{t}}\int_{0}^{t}\frac{1}{N \eta_{i,s}\eta_{j,s}}\left(1+\frac{\phi_{2(j-i)}}{\sqrt{N\hat{\ell}_{t}}} \right)^{1/2}\left(1+\frac{\phi_{2(k-j+i)}}{\sqrt{N\hat{\ell}_{t}}}\right)^{1 /2}\,\mathrm{d}s\\ \lesssim&\frac{1}{\sqrt{N\hat{\ell}_{t}}}+\frac{ \sqrt{\phi_{2(j-i)}}}{(N\hat{\ell}_{t})^{3/4}}+\frac{\sqrt{\phi_{2(k-j+i)}}}{( N\hat{\ell}_{t})^{3/4}}+\frac{\sqrt{\phi_{2(j-i)}\phi_{2(k-j+i)}}}{N\hat{\ell}_{t}} \,,\end{split} \tag{4.33}\]
where in the first inequality we used Schwarz and in the second inequality the Ward identity (see (4.26) for similar computations in a simpler case). Similarly, for \(j=i\) we get a bound \(1+\sqrt{\phi_{2k}}/(N\hat{\ell}_{t})^{1/4}\). To combine these two cases in a simpler bound we just estimate
\[\begin{split}\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\left(\prod_ {i\in[k]}\rho_{i,t}\right)}&\int_{0}^{t}\frac{1}{N}\big{|} \langle G_{[\widehat{i},j],s}G^{\mathrm{t}}_{[\widehat{j},i],s}\rangle\big{|} \,\mathrm{d}s\\ &\lesssim\frac{1}{\sqrt{N\hat{\ell}_{t}}}+\frac{\sqrt{\phi_{2(j-i )}}}{(N\hat{\ell}_{t})^{1/4}}+\frac{\sqrt{\phi_{2(k-j+i)}}}{(N\hat{\ell}_{t}) ^{1/4}}+\frac{\sqrt{\phi_{2(j-i)}\phi_{2(k-j+i)}}}{N\hat{\ell}_{t}}\,.\end{split} \tag{4.34}\]
We are now left with the terms \(\Omega_{1},\Omega_{2},\Omega_{3},\Omega_{4}\) of (4.9). We write out the estimates for \(\Omega_{1}\) as all the other \(\Omega_{a}\), \(a=2,3,4\), are completely analogous. Using (4.28)-(4.29) for \(i<j\) we estimate
\[\begin{split}&\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\left(\prod_ {i\in[k]}\rho_{i,t}\right)}\int_{0}^{t}\big{|}\langle\widehat{G}_{[\widehat{i},j],s}-\widehat{M}_{[\widehat{i},j],s}\rangle\langle\widehat{M}_{[\widehat{j},i],s}\rangle\big{|}\,\mathrm{d}s\\ &\prec\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\left(\prod_{i\in[ k]}\rho_{i,t}\right)}\int_{0}^{t}\frac{N^{(j-i)/2-1}}{\sqrt{N\hat{\ell}_{s}}} \Big{(}\prod_{n\in[i+1,j-1]}\rho_{n,s}\Big{)}\,\frac{\rho_{i,s}\rho_{j,s}}{\eta _{i,s}\eta_{j,s}}\Big{(}\prod_{n\in[i,j]^{c}}\rho_{n,s}\Big{)}N^{(k-j+i)/2-1} \tilde{\phi}_{j-i}\,\mathrm{d}s\\ &\lesssim\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\left(\prod_{i\in[ k]}\rho_{i,t}\right)}\int_{0}^{t}\frac{\tilde{\phi}_{j-i}}{N\eta_{s}^{2}}\frac{N^{k/2-1}}{ \sqrt{N\hat{\ell}_{s}}}\Big{(}\prod_{i\in[k]}\rho_{i,s}\Big{)}\,\mathrm{d}s \lesssim\frac{\tilde{\phi}_{j-i}}{N\hat{\ell}_{t}},\end{split} \tag{4.35}\]
where \([i,j]^{c}:=[1,i-1]\cup[j+1,k]\). Similarly, we bound
\[\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\left(\prod_{i\in[k]}\rho_{i,t}\right)} \int_{0}^{t}\big{|}\langle\widehat{G}_{[\widehat{i},j],s}-\widehat{M}_{[ \widehat{i},j],s}\rangle\langle\widehat{G}_{[\widehat{j},i],s}-\widehat{M}_{[ \widehat{j},i],s}\rangle\big{|}\,\mathrm{d}s\prec\frac{\tilde{\phi}_{j-i} \tilde{\phi}_{k-j+i}}{(N\hat{\ell}_{t})^{3/2}}\,. \tag{4.36}\]
Finally, we estimate the last term in the last line of the rhs. of (4.9) as
\[\frac{\sqrt{N\hat{\ell}_{t}}}{N^{k/2-1}\left(\prod_{i\in[k]}\rho_{i,t}\right)} \int_{0}^{t}\bigg{|}\langle\widehat{G}_{[\widehat{1},\widehat{k}],s}A_{k} \rangle\frac{\langle\operatorname{Im}G_{i,s}-\operatorname{Im}m_{i,s}\rangle}{ \eta_{i,s}}\bigg{|}\,\mathrm{d}s\prec\frac{1}{\sqrt{N\hat{\ell}_{t}}}+\frac{ \phi_{k}}{N\hat{\ell}_{t}}\,, \tag{4.37}\]
where we again used the usual single resolvent local law, the integration rule (4.3) and
\[\big{|}\langle\widehat{G}_{[\widehat{1},\widehat{k}],s}A_{k}\rangle\big{|}\prec N^{k/2-1}\left(\prod_{i\in[k]}\rho_{i,s}\right)\left(1+\frac{\phi_{k}}{ \sqrt{N\hat{\ell}_{s}}}\right)\,.\]
Putting all these estimates (4.31)-(4.37) together, we thus conclude (4.17). This finishes the proof of Proposition 4.3, modulo the proof of Lemma 4.6 that will be done next.
#### 4.1.3. Proof of Lemma 4.6
As a preparation for our proof, we observe that the estimate (2.17) (modulo logarithmic corrections in \(\ell\)) holds true even if the condition \(N\ell\geq 1\) with
\[\ell=\min_{i}[\eta_{i}(\rho_{i}+\mathbf{1}(i\notin\mathfrak{I}_{k}))]=\eta_{i_{\text{min}}}(\rho_{i_{\text{min}}}+\mathbf{1}(i_{\text{min}}\notin\mathfrak{I}_{k}))\]
is violated, but the _second smallest_
\[\ell_{2}:=\min_{i\neq i_{\text{min}}}[\eta_{i}(\rho_{i}+\mathbf{1}(i\notin \mathfrak{I}_{k}))]\]
satisfies \(N\ell_{2}\geq 1\). More precisely, under this weaker assumption, we still have that
\[|\langle\mathcal{M}(z_{1},A_{1},...,A_{k-1},z_{k};\mathfrak{I}_{k})A_{k} \rangle|\lesssim\left(1+\mathbf{1}(i_{\text{min}}\notin\mathfrak{I}_{k})|\log \ell|\right)\left(\prod_{i\in\mathfrak{I}_{k}}\rho_{i}\right)N^{k/2-1}\prod_{ j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\,. \tag{4.38}\]
This simply follows by realizing that the key estimate within the proof of (2.17), namely (A.4) in Appendix A, can alternatively be estimated as
\[\left|m^{(\mathfrak{I}_{k})}[S]\right|\lesssim\left(1+\mathbf{1}(i_{\text{ min}}\notin\mathfrak{I}_{k},i_{\text{min}}\in S)|\log\ell|\right)\frac{\prod_{i \in S\cap\mathfrak{I}_{k}}\rho_{i}}{\ell_{2}^{|S|-1}},\]
and following the steps leading to the proof of Lemma 2.3 (a).15 We now turn to the actual proof of Lemma 4.6 and again assume that, by simple scaling, \(\langle|A_{m}|^{2}\rangle=1\) for all \(m\in[k]\).
Footnote 15: The logarithmic corrections stem from the estimate \(\int_{\mathbb{R}}\frac{\rho(x)}{|x-z|}\mathrm{d}x\lesssim 1+\big{|}\log|\mathrm{Im}\,z|\big{|}\) (cf. (2.16)).
We start with the proof of (4.28) for both \(\#\)'s indicating no decoration and assuming, for definiteness, that \(\eta_{i}=\mathrm{Im}\,z_{i}>0\) and \(\eta_{j}=\mathrm{Im}\,z_{j}>0\); all other cases can be treated similarly and are hence omitted. In this case, we use the integral representation [18, Eq. (3.15)] (which simply follows from (2.15)-(2.16) using multilinearity)16
Footnote 16: Alternatively, this can also be obtained using (4.1) for \(m=2\): The resolvent chain, which is approximated by \(\langle\widetilde{M}_{[i,j]}\rangle\) contains a \(G_{j}G_{i}\)-factor after cyclicity of the trace. Applying (4.1) for \(m=2\) to this part of the chain and using a _meta argument_ like in Appendix A.4, we can also conclude (4.39).
\[\big{\langle}\widehat{M}_{[i,j]}\big{\rangle}=\frac{1}{\pi}\int_{\mathbb{R}} \frac{\big{\langle}\widehat{M}(x+\mathrm{i}\zeta,A_{i},z_{i+1},...,z_{j-1})A_ {j-1}\big{\rangle}}{(x-z_{i}+\mathrm{i}\zeta)(x-z_{j}+\mathrm{i}\zeta)} \mathrm{d}x \tag{4.39}\]
with \(\zeta:=(\eta_{i}\wedge\eta_{j})/2\). To estimate the \(x\)-integration in (4.39), we will apply the following basic lemma, which shall frequently be used in the sequel. Its proof is omitted, as it follows from a simple Hölder inequality and elementary calculus using basic properties of \(\rho(z)\).
**Lemma 4.7**.: _Under the setting and assumptions described above, for any \(\alpha\in[0,1]\), we have that_
\[\frac{1}{\zeta^{\alpha}}\int_{\mathbb{R}}\frac{\big{(}\rho(x+\mathrm{i}\zeta) \big{)}^{1-\alpha}}{|x-z_{i}+\mathrm{i}\zeta||x-z_{j}+\mathrm{i}\zeta|} \mathrm{d}x\prec\frac{1}{(\eta_{i}\eta_{j})^{1/2}}\left(\frac{\rho_{i}\rho_{j}} {\big{(}(\eta_{i}\rho_{i})(\eta_{j}\rho_{j})\big{)}^{\alpha}}\right)^{1/2}\,. \tag{4.40}\]
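For the semicircle law, the estimate (4.40) can also be probed numerically. The sketch below, with illustrative parameters of our own, evaluates the left-hand side by a Riemann sum and compares it with the right-hand side; the left-hand side indeed stays below the right-hand side, in line with the \(\prec\) bound that additionally allows for logarithmic factors.

```python
import numpy as np

# Sketch: numerical probe of the integral bound (4.40) for the semicircle law.
def m_sc(w):
    # Stieltjes transform of the semicircle law, branch with Im m_sc > 0 for Im w > 0
    s = np.sqrt(w * w - 4.0 + 0j)
    s = np.where(s.imag > 0, s, -s)
    return (-w + s) / 2.0

def rho(w):
    return m_sc(w).imag / np.pi

alpha = 0.5
z_i, z_j = 0.3 + 1e-3j, -0.5 + 2e-3j
eta_i, eta_j = z_i.imag, z_j.imag
rho_i, rho_j = float(rho(z_i)), float(rho(z_j))
zeta = min(eta_i, eta_j) / 2

x = np.linspace(-10.0, 10.0, 2_000_001)
dx = x[1] - x[0]
integrand = rho(x + 1j * zeta) ** (1 - alpha) / (np.abs(x - z_i + 1j * zeta) * np.abs(x - z_j + 1j * zeta))
lhs = integrand.sum() * dx / zeta ** alpha
rhs = (rho_i * rho_j / ((eta_i * rho_i) * (eta_j * rho_j)) ** alpha) ** 0.5 / np.sqrt(eta_i * eta_j)
print(lhs, rhs)        # lhs stays below rhs for these parameters
```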
Therefore, plugging in (4.38) with \(\mathbf{1}(...)=0\) for the numerator in (4.39) and then using (4.40), we obtain
\[\left|\langle\widehat{M}_{[i,j]}\rangle\right|\lesssim\left(\prod_{n=i+1}^{j- 1}\rho_{n}\right)N^{(j-i)/2-1}\int_{\mathbb{R}}\frac{\rho(x+\mathrm{i}\zeta)}{ |x-z_{i}+\mathrm{i}\zeta||x-z_{j}+\mathrm{i}\zeta|}\mathrm{d}x\prec\left( \frac{\rho_{i}\rho_{j}}{\eta_{i}\eta_{j}}\right)^{1/2}\left(\prod_{n=i+1}^{j-1} \rho_{n}\right)N^{(j-i)/2-1}\,, \tag{4.41}\]
completing the proof of (4.28).
We now turn to the proof of (4.29), again focusing on the case where both \(\#\)'s indicate no decoration and assuming that \(\eta_{i}=\mathrm{Im}\,z_{i}>0\) and \(\eta_{j}=\mathrm{Im}\,z_{j}>0\). As the first step, we apply the integral representations (4.1) and (4.39) (see [18, Eqs. (3.14) and (3.15)]) to find
\[\left|\langle\widehat{G}_{[i,j]}-\widehat{M}_{[i,j]}\rangle\right|\lesssim\int_ {\mathbb{R}}\frac{\big{|}\big{\langle}\big{(}\mathrm{Im}\,G(x+\mathrm{i}\zeta)A _{i}\widehat{G}_{[\widehat{i+1},\widehat{j-1}]}-\widehat{M}(x+\mathrm{i}\zeta,A _{i},...)\big{)}A_{j-1}\big{\rangle}|}{|x-z_{i}+\mathrm{i}\zeta||x-z_{j}+\mathrm{ i}\zeta|}\mathrm{d}x \tag{4.42}\]
with \(\zeta=(\eta_{i}\wedge\eta_{j})/2\) and split the integral into an _above the scale_ and a _below the scale_ part. This concept refers to spectral regimes \(x\in\mathbb{R}\) where the given \(\zeta\) is larger or smaller than the typical eigenvalue spacing \((N\rho(x+\mathrm{i}\zeta))^{-1}\). More precisely, we fix an arbitrarily small \(\xi>0\) and decompose \(\mathbb{R}\) into17
Footnote 17: To be precise, in the integral (4.42) we first need to cut-off the regime where \(|x|\geq N^{100}\), say, and estimate this contribution by a simple norm bound using that the spectrum of the Wigner matrix is contained in \([-2-\epsilon,2+\epsilon]\) with very high probability [29]. Such technicality about the irrelevant, very far out \(x\)-regime will henceforth be ignored.
\[\left\{x:N\rho(x+\mathrm{i}\zeta)\zeta\geq N^{\xi}\right\}\dot{\cup}\left\{x:N \rho(x+\mathrm{i}\zeta)\zeta<N^{\xi}\right\}=:I_{\mathrm{above}}\,\dot{\cup}I _{\mathrm{below}}\,. \tag{4.43}\]
For the _above the scale_ part, we use that \(\Phi_{j-i}\prec\phi_{j-i}\) and estimate this part of the integral (4.42) by
\[\int_{I_{\mathrm{above}}}\frac{\rho(x+\mathrm{i}\zeta)}{|x-z_{i}+\mathrm{i}\zeta||x-z_{j}+\mathrm{i}\zeta|}\left(\prod_{n=i+1}^{j-1}\rho_{n}\right)\frac{N^{(j-i)/2-1}}{\sqrt{N\hat{\ell}(x)}}\phi_{j-i}\mathrm{d}x\,, \tag{4.44}\]
where we emphasized that now \(\hat{\ell}(x)=\zeta\rho(x+\mathrm{i}\zeta)\wedge\min_{n\in[i+1,j-1]}\eta_{n} \rho_{n}\) depends on the integration variable \(x\) since the integrated chain in (4.42) contains a resolvent at spectral parameter \(x+\mathrm{i}\zeta\). Next, we further split \(I_{\mathrm{above}}\) into two parts \(I_{\mathrm{above}}=I_{\mathrm{above},=\dot{\cup}I_{\mathrm{above},<}}\) with
\[I_{\mathrm{above},=}:=\left\{x:\hat{\ell}(x)=\rho(x+\mathrm{i}\zeta)\zeta \right\}\quad\text{and}\quad I_{\mathrm{above},<}:=\left\{x:\hat{\ell}(x)< \rho(x+\mathrm{i}\zeta)\zeta\right\}\!, \tag{4.45}\]
depending on whether the minimum is attained at the special spectral argument \(x+\mathrm{i}\zeta\) or not, and estimate each of them separately. In this way, we obtain the contribution from \(I_{\mathrm{above},=}\) to (4.44) to equal
\[\frac{1}{\sqrt{N}}\left[\frac{1}{\zeta^{1/2}}\int_{I_{\mathrm{above},=}}\frac {\left(\rho(x+\mathrm{i}\zeta)\right)^{1/2}}{|x-z_{i}+\mathrm{i}\zeta||x-z_{j} +\mathrm{i}\zeta|}\mathrm{d}x\right]\rho_{i+1}\ldots\rho_{j-1}N^{(j-i)/2-1} \phi_{j-i}\,. \tag{4.46}\]
By means of Lemma 4.7 with \(\alpha=1/2\) applied to the integral in \(\left[\,\cdots\,\right]\), this can be bounded as
\[\frac{1}{(\eta_{i}\eta_{j})^{1/2}}\frac{1}{\sqrt{N}}\sum_{s=1}^{j-i}\frac{ \sqrt{\rho_{i}}\,\rho_{i+1}\ldots\rho_{j-1}\sqrt{\rho_{j}}}{\sqrt{(\eta_{i} \rho_{i})^{1/2}}\sqrt{(\eta_{j}\rho_{j})^{1/2}}}N^{(j-i)/2-1}\phi_{j-i}\leq \left(\frac{\rho_{i}\rho_{j}}{\eta_{i}\eta_{j}}\right)^{1/2}\left(\prod_{n=i+ 1}^{j-1}\rho_{n}\right)\frac{N^{(j-i)/2-1}}{\sqrt{N\hat{\ell}}}\phi_{j-i}\,. \tag{4.47}\]
For \(I_{\mathrm{above},<}\) the argument is completely analogous, yielding exactly the same bound as in (4.47). This completes the bound for the _above the scale_ part.
For the _below the scale_ part, we estimate the two terms in the numerator in (4.42) separately; in this regime the local law is anyway not effective in the sense that \(G-M\) is not smaller than \(G\). For the \(\widehat{M}\)-term, we recall the bound (4.38), and estimate
\[\int_{I_{\mathrm{below}}}\frac{\left|\langle\widehat{M}(x+\mathrm{ i}\zeta,A_{i},...)A_{j-1}\rangle\right|}{|x-z_{i}+\mathrm{i}\zeta||x-z_{j}+ \mathrm{i}\zeta|}\mathrm{d}x\lesssim N^{(j-i)/2-1}\rho_{i+1}\ldots\rho_{j-1} \left[\int_{I_{\mathrm{below}}}\frac{\rho(x+\mathrm{i}\zeta)}{|x-z_{i}+ \mathrm{i}\zeta||x-z_{j}+\mathrm{i}\zeta|}\mathrm{d}x\right]\] \[\prec \frac{1}{(\eta_{i}\eta_{j})^{1/2}}\frac{1}{N}\frac{\sqrt{\rho_{i}} \,\rho_{i+1}\ldots\rho_{j-1}\,\sqrt{\rho_{j}}}{(\eta_{i}\rho_{i})^{1/2}(\eta _{j}\rho_{j})^{1/2}}N^{(j-i)/2-1}\lesssim\left(\frac{\rho_{i}\rho_{j}}{\eta_{i} \eta_{j}}\right)^{1/2}\left(\prod_{n=i+1}^{j-1}\rho_{n}\right)\frac{N^{(j-i)/ 2-1}}{N\hat{\ell}}\,. \tag{4.48}\]
To go from the second to the third line, we used that \(\rho(x+\mathrm{i}\zeta)\zeta\prec N^{-1}\) for \(x\in I_{\mathrm{below}}\) (recall that \(\xi>0\) in the definition (4.43) may be chosen arbitrarily small) and employed Lemma 4.7 with \(\alpha=1\). In the ultimate step, we utilized \(\eta_{i}\rho_{i}\wedge\eta_{j}\rho_{j}\geq\hat{\ell}\) together with \(N\hat{\ell}\gtrsim 1\). This concludes the discussion of the \(\widehat{M}\)-term.
Next, we turn to the \(\widehat{G}\)-term in (4.42) in the regime \(x\in I_{\mathrm{below}}\) and first focus on the case where \(j-i\) is even. Here, we employ a Schwarz inequality in order to be able to exploit
\[\begin{split}\big{|}\big{\langle}\mathrm{Im}\,G(x+\mathrm{i} \zeta)& A_{i}\widehat{G}_{\widehat{[i+1,\widehat{j-1}]}}A_{j-1} \big{\rangle}\big{|}\\ &\leq\frac{\zeta_{x}}{\zeta}&\big{|}\big{\langle} \mathrm{Im}\,G(x+\mathrm{i}\zeta_{x})(A_{i}...\mathrm{Im}\,G_{r-1}A_{r-1}) \mathrm{Im}\,G_{r}(A_{i}...\mathrm{Im}\,G_{r-1}A_{r-1})^{*}\big{\rangle}\big{|}^ {1/2}\\ &\qquad\times\big{|}\big{\langle}\mathrm{Im}\,G_{r}(A_{r}... \mathrm{Im}\,G_{j-1}A_{j-1})\mathrm{Im}\,G(x+\mathrm{i}\zeta_{x})(A_{r}... \mathrm{Im}\,G_{j-1}A_{j-1})^{*}\big{\rangle}\big{|}^{1/2}\end{split} \tag{4.49}\]
where \(\zeta_{x}>\zeta\) is implicitly defined via \(N\rho(x+\mathrm{i}\zeta_{x})\zeta_{x}=N^{\xi}\) and we denoted \(r:=(i+j)/2\). After application of a Schwarz inequality, we find this part of (4.42) to be bounded by
\[\begin{split}&\left(\int_{I_{\mathrm{below}}}\frac{\zeta_{x}}{ \zeta}\frac{\big{|}\langle\mathrm{Im}\,G(x+\mathrm{i}\zeta_{x})(A_{i}... \mathrm{Im}\,G_{r-1}A_{r-1})\mathrm{Im}\,G_{\frac{j+i}{2}}(A_{i}...\mathrm{Im} \,G_{r-1}A_{r-1})^{*}\rangle\big{|}}{|x-z_{i}+\mathrm{i}\zeta||x-z_{j}+ \mathrm{i}\zeta|}\mathrm{d}x\right)^{1/2}\\ &\times\left(\int_{I_{\mathrm{below}}}\frac{\zeta_{x}}{\zeta} \frac{\big{|}\langle\mathrm{Im}\,G_{r}(A_{r}...\mathrm{Im}\,G_{j-1}A_{j-1}) \mathrm{Im}\,G(x+\mathrm{i}\zeta_{x})(A_{r}...\mathrm{Im}\,G_{j-1}A_{j-1})^{* }\rangle\big{|}}{|x-z_{i}+\mathrm{i}\zeta||x-z_{j}+\mathrm{i}\zeta|}\mathrm{ d}x\right)^{1/2}\end{split} \tag{4.50}\]
Adding and subtracting the respective \(\widehat{M}\)-terms for both resolvent chains in (4.50), we are left with two terms for each integral. For concreteness, we estimate the one in the first line of (4.50); the second line is treated in exactly the same way. The first line in (4.50) is bounded by (the square root of)
\[\begin{split}\frac{1}{\zeta}\int_{I_{\mathrm{below}}}\mathrm{d}x \frac{\zeta_{x}\rho(x+\mathrm{i}\zeta_{x})}{|x-z_{i}+\mathrm{i}\zeta||x-z_{j}+ \mathrm{i}\zeta|}&\left(\prod_{n=i+1}^{r-1}\rho_{n}\right)^{2} \rho_{r}\,N^{\frac{j-i}{2}-1}(1+\phi_{j-i})\\ &\sim(1+\phi_{j-i})\left(\frac{\rho_{i}\rho_{j}}{\eta_{i}\eta_{j} }\right)^{1/2}\left(\prod_{n=i+1}^{r-1}\rho_{n}\right)^{2}\rho_{r}\,\frac{N^{ \frac{j-i}{2}-1}}{N\hat{\ell}}\,.\end{split}\]
Here, we used that \(N\rho(x+\mathrm{i}\zeta_{x})\zeta_{x}=N^{\xi}\) for arbitrarily small \(\xi>0\) and employed Lemma 4.7 (with \(\alpha=1\)) in estimates analogous to (4.47) and (4.48). Combining this with the identical estimate for the second line of (4.50) and using \(N\hat{\ell}\geq 1\) and \(\phi_{j-i}\geq 1\), we finally deduce that
(4.51)
For \(j-i\) being odd, only the monotonicity argument (4.49) is different:
\[\begin{split}\big{|}\langle\mathrm{Im}\,G(x+\mathrm{i}\zeta)A_{ i}&\widehat{G}_{[\widehat{i+1},\widehat{j-1}]}A_{j-1}\rangle\big{|}\\ &\leq\frac{\zeta_{x}}{\zeta}\big{|}\langle\mathrm{Im}\,G(x+ \mathrm{i}\zeta_{x})(A_{i}...\mathrm{Im}\,G_{r-1}A_{r-1})\mathrm{Im}\,G_{r}(A_ {i}...\mathrm{Im}\,G_{r-1}A_{r-1})^{*}\rangle\big{|}^{1/2}\\ &\qquad\qquad\times\big{|}\langle\mathrm{Im}\,G_{r}(A_{r+1}... \mathrm{Im}\,G_{j-1}A_{j-1})\mathrm{Im}\,G(x+\mathrm{i}\zeta_{x})(A_{r+1}... \mathrm{Im}\,G_{j-1}A_{j-1})^{*}\rangle\big{|}^{1/2}\,,\end{split}\]
where we now denoted \(r:=(i+j+1)/2\). This asymmetry in the lengths of the resolvent chains now leads to the term \(\sqrt{\phi_{j-i+1}\phi_{j-i-1}}\) in (4.29); the rest of the argument is identical.
Finally, we turn to the proof of (4.30). Again, we focus on the case where \(\#\) indicates no decoration. By application of a Schwarz inequality, we find
(4.52)
where in the last step we used the Ward identity \(GG^{*}=\mathrm{Im}\,G/\eta\) together with the usual single resolvent local law applied to \(\mathrm{Im}\,G_{i}\). This concludes the proof of Lemma 4.6, which was the last missing piece for the proof of Proposition 3.3 (a) for pure \(\mathrm{Im}\,G\) chains.
### Proof of Proposition 3.3 (b) for pure \(\mathrm{Im}\,G\)-chains
In this section, we briefly explain how to derive Proposition 3.3 (b) from Proposition 3.3 (a). For fixed spectral parameters and bounded deterministic vectors \(\|\mathbf{x}\|\,,\|\mathbf{y}\|\lesssim 1\), we have
\[\Big{|}\langle\mathbf{x},(\widehat{G}_{[\widehat{1},\widehat{k+1}]}-\widehat{M}_{[ \widehat{1},\widehat{k+1}]})\mathbf{y}\rangle\Big{|}\lesssim\Big{|}\langle( \widehat{G}_{[\widehat{1},\widehat{k+1}]}-\widehat{M}_{[\widehat{1},\widehat{k+ 1}]})A_{k+1}\rangle\Big{|}+\Big{|}\langle\widehat{G}_{[\widehat{1},\widehat{k+ 1}]}-\widehat{M}_{[\widehat{1},\widehat{k+1}]}\rangle\Big{|} \tag{4.53}\]
with the special choice \(A_{k+1}:=N\mathbf{y}\mathbf{x}^{*}-\langle\mathbf{x},\mathbf{y}\rangle\). Next, using that \(\langle|A_{k+1}|^{2}\rangle^{1/2}\lesssim N^{1/2}\) we find from Proposition 3.3 (a) for pure \(\operatorname{Im}G\) chains the first term in (4.53) to be bounded as
\[\left|\langle(\widehat{G}_{[\hat{1},\widehat{k+1}]}-\widehat{M}_{[\hat{1}, \widehat{k+1})})A_{k+1}\rangle\right|\prec\Big{(}\prod_{i\in[k+1]}\rho_{i} \Big{)}\frac{N^{k/2}}{\sqrt{N\hat{\ell}}}\prod_{j\in[k]}\langle|A_{j}|^{2} \rangle^{1/2}\,.\]
For the second term, we apply (4.29) from Lemma 4.6 (note that by Proposition 3.3 (a) for pure \(\operatorname{Im}G\) chains, we have \(\Phi_{k}\prec\phi_{k}:=1\) and hence also \(\tilde{\phi}_{k}=1\)) and obtain
\[\left|\langle\widehat{G}_{[\hat{1},\widehat{k+1}]}-\widehat{M}_{[\hat{1}, \widehat{k+1})}\rangle\right|\prec\left(\frac{\rho_{1}\rho_{k+1}}{\eta_{1}\eta _{k+1}}\right)^{1/2}\Big{(}\prod_{i=2}^{k}\rho_{i}\Big{)}\frac{N^{k/2-1}}{ \sqrt{N\hat{\ell}}}\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\leq\Big{(} \prod_{i\in[k+1]}\rho_{i}\Big{)}\frac{N^{k/2}}{\sqrt{N\hat{\ell}}}\prod_{j\in[ k]}\langle|A_{j}|^{2}\rangle^{1/2}\,,\]
where in the last step we used \(\eta_{1}\rho_{1}\wedge\eta_{k+1}\rho_{k+1}\geq\hat{\ell}\) and \(N\hat{\ell}\geq 1\). This concludes the proof of Proposition 3.3 (b) for pure \(\operatorname{Im}G\) chains.
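The reduction from isotropic to averaged bounds above rests on a purely algebraic identity for the special choice \(A_{k+1}=N\mathbf{y}\mathbf{x}^{*}-\langle\mathbf{x},\mathbf{y}\rangle\). A minimal check of this identity, illustrative only and with random \(B\), \(\mathbf{x}\), \(\mathbf{y}\) of our own choosing:

```python
import numpy as np

# Sketch: for A = N*y*x^* - <x,y>*Id (traceless), one has <x, B y> = <B A> + <x,y><B>
# for any matrix B, which is the identity behind the estimate (4.53).
rng = np.random.default_rng(3)
N = 50
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)

A = N * np.outer(y, x.conj()) - np.vdot(x, y) * np.eye(N)
print(abs(np.trace(A)))                                        # A is traceless
lhs = np.vdot(x, B @ y)                                        # <x, B y>
rhs = np.trace(B @ A) / N + np.vdot(x, y) * np.trace(B) / N    # <B A> + <x,y><B>
print(abs(lhs - rhs))                                          # essentially zero
```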
### Proof of Proposition 3.3 (b) for mixed chains
We consider mixed resolvent chains
\[\mathcal{G}_{1}A_{1}...\mathcal{G}_{k}A_{k}\]
with \(\mathcal{G}_{j}\in\{G_{j},\operatorname{Im}G_{j}\}\) and traceless matrices \(A_{1},...,A_{k}\in\mathbb{C}^{N\times N}\), and explain how the respective bounds in (2.20)-(2.21) are obtained from the multi-resolvent local law for pure \(\operatorname{Im}G\)-chains derived in Sections 4.1-4.2. We will henceforth focus on the average case; the isotropic bounds can immediately be obtained from those by following Section 4.2.
Recalling
\[\ell=\min_{j\in[k]}\big{[}\eta_{j}(\rho_{j}+\mathbf{1}(j\notin\mathfrak{I}_{k})) \big{]}\]
where \(\mathfrak{I}_{k}\) denotes the set of indices \(j\in[k]\) where \(\mathcal{G}_{j}=\operatorname{Im}G_{j}\), the goal of this section is to prove that
\[\left|\langle\mathcal{G}_{1}A_{1}...\mathcal{G}_{k}A_{k}\rangle-\langle \mathcal{M}_{[1,k]}A_{k}\rangle\right|\prec\left[\left(\prod_{i\in\mathfrak{I} _{k}}\rho_{i}\right)\wedge\max_{i\in[k]}\sqrt{\rho_{i}}\right]\,\frac{N^{k/2-1 }}{\sqrt{N\hat{\ell}}}\,\prod_{j\in[k]}\langle|A_{j}|^{2}\rangle^{1/2}\,. \tag{4.54}\]
In order to do so, we iteratively apply the integral representation (4.1) with \(m=1\) for every \(\mathcal{G}_{j}\) such that \(j\notin\mathfrak{I}_{k}\). In Section 4.3.1, this procedure will immediately yield the claimed bound (4.54) for \(\mathfrak{I}_{k}\neq\emptyset\) (recall from Remark 2.6 (ii) that in this case the minimum in (4.54) is always realized by the product). In the complementary case, \(\mathfrak{I}_{k}=\emptyset\), which has already been studied in [19], the outcome of iteratively applying (4.1) is the natural continuation of the pattern obtained for \(\mathfrak{I}_{k}\neq\emptyset\). However, in this way we only find the weaker bound, where in (4.54) the minimum \(\big{[}...\wedge...\big{]}\) is replaced by one. The improvement to include the small factor \(\max_{i\in[k]}\sqrt{\rho_{i}}\) requires a short separate argument, which we provide in Section 4.3.2.
#### 4.3.1. The case \(\mathfrak{I}_{k}\neq\emptyset\)
For concreteness, we consider the case where \(\mathfrak{I}_{k}=[k-1]\), i.e. \(\mathcal{G}_{k}=G_{k}\) with \(\operatorname{Im}z_{k}>0\) w.l.o.g. and all other \(\mathcal{G}\)'s are \(\operatorname{Im}G\)'s. Then, using the integral representation (4.1) with \(m=1\) and \(\eta=\zeta=\operatorname{Im}z_{k}/2\), and its analog for the deterministic approximation (see [18, Eqs. (3.14) and (3.15)] and (4.42) above), we find that
\[\begin{split}|\langle\operatorname{Im}&\,G_{1}A_{1}...G_{k}A_{k}\rangle-\langle\mathcal{M}(z_{1},A_{1},...,z_{k};[k-1])A_{k} \rangle|\\ &\quad\lesssim\int_{\mathbb{R}}\frac{|\langle\operatorname{Im}G_{1 }A_{1}...\operatorname{Im}G(x+\mathrm{i}\zeta)A_{k}\rangle-\langle\mathcal{M} (z_{1},A_{1},...,x+\mathrm{i}\zeta;[k])A_{k}\rangle|}{|x-z_{k}+\mathrm{i} \zeta|}\,\mathrm{d}x\end{split}\]
We then follow the steps in the proof of Lemma 4.6 starting from (4.42) in order to estimate the integral. In particular, we split the integration region into \(I_{\mathrm{above}}\) and \(I_{\mathrm{below}}\), just as in (4.43). In the treatment of these regimes, the two main differences compared to the proof of Lemma 4.6 are the following:
1. We use the \(M\)-bound in (4.38) with logarithmic corrections, which can be absorbed into \(\prec\).
2. Lemma 4.7 gets replaced by the bound \[\int_{\mathbb{R}}\frac{\big{(}\rho(x+\mathrm{i}\zeta)\big{)}^{\alpha}}{|x-z_{k}+ \mathrm{i}\zeta|}\,\mathrm{d}x\prec 1\qquad\text{for all}\qquad\alpha>0\,,\] which can easily be seen using that \(\operatorname{Im}z_{k}\geq N^{-1}\) and \(\rho(w)\) decays polynomially as \(|w|\to\infty\).
For example, instead of (4.46) we estimate (recall that \(I_{\rm above}\) is further split into \(I_{\rm above,=}\) and \(I_{\rm above,<}\) in (4.45))
\[\frac{1}{\sqrt{N}}\left[\frac{1}{\zeta^{1/2}}\int_{I_{\rm above,=}}\!\!\!\frac{ \big{(}\rho(x+{\rm i}\zeta)\big{)}^{1/2}}{|x-z_{k}+{\rm i}\zeta|}{\rm d}x \right]\rho_{1}\ldots\rho_{k-1}N^{k/2-1}\prec\left(\prod_{i\in[k-1]}\rho_{i} \right)\,\frac{N^{k/2-1}}{\sqrt{N\ell}}\,,\]
neglecting the product of Hilbert-Schmidt norms. We point out that, compared to the estimates in the pure \(\operatorname{Im}G\)-case, now \(\ell:=\min_{j\in[k]}\big{[}\eta_{j}(\rho_{j}+\mathbf{1}(j\neq k))\big{]}\) enters and \(\rho_{k}\) has disappeared from the rhs. As a result, we find the claimed bound (4.54) for \(\mathfrak{I}_{k}=[k-1]\). All other cases with \(\mathfrak{I}_{k}\neq\emptyset\) follow by iteratively applying this strategy. This completes the proof of Proposition 3.3 (b) if \(\mathfrak{I}_{k}\neq\emptyset\).
#### 4.3.2. The case \(\mathfrak{I}_{k}=\emptyset\)
As mentioned above, in order to obtain the improvement by \(\max_{i\in[k]}\sqrt{\rho_{i}}\), we now give a separate argument. We thereby closely follow the steps in Section 4.1 and point out only the main differences. In particular, we now use the flow (4.5), together with the following lemma proven in Appendix A.4, instead of (4.9). Here, similarly to (4.5), the absence of hats indicates that _none_ of the resolvents \(\mathcal{G}\) in the chain approximated by \(M\) is an \(\operatorname{Im}G\).
**Lemma 4.8**.: _We have_
\[\partial_{t}\langle M_{[1,k],t}A_{k}\rangle=\frac{k}{2}\langle M_{[1,k],t}A_{ k}\rangle+\sum_{\begin{subarray}{c}i,j=1,\\ i<j\end{subarray}}^{k}\langle M_{[i,j],t}\rangle\langle M_{[j,i],t}\rangle.\]
Moreover, using the shorthand notations
\[\eta_{t}:=\min_{i\in[k]}\eta_{i,t}\quad\text{and}\quad\rho_{t}:=\max_{i\in[k] }\rho_{i,t}\,,\]
we introduce the new normalized differences
\[\Psi_{k}(t):=\frac{\sqrt{N\eta_{t}}}{N^{k/2-1}\,\sqrt{\rho_{t}}\prod_{j\in[k] }\langle|A_{j}|^{2}\rangle^{1/2}}\big{|}\langle(G_{[1,k],t}-M_{[1,k],t})A_{k} \rangle\big{|} \tag{4.55}\]
for every \(k\in\mathbb{N}\). The \(\Psi_{k}\)'s introduced here are the no-\(\operatorname{Im}G\)-analogs of the \(\Phi_{k}\)'s defined in (4.14), i.e. all hats are removed and we replaced \(\hat{\ell}_{t}\to\eta_{t}\) as well as \(\prod_{i}\rho_{i,t}\to\sqrt{\rho_{t}}\).
In the following, we will derive master inequalities for the \(\Psi_{k}\)'s, analogously to Proposition 4.3. However, compared to the proof in Section 4.1, we now have two major simplifications:
1. Since the bound (4.54) for \(\mathfrak{I}_{k}\neq\emptyset\) is already proven, the contribution of the quadratic variation term in (4.5), which automatically carries two \(\operatorname{Im}G\)'s, is easily estimated as (again assuming \(\langle|A_{j}|^{2}\rangle=1\) for all \(j\in[k]\) henceforth) \[\frac{\sqrt{N\eta_{t}}}{N^{k/2-1}\,\sqrt{\rho_{t}}}\left(\int_{0 }^{t}\frac{\langle\operatorname{Im}G_{i,s}\big{(}A_{i}G_{i+1,s}...A_{i-1} \big{)}\operatorname{Im}G_{i,s}\big{(}A_{i}G_{i+1,s}...A_{i-1}\big{)}^{s} \rangle}{N^{2}\eta_{i,s}^{2}}\,{\rm d}s\right)^{1/2}\] \[\qquad\qquad\prec\frac{\sqrt{N\eta_{t}}}{N^{k/2-1}\,\sqrt{\rho_{ t}}}\left(\int_{0}^{t}\frac{N^{k-2}\rho_{i,s}^{2}}{N\eta_{i,s}^{2}}\,{\rm d}s \right)^{1/2}\lesssim\,\sqrt{\frac{\rho_{i,t}\eta_{t}}{\rho_{t}\eta_{i,t}}} \leq 1\,,\] analogously to (4.31). Note that in the first step, we did not use the overestimate \(1/\eta_{i,s}\leq 1/\eta_{s}\) inside the integral as done in (4.31). The same reasoning applies to the analog of the last line in (4.9). We point out that, in this section, the already proven bounds for resolvent chains containing at least one \(\operatorname{Im}G\) make the usage of reduction inequalities as in Lemma 4.4 obsolete.
2. For treating the analogues of \(\Omega_{1},\Omega_{2},\Omega_{3},\Omega_{4}\) in (4.9), it is not necessary to "restore" \(\operatorname{Im}G\)'s via the integral representation (4.1) as in the proof of the \(G^{2}\)-Lemma 4.6. Instead, in the course of proving an analog of Lemma 4.6 (again suppressing the time dependence of the \(z\)'s as well as \(\eta\) and \(\rho\)) it is sufficient to apply resolvent identities for \(|z_{i}-z_{j}|\geq\eta\) and the integral representation \[G(z_{i})G(z_{j})=\frac{1}{2\pi{\rm i}}\int_{\Gamma}\frac{G(w)}{(w-z_{i})(w-z_{j} )}{\rm d}w\,,\] for \(|z_{i}-z_{j}|\leq\eta\). In this case \(z_{i}\) and \(z_{j}\) are necessarily on the same halfplane (\(\operatorname{Im}z_{i}\operatorname{Im}z_{j}>0\)) and, just as in (4.23), \(\Gamma\) is a tiny contour encircling \(z_{i},z_{j}\in\mathbb{C}\setminus\mathbb{R}\) in such a way that \(\operatorname{dist}(\Gamma,\{z_{i},z_{j}\})\sim\eta\), which ensures that \(|\operatorname{Im}m(w)|\lesssim\max_{i\in[k]}\rho_{i}\) on \(\Gamma\) as follows by elementary continuity properties of \(m(w)\).
As a consequence, for fixed \(k\in\mathbb{N}\), we find, assuming \(\Psi_{l}\prec\psi_{l}\) for some control parameters \(\psi_{l}\geq 1\) for \(l=1,2,\ldots,k\) in the usual sense of uniformity explained below (4.16), that
\[\big{|}\big{\langle}M_{[i,j]}\big{\rangle}\big{|}\prec\frac{1}{\eta}\,N^{\frac{j-i}{2}-1}\,\Big{(}\prod_{m=i}^{j-1}\langle|A_{m}|^{2}\rangle^{1/2}\Big{)}\,,\]
as an analog of (4.28) and
\[\big{|}\big{\langle}G_{[i,j]}-M_{[i,j]}\big{\rangle}\big{|}\prec\frac{1}{\eta}\,N^{\frac{j-i}{2}-1}\sqrt{\frac{\rho}{N\eta}}\,\Big{(}\prod_{m=i}^{j-1}\langle|A_{m}|^{2}\rangle^{1/2}\Big{)}\psi_{j-i}\,,\]
as an analog of (4.29), for all \(i,j\in[k]\) with \(j-i\geq 1\).
Overall, using the above two simplifications and following the arguments in (4.31)-(4.37), we arrive at the following new set of master inequalities.
**Proposition 4.9** (Master inequalities II).: _Fix \(k\in\mathbb{N}\) and \(t\in[0,T]\). Assume that \(\Psi_{l}(s)\prec\psi_{l}\) for any \(1\leq l\leq k\) uniformly in \(s\in[0,t]\) (in the sense of (4.16)) and set \(\psi_{0}:=1\). Then we have the master inequalities_
\[\Psi_{k}(t)\prec 1+\frac{1}{N\hat{\ell}_{t}}\sum_{l=1}^{k}\psi_{l}+\frac{1}{( N\hat{\ell}_{t})^{3/2}}\sum_{l=1}^{k-1}\psi_{l}\psi_{k-l}+\frac{|\sigma|}{(N \hat{\ell}_{t})^{1/4}}\sum_{l=1}^{k}\sqrt{\psi_{2l}}+\frac{|\sigma|}{N\hat{ \ell}_{t}}\sum_{l=0}^{k}\sqrt{\psi_{2l}\psi_{2(k-l)}} \tag{4.56}\]
_where we denoted \(\hat{\ell}_{t}=\min_{i\in[k]}\eta_{i,t}\rho_{i,t}\) for brevity (recall (4.3); not to be confused with the \(\ell\) used around (4.54)!)._
Using that \(N\hat{\ell}_{t}\geq N^{\epsilon}\) and iteration (Lemma 4.5), analogously to Section 4.1.1, we can immediately deduce that \(\Psi_{k}(T)\prec 1\) where \(T\) is the time defined in the statement of Proposition 3.3. This concludes the proof of Proposition 3.3 (b) for the remaining case \(\mathfrak{I}_{k}=\emptyset\).
### Modifications for general \(\sigma=\mathbb{E}\chi_{\mathrm{od}}^{2}\)
The proof of Proposition 3.3 presented so far assumed for simplicity that \(\sigma=\mathbb{E}\chi_{\mathrm{od}}^{2}\) is real and \(\mathbb{E}\chi_{\mathrm{d}}^{2}=1+\sigma\). We now explain how to prove the general case, when these two restrictions are lifted. The only changes concern the choice of the initial condition and of the evolution \(B_{t}\) in the flow (3.3).
If \(\sigma\) is not real, we modify the evolution in (3.3) in such a way that the entries of \(B_{t}\) are \(\sqrt{t}\) times a standard complex Gaussian, and we modify the initial condition in (3.3) from \(W_{0}=W\) to \(W_{0}=\widetilde{W}_{T}\), with another Wigner matrix \(\widetilde{W}_{T}\) prepared such that
\[e^{-T/2}\widetilde{W}_{T}+\sqrt{1-e^{-T}}U\stackrel{{\mathrm{d}}} {{=}}W. \tag{4.57}\]
Here \(U\) is a GUE matrix, which is independent of \(\widetilde{W}_{T}\). We point out that the limiting eigenvalue density of \(\widetilde{W}_{T}\) does not change along the flow (3.3) as a consequence of the fact that \(\mathbb{E}|(W_{t})_{ab}|^{2}\), for \(a>b\), is preserved, and only
\[\mathbb{E}(W_{t})_{ab}^{2}=e^{-t}\mathbb{E}(\widetilde{W}_{T})_{ab}^{2}, \qquad\quad\mathbb{E}(W_{t})_{aa}^{2}=e^{-t/2}\mathbb{E}(\widetilde{W}_{T})_{ aa}^{2}+\frac{1}{N}\sqrt{1-e^{-t}}\,,\qquad t\in[0,T]\,,\]
change. The fact that \(\mathbb{E}(W_{t})_{ab}^{2}\) and \(\mathbb{E}(W_{t})_{aa}^{2}\) do change along the flow contributes to a change of order \(1/N\) in the averaged Stieltjes transform of \(W_{t}\); such change is easily seen to be negligible for the precision of the local laws we are considering here. If \(\sigma\in\mathbb{R}\) but \(\mathbb{E}\chi_{\mathrm{d}}^{2}\neq 1+\sigma\), similarly to (4.57), we choose \(B_{t}\) so that its entries have variance \(t\) times the variance of \(W\) for the off-diagonal entries and \(\mathbb{E}(B_{t})_{aa}^{2}=(1+\sigma)t\), and we can prepare yet another Wigner matrix \(\widetilde{W}_{T}\) such that
\[e^{-T/2}\widetilde{W}_{T}+\sqrt{1-e^{-T}}\widehat{U}\stackrel{{ \mathrm{d}}}{{=}}W, \tag{4.58}\]
with \(\widehat{U}\) being independent of \(\widetilde{W}_{T}\) and having the same entry distribution as \(W\), except for the diagonal entries, which have variance \(\mathbb{E}\widehat{U}_{aa}^{2}=\frac{1}{N}(1+\sigma)\). The second moments of \((\widetilde{W}_{t})_{ab}\) are preserved and only the diagonal changes
\[\mathbb{E}(\widetilde{W}_{t})_{aa}^{2}=e^{-t/2}\mathbb{E}(\widetilde{W}_{T})_{ aa}^{2}+\frac{1}{N}\sqrt{1-e^{-t}}(1+\sigma);\]
hence the limiting eigenvalue distribution is still given by the semicircular law.
## 5. Green function comparison: Proof of Proposition 3.4
In this section, we remove the Gaussian component introduced in Proposition 3.3 by a Green function comparison (GFT) argument, i.e. we prove Proposition 3.4. For simplicity, we will write the detailed proof only in the case of no imaginary parts, i.e. \(\mathfrak{I}_{k}=\emptyset\) and \(\mathfrak{I}_{k+1}=\emptyset\) in the average and isotropic case, respectively. The minor modifications needed for handling the other cases will be briefly discussed in Section 5.4 below.
Before entering the proof, we point out that typical GFT arguments (starting from [48]) are used to compare the distribution of a genuinely fluctuating observable under two different matrix ensembles whose single entry distributions have matching first few moments. Technically, a family of interpolating ensembles is constructed which may be finite (e.g. Lindeberg replacement strategy) or continuous (e.g. along an Ornstein-Uhlenbeck flow) and the change of the distribution in question is closely monitored along the interpolation. In this standard setup for GFT, however, local laws serve as _a priori_ bounds obtained by independent methods and they are assumed to hold for all interpolating ensembles in between. In other words, concentration-type information about resolvents \(G(z)\) with \(\operatorname{Im}z\) well above the eigenvalue spacing is turned into information on the distribution of \(G(z)\) with \(\operatorname{Im}z\) at, or even slightly below, the eigenvalue spacing. Our application of GFT is different in spirit, since we aim to prove local laws for one ensemble knowing them for the other one. Thus GFT needs to be done _self-consistently_, monitoring a carefully designed quantity that satisfies a Gronwall-type inequality along the interpolation.
We remark that more than ten years ago Knowles and Yin in [39] used GFT in a similar spirit to prove single resolvent local law for ensembles where the deterministic approximation \(M\) to \(G\) is not a multiple of identity matrix (for example deformed Wigner matrices). Later a much more direct and generally applicable alternative method based upon the matrix Dyson equation [5, 6] has been developed to prove such local laws without GFT. Our current dynamical approach revives the idea of a self-consistent GFT, since it naturally serves as a counterpart of the characteristic flow to remove the Gaussian component added along that flow. In fact, the approach of [39] also used a tandem of gradual reduction of \(\eta=\operatorname{Im}z\) (called _bootstrapping_ steps) and a self-consistent GFT (called _interpolation_ steps), see Fig. 1.1 in [39]. However, the bootstrapping step in [39] was much less effective than the characteristic flow which does the \(\eta\)-reduction in one step even for a much more complex multi-resolvent chain. In the GFT step, we use the simple entry-by-entry Lindeberg replacement strategy that is better adjustable to our complicated resolvent chains instead of a special continuous interpolation as in [39], but the core of both techniques is a self-consistent Gronwall argument. The main technical challenge in our proof is that the error in one step of the Lindeberg replacement is not always sufficiently small, but by carefully monitoring the errors in each step, we gain from summing them up explicitly. We will explain this mechanism in Example 5.11.
Now we turn to the actual proof. Recalling the notations
\[\eta:=\min_{i}|\operatorname{Im}z_{i}|\quad\text{and}\quad\rho:=\pi^{-1}\max _{i}|\operatorname{Im}m_{i}|\,, \tag{5.1}\]
we begin by distinguishing the _averaged_ and _isotropic_ control quantities
\[\Psi_{k}^{\text{av}} :=\frac{\sqrt{N}\eta}{N^{k/2-1}\sqrt{\rho}}\big{|}\big{\langle} \big{(}G_{1}A_{1}...G_{k}-M_{[1,k]}\big{)}A_{k}\big{\rangle}\big{|} \tag{5.2}\] \[\Psi_{k}^{\text{iso}}(\boldsymbol{x},\boldsymbol{y}) :=\frac{\sqrt{N}\eta}{N^{k/2}\sqrt{\rho}}\big{|}\big{(}G_{1}A_{1}...A_{k}G_{k+1}-M_{[1,k+1]}\big{)}_{\boldsymbol{x}\boldsymbol{y}}\big{|}\,, \tag{5.3}\]
where \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{C}^{N}\) are unit deterministic vectors and the traceless matrices \(A_{i}\in\mathbb{C}^{N\times N}\) are assumed to have normalized Hilbert-Schmidt norms, \(\langle|A_{i}|^{2}\rangle^{1/2}=1\). Recall that, in (5.2)-(5.3), we only consider chains without \(\operatorname{Im}G\)'s; the more general cases will be discussed later in Section 5.4. Finally, we point out that our notation in (5.2)-(5.3) already suppressed the dependence on the spectral parameters and deterministic matrices, since the sets of these are considered fixed along the argument. In the following, we will often say that an estimate on \(\Psi\) holds _uniformly_, by which we will always mean uniformity in all unit deterministic vectors and all choices of subsets of spectral parameters and deterministic matrices as explained in Proposition 3.4 (a).
Now, the goal of this section is to prove Proposition 3.4. More precisely, we will show that, if the optimal multi-resolvent local laws
\[\Psi_{k}^{\text{av}}+\Psi_{k}^{\text{iso}}\prec 1,\quad\text{for all fixed}\quad k \in\mathbb{N}, \tag{5.4}\]
hold _uniformly_ for a Wigner matrix with some given single entry distributions, then they also hold for every other Wigner matrix with different single entry distributions, again _uniformly_, provided that the _first three moments_ of the entries of these two ensembles _match_. A fundamental input for our proof is that the corresponding single resolvent local laws hold for _every_ Wigner matrix ensemble [29, 38, 10], i.e. the following Green function comparison argument is not needed for them.
**Theorem 5.1**.: _For fixed \(\epsilon>0\), we have_
\[\left|\left\langle G-m\right\rangle\right|\prec\frac{1}{N\eta}\,,\qquad\big{|} \big{(}G-m\big{)}_{\mathbf{x}\mathbf{y}}\big{|}\prec\sqrt{\frac{\rho}{N\eta}}+\frac{1 }{N\eta} \tag{5.5}\]
_uniformly in unit deterministic vectors \(\mathbf{x},\mathbf{y}\) and at spectral parameter \(z\in\mathbb{C}\setminus\mathbb{R}\) with \(\eta=|\mathrm{Im}\,z|\geq N^{-1+\epsilon}\) and \(\mathrm{Re}\,z\in\mathbb{R}\), where \(\rho=\pi^{-1}|\mathrm{Im}\,m(z)|\)._
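As a purely numerical illustration of the averaged bound in (5.5) (it plays no role in the argument), one may sample a single GUE matrix and compare \(|\langle G-m\rangle|\) with \(1/(N\eta)\); the sample size, the spectral parameters, and the explicit semicircle Stieltjes transform used below are standard but chosen ad hoc for this sketch.

```python
import numpy as np

# Illustrative check of the averaged single-resolvent law (5.5) for one GUE sample:
# |<G - m>| typically stays below ~ 1/(N*eta) down to eta slightly above 1/N.
rng = np.random.default_rng(6)
N = 1000
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (B + B.conj().T) / np.sqrt(4 * N)            # GUE normalization, E|H_ab|^2 = 1/N

def m_sc(z):                                     # semicircle Stieltjes transform: m^2 + z m + 1 = 0
    roots = np.roots([1, z, 1])
    return roots[np.argmax(roots.imag)]          # branch with Im m > 0 for Im z > 0

for eta in [1e-1, 1e-2, 5.0 / N]:
    z = 0.5 + 1j * eta
    G = np.linalg.inv(H - z * np.eye(N))
    err = abs(np.trace(G) / N - m_sc(z))
    print(f"eta={eta:.1e}   |<G-m>|={err:.2e}   1/(N eta)={1/(N*eta):.2e}")
```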
For convenience, these single resolvent laws will be expressed in the compact form
\[\Psi_{0}^{\mathrm{av}}+\Psi_{0}^{\mathrm{iso}}\prec 1\,,\]
which extends (5.2)-(5.3) when no traceless matrices \(A\) are present (see, e.g., [18, 19]).
Before starting the proof, we recall some notation which has already been used in the statement of Proposition 3.4. We will distinguish between the two ensembles compared in the GFT argument by using different letters, \(v_{ab}\) and \(w_{ab}\), for their matrix elements, and we shall occasionally use the notation \(H^{(\mathbf{v})}\) and \(H^{(\mathbf{w})}\) to indicate the difference. Alternatively, one could denote the matrix elements by a universal letter \(h_{ab}\) and distinguish the two ensembles in the underlying measure, especially in the expectations \(\mathbb{E}_{\mathbf{v}}\) and \(\mathbb{E}_{\mathbf{w}}\). However, since the proof of Proposition 3.4 works by replacing the matrix elements one-by-one in \(N(N+1)/2\) steps, we use the first notation, analogously to [31, Section 16].
### Preliminaries
The principal idea of the proof is as follows: First, we fix a bijective ordering
\[\phi:\{(i,j)\in[N]^{2}:i\leq j\}\to[\gamma(N)]\,,\qquad\gamma(N):=\frac{N(N+1) }{2} \tag{5.6}\]
on the index set of independent entries of a Wigner matrix. Then, according to the induced ordering, the matrix elements are swapped one-by-one from the distribution \(v_{ab}\) to \(w_{ab}\) in \(\gamma(N)\sim N^{2}\) steps. In particular, at step \(\gamma\in\{0\}\cup[\gamma(N)]\) in this replacement procedure, the resulting matrix \(H^{(\gamma)}\) has entries which are distributed according to \(w_{ij}\) whenever \(\phi\big{(}(i,j)\big{)}\leq\gamma\) and according to \(v_{ij}\) whenever \(\phi\big{(}(i,j)\big{)}>\gamma\), i.e. \(H^{(0)}=H^{(\mathbf{v})}\) and \(H^{(\gamma(N))}=H^{(\mathbf{w})}\). This one-by-one replacement of the matrix elements naturally requires understanding the _isotropic_ law (3.14), as already indicated in (5.3).
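The following short sketch (with made-up \(4\times 4\) test ensembles, not the actual matrices of Proposition 3.4) illustrates the bijection \(\phi\) from (5.6) and the resulting one-by-one replacement: the matrix at step \(\gamma\) carries \(w\)-entries at the first \(\gamma\) positions and \(v\)-entries elsewhere, and consecutive steps differ in a single entry (and its transpose).

```python
import numpy as np

# Schematic sketch of the ordering phi in (5.6) and of the interpolating matrices H^(gamma).
N = 4
rng = np.random.default_rng(1)

def wigner(entry_sampler):
    A = entry_sampler((N, N)) / np.sqrt(N)
    return np.tril(A, -1) + np.tril(A, -1).conj().T + np.diag(np.real(np.diag(A)))

Hv = wigner(rng.standard_normal)                                    # "v" ensemble
Hw = wigner(lambda s: rng.uniform(-np.sqrt(3), np.sqrt(3), s))      # "w" ensemble: symmetric law,
                                                                    # unit variance (first three moments match)

pairs = [(i, j) for i in range(N) for j in range(i, N)]             # one choice of the bijection phi
gamma_N = len(pairs)                                                # = N(N+1)/2

def H_step(gamma):
    H = Hv.copy()
    for (i, j) in pairs[:gamma]:                                    # swap the first gamma entry pairs
        H[i, j], H[j, i] = Hw[i, j], Hw[j, i]
    return H

assert np.allclose(H_step(0), Hv) and np.allclose(H_step(gamma_N), Hw)
# consecutive steps differ only at one position (i,j) and its transpose:
print(np.argwhere(~np.isclose(H_step(3), H_step(2))))
```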
In order to derive (5.4) also for \(H^{(\mathbf{w})}\), we compute high moments of \(\Psi_{k}^{\mathrm{av/iso}}\) for \(H^{(\gamma)}\) and \(H^{(\gamma-1)}\) for general \(\gamma\in[\gamma(N)]\) and compare the results. Given sufficiently good one-step bounds, a telescopic argument will yield the estimate (5.4) also for \(H^{(\mathbf{w})}\). These "sufficiently good" one-step bounds are essentially required to accommodate the large number \(O(N^{2})\) of necessary replacements in order to arrive at \(H^{(\gamma(N))}\). A key feature of our proof, in contrast to previous applications of the replacement strategy, is that the errors will not always be \(o(N^{-2})\) in each step, but their cumulative size after summation is still \(o(1)\).
The proof of Proposition 3.4 is divided into two main parts: At first, in **Part (a)**, Section 5.2, we show the isotropic part of (5.4), that is \(\Psi_{k}^{\mathrm{iso}}\prec 1\), via a double induction on the number \(k\in\mathbb{N}\) of traceless matrices and the moment \(p\in\mathbb{N}\) taken of \(\Psi_{k}^{\mathrm{iso}}\), i.e. \(\mathbb{E}|\Psi_{k}^{\mathrm{iso}}|^{p}\). Thereby, we crucially use that the \(\prec\)-bound is (essentially) equivalent to controlling arbitrarily high moments up to an \(N^{\xi}\)-error with arbitrarily small \(\xi>0\). Afterwards, in **Part (b)**, Section 5.3, using Part (a) as an input, we will demonstrate \(\Psi_{k}^{\mathrm{av}}\prec 1\) (and thus conclude the proof of Proposition 3.4 for \(\mathfrak{I}_{k+1}=\emptyset\) resp. \(\mathfrak{I}_{k}=\emptyset\)) for every fixed \(k\) via a single induction on the moment \(p\). The main reason for this order of the argument is that the one-by-one replacement in step \(\gamma\) is conducted via resolvent expansion focusing on the differing matrix entries at positions \((i,j)=\phi^{-1}(\gamma)\) and \((j,i)\), and thereby it naturally produces isotropic quantities (see Lemma 5.3 below). Hence, the argument for \(\Psi_{k}^{\mathrm{av}}\) cannot be self-contained and must rely on \(\Psi_{k}^{\mathrm{iso}}\), which in fact will not involve the averaged local laws at all.
We fix some further notation. We have an initial Wigner matrix \(H^{(0)}:=H^{(\mathbf{v})}\) and iteratively define
\[H^{(\gamma)}:=H^{(\gamma-1)}-\frac{1}{\sqrt{N}}\Delta_{V}^{(\gamma)}+\frac{1}{ \sqrt{N}}\Delta_{W}^{(\gamma)}, \tag{5.7}\]
a sequence of Wigner matrices for \(\gamma\in[\gamma(N)]\), where we denoted18
Footnote 18: Observe that in this normalization, the non-zero entries of \(\Delta_{V}^{(\gamma)}\) and \(\Delta_{W}^{(\gamma)}\) are of order one random variables.
\[\Delta_{V}^{(\gamma)}:=\sqrt{N}\frac{E^{(ij)}(H^{(\mathbf{v})})_{ij}+E^{(ji)}(H^{( \mathbf{v})})_{ji}}{1+\delta_{ij}}\quad\text{and}\quad\Delta_{W}^{(\gamma)}:=\sqrt{ N}\frac{E^{(ij)}(H^{(\mathbf{w})})_{ij}+E^{(ji)}(H^{(\mathbf{w})})_{ji}}{1+\delta_{ij}}\,. \tag{5.8}\]
Here, \(\phi\big{(}(i,j)\big{)}=\gamma\) and \(E^{(ij)}\) denotes the matrix whose matrix elements are zero everywhere except at position \((i,j)\), i.e. \((E^{(ij)})_{k\ell}=\delta_{ik}\delta_{j\ell}\). The denominator \(1+\delta_{ij}\) is introduced to account for the factor of two in the numerator occurring for diagonal indices. Note that \(H^{(\gamma)}\) and \(H^{(\gamma-1)}\) differ only in the \((i,j)\) and \((j,i)\) matrix elements, and they can be written as
\[H^{(\gamma-1)}=\widetilde{H}^{(\gamma)}+\frac{1}{\sqrt{N}}\Delta_{V}^{(\gamma )}\quad\text{and}\quad H^{(\gamma)}=\widetilde{H}^{(\gamma)}+\frac{1}{\sqrt{N }}\Delta_{W}^{(\gamma)} \tag{5.9}\]
with a matrix \(\widetilde{H}^{(\gamma)}\) whose matrix elements are zero at the \((i,j)\) and \((j,i)\) positions. Similarly, we denote the corresponding resolvents at spectral parameter \(z_{j}\in\mathbb{C}\setminus\mathbb{R}\) by
\[G_{j}^{(\gamma)}:=(H^{(\gamma)}-z_{j})^{-1}\,,\quad G_{j}^{(\gamma-1)}:=(H^{( \gamma-1)}-z_{j})^{-1}\,,\quad\text{and}\quad\widetilde{G}_{j}^{(\gamma)}:=( \widetilde{H}^{(\gamma)}-z_{j})^{-1}\,. \tag{5.10}\]
Observe that, at each step \(\gamma\) in the replacement procedure, the deterministic approximation to a resolvent chain involving \(G^{(\gamma)}\) is the same. This is because only the first two moments of the matrix elements of \(H^{(\gamma)}\) determine this approximation, symbolically denoted by \(M\), via the _Matrix Dyson Equation (MDE)_, see, e.g., [33]. For a chain in the checked resolvents \(\widetilde{G}\), the approximating \(M\) is _in principle_ differing from the non-checked ones, simply because the self-energy operator \(\widetilde{\mathcal{S}}^{(\gamma)}[R]=\mathbb{E}[\widetilde{H}^{(\gamma)}R \widetilde{H}^{(\gamma)}]\) associated with \(\widetilde{H}^{(\gamma)}\) is no longer exactly the averaged trace \(\langle\cdot\rangle\). However, since this discrepancy introduces an error of size \(1/N\) in the MDE, which is a stable equation, this will not be visible in the local laws (5.4). Therefore, we shall henceforth ignore this minor point and shall just define the normalized differences
\[\Psi_{k}^{\mathrm{av},(\gamma)}\,,\quad\widetilde{\Psi}_{k}^{\mathrm{av},( \gamma)}\,,\quad\Psi_{k}^{\mathrm{iso},(\gamma)}(\mathbf{x},\mathbf{y})\,,\quad\text{ and}\quad\widetilde{\Psi}_{k}^{\mathrm{iso},(\gamma)}(\mathbf{x},\mathbf{y})\,,\]
exactly as in (5.2)-(5.3), but with \(G_{j}\) replaced by \(G_{j}^{(\gamma)}\) and \(\widetilde{G}_{j}^{(\gamma)}\), respectively. We emphasize again that the deterministic counterparts in all of the normalized differences are the _same_.
We can now turn to the actual proof.
### Part (a): Proof of the isotropic law
In this first part, we exclusively work with isotropic quantities and we shall hence drop the superscript \({}^{\mathrm{iso}}\) in the entire Section 5.2. As already mentioned above, we shall prove the claim by a _double induction_ on \(k\) and the moment \(p\) taken of \(\Psi_{k}\), i.e. \(\mathbb{E}|\Psi_{k}|^{p}\).
Thereby, the primary induction parameter is \(k\) and our goal is to show that, if for some \(k\in\mathbb{N}\) we have
\[\max_{\gamma\leq\gamma(N)}\Psi_{k^{\prime}}^{(\gamma)}+\max_{\gamma\leq\gamma( N)}\widetilde{\Psi}_{k^{\prime}}^{(\gamma)}\prec 1\,,\qquad\forall\,k^{\prime}\in\{0,...,k-1\}\,, \tag{5.11}\]
then also
\[\max_{\gamma\leq\gamma(N)}\Psi_{k}^{(\gamma)}+\max_{\gamma\leq\gamma(N)} \widetilde{\Psi}_{k}^{(\gamma)}\prec 1\,. \tag{5.12}\]
Within the proof of (5.12), for a fixed \(k\), we will then crucially use that the \(\prec\)-bound is equivalent to controlling arbitrarily high moments \(\mathbb{E}|\Psi_{k}|^{p}\) up to an \(N^{\xi}\)-error for an arbitrarily small \(\xi>0\). Therefore, we use another secondary induction on the moment \(p\). More precisely, in order to establish (5.12) from (5.11), our goal is to show that, for any fixed \(k\in\mathbb{N}\), if for some \(p\in\mathbb{N}\) we have that
\[\max_{\gamma\leq\gamma(N)}\big{\|}\Psi_{k}^{(\gamma)}\big{\|}_{p-1}+\max_{ \gamma\leq\gamma(N)}\big{\|}\widetilde{\Psi}_{k}^{(\gamma)}\big{\|}_{p-1} \lesssim N^{\xi}\]
for any \(\xi>0\), then also
\[\max_{\gamma\leq\gamma(N)}\big{\|}\Psi_{k}^{(\gamma)}\big{\|}_{p}+\max_{\gamma \leq\gamma(N)}\big{\|}\widetilde{\Psi}_{k}^{(\gamma)}\big{\|}_{p}\lesssim N^{\xi} \tag{5.13}\]
holds for any \(\xi>0\), where implicit constants depend on \(k,p\) and \(\xi\). Here for a random variable \(X\) we used the definition \(\|X\|_{p}:=[\mathbb{E}|X|^{p}]^{1/p}\).
To summarize, as the _induction hypothesis_, given some arbitrary fixed \(p,k\in\mathbb{N}\), we will assume that
\[\max_{\gamma\leq\gamma(N)}\Psi_{k^{\prime}}^{(\gamma)}+\max_{\gamma\leq\gamma(N )}\widetilde{\Psi}_{k^{\prime}}^{(\gamma)}\prec 1\quad\text{and}\quad\max_{\gamma\leq\gamma(N)}\big{\|} \Psi_{k}^{(\gamma)}\big{\|}_{p-1}+\max_{\gamma\leq\gamma(N)}\big{\|}\widetilde{ \Psi}_{k}^{(\gamma)}\big{\|}_{p-1}\leq C_{k,p,\xi}N^{\xi} \tag{5.14}\]
hold uniformly for all \(k^{\prime}\in\{0,...,k-1\}\) and \(\xi>0\) with an appropriate \(N\)-independent constant. Then we will conclude (5.13).
The overall _base case_ (\(k=1\), \(p=1\)) is easy to verify: it solely consists of the usual isotropic law (the first estimate in (5.14) for \(k^{\prime}=0\)) and the trivial bound \(\mathbb{E}|\Psi_{1}|^{0}=1\) (the second estimate in (5.14) for \(k=1\) and \(p=1\)).
We start with two arbitrary but fixed bounded deterministic vectors \(\|\mathbf{x}\|,\|\mathbf{y}\|\lesssim 1\) and introduce the set
\[I_{\mathbf{xy}}:=\{\mathbf{x},\mathbf{y}\}\cup\{\mathbf{e}_{a}:a\in[N]\}\subset\mathbb{C}^{N} \tag{5.15}\]
of vectors, which will naturally arise along the argument (see (5.32) below), where \(\mathbf{e}_{a}\) denotes the standard basis vector in the coordinate direction \(a\). Note that the cardinality of \(I_{\mathbf{xy}}\) is \(N+2\). After defining19
Footnote 19: Here, \(p\) is a superscript, not a power.
\[\Omega^{p}_{k}(\gamma):=\max_{\mathbf{u},\mathbf{v}\in I_{\mathbf{xy}}}\|\Psi^{(\gamma)}_{ k}(\mathbf{u},\mathbf{v})\|_{p}^{p} \tag{5.16}\]
(we omitted the dependence on \(\mathbf{x},\mathbf{y}\) in the notation, as they are considered fixed along the whole argument), the principal goal of the induction step is to prove the following proposition.
**Proposition 5.2** (Gronwall estimate).: _Fix \(p,k\in\mathbb{N}\) and assume (5.14) holds. Then, for any \(\xi>0\), there exist some constants \(C_{1},C_{2}>0\) (depending on \(p\), \(k\), and \(\xi\), but independent of \(N\), \(\mathbf{x}\), and \(\mathbf{y}\)) such that_
\[\Omega^{p}_{k}(\gamma_{0})\leq C_{1}\frac{1}{N^{2}}\sum_{\gamma<\gamma_{0}} \Omega^{p}_{k}(\gamma)+C_{2}N^{\xi} \tag{5.17}\]
_for every \(\gamma_{0}\in[\gamma(N)]\)._
Note that (5.17) is a discrete Gronwall inequality for \(\Omega^{p}_{k}(\gamma)\). Hence, having Proposition 5.2 at hand (note that, in particular, \(\Omega^{p}_{k}(0)\leq C_{2}N^{\xi}\)), we obtain
\[\max_{\gamma\leq\gamma(N)}\Omega^{p}_{k}(\gamma)\leq C_{2}\mathrm{e}^{C_{1}}N ^{\xi}\leq C_{3}(k,p,\xi)N^{\xi}\,, \tag{5.18}\]
uniformly in \(\mathbf{x}\) and \(\mathbf{y}\) and all choices of spectral parameters and traceless deterministic matrices, which then implies the \(\Psi\)-part of (5.13). In the next subsections we present auxiliary results necessary for the proof of Proposition 5.2 which will then be concluded in Section 5.2.5. The \(\widetilde{\Psi}\)-part of (5.13) and thus the induction step will finally be completed in Section 5.2.6.
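For orientation, the discrete Gronwall mechanism behind (5.17)-(5.18) can be illustrated by a toy computation: a nonnegative sequence satisfying the recursion in (5.17) over \(\gamma(N)=N(N+1)/2\) steps stays below \(C_{2}\mathrm{e}^{C_{1}}N^{\xi}\). The constants and the saturating sequence in the following sketch are made up for illustration only.

```python
import numpy as np

# Toy numerical illustration of the discrete Gronwall step (5.17) => (5.18):
# a nonnegative sequence obeying  a[g] <= (C1/N^2) * sum(a[:g]) + C2  for g <= N(N+1)/2
# stays bounded by C2 * exp(C1).  The sequence below saturates the recursion.
N, C1, C2 = 40, 3.0, 1.0
gamma_N = N * (N + 1) // 2

a = np.empty(gamma_N + 1)
a[0] = C2
for g in range(1, gamma_N + 1):
    a[g] = (C1 / N**2) * a[:g].sum() + C2     # worst case: inequality holds with equality

print(a.max(), "<=", C2 * np.exp(C1))
assert a.max() <= C2 * np.exp(C1)
```

In the actual proof, the role of the sequence is played by \(\Omega^{p}_{k}(\gamma)\) from (5.16).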
In order to simplify notation, we shall henceforth drop the subscripts for all resolvents and deterministic matrices, i.e. write \(G_{j}=G\) and \(A_{j}=A\) instead.
#### 5.2.1. Preliminaries
The fundamental building block of our proof is the following elementary lemma on resolvent expansion. Note that we need to express \(G^{(\gamma-1)},G^{(\gamma)}\) in terms of the "unperturbed" resolvent \(\widetilde{G}^{(\gamma)}\) of \(\widetilde{H}^{(\gamma)}\) that has zero elements in the \(\gamma\)-th position, and conversely, we need to express \(\widetilde{G}^{(\gamma)}\) in terms of both "perturbed" resolvents using \(\Delta^{(\gamma)}_{V}\) and \(\Delta^{(\gamma)}_{W}\) from (5.8) as perturbations, see (5.10). We work with finite resolvent expansions up to some order \(m\), independent of \(N\), to be determined later. The last term therefore always contains the original resolvent as well and it will have to be estimated deterministically by its norm but if \(m\) is large enough this will be affordable.
**Lemma 5.3** (Resolvent expansions).: _For every fixed \(m\in\mathbb{N}\), it holds that_
\[\widetilde{G}^{(\gamma)}=\sum_{\ell=0}^{m}N^{-\ell/2}\big{(}G^{(\gamma)}\Delta ^{(\gamma)}_{W}\big{)}^{\ell}G^{(\gamma)}+N^{-(m+1)/2}\big{(}G^{(\gamma)} \Delta^{(\gamma)}_{W}\big{)}^{m+1}\widetilde{G}^{(\gamma)}\] (5.19a) _and_ \[G^{(\gamma)}=\sum_{\ell=0}^{m}(-1)^{\ell}N^{-\ell/2}\big{(}\widetilde{G}^{( \gamma)}\Delta^{(\gamma)}_{W}\big{)}^{\ell}\widetilde{G}^{(\gamma)}+(-1)^{(m+ 1)}N^{-(m+1)/2}\big{(}\widetilde{G}^{(\gamma)}\Delta^{(\gamma)}_{W}\big{)}^{m+ 1}G^{(\gamma)}\,. \tag{5.19b}\]
_These relations also hold verbatim when replacing \(G^{(\gamma)}\to G^{(\gamma-1)}\) and \(\Delta^{(\gamma)}_{W}\to\Delta^{(\gamma)}_{V}\). \(\Box\)_
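Since (5.19a)-(5.19b) are exact algebraic identities, they can be verified numerically; the following minimal sketch does so for (5.19b) with an ad hoc real symmetric test matrix, an arbitrary index pair \((i,j)\), and a small expansion order \(m\).

```python
import numpy as np

# Numerical sanity check of the exact finite resolvent expansion (5.19b):
#   G = sum_{l=0}^{m} (-1)^l N^{-l/2} (Gt Delta)^l Gt + (-1)^{m+1} N^{-(m+1)/2} (Gt Delta)^{m+1} G,
# where H = Ht + Delta/sqrt(N) and Ht has zero entries at positions (i,j), (j,i).
rng = np.random.default_rng(2)
N, m = 30, 4
i, j = 1, 5
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)

Ht = H.copy(); Ht[i, j] = Ht[j, i] = 0.0              # "unperturbed" matrix
Delta = np.sqrt(N) * (H - Ht)                          # rank-two perturbation, cf. (5.8)

z = 0.1 + 0.02j
G  = np.linalg.inv(H  - z * np.eye(N))
Gt = np.linalg.inv(Ht - z * np.eye(N))

expansion = sum((-1) ** l * N ** (-l / 2) * np.linalg.matrix_power(Gt @ Delta, l) @ Gt
                for l in range(m + 1))
expansion += (-1) ** (m + 1) * N ** (-(m + 1) / 2) * np.linalg.matrix_power(Gt @ Delta, m + 1) @ G

print(np.max(np.abs(expansion - G)))                   # ~ machine precision
```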
We now expand each \(G^{(\gamma)}\) in
\[\big{|}\Psi_{k}^{(\gamma)}(\mathbf{x},\mathbf{y})\big{|}^{p}=\left(\frac{N\eta}{\rho} \right)^{p/2}N^{-pk/2}\big{|}\big{(}(G^{(\gamma)}A)^{k}G^{(\gamma)}-M_{[1,k+1]} \big{)}_{\mathbf{x}\mathbf{y}}\big{|}^{p} \tag{5.20}\]
and each \(G^{(\gamma-1)}\) in
\[\big{|}\Psi_{k}^{(\gamma-1)}(\mathbf{x},\mathbf{y})\big{|}^{p}=\left(\frac{N\eta}{\rho }\right)^{p/2}N^{-pk/2}\big{|}\big{(}(G^{(\gamma-1)}A)^{k}G^{(\gamma-1)}-M_{[1,k +1]}\big{)}_{\mathbf{x}\mathbf{y}}\big{|}^{p} \tag{5.21}\]
according to (5.19b) (for some \(m\geq 4\) to be determined below, depending on \(p\) and \(k\); see (5.49)) and sort the resulting terms by their power \(r=0,1,2,...\) of \(N^{-1/2}\). Then we take the expectation with respect to \(w_{ij}\) and \(v_{ij}\), respectively (recall that \(\phi\big{(}(i,j)\big{)}=\gamma\)), and use the moment matching condition (3.12). As a result, we find that the terms with a prefactor \(N^{-r/2}\) for \(r=0,1,2,3\) are algebraically _exactly the same_ for both (5.20) and (5.21). The conclusion of this argument is formalized in the following lemma.
**Lemma 5.4**.: _For any fixed \((i,j)\in[N]^{2}\) with \(i\leq j\) and \(\gamma=\phi(i,j)\) we have that_
\[\mathbb{E}_{w_{ij}}\big{|}\Psi_{k}^{(\gamma)}(\mathbf{x},\mathbf{y})\big{|}^{p} =\sum_{r=0}^{3}N^{-r/2}\alpha_{k,r}^{(\gamma)}(\mathbf{x},\mathbf{y}) \big{|}\tilde{\Psi}_{k}^{(\gamma)}(\mathbf{x},\mathbf{y})\big{|}^{p-r}+\mathrm{higher\ order\ terms} \tag{5.22}\] \[\mathbb{E}_{v_{ij}}\big{|}\Psi_{k}^{(\gamma-1)}(\mathbf{x},\mathbf{y}) \big{|}^{p} =\sum_{r=0}^{3}N^{-r/2}\alpha_{k,r}^{(\gamma)}(\mathbf{x},\mathbf{y}) \big{|}\tilde{\Psi}_{k}^{(\gamma)}(\mathbf{x},\mathbf{y})\big{|}^{p-r}+\mathrm{higher\ order\ terms} \tag{5.23}\]
_for some identical coefficients \(\alpha_{k,r}^{(\gamma)}(\mathbf{x},\mathbf{y})\) independent of \(v_{ij}\) and \(w_{ij}\) whose precise values are (mostly) irrelevant. Here "higher order terms" denote terms with prefactor \(N^{-r/2}\) with \(r\geq 4\)._
In the following Sections 5.2.2-5.2.4, preparing the conclusion of the proof of Proposition 5.2 in Section 5.2.5, we will discuss the higher order terms in (5.22) and (5.23). These have to be estimated individually by size when we consider the difference of (5.22) and (5.23). Recall that we will eventually compare \(\Psi_{k}^{(0)}(\mathbf{x},\mathbf{y})\) and \(\Psi_{k}^{(\gamma(N))}(\mathbf{x},\mathbf{y})\) in \(\gamma(N)=O(N^{2})\) many steps, which is why the higher order terms must all be bounded by \(1/N^{2}\), roughly speaking. More precisely, we will use the following telescopic summation: For every \(\gamma_{0}\in[\gamma(N)]\), it holds that
\[\big{|}\|\Psi_{k}^{(\gamma_{0})}(\mathbf{x},\mathbf{y})\|_{p}^{p}-\|\Psi_{k}^{(0)}(\mathbf{x},\mathbf{y})\|_{p}^{p}\big{|}\leq\sum_{1\leq\gamma\leq\gamma_{0}}\Big{|}\|\Psi_{k}^{(\gamma)}(\mathbf{x},\mathbf{y})\|_{p}^{p}-\|\Psi_{k}^{(\gamma-1)}(\mathbf{x},\mathbf{y})\|_{p}^{p}\Big{|}\;. \tag{5.24}\]
In the next Section 5.2.2, we will explain the term with \(r=4\) in Lemma 5.4, i.e. the one with \(N^{-2}\)-prefactor, in detail. All other higher order terms with \(r\geq 5\) but still involving only the resolvent \(\widetilde{G}^{(\gamma)}\) are completely analogous, in fact easier (see Section 5.2.3 later for some detail). Afterwards, in Section 5.2.4, we will discuss how the maximal order \(m\) of the resolvent expansion (5.19b) has to be chosen in order to accommodate the remainder term involving a non-checked resolvent \(G^{(\gamma)}\) (resp. \(G^{(\gamma-1)}\)).
Throughout the following argument we shall focus on the higher order terms in (5.22); the treatment of (5.23) is exactly the same. Whenever it does not lead to confusion, we shall henceforth drop the superscript \(\gamma\).
#### 5.2.2. Fourth order terms in Lemma 5.4
The goal of the current Section 5.2.2 is to show that the terms of order \(r=4\) arising in the telescopic summation (5.24) can be bounded by the rhs. of (5.17).
In the following, we denote (cf. (5.8))
\[\Delta=\Delta^{(\gamma)}=\frac{E^{(ij)}+E^{(ji)}}{1+\delta_{ij}} \tag{5.25}\]
and find, similarly to (5.8), after taking the full expectation, that the coefficient of the \(N^{-2}\) prefactor (i.e. the \(r=4\) term) among the higher order terms in (5.22) is bounded by (a constant times)
\[\mathbb{E}\sum_{d=1}^{4\wedge p}\big{|}\tilde{\Psi}_{k}(\mathbf{x},\mathbf{y})\big{|}^{p-d}\left(\frac{N\eta}{\rho}\right)^{d/2}N^{-dk/2}\sum_{4\Delta\,\leadsto\,d}\big{|}\underbrace{\left(...\Delta...\Delta...\right)_{\mathbf{x}\mathbf{y}}\cdots\left(...\Delta...\right)_{\mathbf{x}\mathbf{y}}}_{\text{four }\Delta\text{'s in a total of }d\text{ chains}}\big{|}\,. \tag{5.26}\]
Here \(d\) counts the number of formerly "intact" resolvent chains \(\big{(}(\widetilde{GA})^{k}\widetilde{G}\big{)}_{\boldsymbol{xy}}\), which have been 'destroyed' by at least one replacement \(\widetilde{G}\to\widetilde{G}\Delta\widetilde{G}\) due to the expansion (5.19b). The symbol
\[\sum_{4\Delta\,\leadsto\,d} \tag{5.27}\]
indicates that we sum over all possibilities to destroy exactly \(d\) chains by four \(\Delta\)'s. Note that a chain may be "destroyed" by more than one \(\Delta\), therefore \(d\) may be less than four. After using the explicit form of \(\Delta\), altogether we arrive at a finite sum of \(4+d\) chains.
**Example 5.5**.: For example, for \(d=1\) we have that
\[\begin{split}&\sum_{4\Delta\,\leadsto\,1}\big{|}\big{(}...\Delta...\Delta...\Delta...\Delta...\big{)}_{\boldsymbol{xy}}\big{|}\\ &=\sum_{\begin{subarray}{c}k_{1},...,k_{5}\geq 0:\\ \sum_{l}k_{l}=k\end{subarray}}\big{|}\big{(}(\widetilde{GA})^{k_{1}}\widetilde{G}\Delta(\widetilde{GA})^{k_{2}}\widetilde{G}\Delta(\widetilde{GA})^{k_{3}}\widetilde{G}\Delta(\widetilde{GA})^{k_{4}}\widetilde{G}\Delta(\widetilde{GA})^{k_{5}}\widetilde{G}\big{)}_{\boldsymbol{xy}}\big{|}\\ &=\sum_{\begin{subarray}{c}k_{1},...,k_{5}\geq 0:\\ \sum_{l}k_{l}=k\end{subarray}}\Big{[}\big{|}((\widetilde{GA})^{k_{1}}\widetilde{G})_{\boldsymbol{xe}_{i}}((\widetilde{GA})^{k_{2}}\widetilde{G})_{\boldsymbol{e}_{j}\boldsymbol{e}_{j}}\big{(}(\widetilde{GA})^{k_{3}}\widetilde{G}\big{)}_{\boldsymbol{e}_{i}\boldsymbol{e}_{i}}\big{(}(\widetilde{GA})^{k_{4}}\widetilde{G}\big{)}_{\boldsymbol{e}_{j}\boldsymbol{e}_{j}}\big{(}(\widetilde{GA})^{k_{5}}\widetilde{G}\big{)}_{\boldsymbol{e}_{i}\boldsymbol{y}}\big{|}+...\Big{]}\end{split} \tag{5.28}\]
with the neglected summands being analogous, only having different distributions of \(\boldsymbol{e}_{i}\) and \(\boldsymbol{e}_{j}\) occurring, which can be produced by the structure of \(\Delta\).
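The splitting mechanism used in (5.28) is purely algebraic: inserting \(\Delta=E^{(ij)}+E^{(ji)}\) (for \(i\neq j\)) between two factors turns one chain entry into a sum of products of shorter chain entries. The following minimal check, with random test matrices standing in for the resolvent chains, verifies this for a single insertion.

```python
import numpy as np

# Minimal check of the chain splitting behind (5.28):
#   (X Delta Y)_{x y} = X_{x e_i} Y_{e_j y} + X_{x e_j} Y_{e_i y}   for Delta = E^(ij) + E^(ji).
rng = np.random.default_rng(5)
N, i, j = 15, 3, 9
X, Y = rng.standard_normal((N, N)), rng.standard_normal((N, N))
x, y = rng.standard_normal(N), rng.standard_normal(N)

Delta = np.zeros((N, N)); Delta[i, j] = Delta[j, i] = 1.0
lhs = x @ X @ Delta @ Y @ y
rhs = (x @ X)[i] * (Y @ y)[j] + (x @ X)[j] * (Y @ y)[i]
print(abs(lhs - rhs))   # ~ machine precision
```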
For general \(d\), in each of the \(4+d\) resolvent chains in the rhs. of (5.26), we now add and subtract the corresponding deterministic \(M\)-term, \((\widetilde{GA})^{k}\widetilde{G}=((\widetilde{GA})^{k}\widetilde{G}-M_{k+1} )+M_{k+1}\) (see also (5.33) below), schematically written as \(G=(G-M)+M\). In the sequel, we will distinguish the following two complementary cases:
1. At least \(d\) of the \(d+4\) resolvent chains are replaced by their fluctuating part, \(G-M\).
2. At least five of the \(d+4\) resolvent chains are replaced by their deterministic counterpart, \(M\).
Case (i): In case (i), we first separate those possibilities from (5.27), where the destruction of the \(d\) chains \(\big{(}(\widetilde{GA})^{k}\widetilde{G}\big{)}_{\boldsymbol{xy}}\) in fact _preserves_ \(d\) resolvent chains each with \(k\) traceless matrices \(A\), but with deterministic vectors, which are not \(\boldsymbol{x}\) and \(\boldsymbol{y}\). This happens when all four \(\Delta\)'s are placed at the ends of the chains. For example, if \(d=1\), we separate these possibilities as
\[\begin{split}&\widetilde{G}_{\boldsymbol{xe}_{i}}\widetilde{G}_{ \boldsymbol{e}_{j}\boldsymbol{e}_{j}}\widetilde{G}_{\boldsymbol{e}_{i} \boldsymbol{e}_{i}}\widetilde{G}_{\boldsymbol{e}_{j}\boldsymbol{e}_{j}}\big{(} (\widetilde{GA})^{k}\widetilde{G}\big{)}_{\boldsymbol{e}_{i}\boldsymbol{y}}+...\quad\text{or}\\ &\widetilde{G}_{\boldsymbol{xe}_{i}}\widetilde{G}_{\boldsymbol{e} _{j}\boldsymbol{e}_{j}}\big{(}(\widetilde{GA})^{k}\widetilde{G}\big{)}_{ \boldsymbol{e}_{i}\boldsymbol{e}_{i}}\widetilde{G}_{\boldsymbol{e}_{j} \boldsymbol{e}_{j}}\widetilde{G}_{\boldsymbol{e}_{i}\boldsymbol{y}}+...\,. \end{split} \tag{5.29}\]
In the following, we shall focus on the first exemplary term in (5.29). Its fluctuating part
\[\big{(}(\widetilde{GA})^{k}\widetilde{G}-M_{[1,k+1]}\big{)}_{\boldsymbol{e}_{ i}\boldsymbol{y}} \tag{5.30}\]
can then be paired with the leftover \(\big{(}N\eta/\rho\big{)}^{1/2}N^{-k/2}\) in (5.26) and thereby produces a further full \(\big{|}\widetilde{\Psi}_{k}^{(\gamma)}(\boldsymbol{e}_{i},\boldsymbol{y})\big{|}\); the remaining terms coming from a single resolvent in (5.29) are simply estimated by one,
\[|\widetilde{G}_{\boldsymbol{uv}}|\prec 1\,,\qquad\boldsymbol{u},\boldsymbol{v}\in I_{\boldsymbol{xy}}\,, \tag{5.31}\]

as follows from the single resolvent law contained in the induction hypothesis (5.14) for \(k^{\prime}=0\). In complete analogy to (5.16), we further set

\[\tilde{\Omega}^{p}_{k}(\gamma):=\max_{\boldsymbol{u},\boldsymbol{v}\in I_{\boldsymbol{xy}}}\big{\|}\widetilde{\Psi}^{(\gamma)}_{k}(\boldsymbol{u},\boldsymbol{v})\big{\|}_{p}^{p}\,, \tag{5.32}\]

to which the separated terms in (5.29) then contribute. The remaining terms in Case (i) are estimated as in the following example.
**Example 5.6**.: Writing
\[M_{j-i+1}\equiv M_{[i,j]}\quad\text{for}\quad 1\leq i<j\leq k+1\,, \tag{5.33}\]
with a slight abuse of notation, we estimate the \(d=1\) term in (5.26) (after having split off the cases when one of the \(k_{l}\)'s equals \(k\) and all others are zero in (5.29)) as
\[\mathbb{E}\big{|}\tilde{\Psi}_{k}(\mathbf{x},\mathbf{y})\big{|}^{p-1}\left(\frac{N\eta}{\rho}\right)^{1/2}N^{-k/2}\times\] \[\quad\times\sum_{\begin{subarray}{c}0\leq k_{l}\leq k-1:\\ \sum_{l}k_{l}=k\end{subarray}}\Big{[}\big{|}\big{(}(\tilde{G}A)^{k_{1}}\tilde{G}-M_{k_{1}+1}\big{)}_{\mathbf{x}\mathbf{e}_{i}}\big{(}M_{k_{2}+1}\big{)}_{\mathbf{e}_{j}\mathbf{e}_{j}}\big{(}M_{k_{3}+1}\big{)}_{\mathbf{e}_{i}\mathbf{e}_{i}}\big{(}M_{k_{4}+1}\big{)}_{\mathbf{e}_{j}\mathbf{e}_{j}}\big{(}M_{k_{5}+1}\big{)}_{\mathbf{e}_{i}\mathbf{y}}\big{|}\] \[\qquad+\big{|}\big{(}(\tilde{G}A)^{k_{1}}\tilde{G}-M_{k_{1}+1}\big{)}_{\mathbf{x}\mathbf{e}_{i}}\big{(}(\tilde{G}A)^{k_{2}}\tilde{G}-M_{k_{2}+1}\big{)}_{\mathbf{e}_{j}\mathbf{e}_{j}}\big{(}M_{k_{3}+1}\big{)}_{\mathbf{e}_{i}\mathbf{e}_{i}}\big{(}M_{k_{4}+1}\big{)}_{\mathbf{e}_{j}\mathbf{e}_{j}}\big{(}M_{k_{5}+1}\big{)}_{\mathbf{e}_{i}\mathbf{y}}\big{|}+...\Big{]}\] \[\lesssim N^{\xi}\left(\frac{N\eta}{\rho}\right)^{1/2}N^{-k/2}\sum_{\begin{subarray}{c}0\leq k_{l}\leq k-1:\\ \sum_{l}k_{l}=k\end{subarray}}\left[\left(\frac{\rho}{N\eta}\right)^{1/2}N^{\sum_{l}k_{l}/2}+\left(\frac{\rho}{N\eta}\right)N^{\sum_{l}k_{l}/2}+...\right]\lesssim N^{\xi}\,, \tag{5.34}\]
where analogous summands (i.e. having further \(G-M\) factors instead of \(M\), or other arrangements of standard basis vectors \(\mathbf{e}_{i},\mathbf{e}_{j}\) stemming from (5.25)) are again indicated by dots. In the first estimate, we used that \(\big{|}(M_{j+1})_{\mathbf{u}\mathbf{v}}\big{|}\lesssim N^{j/2}\) for all \(\mathbf{u},\mathbf{v}\in I_{x,y}\) from Lemma 2.3 (b) together with the induction hypothesis (5.14).
In the general case, \(d\geq 1\), the argument works analogously to the above example: The minimal number \(d\) of fluctuating terms, each carrying a \((\rho/N\eta)^{1/2}\)-factor, cancels the leftover \((N\eta/\rho)^{d/2}\)-factor in (5.26). The remaining \(N^{k_{l}/2}\)-factors can then be handled by a simple power counting.
Overall, we find that, all the terms in (5.26) summarized in Case (i), can be bounded by
\[C_{1}\tilde{\Omega}_{k}^{p}(\gamma)+C_{2}N^{\xi} \tag{5.35}\]
for some positive constants \(C_{1},C_{2}>0\), which shall henceforth be used generically, i.e. their value might change from line to line (but remain uniformly bounded in \(\gamma\)).
Case (ii): For the second case, we recall that all the purely deterministic terms are _independent_ of \(\gamma\), i.e., as emphasized above, at each replacement step the deterministic approximation to a resolvent chain is the same. However, it is _not_ sufficient to just estimate every \(M\)-term blindly via \(\big{|}(M_{j+1})_{\mathbf{u}\mathbf{v}}\big{|}\lesssim N^{j/2}\), as done in (5.34). Instead, we need to _gain from the summation_ in (5.24) over all replacement positions. This is the main new element of our proof compared with previous GFT arguments.
**Example 5.7**.: We again look at our \(d=1\) example. Using the notation (5.33), we find the _trivial estimate_
\[\mathbb{E}\big{|}\tilde{\Psi}_{k}(\mathbf{x},\mathbf{y})\big{|}^{p-1} \left(\frac{N\eta}{\rho}\right)^{1/2}N^{-k/2}\sum_{\begin{subarray}{c}0\leq k _{l}\leq k:\\ \sum_{l}k_{l}=k\end{subarray}}\Big{[}\big{|}\big{(}M_{k_{1}+1}\big{)}_{\mathbf{x} \mathbf{e}_{i}}\big{(}M_{k_{2}+1}\big{)}_{\mathbf{e}_{j}\mathbf{e}_{j}}\big{(}M_{k_{3}+1} \big{)}_{\mathbf{e}_{i}\mathbf{e}_{i}}\big{(}M_{k_{4}+1}\big{)}_{\mathbf{e}_{j}\mathbf{e}_{j}} \big{(}M_{k_{5}+1}\big{)}_{\mathbf{e}_{i}\mathbf{y}}\big{|}+...\Bigg{]}\] \[\lesssim N^{\xi}\left(\frac{N\eta}{\rho}\right)^{1/2}N^{-k/2}\sum_{ \begin{subarray}{c}0\leq k_{l}\leq k:\\ \sum_{l}k_{l}=k\end{subarray}}\Big{[}N^{\sum_{l}k_{l}/2}+...\Big{]}\lesssim N^{ \xi}\left(\frac{N\eta}{\rho}\right)^{1/2}\,, \tag{5.36}\]
where we again used the induction hypothesis (5.14) and \(\big{|}(M_{j+1})_{\mathbf{u}\mathbf{v}}\big{|}\lesssim N^{j/2}\). This bound is off by a factor \((N\eta/\rho)^{1/2}\), which we will now improve on.
Indeed, the point in _gaining from the summation_ is that, although at each individual step \(\gamma\), the deterministic terms in (5.36) might be large, _on average_ over \(\gamma\) their contribution is bounded. More precisely, fixing one constellation of \(k_{l}\)'s in (5.36) and using \(\mathbb{E}\big{|}\tilde{\Psi}_{k}\big{|}^{p-1}\lesssim N^{\xi}\), we find the average of the
first line in (5.36) over all \(i,j\in[N]\) to be bounded by (a constant times)
\[\begin{split}& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{1/2}N^{-k/2} \frac{1}{N^{2}}\sum_{i,j}\Big{[}\big{|}\big{(}M_{k_{1}+1}\big{)}_{\mathbf{x}\mathbf{e}_ {i}}\big{(}M_{k_{2}+1}\big{)}_{\mathbf{e}_{j}\mathbf{e}_{j}}\big{(}M_{k_{3}+1}\big{)} _{\mathbf{e}_{i}\mathbf{e}_{i}}\big{(}M_{k_{4}+1}\big{)}_{\mathbf{e}_{j}\mathbf{e}_{j}}\big{(} M_{k_{5}+1}\big{)}_{\mathbf{e}_{i}\mathbf{y}}\big{|}+...\Big{]}\\ \lesssim& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{1/2 }\frac{1}{N^{2}}\sum_{i,j}\left[\frac{\big{|}(M_{k_{1}+1})_{\mathbf{x}\mathbf{e}_{i}} \big{|}}{N^{k_{1}/2}}+...\right]\\ \lesssim& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{1/ 2}\frac{1}{N}\sqrt{N}\left[\frac{\sqrt{(|M_{k_{1}+1}|^{2})_{\mathbf{x}\mathbf{x}}}}{N ^{k_{1}/2}}+...\right]\lesssim\,N^{\xi}\left(\frac{\eta}{\rho}\right)^{1/2} \lesssim N^{\xi}\,.\end{split} \tag{5.37}\]
To go from the first to the second line, we used \(\big{|}(M_{j+1})_{\mathbf{u}\mathbf{v}}\big{|}\lesssim N^{j/2}\) for all but the first \(M\) factor. Next, we used a Schwarz inequality for the \(i\)-summation, which involves the off-diagonal term \((M_{k_{1}+1})_{\mathbf{x}\mathbf{e}_{i}}\):
\[\sum_{i}\big{|}(M_{k_{1}+1})_{\mathbf{x}\mathbf{e}_{i}}\big{|}\leq\sqrt{N}\left(\sum_{ i}\big{|}(M_{k_{1}+1})_{\mathbf{x}\mathbf{e}_{i}}\big{|}^{2}\right)^{1/2}\leq\sqrt{N} \sqrt{\big{(}|M_{k_{1}+1}|^{2}\big{)}_{\mathbf{x}\mathbf{x}}}\,. \tag{5.38}\]
In the penultimate estimate, we used that
\[\sqrt{\big{(}|M_{j+1}|^{2}\big{)}_{\mathbf{u}\mathbf{u}}}\lesssim N^{j/2}\,, \tag{5.39}\]
as follows from the fact that \(N^{j/2}\) is in fact the operator norm bound for \(M_{j+1}\), and the final estimate in (5.37) simply used the general fact \(\eta/\rho\lesssim 1\).
We point out that we could even have gained another \(1/\sqrt{N}\)-factor from the \(i\)-summation by not estimating \(\big{(}M_{k_{5}+1}\big{)}_{\mathbf{e}_{i}\mathbf{y}}\) trivially by \(N^{k_{5}/2}\) but using
\[\begin{split}\sum_{i}\big{|}(M_{k_{1}+1})_{\mathbf{x}\mathbf{e}_{i}} \big{(}M_{k_{5}+1}\big{)}_{\mathbf{e}_{i}\mathbf{y}}\big{|}&\leq\left( \sum_{i}\big{|}(M_{k_{1}+1})_{\mathbf{x}\mathbf{e}_{i}}\big{|}^{2}\right)^{1/2}\left( \sum_{i}\big{|}(M_{k_{5}+1})_{\mathbf{e}_{i}\mathbf{y}}\big{|}^{2}\right)^{1/2}\\ &\leq\sqrt{\big{(}|M_{k_{1}+1}|^{2}\big{)}_{\mathbf{x}\mathbf{x}}}\sqrt{ \big{(}|M_{k_{5}+1}|^{2}\big{)}_{\mathbf{y}\mathbf{y}}}\,.\end{split} \tag{5.40}\]
instead of (5.38). However, we do not need this additional factor \(1/\sqrt{N}\) here. Finally, note that the \(j\)-summation in (5.37) would have been useless, since the \(j\)-terms are diagonal. The summation gain is effective only for off-diagonal terms as in (5.38).
The above example indicates the following general mechanism: After estimating all the \(G-M\)-type terms with the aid of the induction hypothesis (5.14), and estimating the \(M\)-factors just trivially by their size, we are left with an excess \((N\eta/\rho)^{u/2}\)-factor, for some \(u\in[4]\). In order to remove this leftover factor, we need at least \(u\) _(collectively) summable bounded \(M\)-terms_ like
\[\frac{\big{|}(M_{k_{1}+1})_{\mathbf{x}\mathbf{e}_{i}}\big{|}}{N^{k_{1}/2}} \tag{5.41}\]
in (5.37) (see also (5.39)). In fact, each of these collectively summable factors will gain one \(1/\sqrt{N}\) compared to the trivial estimate, like the one in (5.36). Here, the notion "collective" refers to particular index structures, which allow an effective summation. Denoting terms like (5.41) symbolically by \(M_{\mathbf{x}\mathbf{e}_{i}}\) for brevity, by _(collectively) summable bounded \(M\)-terms_ we mean the following possible index structures
\[\begin{split} u=1&:\quad\quad\sum_{i,j}|M_{\mathbf{x} \mathbf{e}_{i}}|\quad\text{or}\quad\sum_{i,j}|M_{\mathbf{e}_{j}\mathbf{y}}|\quad\text{or} \quad...\\ u=2&:\quad\quad\sum_{i,j}|M_{\mathbf{x}\mathbf{e}_{i}}||M_{\bm {e}_{j}\mathbf{y}}|\quad\text{or}\quad\sum_{i,j}|M_{\mathbf{x}\mathbf{e}_{i}}||M_{\mathbf{e}_{i }\mathbf{y}}|\quad\text{or}\quad...\\ u=3&:\quad\quad\sum_{i,j}|M_{\mathbf{x}\mathbf{e}_{i}}||M_{\bm {e}_{i}\mathbf{y}}||M_{\mathbf{e}_{j}\mathbf{y}}|\quad\text{or}\quad\sum_{i,j}|M_{\mathbf{x} \mathbf{e}_{i}}||M_{\mathbf{e}_{j}\mathbf{y}}|^{2}\quad\text{or}\quad...\\ u=4&:\quad\quad\sum_{i,j}|M_{\mathbf{x}\mathbf{e}_{i}}||M_{ \mathbf{x}\mathbf{e}_{j}}||M_{\mathbf{e}_{i}\mathbf{y}}||M_{\mathbf{e}_{j}\mathbf{y}}|\quad\text{or} \quad\sum_{i,j}|M_{\mathbf{x}\mathbf{e}_{i}}|^{2}|M_{\mathbf{e}_{j}\mathbf{y}}|^{2}\quad\text{or} \quad...\end{split} \tag{5.42}\]
where dots are always indicating other similar terms, obtained from trivial exchanges \(\mathbf{x}\leftrightarrow\mathbf{y}\) or \(i\leftrightarrow j\).
In principle, each summation over \(i\) and over \(j\) potentially gains a full \(1/N\)-factor, provided that there are enough \(M\)'s with suitable indices as in (5.42). The existence of \(u\) _collectively summable bounded \(M\)-terms_ then ensures that, of this potential \(1/N^{2}\)-improvement, at least a \(1/N^{u/2}\)-gain is effective. More precisely, as an example, for the first column of terms in (5.42) we have that
\[u=1: \sum_{i,j}|M_{\boldsymbol{x}\boldsymbol{e}_{i}}|\leq N^{3/2} \left(\sum_{i}|M_{\boldsymbol{x}\boldsymbol{e}_{i}}|^{2}\right)^{1/2}\lesssim N ^{2-1/2}\] \[u=2: \sum_{i,j}|M_{\boldsymbol{x}\boldsymbol{e}_{i}}||M_{\boldsymbol{ e}_{j}\boldsymbol{y}}|\leq N\left(\sum_{i}|M_{\boldsymbol{x}\boldsymbol{e}_{i}}|^{2 }\right)^{1/2}\left(\sum_{j}|M_{\boldsymbol{e}_{j}\boldsymbol{y}}|^{2}\right)^ {1/2}\lesssim N^{2-2/2}\] \[u=3: \sum_{i,j}|M_{\boldsymbol{x}\boldsymbol{e}_{i}}||M_{\boldsymbol{ e}_{i}\boldsymbol{y}}||M_{\boldsymbol{e}_{j}\boldsymbol{y}}|\] \[\qquad\leq N^{1/2}\left(\sum_{i}|M_{\boldsymbol{x}\boldsymbol{ e}_{i}}|^{2}\right)^{1/2}\left(\sum_{i}|M_{\boldsymbol{e}_{i}\boldsymbol{y}}|^{2} \right)^{1/2}\left(\sum_{j}|M_{\boldsymbol{e}_{j}\boldsymbol{y}}|^{2}\right)^ {1/2}\lesssim N^{2-3/2}\] \[u=4: \sum_{i,j}|M_{\boldsymbol{x}\boldsymbol{e}_{i}}||M_{\boldsymbol{ x}\boldsymbol{e}_{j}}||M_{\boldsymbol{e}_{i}\boldsymbol{y}}||M_{\boldsymbol{e}_{j} \boldsymbol{y}}|\] \[\qquad\leq\left(\sum_{i}|M_{\boldsymbol{x}\boldsymbol{e}_{i}}|^{2 }\right)^{1/2}\left(\sum_{i}|M_{\boldsymbol{e}_{i}\boldsymbol{y}}|^{2}\right)^ {1/2}\left(\sum_{j}|M_{\boldsymbol{e}_{j}\boldsymbol{y}}|^{2}\right)^{1/2} \left(\sum_{j}|M_{\boldsymbol{x}\boldsymbol{e}_{j}}|^{2}\right)^{1/2}\lesssim N ^{2-4/2} \tag{5.43}\]
by application of Schwarz inequalities like in (5.38)-(5.40) and using that \(\|M\|\lesssim 1\). We point out that the \(\eta/\rho\leq 1\) factor within each excess \((N\eta/\rho)^{1/2}\) would not be able to compensate for excess \(N\)-factors; but the _gains from the summation_ are obtained solely on the level of \(N\)'s.
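To illustrate the size of this summation gain (only as a numerical aside, not part of the argument), one can compare the double sum \(\sum_{i,j}|M_{\boldsymbol{x}\boldsymbol{e}_{i}}|\) for a generic bounded test matrix with the trivial bound \(N^{2}\|M\|\) and with the Schwarz bound from the \(u=1\) line of (5.43); the matrix and the vector below are artificial choices.

```python
import numpy as np

# Numerical illustration of the u=1 summation gain in (5.43): for a generic bounded
# test matrix M, the double sum over i,j of |M_{x e_i}| comes out of order N^{3/2},
# a factor ~ sqrt(N) below the trivial bound N^2 * ||M||.
rng = np.random.default_rng(3)
N = 400
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
M = Q @ np.diag(rng.uniform(0.5, 1.0, N)) @ Q.T        # symmetric test matrix with ||M|| <= 1
x = np.ones(N) / np.sqrt(N)                            # a unit deterministic vector

row = np.abs(x @ M)                                    # |M_{x e_i}|, i = 1..N
double_sum = N * row.sum()                             # the trivial j-sum contributes a factor N
print("double sum      :", double_sum)
print("Schwarz bound   :", N**1.5 * np.sqrt((row ** 2).sum()))
print("trivial bound   :", N**2 * np.linalg.norm(M, 2))
```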
It follows from a simple counting argument (or simply by considering all cases directly) that for any \(u\in[4]\), we find an appropriately summable index structure within the at least five purely deterministic terms, as in (5.42)-(5.43). Hence, we deduce that

\[4^{\rm th}\,\text{order term in (5.22)}\leq C_{1}\tilde{\Omega}_{k}^{p}(\gamma)+C_{2}N^{\xi}\bigg{(}1+\sum_{u=1}^{4}\left(\frac{N\eta}{\rho}\right)^{u/2}\Big{|}\big{[}u\,\text{sum. bdd.}\,M\text{-terms}\big{]}_{\boldsymbol{x},\boldsymbol{y}}^{(\gamma)}\Big{|}\bigg{)}\,, \tag{5.44}\]
where
\[\big{[}u\,\text{sum. bdd.}\,M\text{-terms}\big{]}^{(\gamma)}_{\boldsymbol{x},\boldsymbol{y}} \tag{5.45}\]
stands symbolically for a product of \(u\)_collectively summable bounded deterministic terms_, like (5.41), for which we have just shown the following.
**Lemma 5.8**.: _It holds that_
\[\sum_{\gamma\in[\gamma(N)]}\Big{|}\big{[}u\,\text{sum. bdd.}\,M\text{-terms} \big{]}^{(\gamma)}_{\boldsymbol{x},\boldsymbol{y}}\Big{|}\lesssim N^{2-u/2}\,. \tag{5.46}\]
Combining (5.44) with (5.46), this concludes the argument for the fourth order terms in (5.22).
#### 5.2.3. Further higher order terms in Lemma 5.4
Just as in the previous Section 5.2.2, the goal of the current Section 5.2.3 is to show that the terms of order \(r\geq 5\) arising in the telescopic summation (5.24) can be bounded by the rhs. of (5.17).
For these other higher order terms in (5.22) with \(r\geq 5\) and involving _only_ \(\widetilde{G}\) (and not \(G\)), the two cases distinguished above for \(r=4\) generalize to the following.
Case (i'): At least \(d\) of the \(d+r\) resolvent chains are replaced by their fluctuating part, \(G-M\).
Case (ii'): At least \(r+1\) of the \(d+r\) resolvent chains are replaced by their deterministic counterpart, \(M\).
For Case (i'), we separate a \(1/N^{2}\)-prefactor and find that the remaining part can be estimated by
\[C_{1}N^{-(r-4)/2}\tilde{\Omega}^{p}_{k}(\gamma)+C_{2}N^{\xi}N^{-(r-4)/2}\,, \tag{5.47}\]
completely analogously to (5.35). In fact, we gain an additional \(N^{-(r-4)/2}\ll 1\) factor in both terms. This reflects the idea that more \(G-M\) terms are better because their presumed bounds carry a factor \((\rho/N\eta)^{1/2}\) (encoded in the prefactor \((N\eta/\rho)^{1/2}\) in the definition of \(\Psi_{k}^{\rm iso}\) in (5.3)).
For Case (ii'), we include the additional \(N^{-(r-4)/2}\) (after having separated a \(1/N^{2}\)-prefactor) into our counting of the leftover \((N\eta/\rho)^{u/2}\)-factor (recall the discussion below (5.41)). In this way, we find that the maximal number of such leftover factors is \(r-(r-4)=4\). Hence, for every \(u\in[4]\), we find an appropriately summable index structure, completely analogously to (5.42), and deduce that (leaving out the separated \(1/N^{2}\)-prefactor)
\[\begin{split} r^{\rm th}&\,\text{order term in (5.22)}\\ &\leq C_{1}N^{-(r-4)/2}\tilde{\Omega}_{k}^{p}(\gamma)+C_{2}N^{\xi}\bigg{(}N^{-(r-4)/2}+\sum_{u=1}^{4}\left(\frac{N\eta}{\rho}\right)^{u/2}\Big{|}\big{[}u\,\text{sum. bdd.}\,M\text{-terms}\big{]}_{\boldsymbol{x},\boldsymbol{y}}^{(\gamma)}\Big{|}\bigg{)}\,,\end{split} \tag{5.48}\]
which can be directly incorporated into (5.44) after adjusting the constants. Note that while the contributions from Case (i') improve for larger \(r\), the terms from Case (ii') that carry many \(M\)-factors do not.
Combining (5.48) with (5.46), this concludes the argument for the higher order terms in (5.22).
#### 5.2.4. Truncation of the resolvent expansion
It remains to discuss the _truncation terms_, which involve _not_ only \(\tilde{G}\), but also \(G\), i.e. the order \(m\in\mathbb{N}\) for the truncation of the resolvent expansion (5.19b). Also here, our goal is to show that the contribution of these terms arising in the telescopic summation (5.24) can be bounded by the rhs. of (5.17). After expanding each resolvent in (5.20) via (5.19b), for every fixed \(q\geq 1\), we collect those terms which contain the final summand in (5.19b) (the _truncation term_), and hence \(G\) exactly \(q\) times. For these terms with \(q\geq 1\) fixed, we then proceed as follows: Estimate those chains within the truncation term in which \(G\) appears trivially by norm, \(\|G\|\leq 1/\eta\) (note that there are at most \(k+1\) resolvents in such chains and we can afford estimating all of them by \(1/\eta\) not just the last one \(G\)) and use \(\|A\|\leq\sqrt{N}\langle|A|^{2}\rangle^{1/2}\) (recall that we assumed \(\langle|A|^{2}\rangle^{1/2}=1\) around (5.2)-(5.3)), and treat the other factors by our induction hypothesis (5.14) (resulting in an \(N^{\xi}\) factor).
In this way, we conclude the estimate
\[\big{[}q\,\text{truncation terms}\big{]}\lesssim N^{\xi}\frac{(N\eta/\rho)^{p/ 2}}{\big{(}N^{\frac{m+1}{2}}\big{)}^{q}}\left(\frac{N^{k/2}}{\eta^{k+1}} \right)^{q}=\frac{N^{\xi}}{N^{2q}}\frac{1}{N^{p(q-1)/2}}\left(\frac{\eta}{\rho }\right)^{p/2}\frac{1}{(N\eta)^{(k+1)q}}\lesssim\frac{N^{\xi}}{N^{2}} \tag{5.49}\]
when choosing \(m=p+3k+5\), where in the last step we used that \(\eta/\rho\lesssim 1\) and \(N\eta\gg 1\). We remark that \((N\eta/\rho)^{p/2}\) in (5.49) comes from the prefactor of \(\Psi_{k}\), \((N^{\frac{m+1}{2}})^{-q}\) from the cumulant order of the truncation terms and \(\big{(}N^{k/2}/\eta^{k+1}\big{)}^{q}\) from the trivial bounds.
#### 5.2.5. Proof of Proposition 5.2
As mentioned above (5.25), the treatment of the higher order terms in (5.23) is identical to our discussion above. Therefore, summarizing Sections 5.2.2-5.2.4, we have proven the following.
**Lemma 5.9**.: _Fix \(p,k\in\mathbb{N}\) and assume that the induction hypothesis (5.14) holds. Then, for every \(\gamma\in[\gamma(N)]\), we have that_
\[\Big{|}\|\Psi_{k}^{(\gamma)}(\boldsymbol{x},\boldsymbol{y})\|_{p}^{p}-\|\Psi_{k}^{(\gamma-1)}(\boldsymbol{x},\boldsymbol{y})\|_{p}^{p}\Big{|}\leq\frac{C_{1}}{N^{2}}\tilde{\Omega}_{k}^{p}(\gamma)+C_{2}\frac{N^{\xi}}{N^{2}}\bigg{(}1+\sum_{u=1}^{4}\left(\frac{N\eta}{\rho}\right)^{u/2}\Big{|}\big{[}u\,\text{sum. bdd.}\,M\text{-terms}\big{]}_{\boldsymbol{x},\boldsymbol{y}}^{(\gamma)}\Big{|}\bigg{)}\,,\]
_where \(\big{[}u\,\text{sum. bdd.}\,M\text{-terms}\big{]}_{\boldsymbol{x}, \boldsymbol{y}}^{(\gamma)}\) is understood as explained below (5.45)._
Next, employing the telescopic summation from (5.24) we find that
\[\|\Psi_{k}^{(\gamma_{0})}(\boldsymbol{x},\boldsymbol{y})\|_{p}^{p}\leq C_{1} \frac{1}{N^{2}}\sum_{\gamma<\gamma_{0}}\tilde{\Omega}_{k}^{p}(\gamma)+C_{2}N ^{\xi}+\frac{N^{\xi}}{N^{2}}\sum_{\gamma<\gamma_{0}}\bigg{(}\sum_{u=1}^{4} \left(\frac{N\eta}{\rho}\right)^{u/2}\Big{|}\big{[}u\,\text{sum. bdd.}\,M\text{-terms}\big{]}_{ \boldsymbol{x},\boldsymbol{y}}^{(\gamma)}\Big{|}\bigg{)} \tag{5.50}\]
after having absorbed \(\|\Psi_{k}^{(0)}(\boldsymbol{x},\boldsymbol{y})\|_{p}^{p}\) into \(C_{2}N^{\xi}\) by our initial assumption that we have multi-resolvent local laws (5.4) for the Wigner matrix \(H^{(v)}=H^{(0)}\). We are left with discussing the first and last term on the rhs. of (5.50).
For the first term, we rely on the following lemma, which says that, in particular, we can replace each \(\tilde{\Omega}^{p}_{k}(\gamma)\) in (5.50) by \(\Omega^{p}_{k}(\gamma)\), absorbing the additional error into \(C_{2}\).
**Lemma 5.10**.: _Fix \(p,k\in\mathbb{N}\). Then, for every fixed \(\gamma\in[\gamma(N)]\), the expressions (recall (5.32))_
\[\Omega^{p}_{k}(\gamma)\,,\quad\Omega^{p}_{k}(\gamma-1)\,,\quad\text{and}\quad \tilde{\Omega}^{p}_{k}(\gamma)\]
_are comparable up to an additive error of order \(N^{\xi}\) for arbitrarily small \(\xi>0\)._
Proof.: We give a sketch of the simple argument based on Lemma 5.9 in combination with Lemma 5.4: Similarly to the proof of Lemma 5.9, we first expand \(G^{(\gamma)}\) (resp. \(G^{(\gamma-1)}\)) in \(\|\Psi^{(\gamma)}_{k}(\mathbf{x},\mathbf{y})\|^{p}_{p}\) (resp. \(\|\Psi^{(\gamma-1)}_{k}(\mathbf{x},\mathbf{y})\|^{p}_{p}\)) by means of (5.19b) and realize that \(\alpha^{(\gamma)}_{k,0}(\mathbf{x},\mathbf{y})=1\) in (5.22)-(5.23). The various terms arising in the expansion (now for all \(r\geq 1\) and not only for \(r\geq 4\)) are dealt with as explained in Sections 5.2.2-5.2.4.
However, there is a major simplification, since we do not need to gain from the summation as in Case (ii) in Section 5.2.2: The maximal excess power \(u\) of the leftover \((N\eta/\rho)^{1/2}\)-factor is bounded by the order \(r\) of the expansions in (5.22)-(5.23) (simply because at order \(r\), there are at most \(d=r\) destroyed resolvent chains), such that the characteristic \(1/N^{r/2}\)-factor at order \(r\) balances this excess. Finally, we take a maximum over all \(\mathbf{u},\mathbf{v}\in I_{\mathbf{x},\mathbf{y}}\) for all \(\|\tilde{\Psi}^{(\gamma)}_{k}(\mathbf{u},\mathbf{v})\|^{p}_{p}\) arising through the expansion (see (5.32)).
This finishes the sketch of the proof of Lemma 5.10.
For the last term in (5.50), we extend the summation \(\sum_{\gamma<\gamma_{0}}\) to all indices \(i,j\in[N]\); it is an upper bound as we only sum positive terms. Then, for every fixed \(u\in[4]\), we need to gain from this summation of \(\big{[}u\,\text{sum. bdd.}\,M\text{-terms}\big{]}_{\mathbf{x},\mathbf{y}}^{(\gamma)}\) over all \(\gamma\in[\gamma(N)]\) precisely \(N^{-u/2}\) compared to the naive \(N^{2}\)-size of the summation. This was achieved in Lemma 5.8 by the index structure (5.42) of the factors and application of several Schwarz inequalities (5.43).
Hence, combining (5.50) with Lemma 5.10 and Lemma 5.8, we find that
\[\|\Psi^{(\gamma_{0})}_{k}(\mathbf{x},\mathbf{y})\|^{p}_{p}\leq C_{1}\frac{1}{N^{2}} \sum_{\gamma<\gamma_{0}}\Omega^{p}_{k}(\gamma)+C_{2}N^{\xi}\,.\]
Since the rhs. is independent of the elements in \(I_{\mathbf{x}\mathbf{y}}\) (recall (5.32)), we can as well maximize over those on the lhs. and arrive at Proposition 5.2.
#### 5.2.6. Conclusion of the induction step
Having Proposition 5.2 and hence (5.18) at hand, we can immediately deduce
\[\max_{\gamma\leq\gamma(N)}\tilde{\Omega}^{p}_{k}(\gamma)\lesssim N^{\xi}\]
from Lemma 5.10 above. This proves the \(\tilde{\Psi}\)-part of (5.13) and thus finishes the induction step.
Therefore, using uniformity of this bound, we conclude the proof of the isotropic multi-resolvent local laws (3.14).
### Part (b): Proof of the averaged law
The general idea of the proof of the averaged law is exactly the same as in the previous section: We replace all matrix elements one-by-one in \(\gamma(N)\sim N^{2}\) steps and sum up the changes over all positions \(\gamma\in[\gamma(N)]\) (cf. (5.24)). However, there are several (minor) differences in the averaged case compared to Section 5.2, which we will explain in the following.
Since both the averaged and the isotropic normalized differences (5.2) and (5.3) appear, we shall henceforth reintroduce the superscripts \({}^{\text{av}}\) and \({}^{\text{iso}}\). Moreover, contrary to the isotropic proof, in this part it is sufficient to consider an arbitrary fixed \(k\in\mathbb{N}\) and perform a _single induction_ on the moment \(p\) taken of \(\Psi^{\text{av}}_{k}\), i.e. \(\mathbb{E}|\Psi^{\text{av}}_{k}|^{p}=\|\Psi^{\text{av}}_{k}\|^{p}_{p}\). We point out that the induction on \(k\) used in the previous section is not needed, because the proof of the isotropic laws has already been concluded (see (5.53) later). Hence, as the _induction hypothesis_, we will assume that
\[\max_{\gamma\leq\gamma(N)}\|\Psi^{\text{av},(\gamma)}_{k}\|_{p-1}+\max_{\gamma \leq\gamma(N)}\|\tilde{\Psi}^{\text{av},(\gamma)}_{k}\|_{p-1}\lesssim N^{\xi} \tag{5.51}\]
holds uniformly in traceless matrices for all \(\xi>0\), and our goal is to prove the same relation with \(p\) replacing \(p-1\). The base case is thus simply the trivial bound (\(p=1\)) given by \(\mathbb{E}|\Psi^{\text{av}}_{k}|^{0}=1\). To ease notation, just as in Section 5.2, we will drop the subscripts for all resolvents and deterministic matrices, i.e. write \(G_{j}=G\) and \(A_{j}=A\) instead. Moreover, whenever it does not lead to confusion, we will drop all further sub- and superscripts.
Completely analogously to Section 5.2, we use resolvent expansions from Lemma 5.3 to prove the exact agreement of the orders \(r\in\{0,1,2,3\}\) as in Lemma 5.4. For the higher order terms (again focusing on the most critical fourth order ones, see Section 5.2.2), we argue completely analogously to (5.26), but now we have an additional effect: Whenever an intact averaged chain gets destroyed by a replacement \(G\to G\Delta G\) from a derivative, we obtain (a sum of) isotropic chains with a \(1/N\) prefactor from the normalization of the trace, i.e.
\[\langle(GA)^{k}\rangle\longrightarrow\langle G\Delta(GA)^{k}\rangle=\frac{1}{ N}\big{(}(GA)^{k}G\big{)}_{\boldsymbol{e}_{i}\boldsymbol{e}_{j}}+\frac{1}{N} \big{(}(GA)^{k}G\big{)}_{\boldsymbol{e}_{j}\boldsymbol{e}_{i}}\,. \tag{5.52}\]
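The identity (5.52) is a direct consequence of the cyclicity of the trace and the explicit form of \(\Delta\); the following quick check, with random test matrices in place of the Wigner resolvent, confirms it numerically.

```python
import numpy as np

# Quick check of the trace-opening identity (5.52) for an off-diagonal Delta = E^(ij) + E^(ji):
#   <G Delta (G A)^k> = N^{-1} ((G A)^k G)_{e_i e_j} + N^{-1} ((G A)^k G)_{e_j e_i}.
rng = np.random.default_rng(4)
N, k, i, j = 20, 3, 2, 7
B = rng.standard_normal((N, N)); H = (B + B.T) / np.sqrt(2 * N)
G = np.linalg.inv(H - (0.2 + 0.1j) * np.eye(N))
A = rng.standard_normal((N, N)); A -= np.trace(A) / N * np.eye(N)   # traceless A

Delta = np.zeros((N, N)); Delta[i, j] = Delta[j, i] = 1.0
GAk = np.linalg.matrix_power(G @ A, k)

lhs = np.trace(G @ Delta @ GAk) / N
rhs = (GAk @ G)[i, j] / N + (GAk @ G)[j, i] / N
print(abs(lhs - rhs))    # ~ machine precision
```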
In this way, the analogue of (5.26) reads
\[\mathbb{E}\sum_{d=1}^{4\wedge p}\big{|}\widetilde{\Psi}_{k}^{\text{av}}\big{|}^{p-d}\left(\frac{N\eta}{\rho}\right)^{d/2}N^{-d(k/2-1)}\frac{1}{N^{d}}\sum_{(4-d)\Delta\,\leadsto\,d}\big{|}\underbrace{\big{(}...\Delta...\Delta...\big{)}_{\boldsymbol{e}_{i}\boldsymbol{e}_{j}}\cdots\big{(}...\Delta...\big{)}_{\boldsymbol{e}_{j}\boldsymbol{e}_{i}}}_{(4-d)\,\,\Delta\text{'s in a total of }d\text{ chains}}\big{|}\,.\]
similarly to (5.38)-(5.40). Note that (5.56) is better than the trivial bound, which would give \(N\|M\|^{2}\). The key for exploiting this improvement is the following lemma, the proof of which is given in Appendix A.
**Lemma 5.12**.: _Using the assumptions and notations from Lemma 2.3 and the normalization \(\langle|A_{i}|^{2}\rangle=1\), we have that_
\[\left\langle\big{|}\mathcal{M}(z_{1},A_{1},...,A_{k},z_{k+1};\mathfrak{I}_{k+1 })\big{|}^{2}\right\rangle\lesssim N^{k}\left(\prod_{i\in\mathfrak{I}_{k+1}} \rho_{i}\right)^{2}\left[\left(\frac{\max_{i\in[k+1]}\big{(}\rho_{i}+\mathbf{1 }(i\notin\mathfrak{I}_{k+1})\big{)}}{N\ell}\right)^{2}\vee\frac{1}{N}\right]. \tag{5.57}\]
Applying (5.57) for \(k=k_{l}\) and \(\mathfrak{I}_{k+1}=\emptyset\) (recall (2.10), (2.15), and (5.33)), we see the bound
\[\langle|M_{k+1}|^{2}\rangle\lesssim N^{k_{l}}\left[\left(\frac{\rho}{N\eta} \right)^{2}\vee\frac{1}{N}\right]. \tag{5.58}\]
We remark that this estimate is better by the factor \(\left[\big{(}N\eta/\rho\big{)}^{-2}\lor N^{-1}\right]\ll 1\) compared to the naive norm bound \(|(M_{k_{l}+1})_{\boldsymbol{w}}|^{2}\leq\|M_{k_{l}+1}\|^{2}\lesssim N^{k_{l}}\) from Lemma 2.3 (b) employed in (5.55). Hence, fixing one constellation of \(k_{l}\)'s in (5.55), we find the average of the first line in (5.55) over all \(i,j\in[N]\) to be bounded by
\[\begin{split}& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{1/2}N^{-k/2} \frac{1}{N^{2}}\sum_{i,j}\left[\big{|}\big{(}M_{k_{1}+1}\big{)}_{\boldsymbol{ e}_{i}\boldsymbol{e}_{i}}\big{(}M_{k_{2}+1}\big{)}_{\boldsymbol{e}_{j} \boldsymbol{e}_{j}}\big{(}M_{k_{3}+1}\big{)}_{\boldsymbol{e}_{i}\boldsymbol{e }_{i}}\big{(}M_{k_{4}+1}\big{)}_{\boldsymbol{e}_{j}\boldsymbol{e}_{j}}\big{|} +...\right]\\ \lesssim& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{1/ 2}N^{-k/2}\frac{1}{N^{2}}\left[\prod_{l\in[4]}\left(\sum_{i}\big{|}\big{(}M_{k _{l}+1}\big{)}_{\boldsymbol{e}_{i}\boldsymbol{e}_{i}}\big{|}^{2}\right)^{1/2}+...\right]\\ \lesssim& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{1/ 2}N^{-k/2}\frac{1}{N^{2}}\left[\left(\prod_{l\in[4]}N^{k_{l}+1}\left[\left( \frac{\rho}{N\eta}\right)^{2}\vee\frac{1}{N}\right]\right)^{1/2}+...\right] \\ \lesssim& N^{\xi}\left[\left(\frac{\rho}{N\eta} \right)^{7/2}\vee\left(\frac{\eta}{\rho}\right)^{1/2}\frac{1}{N^{3/2}}\right] \lesssim N^{\xi}\,.\end{split} \tag{5.59}\]
To go from the first to the second line, we employed a trivial Schwarz inequality. To go to the penultimate line, we used (5.56) with \(M=M_{k_{l}+1}\). For the final estimate, we employed \((\prod_{l\in[4]}N^{k_{l}+1})^{1/2}=N^{k/2+2}\).
Next, we consider one example for \(d=4\), where all four resolvent chains are replaced by their deterministic counterpart. In this case, the analog of (5.59) reads
\[\begin{split}& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{2}N^{-2k} \frac{1}{N^{2}}\sum_{i,j}\left[\big{|}\big{(}M_{k+1}\big{)}_{\boldsymbol{e}_{ i}\boldsymbol{e}_{j}}\big{(}M_{k+1}\big{)}_{\boldsymbol{e}_{j}\boldsymbol{e}_{i}} \big{(}M_{k+1}\big{)}_{\boldsymbol{e}_{i}\boldsymbol{e}_{j}}\big{(}M_{k+1} \big{)}_{\boldsymbol{e}_{j}\boldsymbol{e}_{i}}\big{|}+...\right]\\ \lesssim& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{2} N^{-k}\frac{1}{N^{2}}\left[\sum_{i,j}\big{|}\big{(}M_{k+1}\big{)}_{ \boldsymbol{e}_{i}\boldsymbol{e}_{j}}\big{|}^{2}+...\right]\\ \lesssim& N^{\xi}\left(\frac{N\eta}{\rho}\right)^{2} N^{-k}\frac{1}{N^{2}}\left[N^{k+1}\left[\left(\frac{\rho}{N\eta}\right)^{2} \vee\frac{1}{N}\right]+...\right]\lesssim N^{\xi}\left[\frac{1}{N}\vee \left(\frac{\eta}{\rho}\right)^{2}\right]\lesssim N^{\xi}\,.\end{split}\]
To go from the first to the second line, we estimated two factors of \(M_{k+1}\) by their norm, \(\big{|}(M_{k+1})_{\boldsymbol{w}}\big{|}\lesssim N^{k/2}\). Next, to go to the third line, we employed (5.56) and Lemma 5.12. The final estimate used \(\eta/\rho\lesssim 1\).
The above examples showcase the general mechanism for the terms in Case (ii): After estimating all the \((G-M)\)-type terms with the aid of the induction hypothesis (5.51), we are left with an excess \((N\eta/\rho)^{u/2}\)-factor, for some \(u\in[4]\). Analogously to (5.41)-(5.42), this leftover factor is then controlled by _gaining from the summation_ like in (5.57). We skip the simple counting argument ensuring this gain.
The treatment of the further higher order terms and the truncation of the resolvent expansion is completely analogous to Sections 5.2.3 and 5.2.4, respectively. Therefore, by telescopic summation like
in (5.24), we find that
\[\max_{\gamma\leq\gamma(N)}\|\Psi_{k}^{\mathrm{av},(\gamma)}\|_{p}^{p}+\max_{\gamma \leq\gamma(N)}\|\widetilde{\Psi}_{k}^{\mathrm{av},(\gamma)}\|_{p}^{p}\lesssim \|\Psi_{k}^{\mathrm{av},(0)}\|_{p}^{p}+N^{\xi}\lesssim N^{\xi}\]
where in the last step we absorbed \(\|\Psi_{k}^{\mathrm{av},(0)}\|_{p}^{p}\) into \(N^{\xi}\) by our initial assumption that we have multi-resolvent local laws (5.4) for the matrix \(H^{(\mathbf{v})}=H^{(0)}\). The checked version is obtained completely analogously to Lemma 5.10.
This completes the proof of the induction step. We have thus finished the argument for the averaged case and hence the proof of Proposition 3.4.
### The case \(\mathfrak{I}_{k}\neq\emptyset\) (resp. \(\mathfrak{I}_{k+1}\neq\emptyset\))
In this section, we explain how to adjust the above argument for proving Proposition 3.4 in the case that at least one of the resolvents in the chains of interest
\[\left\langle\mathcal{G}_{1}A_{1}...\mathcal{G}_{k}A_{k}\right\rangle\qquad\text{and}\qquad\left(\mathcal{G}_{1}A_{1}...\mathcal{G}_{k}A_{k}\mathcal{G}_{k+1}\right)_{\boldsymbol{x}\boldsymbol{y}}\]
is an imaginary part, i.e. \(\mathcal{G}_{i}=\operatorname{Im}G_{i}\) for at least one index \(i\in[k]\) (resp. \(i\in[k+1]\)). Recall the local laws for the average and isotropic chain from (3.13) and (3.14), respectively. Compared to the case of no imaginary parts, handled in the previous Sections 5.2-5.3, there are now two changes: First, the bound contains the product \(\prod_{i\in\mathfrak{I}}\rho_{i}\) (instead of one). Second, the smallness factor \((N\eta/\rho)^{-1/2}\) from before is now replaced by \((N\ell)^{-1/2}\).
For adjusting the first change, the simple but key insight is that, when applying the resolvent expansion from Lemma 5.3 to both \(G\) and \(G^{*}\) in \(\operatorname{Im}G=\frac{1}{2\mathrm{i}}(G-G^{*})\), we can always "restore" exactly one \(\operatorname{Im}G\) on the rhs. More precisely, taking (5.19b) for concreteness and using \(\Delta=\Delta^{*}\), we have that
\[\operatorname{Im}G=\frac{1}{2\mathrm{i}}\big{[}G-G^{*}\big{]} =\frac{1}{2\mathrm{i}}\bigg{[}\left(\widetilde{G}-N^{-1/2} \widetilde{G}\Delta\widetilde{G}+N^{-1}\widetilde{G}\Delta\widetilde{G}\Delta \widetilde{G}+...\right)\] \[\qquad\qquad-\left(\widetilde{G}^{*}-N^{-1/2}\widetilde{G}^{*} \Delta\widetilde{G}^{*}+N^{-1}\widetilde{G}^{*}\Delta\widetilde{G}^{*}\Delta \widetilde{G}^{*}+...\right)\bigg{]}\] \[=\operatorname{Im}\widetilde{G}-N^{-1/2}\big{(}\operatorname{Im} \widetilde{G}\Delta\widetilde{G}+\widetilde{G}^{*}\Delta\operatorname{Im} \widetilde{G}\big{)}\] \[\qquad\qquad+N^{-1}\big{(}\operatorname{Im}\widetilde{G}\Delta \widetilde{G}\Delta\widetilde{G}+\widetilde{G}^{*}\Delta\operatorname{Im} \widetilde{G}\Delta\widetilde{G}+\widetilde{G}^{*}\Delta\widetilde{G}^{*} \Delta\operatorname{Im}\widetilde{G}\big{)}+...\]
In this way, the imaginary parts in the original chain are "preserved" by the resolvent expansion. Recall that \(|\operatorname{Im}\widetilde{G}_{\mathbf{uv}}(z)|\prec\rho(z)\) (as a consequence of (5.5) for \(N|\operatorname{Im}z|\rho(z)\gg 1\); recall \(N\hat{\ell}\gg 1\)), which improves (5.31). In particular, using Lemma 2.3, we find that the factor \(\big{(}\prod_{i\in\mathfrak{I}}\rho_{i}\big{)}^{-d}\) stemming from the correct normalisation of the analog of \(\Psi_{k}^{\mathrm{av}/\mathrm{iso}}\) in (5.2)-(5.3) and thus appearing in the expression analogous to (5.26) is naturally compensated by a product of \(\rho\)'s stemming from the destroyed chains.
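The algebraic regrouping behind the second-order term of the above expansion can be checked directly; the following minimal numpy sketch (ours; the test matrix and spectral parameter are arbitrary choices) verifies that \(\frac{1}{2\mathrm{i}}\big(\widetilde{G}\Delta\widetilde{G}-\widetilde{G}^{*}\Delta\widetilde{G}^{*}\big)=\operatorname{Im}\widetilde{G}\Delta\widetilde{G}+\widetilde{G}^{*}\Delta\operatorname{Im}\widetilde{G}\), i.e. that exactly one \(\operatorname{Im}\) is restored.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (H + H.conj().T) / np.sqrt(2 * N)               # Hermitian test matrix
Gt = np.linalg.inv(H - (0.3 + 0.05j) * np.eye(N))   # plays the role of G-tilde
ImGt = (Gt - Gt.conj().T) / 2j

a, b = 3, 7
Delta = np.zeros((N, N), dtype=complex)
Delta[a, b] = Delta[b, a] = 1.0                     # self-adjoint single-entry direction

lhs = (Gt @ Delta @ Gt - Gt.conj().T @ Delta @ Gt.conj().T) / 2j
rhs = ImGt @ Delta @ Gt + Gt.conj().T @ Delta @ ImGt
print(np.max(np.abs(lhs - rhs)))                    # ~1e-15: exactly one Im G-tilde is restored
```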
For adjusting to the second change, it suffices to replace every \(\eta/\rho\) appearing in Sections 5.2-5.3 by \(\ell\) and to realize that the complement of the interesting regime, i.e. the regime \(\ell\geq 1\), is already covered by Proposition 3.1.
## Appendix A Additional technical results
In this section we prove several additional technical results which are used in the main sections.
### Bounds on the deterministic approximations
Proofs of Lemma 2.3 and the claim in Remark 2.6 (ii).: We will first prove the following stronger bound in Lemma A.1, from which we immediately deduce Lemma 2.3 and the claim in Remark 2.6 (ii). The proof of the following lemma is given at the end of the current section.
**Lemma A.1**.: _Fix \(k\geq 1\). Consider spectral parameters \(z_{1},...,z_{k}\in\mathbb{C}\setminus\mathbb{R}\) and traceless matrices \(A_{1},...,A_{k}\in\mathbb{C}^{N\times N}\), and define for every \(j\in[k]\)_
\[\eta_{j}:=|\operatorname{Im}z_{j}|,\qquad\rho_{j}:=\frac{1}{\pi}|\operatorname {Im}m_{\mathrm{sc}}(z_{j})|,\qquad\ell:=\min_{j}\big{[}\eta_{j}(\rho_{j}+\mathbf{1} (j\notin\mathfrak{I}_{k}))\big{]}\,.\]
_Then, for every \(1\leq s\leq\lfloor k/2\rfloor\), it holds that_
\[\left|\sum_{\begin{subarray}{c}\pi\in\mathrm{NC}([k]):\\ |\pi|=k+1-s\end{subarray}}\langle\mathrm{pTr}_{K(\pi)}(A_{1},\ldots,A_{k-1})A_{k}\rangle\prod_{S\in\pi}m_{\circ}^{(\mathfrak{I}_{k})}[S]\right|\lesssim\left(\prod_{j\in\mathfrak{I}_{k}}\rho_{j}\right)\frac{1}{\ell^{s-1}}\prod_{\begin{subarray}{c}S\in K(\pi)\\ |S|\geq 2\end{subarray}}\prod_{j\in S}\left\langle|A_{j}|^{|S|}\right\rangle^{\frac{1}{|S|}}\,.\] (A.1)
_with \(m_{\circ}^{(\mathfrak{I}_{k})}[S]\) being defined above (2.16). For \(s>\lfloor k/2\rfloor\) the lhs. of (A.1) equals zero._
For the proof of Lemma 2.3 (a) and the claim in Remark 2.6 (ii) concerning (2.20) we use that \(\langle|A|^{p}\rangle^{1/p}\leq N^{\frac{p-2}{2p}}\langle|A|^{2}\rangle^{1/2}\) for any \(p\geq 2\), and hence
\[\text{rhs. of (A.1)}\lesssim N^{k/2-1}\left(\prod_{j\in\mathfrak{I}_{k}}\rho_{j}\right)\left(\prod_{j=1}^{k}\langle|A_{j}|^{2}\rangle^{1/2}\right)\frac{1}{(N\ell)^{s-1}}\,.\]
This shows that, in particular, all terms with \(s>1\) in (A.1) are explicitly smaller than the error term in (2.20), where we used that \(N\ell\gg 1\). The \(s=1\) term exactly constitutes the deterministic approximation in (2.23), i.e. the sum in (A.1) contains exactly one term
\[\sum_{\begin{subarray}{c}\pi\in\mathrm{NC}([k]):\\ |\pi|=k\end{subarray}}\langle\mathrm{pTr}_{K(\pi)}(A_{1},\ldots,A_{k-1})A_{k}\rangle\prod_{S\in\pi}m_{\circ}^{(\mathfrak{I}_{k})}[S]=\bigg{(}\prod_{j\in\mathfrak{I}_{k}}\operatorname{Im}m_{j}\bigg{)}\bigg{(}\prod_{j\not\in\mathfrak{I}_{k}}m_{j}\bigg{)}\langle A_{1}...A_{k}\rangle\,.\]
Here we used that \(|\pi|=k\) implies that the Kreweras complement consists of the full set, \(K(\pi)=[k]\).
Finally, for the proof of Lemma 2.3 (b) and the claim in Remark 2.6 (ii) concerning (2.21) (i.e. the corresponding isotropic bounds) we argue completely analogously to Section 4.2.
It remains to prove Lemma A.1.
Proof of Lemma a.1.: Fix an arbitrary non-crossing partition \(\pi\in\mathrm{NC}([k])\) consisting of \(|\pi|=k+1-s\) blocks.
First, note that, in order to get a non-vanishing partial trace
\[\langle\mathrm{pTr}_{K(\pi)}(A_{1},\ldots,A_{k-1})A_{k}\rangle=\prod_{S\in K (\pi)}\left\langle\prod_{j\in S}A_{j}\right\rangle\]
the minimal size of a block \(S\in K(\pi)\) is two (using that the \(A_{i}\)'s are traceless). Therefore, by application of Hölder's inequality,
\[\big{|}\langle\mathrm{pTr}_{K(\pi)}(A_{1},\ldots,A_{k-1})A_{k}\rangle\big{|} \leq\prod_{\begin{subarray}{c}S\in K(\pi)\\ |S|\geq 2\end{subarray}}\prod_{j\in S}\left\langle|A_{j}|^{|S|}\right\rangle ^{\frac{1}{|S|}}\,.\] (A.2)
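As a sanity check (ours, not needed for the argument), the per-block Hölder bound entering (A.2) can be tested numerically: for the normalized trace \(\langle\cdot\rangle=\mathrm{Tr}(\cdot)/N\) one has \(|\langle A_{1}\cdots A_{m}\rangle|\leq\prod_{j}\langle|A_{j}|^{m}\rangle^{1/m}\).

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 40, 3
A = [rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)) for _ in range(m)]
A = [X - np.trace(X) / N * np.eye(N) for X in A]    # traceless, as in the lemma

ntr = lambda X: np.trace(X) / N                     # normalized trace <.>
prod = np.eye(N, dtype=complex)
for X in A:
    prod = prod @ X
lhs = abs(ntr(prod))

rhs = 1.0
for X in A:
    s = np.linalg.svd(X, compute_uv=False)          # <|X|^m> = (1/N) * sum_i s_i^m
    rhs *= (np.sum(s ** m) / N) ** (1.0 / m)
print(lhs <= rhs, lhs, rhs)
```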
In order to estimate \(\prod_{S\in\pi}m_{\circ}^{(\mathfrak{I}_{k})}[S]\), we recall the Möbius inversion formula [17, Lemma 2.16]
\[m_{\circ}^{(\mathfrak{I}_{k})}[S]=m^{(\mathfrak{I}_{k})}[S]+\sum_{\begin{subarray}{c}\pi\in\mathrm{NC}(S)\\ |\pi|\geq 2\end{subarray}}(-1)^{|\pi|-1}\left(\prod_{T\in K(\pi)}C_{|T|-1}\right)\prod_{U\in\pi}m^{(\mathfrak{I}_{k})}[U]\] (A.3)
where \(C_{n}\) is the \(n^{\text{th}}\) Catalan number. Hence, it suffices to bound the iterated divided differences \(m^{(\mathfrak{I}_{k})}[S]\) for a subset \(S\subset[k]\) as
\[\left|m^{(\mathfrak{I}_{k})}[S]\right|\lesssim\frac{\prod_{i\in\mathfrak{I}_{k}\cap S}\rho_{i}}{\ell^{|S|-1}}\] (A.4)
which is a direct consequence of the integral representation (2.16). Indeed, combining (A.3) with (A.4) and using that the sum in (A.3) is restricted to partitions of \(S\) with at least two blocks, we obtain
\[\left|\prod_{S\in\pi}m_{\circ}^{(\mathfrak{I}_{k})}[S]\right|\lesssim\left(\prod_{i\in\mathfrak{I}_{k}}\rho_{i}\right)\frac{1}{\ell^{s-1}}\] (A.5)
where we additionally used that the original non-crossing partition \(\pi\in\mathrm{NC}([k])\) consists of exactly \(k+1-s\) blocks. Combining (A.5) with (A.2), we conclude the proof of (A.1).
For \(s>\lfloor k/2\rfloor\), we note that the Kreweras complement \(K(\pi)\) necessarily contains singletons, and hence the lhs. of (A.1) vanishes since \(\langle A_{i}\rangle=0\).
We conclude this section by giving the proof of Lemma 5.12.
Proof of Lemma 5.12.: The principal idea of the proof is very similar to the previous ones given in this section, hence we provide only a brief argument.
Recalling (2.15)-(2.16), we have that
(A.6)
Next, analogously to Lemma A.1 above, we decompose the summation over all partitions \(\pi\) into groups, where \(|\pi|=k+2-s\) with \(1\leq s\leq\lceil(k+1)/2\rceil\) is fixed (note that \(\lfloor\cdot\rfloor\) got replaced by \(\lceil\cdot\rceil\) due to the presence of a non-traceless identity matrix). Moreover, for fixed \(s\) we distinguish two cases in (A.6) (recall (2.9)): For Case (i), we assume that the unique block \(\mathfrak{B}(k+1)\in K(\pi)\) containing \(k+1\) contains no other elements, i.e. \(\mathfrak{B}(k+1)\setminus\{k+1\}=\emptyset\). For Case (ii), we assume that \(\mathfrak{B}(k+1)\setminus\{k+1\}\neq\emptyset\).
Case (i). First, we note that necessarily \(s\geq 2\) in this case. Then, we have that
\[\left\langle\left|\mathrm{pTr}_{K(\pi)}(A_{1},\ldots,A_{k})\right|^{2} \right\rangle\leq\left(\prod_{\begin{subarray}{c}S\in K(\pi)\setminus \mathfrak{B}(k+1)\\ |S|\geq 2\end{subarray}}\prod_{j\in S}\left\langle\left|A_{j}\right|^{|S|} \right\rangle^{\frac{1}{|S|}}\right)^{2}\leq\left(\frac{N^{k/2}}{N^{s-1}} \right)^{2}\,,\]
analogously to (A.2). Since in Case (i), \(z_{1}\) and \(z_{k+1}\) are always together in one block \(S\in\pi\) with \(|\pi|=k+2-s\), we obtain
\[\left|\prod_{S\in\pi}m_{\circ}^{(\mathfrak{I}_{k+1})}[S]\right|^{2}\lesssim \left[\frac{\left(\prod_{i\in\mathfrak{I}_{k+1}}\rho_{i}\right)\wedge\max_{i \in[k+1]}\rho_{i}}{\ell^{s-1}}\right]^{2}\]
analogously to (A.5) by means of (A.3) and the integral representation (2.16). The additional \(\wedge\max_{i\in[k+1]}\rho_{i}\), which is effective only for \(\mathfrak{I}_{k+1}=\emptyset\), comes from the estimate
\[\int_{\mathbb{R}}\frac{\rho(x)}{\left|x-z_{1}\right|\left|x-z_{k+1}\right|} \mathrm{d}x\lesssim\frac{\rho_{1}\vee\rho_{k+1}}{\ell}\,,\]
easily obtained by a Schwarz inequality.
Case (ii). In this case, the above estimates of the two factors in (A.6) modify to
\[\left\langle\left|\mathrm{pTr}_{K(\pi)}(A_{1},\ldots,A_{k})\right|^{2}\right\rangle\leq\left(\prod_{j\in S_{1}\setminus\{k+1\}}\left\langle\left|A_{j}\right|^{2(|S_{1}|-1)}\right\rangle^{\frac{1}{2(|S_{1}|-1)}}\,\prod_{i=2}^{s}\prod_{j\in S_{i}}\left\langle\left|A_{j}\right|^{|S_{i}|}\right\rangle^{\frac{1}{|S_{i}|}}\right)^{2}\,,\]
assuming that \(S_{1}=\mathfrak{B}(k+1)\), and
\[\left|\prod_{S\in\pi}m_{\circ}^{(\mathfrak{I}_{k+1})}[S]\right|^{2}\lesssim \left[\frac{\prod_{i\in\mathfrak{I}_{k+1}}\rho_{i}}{\ell^{s-1}}\right]^{2}\,.\]
Putting the two cases together and using \(\langle|A|^{p}\rangle^{1/p}\leq N^{\frac{p-2}{2p}}\langle|A|^{2}\rangle^{1/2}\) for any \(p\geq 2\) together with \(N\ell>1\) and the normalization \(\langle|A_{j}|^{2}\rangle=1\), we find that
\[\left\langle\left|\mathcal{M}(z_{1},A_{1},\ldots,A_{k},z_{k+1};\mathfrak{I}_{ k+1})\right|^{2}\right\rangle\lesssim N^{k}\,\left(\prod_{i\in\mathfrak{I}_{k+1}} \rho_{i}\right)^{2}\left[\left(\frac{\max_{i\in[k+1]}\left(\rho_{i}+\mathbf{1 }(i\notin\mathfrak{I}_{k+1})\right)}{N\ell}\right)^{2}+\frac{1}{N}\right]\,.\]
### Proof of the global law in Proposition 3.1
We only discuss the proof of the average case (3.1); the isotropic case (3.2) is analogous and hence omitted. Set \(d:=\min_{i}\operatorname{dist}(z_{i},[-2,2])\) and recall that \(d\geq\delta\gtrsim 1\).
The case of no \(\operatorname{Im}G\)'s, i.e. \(\mathfrak{I}_{k}=\emptyset\), has already been dealt with in [19, Appendix A] and yielded the bound (3.1) with a factor \(d^{-(k+1)}\) instead of \(\sqrt{\max_{i}\rho_{i}/\ell}\). In the \(d\gtrsim 1\) regime, this bound is in fact stronger, \(d^{-(k+1)}\lesssim d^{-1}\lesssim\sqrt{\max_{i}\rho_{i}/\ell}\), since \(|\rho(z)|\sim|\operatorname{Im}z|/\operatorname{dist}(z,[-2,2])^{2}\) and \(\ell\sim\min_{i}|\operatorname{Im}z_{i}|\).
In case of \(\mathfrak{I}_{k}\neq\emptyset\) we need to gain from the fact that the original chain contained \(\operatorname{Im}G\)'s. The principal idea is analogous to [18, Appendix B] and [19, Appendix A], as we employ a cumulant expansion and argue by induction on the length \(k\) of the initial chain. However, in order to gain from the imaginary parts, the key observation is that within the cumulant expansion, the total number of \(\operatorname{Im}\)'s is preserved, as becomes apparent from the formula
\[\partial_{ab}\operatorname{Im}G=G\Delta^{ab}\operatorname{Im}G+\operatorname{ Im}G\Delta^{ab}G^{*}\]
for the derivative of an \(\operatorname{Im}G\) factor. Here, \(\partial_{ab}\) denotes the partial derivative w.r.t. the matrix entry \(w_{ab}\) of the Wigner matrix \(W\) and \(\Delta^{ab}\) is a matrix consisting of all zeroes except for the \((a,b)\)-entry which is equal to one. Using the norm bounds \(\|\operatorname{Im}G_{j}\|\leq|\operatorname{Im}z_{j}|/\operatorname{dist}(z_{j},[-2,2])^{2}\sim\rho_{j}\) and \(\|G_{j}\|\leq 1/d\) by spectral decomposition, we obtain (3.1) but with a factor \(d^{k+1-|\mathfrak{I}_{k}|}\) instead of \(\sqrt{\ell}\), analogously to [19, Eq. (A.2)]. Finally, since \(\sqrt{\ell}\lesssim d\lesssim d^{k+1-|\mathfrak{I}_{k}|}\), this concludes the proof.
### Complex moment matching
In order to conduct the third step of our proof, the Green function comparison (GFT) of Proposition 3.4, we need to guarantee the moment matching condition (3.12) of the single entry distributions. For real random variables (or complex ones with independent real and imaginary parts), the argument ensuring this (and even an approximately matching fourth moment) is standard (see, e.g., [31, Lemma 16.2]) and based on an explicit construction of a distribution supported on three points in \(\mathbb{R}\). However, for general complex random variables, this construction is not sufficient; we now present its complex variant.
Let \(Z\) be a complex random variable and denote its moments by
\[m_{i,j}=m_{i,j}(Z):=\mathbb{E}\big{[}\overline{Z}^{i}Z^{j}\big{]}\qquad\text{ for}\qquad i,j\in\mathbb{N}_{0}\,,\] (A.7)
and call \(i+j\) the _order_ of \(m_{i,j}\). Clearly \(m_{0,0}=1\) and \(m_{i,j}=\overline{m}_{j,i}\), so we can focus on \(m_{i,j}\) with \(i\leq j\).
**Lemma A.2**.: _Let \(m_{0,2},m_{0,3},m_{1,2}\in\mathbb{C}\) with \(|m_{0,2}|\leq 1\). Then there exists a complex random variable \(Z\) supported on at most eleven points \(z_{1},...,z_{11}\in\mathbb{C}\), such that its moments (A.7) are given by_
\[m_{0,1}(Z)=0\,,\quad m_{1,1}(Z)=1\,,\quad m_{0,2}(Z)=m_{0,2}\,,\quad m_{0,3}(Z )=m_{0,3}\,,\quad\text{and}\quad m_{1,2}(Z)=m_{1,2}\,.\] (A.8)
**Remark A.3**.: _A generalized version of this problem (constructing an atomic measure with arbitrary number of prescribed moments), known as the truncated complex \(K\)-moment problem, has been solved by Curto and Fialkow in [25]. To keep our result self-contained, we give a simple independent proof for the special case of three moments that we need here._
Having Lemma A.2 at hand, one can easily see that there exists a random variable that has the prescribed first three moments and has an independent Gaussian component of given variance \(\gamma>0\). More precisely, given \(m_{0,1}=0\), \(m_{1,1}=1\), \(m_{0,2}\), \(m_{0,3}\), and \(m_{1,2}\) with \(|m_{0,2}|\leq 1\) as the set of moments of \(\chi_{\mathrm{od}}\), we look for a representation of \(Z\) in the form
\[Z:=(1-\gamma)^{1/2}Z^{\prime}+\gamma^{1/2}\xi_{G}\quad\text{with}\quad\gamma \in(0,1)\quad\text{fixed}\]
with some random variable \(Z^{\prime}\) to be constructed, where \(\xi_{G}\) is a centered complex Gaussian random variable having second moments \(m_{0,2}(\xi_{G})=m_{0,2}\) and \(m_{1,1}(\xi_{G})=1\). The moments of \(Z^{\prime}\) thus satisfy the relations
\[m_{i,j}=(1-\gamma)^{(i+j)/2}m_{i,j}(Z^{\prime})+\gamma^{(i+j)/2}m_{i,j}(\xi_{G })\quad\text{with}\quad 1\leq i+j\leq 3.\] (A.9)
In particular, \(|m_{0,2}(Z^{\prime})|=|m_{0,2}|\leq 1\), so the moment sequence \(m_{i,j}(Z^{\prime})\) from (A.9) satisfies the only nontrivial condition of Lemma A.2. Therefore, by Lemma A.2, we can construct the random variable \(Z^{\prime}\). Finally, we remark that all random variables involved have finite moments of arbitrarily high order (cf. Assumption 2.1). This moment matching argument shows how to choose the distribution of the initial condition \(W_{0}\) of the Ornstein-Uhlenbeck flow (3.3) so that after time \(T=\gamma\) it matches the distribution of the original matrix \(W\) up to three moments.
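The mechanism behind (A.9) — all cross terms of order at most three vanish because the two variables are centered and independent — can be verified exactly with small atomic laws; a short sketch (ours, with arbitrarily chosen atoms):

```python
import itertools
import numpy as np

def moments(atoms, probs):
    """Return {(i, j): E[conj(V)^i V^j]} for 1 <= i+j <= 3 of an atomic complex law."""
    return {(i, j): sum(p * np.conj(v) ** i * v ** j for v, p in zip(atoms, probs))
            for i in range(4) for j in range(4) if 1 <= i + j <= 3}

# two centered atomic laws (values chosen arbitrarily, then re-centered)
ax = np.array([1.0 + 0.5j, -0.7j, -1.2, 0.4 + 0.4j]); px = np.array([0.2, 0.3, 0.1, 0.4])
ay = np.array([0.9, -0.3 + 1.1j, -0.8 - 0.2j]);       py = np.array([0.5, 0.25, 0.25])
ax -= np.dot(px, ax); ay -= np.dot(py, ay)             # enforce E[X] = E[Y] = 0

g = 0.37
# law of Z = sqrt(1-g) X + sqrt(g) Y: atoms over all pairs, product probabilities
az = [np.sqrt(1 - g) * x + np.sqrt(g) * y for x, y in itertools.product(ax, ay)]
pz = [p * q for p, q in itertools.product(px, py)]

mX, mY, mZ = moments(ax, px), moments(ay, py), moments(az, pz)
for (i, j), val in mZ.items():
    pred = (1 - g) ** ((i + j) / 2) * mX[(i, j)] + g ** ((i + j) / 2) * mY[(i, j)]
    assert abs(val - pred) < 1e-12
print("relation (A.9) verified exactly for all moments of order <= 3")
```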
Proof of Lemma A.2.: We only outline the construction of the points \(z_{1},...,z_{11}\in\mathbb{C}\); the precise computations are a simple exercise in calculus and linear algebra and hence omitted.
We set \(z_{11}=0\) to be the origin. The remaining ten points are then placed on five lines through the origin, carrying two points each, i.e. we put
\[z_{j}=r_{j}\mathrm{e}^{\mathrm{i}\varphi_{j}}\quad\text{and}\quad z_{11-j}= \hat{z}_{j}:=-\hat{r}_{j}\mathrm{e}^{\mathrm{i}\varphi_{j}}\quad\text{with} \quad r_{j},\hat{r}_{j}\geq 0\,,\varphi_{j}\in[0,2\pi)\quad\text{for}\quad j\in[5]\,.\]
For simplicity, we can even prescribe four of the five angular variables in such a way that the corresponding points lie on the real and imaginary axis and the two diagonals, i.e. set \(\varphi_{j}:=j\pi/4\) for \(j\in[4]\).
We then take the law of \(Z\) to be of the form
\[\sum_{j\in[5]}\big{(}p_{j}\delta_{z_{j}}+\hat{p}_{j}\delta_{\hat{z}_{j}}\big{)} +\bigg{(}1-\sum_{j\in[5]}(p_{j}+\hat{p}_{j})\bigg{)}\delta_{0}\]
for weights \(p_{j},\hat{p}_{j}\geq 0\) satisfying \(\sum_{j\in[5]}(p_{j}+\hat{p}_{j})\leq 1\). As mentioned above, it is a simple exercise to show that the remaining parameters \(r_{j},\hat{r}_{j},p_{j},\hat{p}_{j}\geq 0\) for \(j\in[5]\) and \(\varphi_{5}\in[0,2\pi)\) can be chosen in such a way as to accommodate (A.8). More precisely, taking \(A_{j}:=p_{j}r_{j}=\hat{p}_{j}\hat{r}_{j}\geq 0\) for \(j\in[5]\) (this ensures \(m_{0,1}(Z)=0\)), \(r_{5}=\hat{r}_{5}\), and using our choices of \(\varphi_{j}=j\pi/4\) for \(j\in[4]\), the two complex conditions \(m_{0,3}(Z)=m_{0,3}\) and \(m_{1,2}(Z)=m_{1,2}\) turn into four real linear equations for the variables \(C_{j}:=B_{j}(r_{j}-\hat{r}_{j})\in\mathbb{R}\) for \(j\in[4]\) with \(B_{j}:=A_{j}(r_{j}+\hat{r}_{j})\geq 0\). The determinant of this linear system can easily be seen to be non-vanishing, and it thus determines the difference variables \(r_{j}-\hat{r}_{j}\in\mathbb{R}\) for \(j\in[4]\). Finally, the independent variables \(\varphi_{5}\in[0,2\pi)\) and \(B_{j}:=A_{j}(r_{j}+\hat{r}_{j})\geq 0\) for \(j\in[5]\) can easily be chosen to satisfy \(m_{1,1}(Z)=1\) and \(m_{0,2}(Z)=m_{0,2}\).
### Additional proofs for Section 4
Proofs of Lemmas 4.1 and 4.8.: The claim of Lemma 4.1 follows by multi-linearity from Lemma 4.8.
For the proof of Lemma 4.8, we will use a _tensorization argument_ (or _meta argument_) similar to [24] and [21, Proof of Lemma D.1]. Throughout this proof the size \(N\) of \(W\) is fixed. For \(d\in\mathbb{N}\) consider the \((Nd)\times(Nd)\) Wigner matrix \(\mathbf{W}^{(d)}\), i.e. the entries of \(\mathbf{W}^{(d)}\) have variance \(1/(Nd)\). Let \(\mathbf{W}^{(d)}_{t}\) be the Ornstein-Uhlenbeck flow as in (3.3) with initial condition \(\mathbf{W}^{(d)}_{0}=\mathbf{W}^{(d)}\), and define its resolvent \(\mathbf{G}^{(d)}_{i,t}:=(\mathbf{W}^{(d)}_{t}-z_{i,t})^{-1}\), then the deterministic approximation of the resolvent is still given by \(m_{1}\), the Stieltjes transform of the semicircular law.
We now explain that also the deterministic approximation of products of resolvents and deterministic matrices is unchanged. For \(1\leq i\leq k\), define \(\mathbf{A}^{(d)}_{i}:=A_{i}\otimes I_{d}\), with \(I_{d}\) denoting the \(d\)-dimensional identity, then for \(\mathbf{M}^{(d)}_{[1,k],t}\) defined as in (2.10) with \(\mathbf{M}^{(d)}_{i,t}\) and \(\mathbf{A}^{(d)}_{i}\) we have
\[\mathbf{M}^{(d)}_{[1,k],t}:=\mathbf{M}^{(d)}(z_{1,t},\mathbf{A}^{(d)}_{1},\dots,\mathbf{A}^{( d)}_{k-1},z_{k,t})=M(z_{1,t},A_{1},\dots,A_{k-1},z_{k,t})\otimes I_{d}.\] (A.10)
Fix \(0<s<t\). Then, integrating (4.5) for the bold-faced resolvents and deterministic matrices in time from \(s\) to \(t\) and taking the expectation, we obtain
\[\langle\mathbf{M}^{(d)}_{[1,k],t}\mathbf{A}_{k}\rangle-\langle\mathbf{M}^{(d)}_{[1,k],s}\mathbf{A}_{k}\rangle\] \[= -\mathbb{E}\langle(\mathbf{G}_{[1,k],t}-\mathbf{M}^{(d)}_{[1,k],t})\mathbf{A}_{k}\rangle+\mathbb{E}\langle(\mathbf{G}_{[1,k],s}-\mathbf{M}^{(d)}_{[1,k],s})\mathbf{A}_{k}\rangle+\frac{k}{2}\int_{s}^{t}\mathbb{E}\langle\mathbf{G}_{[1,k],r}\mathbf{A}_{k}\rangle\,\mathrm{d}r\] \[+\sum_{\begin{subarray}{c}i,j=1\\ i<j\end{subarray}}^{k}\int_{s}^{t}\mathbb{E}\langle\mathbf{G}_{[i,j],r}\rangle\langle\mathbf{G}_{[j,i],r}\rangle\,\mathrm{d}r+\sum_{i=1}^{k}\int_{s}^{t}\mathbb{E}\langle\mathbf{G}_{i,r}-m_{i,r}\rangle\langle\mathbf{G}^{(i)}_{[1,k],r}\mathbf{A}_{k}\rangle\,\mathrm{d}r+\frac{\sigma}{Nd}\sum_{\begin{subarray}{c}i,j=1\\ i\leq j\end{subarray}}^{k}\int_{s}^{t}\mathbb{E}\langle\mathbf{G}_{[i,j],r}\mathbf{G}^{\dagger}_{[j,i],r}\rangle\,\mathrm{d}r\,.\] (A.11)
Using the global law in Proposition 3.1 and (A.10), and taking the limit \(d\to\infty\), this implies that for \(|\mathrm{Im}\,z_{i}|\gtrsim 1\) we have
\[\langle M_{[1,k],t}A_{k}\rangle-\langle M_{[1,k],s}A_{k}\rangle=\frac{k}{2} \int_{s}^{t}\langle M_{[1,k],r}A_{k}\rangle\,\mathrm{d}r+\sum_{ \begin{subarray}{c}i,j=1\\ i<j\end{subarray}}^{k-1}\int_{s}^{t}\langle M_{[i,j],r}\rangle\langle M_{[j,i], r}\rangle\,\mathrm{d}r.\] (A.12)
Finally, dividing (A.12) by \(t-s\) and taking the limit \(s\to t\), we conclude the proof of Lemma 4.8.
Proof of Lemma 4.4.: The proof of this lemma is very similar to [19, Lemma 3.3]. Hence we give the argument only for the case where \(k\) is even; if \(k\) is odd, the proof is completely analogous. Moreover, for notational simplicity we henceforth drop the time dependence and the precise indices of \(G\)'s and \(A\)'s, i.e. write \(\operatorname{Im}G\equiv\operatorname{Im}G_{i}\), \(A\equiv A_{j}\), \(\rho\equiv\rho_{i}\) and so on. Then, by application of the general bound
\[|\langle B_{1}B_{2}B_{3}B_{4}\rangle|\leq N\prod_{i=1}^{4}\langle|B_{i}|^{2} \rangle^{1/2}\quad\text{for all}\quad B_{i}\in\mathbb{C}^{N\times N}\]
applied to \(B_{i}=\sqrt{\operatorname{Im}G}\,A(\operatorname{Im}GA)^{k/2-1}\sqrt{\operatorname{Im}G}\) and with the aid of (2.17), we find that
\[\Phi_{2k} =\frac{\sqrt{N\hat{\ell}}}{N^{k-1}\,\rho^{2k}\,\langle|A|^{2} \rangle^{k}}\big{|}\big{\langle}(\operatorname{Im}GA)^{2k}-\widehat{M}_{[ \hat{1},\widehat{2k}]}A\big{\rangle}\big{|}\] \[\lesssim\sqrt{N\hat{\ell}}+\frac{\sqrt{N\hat{\ell}}}{N^{k-1}\, \rho^{2k}\,\langle|A|^{2}\rangle^{k}}N\big{|}\big{\langle}(\operatorname{Im} GA)^{k}\big{\rangle}\big{|}^{2}\] \[\lesssim\sqrt{N\hat{\ell}}+\frac{\sqrt{N\hat{\ell}}}{N^{k-1}\, \rho^{2k}\,\langle|A|^{2}\rangle^{k}}N\left[N^{k/2-1}\rho^{k}\langle|A|^{2} \rangle^{k/2}\left(1+\frac{\phi_{k}}{\sqrt{N\hat{\ell}}}\right)\right]^{2}\] \[\lesssim\sqrt{N\hat{\ell}}+\frac{\phi_{k}^{2}}{\sqrt{N\hat{\ell} }}\,.\]
We remark that, in order to bound \(\big{\langle}(\operatorname{Im}GA)^{k}\big{\rangle}\) in terms of \(\phi_{k}\), we added and subtracted the corresponding \(M\)-term and used the assumption that \(\Phi_{k}\prec\phi_{k}\).
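The general four-matrix bound used at the beginning of this proof can be tested directly; a quick numerical check (ours, with random test matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 60
B = [rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)) for _ in range(4)]

ntr = lambda X: np.trace(X) / N                     # normalized trace <.>
lhs = abs(ntr(B[0] @ B[1] @ B[2] @ B[3]))
rhs = N * np.prod([ntr(X.conj().T @ X).real ** 0.5 for X in B])   # N * prod <|B_i|^2>^{1/2}
print(lhs <= rhs, lhs, rhs)
```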
|
2309.04796 | Pick interpolation and invariant distances | In this article, we study the role of invariant distances in the Pick
interpolation problem. Given a Carath\'eodory hyperbolic domain $\Omega$ in
some $\mathbb{C}^m$, we have introduced a notion of an invariant object that
gives a necessary and sufficient condition for any Pick interpolation problem
to be solvable on $\Omega$. This invariant object plays the same role as that
of Carath\'eodory pseudodistance in the two-point Pick interpolation problem.
Furthermore, a full description of the invariant object is given when $\Omega$
is the open unit disc. | Anindya Biswas | 2023-09-09T13:42:28Z | http://arxiv.org/abs/2309.04796v1 | # Pick interpolation and invariant distances
###### Abstract.
In this article, we study the role of invariant distances in the Pick interpolation problem. Given a Caratheodory hyperbolic domain \(\Omega\) in some \(\mathbb{C}^{m}\), we have introduced a notion of an invariant object that gives a necessary and sufficient condition for any Pick interpolation problem to be solvable on \(\Omega\). This invariant object plays the same role as that of Caratheodory pseudodistance in the two-point Pick interpolation problem. Furthermore, a full description of the invariant object is given when \(\Omega\) is the open unit disc.
Key words and phrases:Invariant pseudodistance, Caratheodory pseudodistance, Pick interpolation 2010 Mathematics Subject Classification: Primary 32F45, Secondary 32E30
## 1. Introduction
We start with a Caratheodory hyperbolic domain \(\Omega\) in some \(\mathbb{C}^{m}\), \(z_{1},z_{2}\in\Omega\) and \(w_{1},w_{2}\in\mathbb{D}\). It is well known that there is an \(f\in H_{1}^{\infty}(\Omega)\) satisfying \(f(z_{1})=w_{1}\) and \(f(z_{2})=w_{2}\) if and only if
\[m(w_{1},w_{2})=\Big{|}\frac{w_{1}-w_{2}}{1-\overline{w_{1}}w_{2}}\Big{|}\leq c _{\Omega}^{*}(z_{1},z_{2})\]
where \(c_{\Omega}^{*}(z_{1},z_{2})\) stands for the Caratheodory pseudodistance between \(z_{1}\) and \(z_{2}\). This condition does not have an analogous form for the case where one is interested in three or more points. In 1916, Pick gave a necessary and sufficient condition for \(n\)-point interpolation problem to be solvable when \(\Omega=\mathbb{D}\) ([13]). In 1919, Nevanlinna gave an independent proof of the same fact (see [12]). Their result states the following: Suppose \(z_{1},\dots,z_{n}\in\mathbb{D}\) are distinct points and \(w_{1},\dots,w_{n}\in\mathbb{D}\). Then the interpolation problem \(z_{j}\mapsto w_{j}\) is solvable by a function in \(H_{1}^{\infty}(\mathbb{D})\) if and only if the (Pick-)matrix
\[\Big{(}\frac{1-w_{i}\overline{w_{j}}}{1-z_{i}\overline{z_{j}}}\Big{)}_{1\leq i,j\leq n} \tag{1.1}\]
is positive semi-definite. Considering the set
\[D(z_{1},\dots,z_{n})=\{(f(z_{1}),\dots,f(z_{n})):f\in H_{1}^{\infty}(\mathbb{D })\}, \tag{1.2}\]
it is equivalent to saying that the interpolation problem \(z_{j}\mapsto w_{j}\) is solvable if and only if
\[(w_{1},\dots,w_{n})\in D(z_{1},\dots,z_{n})\]
if and only if the matrix (1.1) is positive semi-definite. The set \(D(z_{1},\dots,z_{n})\) is called the _Pick body_ or the _interpolation body_ associated with the tuple \((z_{1},\dots,z_{n})\). Further works on the Pick body have been carried out by Cole, Lewis, and Wermer (see [4], [5], [6], [7], [8]).
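For concreteness, checking the Pick condition is a finite-dimensional linear-algebra problem; the following short sketch (ours, not part of the paper) builds the matrix (1.1) and tests positive semi-definiteness numerically.

```python
import numpy as np

def pick_matrix(z, w):
    z, w = np.asarray(z, dtype=complex), np.asarray(w, dtype=complex)
    return (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))

def solvable(z, w, tol=1e-12):
    """The problem z_j -> w_j is solvable by a function of sup-norm at most one iff PSD."""
    return np.min(np.linalg.eigvalsh(pick_matrix(z, w))) >= -tol

z = [0.0, 0.3, -0.2 + 0.4j]
print(solvable(z, [0.0, 0.3, -0.2 + 0.4j]))   # True: the identity map interpolates
print(solvable(z, [0.0, 0.9, -0.9]))          # False: the targets are too spread out
```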
Our attention will be on a set that is somewhat similar to (1.2). We consider an arbitrary Caratheodory hyperbolic domain \(\Omega\) and \(n\) distinct points \(z_{1},\dots,z_{n}\in\Omega\). Let
\(B_{1}^{\infty}\) denote the interior of \(H_{1}^{\infty}(\Omega)\), that is, the set of all bounded holomorphic functions on \(\Omega\) with norm strictly less than one. We now construct the following set
\[\mathscr{D}_{\Omega}(z_{1},\dots,z_{n})=\{(f(z_{1}),\dots,f(z_{n})):f\in B_{1}^{\infty}\}. \tag{1.3}\]
We will often write \(\mathscr{D}_{n}\) for \(\mathscr{D}_{\Omega}(z_{1},\dots,z_{n})\) when it is clear from the context. Note that
\[\overline{\mathscr{D}_{n}}=\{(f(z_{1}),\dots,f(z_{n})):f\in H_{1}^{\infty}(\Omega)\},\]
and the boundary of \(\mathscr{D}_{n}\), denoted by \(\partial\mathscr{D}_{n}\), consists of the points \((w_{1},\dots,w_{n})\) such that, if \(f\in H_{1}^{\infty}(\Omega)\) satisfies \(f(z_{j})=w_{j}\), then \(||f||=1\), that is, \(\partial\mathscr{D}_{n}\) consists of the points that can not be interpolated by functions with norm strictly less than one.
The main part of the article is divided into two sections. In Section 2, we describe the invariant object and some familiar notions connected to the object. In Section 3, we give a way of finding the invariant object for \(\mathbb{D}\).
## 2. Description of the invariant object
Let us begin with describing a few properties of the set \(\mathscr{D}_{n}\).
**Proposition 2.1**.: \(\mathscr{D}_{n}\) _is an open balanced convex subset of the unit polydisc \(\mathbb{D}^{n}\)._
Proof.: For any \(f,g\in B_{1}^{\infty}\) we have \((1-t)f+tg\in B_{1}^{\infty}\) for all \(t\in[0,1]\) and \(\lambda f\in B_{1}^{\infty}\) for all \(\lambda\in\overline{\mathbb{D}}\). So \(\mathscr{D}_{n}\) is balanced and convex. Obviously \(\mathscr{D}_{n}\) is a subset of \(\mathbb{D}^{n}\) and it contains \(0\). Note that, if the standard basis for \(\mathbb{C}^{n}\) is denoted by \(e_{j},1\leq j\leq n\), we can always find nonzero \(\lambda_{j}\in\mathbb{C}\) such that \(\lambda_{j}e_{j}\in\mathscr{D}_{n}\) (so convexity implies that \(\mathscr{D}_{n}\) contains a small neighborhood of \(0\)). To see that \(\mathscr{D}_{n}\) is open, we consider the linear map \(H:H^{\infty}(\Omega)\to\mathbb{C}^{n}\) defined by \(H(f)=(f(z_{1}),\dots,f(z_{n}))\). Note that \(||H||\leq\sqrt{n}\). So \(H\) is a bounded surjective linear map. By the open mapping theorem, \(H\) is an open map and \(H(B_{1}^{\infty})=\mathscr{D}_{n}\). This completes our proof.
Since we have that \(\mathscr{D}_{n}\) is a balanced convex open set, let us talk about the Minkowski functional \(\mu_{\mathscr{D}_{n}}\) of \(\mathscr{D}_{n}\). We recall the following facts ([11], Chapter 2):
1. \(\mu_{\mathscr{D}_{n}}\) is a norm on \(\mathbb{C}^{n}\).
2. The boundary of \(\mathscr{D}_{n}\) is given by \(\partial\mathscr{D}_{n}=\{X\in\mathbb{C}^{n}:\mu_{\mathscr{D}_{n}}(X)=1\}\).
3. Since \(\mathscr{D}_{n}\) is balanced and convex (hence pseudoconvex), \(\mu_{\mathscr{D}_{n}}\) is plurisubharmonic (Proposition 2.2.15 in [11]). Consequently, \(\mathscr{D}_{n}\) is hyperconvex.
Let \(Aut(\mathbb{D})\) denote the group of automorphisms of \(\mathbb{D}\). We consider a \(\varphi\in Aut(\mathbb{D})\) and define \(\Phi_{\varphi}:\mathbb{D}^{n}\to\mathbb{D}^{n}\) as
\[\Phi_{\varphi}(w_{1},\dots,w_{n})=(\varphi(w_{1}),\dots,\varphi(w_{n})). \tag{2.1}\]
Since \(f\in B_{1}^{\infty}\) if and only if \(\varphi\circ f\in B_{1}^{\infty}\), we conclude that the restriction \(\Phi_{\varphi}|_{\mathscr{D}_{n}}\) is an automorphism of \(\mathscr{D}_{n}\).
Our next result gives a description of \(\mu_{\mathscr{D}_{n}}\) in terms of \(H^{\infty}(\Omega)\).
**Theorem 2.2**.: _For any \(\underline{w}=(w_{1},\dots,w_{n})\in\mathbb{C}^{n}\),_
\[\mu_{\mathscr{D}_{n}}(\underline{w})=inf\{||g||:g\in H^{\infty}(\Omega),g(z_{ j})=w_{j},j=1,\dots,n\}.\]
Proof.: Observe that the map \(H\) in Proposition 2.1 is onto. So for any \(\underline{w}\in\mathbb{C}^{n}\), there is a \(g\in H^{\infty}(\Omega)\) such that \(w_{j}=g(z_{j})\) for all \(j=1,\dots,n\). For such a \(g\) we have \(\frac{1}{||g||}\underline{w}\in\overline{\mathscr{D}_{n}}\). And hence, we have \(\mu_{\mathscr{D}_{n}}(\underline{w})\leq||g||\) for all such \(g\).
Let \(l=\inf\{||g||:g\in H^{\infty}(\Omega),g(z_{j})=w_{j},j=1,\dots,n\}\) and suppose that \(\mu_{\mathscr{D}_{n}}(\underline{w})<l\). Then \(\mu_{\mathscr{D}_{n}}(\frac{1}{l}\underline{w})<1\) and hence \(\frac{1}{l}\underline{w}\in\mathscr{D}_{n}\). By definition, there is an \(f\in B_{1}^{\infty}\) such that
\(\frac{w_{j}}{l}=f(z_{j})\). So \(w_{j}=lf(z_{j})\) for all \(j\), \(lf\in H^{\infty}(\Omega)\) and \(||lf||<l\). This contradicts our assumption. Hence \(\mu_{\mathscr{D}_{n}}(\underline{w})=l\) and this concludes the proof.
**Corollary 2.3**.: _Let \(I=I(z_{1},\ldots,z_{n})\) be the ideal in the Banach algebra \(H^{\infty}(\Omega)\) consisting of the functions that vanish at each \(z_{j},1\leq j\leq n\). Then the normed spaces \((\mathbb{C}^{n},\mu_{\mathscr{D}_{n}})\) and \((H^{\infty}(\Omega)/I,||\cdot||_{q})\) are isometrically isomorphic, where \(||\cdot||_{q}\) is the quotient norm. Moreover, \(\mu_{\mathscr{D}_{n}}\) is a Banach algebra norm (i.e., sub-multiplicative) on \(\mathbb{C}^{n}\) with component-wise product._
Using Montel's theorem, it is not hard to see that \(\mu_{\mathscr{D}_{n}}\) is attained, that is, for a given \(\underline{w}\in\mathbb{C}^{n}\), there exists a \(g\in H^{\infty}(\Omega)\) such that \(g(z_{j})=w_{j}\) for all \(j\) and \(\mu_{\mathscr{D}_{n}}(\underline{w})=||g||\).
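For \(\Omega=\mathbb{D}\), Theorem 2.2 can be made effective. By the classical Pick criterion, \(||g||\leq t\) is achievable for the data \(z_{j}\mapsto w_{j}\) if and only if the matrix \(\big(\frac{t^{2}-w_{i}\overline{w_{j}}}{1-z_{i}\overline{z_{j}}}\big)\) is positive semi-definite, so \(\mu_{\mathscr{D}_{n}}(\underline{w})\) is the square root of the largest generalized eigenvalue of the pair \((Q,P)\) with \(P=\big(\frac{1}{1-z_{i}\overline{z_{j}}}\big)\) and \(Q=\big(\frac{w_{i}\overline{w_{j}}}{1-z_{i}\overline{z_{j}}}\big)\). A minimal numerical sketch (the function name is ours):

```python
import numpy as np

def minkowski_mu(z, w):
    z, w = np.asarray(z, dtype=complex), np.asarray(w, dtype=complex)
    P = 1.0 / (1.0 - np.outer(z, z.conj()))       # Gram matrix of Szego kernels, positive definite
    Q = np.outer(w, w.conj()) * P                 # = diag(w) P diag(w)^*, positive semi-definite
    L = np.linalg.inv(np.linalg.cholesky(P))
    lam_max = np.max(np.linalg.eigvalsh(L @ Q @ L.conj().T))
    return np.sqrt(max(lam_max, 0.0))

z = [0.1, 0.4j, -0.5]
print(minkowski_mu(z, z))                 # ~1.0: the identity map is a minimal-norm interpolant
print(minkowski_mu(z, [0.2, 0.2, 0.2]))   # 0.2: the constant function is optimal
```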
We now want to focus on an old domain introduced by H. Cartan [3] and later studied by several mathematicians in different contexts and forms (see [2], [4], [10]). The domain is given by
\[\mathbb{D}_{r}^{2}=\Big{\{}(w_{1},w_{2})\in\mathbb{D}^{2}:m(w_{1},w_{2})=\Big{|} \frac{w_{1}-w_{2}}{1-\overline{w_{1}}w_{2}}\Big{|}<r\Big{\}},r\in(0,1). \tag{2.2}\]
The following theorem will lead us to our invariant object.
**Theorem 2.4**.: _For two distinct points \(z_{1},z_{2}\in\Omega\) we have_
\[\mathscr{D}_{\Omega}(z_{1},z_{2})=\mathscr{D}_{2}=\mathbb{D}_{c_{\Omega}^{*}( z_{1},z_{2})}^{2}.\]
Proof.: This follows from the fact that \(c_{\Omega}^{*}(z_{1},z_{2})\) is always attained by functions of norm one, and \(m(aw_{1},aw_{2})\leq m(bw_{1},bw_{2})\) if and only if \(|a|\leq|b|\) whenever \(w_{1}\neq w_{2}\), \(a,b\in\mathbb{C},aw_{j},bw_{j}\in\mathbb{D}\), and \(j=1,2\).
This gives us that for any \(r\in(0,1)\), \(\mathbb{D}_{r}^{2}\) is convex. Note that the boundary of \(\mathbb{D}_{r}^{2}\) can be given by
\[\partial\mathbb{D}_{r}^{2}=\{(e^{i\theta},e^{i\theta}):\theta\in\mathbb{R}\} \cup\{(w_{1},w_{2})\in\mathbb{D}^{2}:m(w_{1},w_{2})=r\}.\]
When \(n\geq 3\), there is no analogous description of \(\mathscr{D}_{n}\). To address this issue, let us consider the following procedure: Let \(\underline{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{C}^{n}-\{\mathbf{0}\}\). For this \(\underline{\alpha}\) and \(z_{1},\ldots,z_{n}\in\Omega\), we define
\[d^{\Omega}_{(\underline{z},\underline{\alpha})}(z_{i},z_{j})=d_{\underline{ \alpha}}(z_{i},z_{j})=sup\{m(f(z_{i}),f(z_{j})):f\in H^{\infty}_{1}(\Omega), \underline{f}\in\mathbb{C}\cdot\underline{\alpha}\} \tag{2.3}\]
where \(\underline{z}=(z_{1},\ldots,z_{n})\in\Omega^{n}\), \(\underline{f}=(f(z_{1}),\ldots,f(z_{n}))\in\mathbb{D}^{n}\) and \(\mathbb{C}\cdot\underline{\alpha}\) denotes the one dimensional subspace generated by the nonzero element \(\underline{\alpha}\). The following facts are easy to deduce.
1. Since \(\mathbf{0}\in\mathscr{D}_{n}\cap\mathbb{C}\cdot\underline{\alpha}\) and \(H^{\infty}_{1}(\Omega)\) contains the zero function, the set we used to define \(d\) is nonempty. So the supremum exists. Also, for any fixed \(\underline{\alpha}\) we have \[0\leq d_{\underline{\alpha}}(z_{i},z_{j})\leq c_{\Omega}^{*}(z_{i},z_{j}).\]
2. \(d_{\underline{\alpha}}(z_{i},z_{j})\) is contractible under holomorphic maps, that is, if \(F:\Omega_{1}\to\Omega_{2}\) is a holomorphic map between two Caratheodory hyperbolic domains, then \[d^{\Omega_{2}}_{(\underline{F}(z),\underline{\alpha})}(F(z_{i}),F(z_{j}))\leq d^{\Omega_{1}}_{(\underline{z},\underline{\alpha})}(z_{i},z_{j})\] where \(\underline{z}=(z_{1},\ldots,z_{n})\in\Omega_{1}^{n}\) and \(\underline{F}(z)=(F(z_{1}),\ldots,F(z_{n}))\in\Omega_{2}^{n}\). If \(F\) is injective on \(\{z_{1},\ldots,z_{n}\}\), then the fact is clear. If \(F(z_{i})=F(z_{j})\), then we use the argument as in (1).
3. For any \(\underline{\beta}\in\mathbb{C}\cdot\underline{\alpha}\) and \(\underline{\beta}\neq\mathbf{0}\), \(d_{\underline{\alpha}}(z_{i},z_{j})=d_{\underline{\beta}}(z_{i},z_{j})\). Furthermore, if \(\alpha_{i}=\alpha_{j}\), then \(d_{\underline{\alpha}}(z_{i},z_{j})=0\).
4. If we consider only two points \(z_{1},z_{2}\in\Omega\) and an \(\underline{\alpha}=(\alpha_{1},\alpha_{2})\) with \(\alpha_{1}\neq\alpha_{2}\), then \(d_{\underline{\alpha}}(z_{1},z_{2})=c_{\Omega}^{*}(z_{1},z_{2})\). This holds because for \(\alpha_{1}\neq\alpha_{2}\), there is a positive \(t\) such that \(t\alpha_{1},t\alpha_{2}\in\mathbb{D}\) and \(m(t\alpha_{1},t\alpha_{2})=c_{\Omega}^{*}(z_{1},z_{2})\), and \(c_{\Omega}^{*}(z_{1},z_{2})\) is always attained by some function in \(H_{1}^{\infty}(\Omega)\). Observe that, this \(t\) is nothing but \(\frac{1}{\mu_{\mathscr{D}_{2}}(\alpha_{1},\alpha_{2})}\).
5. For any \(i\) and \(j\), \(d_{\underline{\alpha}}(z_{i},z_{j})=m(t\alpha_{i},t\alpha_{j})\) where \(t=\frac{1}{\mu_{\mathscr{D}_{n}}(\underline{\alpha})}\). Consequently, for any \(\underline{\alpha}\) there is an \(f\in H_{1}^{\infty}(\Omega)\) such that \(\underline{f}\in\mathbb{C}\cdot\underline{\alpha}\) and \(d_{\underline{\alpha}}(z_{i},z_{j})=m(f(z_{i}),f(z_{j}))\) for all \(i\) and \(j\). To see this, note that \(\mu_{\mathscr{D}_{n}}(t\underline{\alpha})=1\). If \(\alpha_{i}=\alpha_{j}\), the result is clear. Therefore, let us assume \(\alpha_{i}\neq\alpha_{j}\), and hence \(t\underline{\alpha}\in\partial\mathscr{D}_{n}\cap\mathbb{D}^{n}\). By definition of \(\overline{\mathscr{D}_{n}}\), there is an \(f\in H_{1}^{\infty}(\Omega)\) such that \(t\underline{\alpha}=\underline{f}\). Also suppose that \(g\in H_{1}^{\infty}(\Omega)\) satisfies \(d_{\underline{\alpha}}(z_{i},z_{j})=m(g(z_{i}),g(z_{j}))\) and \(\underline{g}=\lambda\underline{\alpha}\) for some \(\lambda\in\mathbb{C}\). We now have \[m(t\alpha_{i},t\alpha_{j}) =m(f(z_{i}),f(z_{j}))\] \[\leq d_{\underline{\alpha}}(z_{i},z_{j})\] \[=m(g(z_{i}),g(z_{j}))\] \[=m(\lambda\alpha_{i},\lambda\alpha_{j}).\] So \(t\leq|\lambda|\). On the other hand \(\frac{|\lambda|}{t}=|\lambda|\mu_{\mathscr{D}_{n}}(\underline{\alpha})=\mu_{ \mathscr{D}_{n}}(\underline{g})\leq 1\), that is, \(|\lambda|\leq t\).
6. For \(\Omega=\mathbb{D}\), we have \(d_{\underline{z}}(z_{i},z_{j})=m(z_{i},z_{j})\) for all \(i\) and \(j\).
Let us now give a description of \(\mathscr{D}_{n}\) in terms of \(d\).
**Theorem 2.5**.: _Suppose \(z_{1},\ldots,z_{n}\) are \(n\) distinct points in \(\Omega\). Then \(\mathscr{D}_{\Omega}(z_{1},\ldots,z_{n})=\mathscr{D}_{n}\) is given by_
\[\mathscr{D}_{n}= \{(\alpha,\ldots,\alpha):\alpha\in\mathbb{D}\}\] \[\cup\big{[}\cup_{1\leq i<j\leq n}\{(\alpha_{1},\ldots,\alpha_{n}) \in\mathbb{D}^{n}:\alpha_{i}\neq\alpha_{j},m(\alpha_{i},\alpha_{j})<d_{ \underline{\alpha}}(z_{i},z_{j})\}\big{]}\]
Proof.: Let \(\underline{w}=(w_{1},\ldots,w_{n})\in\mathscr{D}_{n}\). By definition, there is an \(f\in B_{1}^{\infty}\) such that \(w_{j}=f(z_{j})\) for all \(j\), and hence \((f(z_{1}),\ldots,f(z_{n}))\in\mathbb{C}\cdot\underline{w}\). If \(w_{1}=\ldots=w_{n}\), then \(\underline{w}\in\{(\alpha,\ldots,\alpha):\alpha\in\mathbb{D}\}\). If \(w_{i}\neq w_{j}\) for some \(i\neq j\), then we have \(m(w_{i},w_{j})\leq d_{\underline{w}}(z_{i},z_{j})\). Suppose we have \(m(w_{i},w_{j})=d_{\underline{w}}(z_{i},z_{j})\). Note that \(f\in B_{1}^{\infty}\) and hence there is a \(t>1\) such that \(tf\in B_{1}^{\infty}\). But then we have \((tf(z_{1}),\ldots,tf(z_{n}))\in\mathbb{C}\cdot\underline{w}\) and \(d_{\underline{w}}(z_{i},z_{j})<m(tf(z_{i}),tf(z_{j}))\). This is a contradiction. Therefore, we obtain \(m(w_{i},w_{j})<d_{\underline{w}}(z_{i},z_{j})\).
Now suppose we have \(\underline{x}=(x_{1},\ldots,x_{n})\in\mathbb{D}^{n}\). If \(x_{1}=\ldots=x_{n}\), then clearly the constant function \(f(z)=x_{1}\) takes \(z_{j}\) to \(x_{j}\) and has norm strictly less than one. Now suppose \(x_{i}\neq x_{j}\) and \(m(x_{i},x_{j})<d_{\underline{x}}(z_{i},z_{j})\). Using Montel's theorem, we can claim that there is a \(g\in H_{1}^{\infty}(\Omega)\) such that \(||g||=1\), \(d_{\underline{x}}(z_{i},z_{j})=m(g(z_{i}),g(z_{j}))\) and \((g(z_{1}),\ldots,g(z_{n}))\in\mathbb{C}\cdot\underline{x}\). We can find \(t\in(0,1)\) and \(\lambda\in\mathbb{C}\) such that \(m(x_{i},x_{j})=m(tg(z_{i}),tg(z_{j}))\) and \((g(z_{1}),\ldots,g(z_{n}))=\lambda\cdot\underline{x}\). This gives us \(m(x_{i},x_{j})=m(t\lambda x_{i},t\lambda x_{j})\). This holds if and only if \(|t\lambda|=1\). So \(t\lambda=e^{-i\theta}\) for some \(\theta\in\mathbb{R}\). Let \(h=e^{i\theta}tg\). Then \(||h||=t<1\) and \((h(z_{1}),\ldots,h(z_{n}))=(x_{1},\ldots,x_{n})\), that is, \((x_{1},\ldots,x_{n})\in\mathscr{D}_{n}\), and we are done.
**Corollary 2.6**.: _An \(n\)-point Nevanlinna-Pick interpolation problem \(\Omega\ni z_{j}\mapsto w_{j}\in\mathbb{D},1\leq j\leq n\), is solvable if and only if either \(w_{1}=\ldots=w_{n}\in\mathbb{D}\) or with \(w_{i}\neq w_{j}\) (for at least one pair \((i,j)\)) the condition \(m(w_{i},w_{j})\leq d_{\underline{w}}(z_{i},z_{j})\) is satisfied._
We can consider this Corollary 2.6 as a _generalized or multi-point Schwarz-Pick lemma_ for Caratheodory hyperbolic domains. For multi-point Schwarz-Pick lemma on \(\mathbb{D}\) see [1] and [9].
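Corollary 2.6 can be illustrated numerically for \(\Omega=\mathbb{D}\): by fact (5), \(d_{\underline{w}}(z_{i},z_{j})=m(tw_{i},tw_{j})\) with \(t=1/\mu_{\mathscr{D}_{n}}(\underline{w})\), and \(\mu_{\mathscr{D}_{n}}\) is computable from the Pick matrices as in the earlier sketch. The criterion of Corollary 2.6 then agrees with solvability, i.e. with \(\mu_{\mathscr{D}_{n}}(\underline{w})\leq 1\), as the following small test (ours, on sample data) confirms.

```python
import numpy as np

def minkowski_mu(z, w):
    z, w = np.asarray(z, dtype=complex), np.asarray(w, dtype=complex)
    P = 1.0 / (1.0 - np.outer(z, z.conj()))
    Q = np.outer(w, w.conj()) * P
    L = np.linalg.inv(np.linalg.cholesky(P))
    return np.sqrt(np.max(np.linalg.eigvalsh(L @ Q @ L.conj().T)))

m = lambda a, b: abs((a - b) / (1 - np.conj(a) * b))   # pseudo-hyperbolic distance on the disc

z = [0.0, 0.3, -0.2 + 0.4j]
for w in ([0.1, 0.25, -0.1], [0.1, 0.8, -0.7]):
    mu = minkowski_mu(z, w)
    t = 1.0 / mu                                       # fact (5): d_w(z_i, z_j) = m(t w_i, t w_j)
    crit = all(m(w[i], w[j]) <= m(t * w[i], t * w[j]) + 1e-12
               for i in range(3) for j in range(i + 1, 3) if w[i] != w[j])
    print(mu <= 1, crit)                               # the two booleans agree, as Corollary 2.6 predicts
```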
## 3. Values of \(d^{\mathbb{D}}\)
Here we will describe the values of \(d^{\mathbb{D}}_{\underline{\alpha}}\) for all nonzero \(\underline{\alpha}\)s. First we consider the following \(\underline{\alpha}\):
1. \(\underline{\alpha}=(0,0,\ldots,0,1)\).
2. \(\underline{\alpha}=(0,0,\ldots,0,\alpha_{n-1},\alpha_{n})\) with \(\alpha_{n-1}\alpha_{n}\neq 0\).
**Proposition 3.1**.: _If \(\underline{\alpha}=(0,0,\ldots,0,1)\) then_
\[d_{\underline{\alpha}}(z_{i},z_{j}) =0,\text{ for }1\leq i,j\leq n-1,\] \[=\prod_{l=1}^{n-1}\Big{|}\frac{z_{l}-z_{n}}{1-\overline{z_{l}}z_{ n}}\Big{|},\text{ for }1\leq i\leq n-1\text{ and }j=n.\]
Proof.: It is easy to see that for \(1\leq i,j\leq n-1\), \(d_{\underline{\alpha}}(z_{i},z_{j})=0\). Now suppose \(j=n\) and \(1\leq i\leq n-1\). There is an \(f\in H^{\infty}_{1}(\mathbb{D})\) such that \(\underline{f}=(f(z_{1}),\ldots,f(z_{n}))\in\mathbb{C}\cdot\underline{\alpha}\) and \(d_{\underline{\alpha}}(z_{i},z_{n})=m(f(z_{i}),f(z_{n}))\). Since \(f(z_{1})=\cdots=f(z_{n-1})=0\), there is an \(f_{n}\in H^{\infty}_{1}(\mathbb{D})\) such that
\[f(z)=f_{n}(z)\prod_{l=1}^{n-1}\Big{(}\frac{z_{l}-z}{1-\overline{z_{l}}z}\Big{)}.\]
Note that this \(f\) satisfies \(m(f(z_{i}),f(z_{n}))\leq\prod_{l=1}^{n-1}\Big{|}\frac{z_{l}-z_{n}}{1- \overline{z_{l}}z_{n}}\Big{|}\). Now the function
\[\varphi(z)=\prod_{l=1}^{n-1}\Big{(}\frac{z_{l}-z}{1-\overline{z_{l}}z}\Big{)}\]
is in \(H^{\infty}_{1}(\mathbb{D})\) and \(\underline{\varphi}\in\mathbb{C}\cdot\underline{\alpha}\). Hence
\[m(\varphi(z_{i}),\varphi(z_{n}))\leq d_{\underline{\alpha}}(z_{i},z_{n})=m(f( z_{i}),f(z_{n}))\leq\prod_{l=1}^{n-1}\Big{|}\frac{z_{l}-z_{n}}{1-\overline{z_{l}}z_ {n}}\Big{|}=m(\varphi(z_{i}),\varphi(z_{n}))\]
and this completes the proof.
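Proposition 3.1 can be cross-checked numerically: by fact (5) of Section 2, \(d_{\underline{\alpha}}(z_{i},z_{n})=m(t\alpha_{i},t\alpha_{n})=t\) for \(\underline{\alpha}=(0,\ldots,0,1)\), where \(t=1/\mu_{\mathscr{D}_{n}}(\underline{\alpha})\), and \(\mu_{\mathscr{D}_{n}}\) can be computed from the Pick matrices as in the earlier sketch. A short test (ours):

```python
import numpy as np

def minkowski_mu(z, w):
    z, w = np.asarray(z, dtype=complex), np.asarray(w, dtype=complex)
    P = 1.0 / (1.0 - np.outer(z, z.conj()))
    Q = np.outer(w, w.conj()) * P
    L = np.linalg.inv(np.linalg.cholesky(P))
    return np.sqrt(np.max(np.linalg.eigvalsh(L @ Q @ L.conj().T)))

z = np.array([0.2 + 0.1j, -0.4, 0.3j, 0.5 - 0.2j])
alpha = np.array([0.0, 0.0, 0.0, 1.0])

t = 1.0 / minkowski_mu(z, alpha)                       # = d_alpha(z_i, z_n) for every i < n
blaschke = np.prod(np.abs((z[:-1] - z[-1]) / (1 - z[:-1].conj() * z[-1])))
print(t, blaschke)                                     # the two values agree, as Proposition 3.1 states
```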
Before proceeding with the other \(\underline{\alpha}\), we want to point out that for a given pair \((\beta_{1},\beta_{2})\) of distinct complex numbers and any two distinct \(z_{1},z_{2}\) in \(\Omega\), there is a \(t>0\) such that \(m(t\beta_{1},t\beta_{2})=c_{\Omega}^{*}(z_{1},z_{2})\). This \(t\) is just \(\frac{1}{\mu_{\mathscr{D}_{\Omega}(z_{1},z_{2})}(\beta_{1},\beta_{2})}\).
Now for a given \(\underline{\alpha}=(0,0,\ldots,0,\alpha_{n-1},\alpha_{n})\) with \(\alpha_{n-1}\alpha_{n}\neq 0\), suppose
\[\alpha_{n-1}^{\prime}=\frac{\alpha_{n-1}}{\prod_{l=1}^{n-2}\big{(}\frac{z_{l}-z_{n-1}}{1-\overline{z_{l}}z_{n-1}}\big{)}}\neq\frac{\alpha_{n}}{\prod_{l=1}^{n-2}\big{(}\frac{z_{l}-z_{n}}{1-\overline{z_{l}}z_{n}}\big{)}}=\alpha_{n}^{\prime}. \tag{3.1}\]
Then there is a \(t>0\), depending only on \(\underline{\alpha}\) and \(z_{j}\)s such that
\[m(t\alpha_{n-1}^{\prime},t\alpha_{n}^{\prime})=c_{\mathbb{D}}^{*}(z_{n-1},z_{n })=m(z_{n-1},z_{n}).\]
By the Schwarz-Pick lemma, there is a \(\varphi_{t}\in Aut(\mathbb{D})\) such that \(\varphi_{t}(z_{n-1})=t\alpha_{n-1}^{\prime}\) and \(\varphi_{t}(z_{n})=t\alpha_{n}^{\prime}\). Now consider the function
\[f_{t}(z)=\varphi_{t}(z)\prod_{l=1}^{n-2}\Big{(}\frac{z_{l}-z}{1-\overline{z_{l} }z}\Big{)}. \tag{3.2}\]
This \(f_{t}\) is in \(H^{\infty}_{1}(\mathbb{D})\) and \(\underline{f_{t}}=(0,0,\ldots,0,t\alpha_{n-1},t\alpha_{n})\in\mathbb{C}\cdot \underline{\alpha}\).
**Proposition 3.2**.: _Suppose we are given \(\underline{\alpha}=(0,0,\ldots,0,\alpha_{n-1},\alpha_{n})\) with \(\alpha_{n-1}\alpha_{n}\neq 0\). Then we have the following._
1. \(d_{\underline{\alpha}}(z_{i},z_{j})=0,\text{ for }1\leq i,j\leq n-2.\)__
2. _If_ \(\frac{\alpha_{n-1}}{\prod_{l=1}^{n-2}\left(\frac{z_{l}-z_{n-1}}{1-\overline{z_{l}}z_{n-1}}\right)}=\frac{\alpha_{n}}{\prod_{l=1}^{n-2}\left(\frac{z_{l}-z_{n}}{1-\overline{z_{l}}z_{n}}\right)}\)_, then_ \[d_{\underline{\alpha}}(z_{i},z_{j}) =|f_{0}(z_{j})|,\text{ for }1\leq i\leq n-2,j=n-1,n,\] \[=m(f_{0}(z_{n-1}),f_{0}(z_{n})),\text{ for }i=n-1,j=n\] _where_ \(f_{0}(z)=\prod_{l=1}^{n-2}\left(\frac{z_{l}-z}{1-\overline{z_{l}}z}\right)\)_._
3. _If_ \(\frac{\alpha_{n-1}}{\prod_{l=1}^{n-2}\left(\frac{z_{l}-z_{n-1}}{1-\overline{z_{l}}z_{n-1}}\right)}\neq\frac{\alpha_{n}}{\prod_{l=1}^{n-2}\left(\frac{z_{l}-z_{n}}{1-\overline{z_{l}}z_{n}}\right)}\)_, then_ \[d_{\underline{\alpha}}(z_{i},z_{j}) =|f_{t}(z_{j})|,\text{ for }1\leq i\leq n-2,j=n-1,n,\] \[=m(f_{t}(z_{n-1}),f_{t}(z_{n})),\text{ for }i=n-1,j=n\] _where_ \(f_{t}(z)\) _is given by (3.2)._
Proof.:
1. The first statement is trivial.
2. For proving the second statement, let us suppose we have \(\frac{\alpha_{n-1}}{f_{0}(z_{n-1})}=\frac{\alpha_{n}}{f_{0}(z_{n})}\), that is, \(\mathbb{C}\cdot\underline{\alpha}=\mathbb{C}\cdot\underline{f_{0}}\). Let \(f\in H_{1}^{\infty}(\mathbb{D})\) be such that \(\underline{f}\in\mathbb{C}\cdot\underline{f_{0}}\). Then there is an \(f_{1}\in H_{1}^{\infty}(\mathbb{D})\) such that \(f(z)=f_{0}(z)f_{1}(z)\) for all \(z\in\mathbb{D}\). Hence, for any \(i\neq n-1,n\), we have \(m(f(z_{i}),f(z_{n-1}))=|f(z_{n-1})|\leq|f_{0}(z_{n-1})|\). Also \(|f_{0}(z_{n-1})|\leq d_{\underline{\alpha}}(z_{i},z_{n-1})\). Since \(d_{\underline{\alpha}}(z_{i},z_{n-1})\) is attained by some function in \(H_{1}^{\infty}(\mathbb{D})\), we obtain that \(d_{\underline{\alpha}}(z_{i},z_{n-1})=|f_{0}(z_{n-1})|\). Similarly we can show that \(d_{\underline{\alpha}}(z_{i},z_{n})=|f_{0}(z_{n})|\). Now suppose \(f\in H_{1}^{\infty}(\mathbb{D})\) satisfies \(\underline{f}=\lambda\underline{f_{0}}\) for some \(\lambda\in\mathbb{C}\) and \(d_{\underline{\alpha}}(z_{n-1},z_{n})=m(f(z_{n-1}),f(z_{n}))\). So we have \[m(f_{0}(z_{n-1}),f_{0}(z_{n})) \leq d_{\underline{\alpha}}(z_{n-1},z_{n})\] \[=m(\lambda f_{0}(z_{n-1}),\lambda f_{0}(z_{n}))\] which implies \(1\leq|\lambda|\). On the other hand, if \(f_{1}\in H_{1}^{\infty}(\mathbb{D})\) is the function that satisfies \(f=f_{0}f_{1}\), then we have \[f_{1}(z_{n-1})=\frac{f(z_{n-1})}{f_{0}(z_{n-1})}=\lambda=\frac{f(z_{n})}{f_{0}(z_{n})}=f_{1}(z_{n}).\] Thus \(|\lambda|\leq 1\). This concludes the proof.
3. Let \(\frac{\alpha_{n-1}}{\prod_{l=1}^{n-2}\left(\frac{z_{l}-z_{n-1}}{1-\overline{z_{l}}z_{n-1}}\right)}\neq\frac{\alpha_{n}}{\prod_{l=1}^{n-2}\left(\frac{z_{l}-z_{n}}{1-\overline{z_{l}}z_{n}}\right)}\) hold. We consider \(f_{t}\) and \(\varphi_{t}\) as constructed in (3.2), and \(f_{0}\) as above. For \(i\neq n-1,n\), there is an \(f\in H_{1}^{\infty}(\mathbb{D})\) such that \(\underline{f}\in\mathbb{C}\cdot\underline{\alpha}\) and \(d_{\underline{\alpha}}(z_{i},z_{n-1})=m(f(z_{i}),f(z_{n-1}))=|f(z_{n-1})|\). Moreover, there is a \(\lambda\in\mathbb{C}\) and an \(f_{1}\in H_{1}^{\infty}(\mathbb{D})\) such that \(\underline{f}=\lambda\underline{f_{t}}\) and \(f=f_{0}f_{1}\). Clearly, \(f_{1}(z_{n-1})=\lambda\varphi_{t}(z_{n-1})\) (and similarly \(f_{1}(z_{n})=\lambda\varphi_{t}(z_{n})\)). So we can write \[m(\lambda\varphi_{t}(z_{n-1}),\lambda\varphi_{t}(z_{n})) =m(f_{1}(z_{n-1}),f_{1}(z_{n}))\] \[\leq m(z_{n-1},z_{n})\] \[=m(\varphi_{t}(z_{n-1}),\varphi_{t}(z_{n}))\] which gives \(|\lambda|\leq 1\). Also \(|f_{t}(z_{n-1})|\leq d_{\underline{\alpha}}(z_{i},z_{n-1})=|f(z_{n-1})|=|\lambda f_{t}(z_{n-1})|\) implies \(1\leq|\lambda|\). Similarly, one can show that \(d_{\underline{\alpha}}(z_{i},z_{n})=|f_{t}(z_{n})|\).
Now we show that \(d_{\underline{\alpha}}(z_{n-1},z_{n})=m(f_{t}(z_{n-1}),f_{t}(z_{n}))\). We find \(g,g_{1}\in H_{1}^{\infty}(\mathbb{D})\) and \(\lambda\in\mathbb{C}\) such that \(\underline{g}=\lambda f_{t}\), \(d_{\underline{\alpha}}(z_{n-1},z_{n})=m(g(z_{n-1}),g(z_{n}))\) and \(g=f_{0}g_{1}\). This gives us \(g_{1}(z_{n-1})=\lambda\varphi_{t}(z_{n-1})\) and \(g_{1}(z_{n})=\lambda\varphi_{t}(z_{n})\). So we have
\[m(\lambda\varphi_{t}(z_{n-1}),\lambda\varphi_{t}(z_{n})) =m(g_{1}(z_{n-1}),g_{1}(z_{n}))\] \[\leq m(z_{n-1},z_{n})\] \[=m(\varphi_{t}(z_{n-1}),\varphi_{t}(z_{n}))\]
and hence \(|\lambda|\leq 1\). On the other hand
\[m(f_{t}(z_{n-1}),f_{t}(z_{n})) \leq d_{\underline{\alpha}}(z_{n-1},z_{n})\] \[=m(g(z_{n-1}),g(z_{n}))\] \[=m(\lambda f_{t}(z_{n-1}),\lambda f_{t}(z_{n}))\]
implies \(|\lambda|\geq 1\). The proof is now complete.
Next let us describe the boundary of \(\mathscr{D}_{n}\). It can be shown that
\[\partial\mathscr{D}_{n}= \{(w,\ldots,w):w\in\partial\mathbb{D}\}\] \[\cup\big{[}\cup_{1\leq i<j\leq n}\{(w_{1},\ldots,w_{n})\in \mathbb{D}^{n}:w_{i}\neq w_{j},m(w_{i},w_{j})=d_{\underline{w}}(z_{i},z_{j})\} \big{]}.\]
**Lemma 3.3**.: \((w_{1},\ldots,w_{n})\in\partial\mathscr{D}_{n}\cap\mathbb{D}^{n}\) _if and only if_
\[\Big{(}\frac{\varphi_{w_{n}}(w_{1})}{\varphi_{z_{n}}(z_{1})},\ldots,\frac{ \varphi_{w_{n}}(w_{n-1})}{\varphi_{z_{n}}(z_{n-1})}\Big{)}\in\partial\mathscr{D }_{n-1}\]
_where \(\varphi_{u}(v)=\frac{u-v}{1-\overline{u}v},\mathscr{D}_{n}=\mathscr{D}_{\mathbb{D}}(z_{1},\ldots,z_{n}),\) and \(\mathscr{D}_{n-1}=\mathscr{D}_{\mathbb{D}}(z_{1},\ldots,z_{n-1})\)._
Proof.: The proof follows from making the following observations:
1. \(\partial\mathscr{D}_{n}\) consists of the points that can be attained by functions of norm one but not by functions of norm strictly less than one.
2. \(\partial\mathscr{D}_{n}\) is invariant under the action of the elements of \(Aut(\mathscr{D}_{n})\) described in (2.1).
3. For any \(f\in H_{1}^{\infty}(\mathbb{D})\) with \(f(u)=0\), there is an \(f_{1}\in H_{1}^{\infty}(\mathbb{D})\) such that \(f(z)=\varphi_{u}(z)f_{1}(z)\).
The implication of the following two results are previously known in some other form. However, we produce new proofs which provide new insight of the things that are taking place underneath.
**Theorem 3.4**.: \((w_{1},\ldots,w_{n})\in\partial\mathscr{D}_{n}\) _if and only if there is a finite Blaschke product \(\varphi\) of degree at most \((n-1)\) such that \(\varphi(z_{j})=w_{j},1\leq j\leq n\)._
Proof.: \((\Leftarrow)\): We apply induction on \(n\). Let \(\varphi_{n-1}\) be a finite Blaschke product of degree at most \((n-1)\) and \(\varphi_{n-1}(z_{j})=w_{j},1\leq j\leq n\).
For \(n=1\) we have \(\varphi_{0}(z)=e^{i\theta}\), so \(\varphi_{0}(z_{1})\in\partial\mathscr{D}_{1}=\partial\mathbb{D}\).
For \(n=2\), \(\varphi_{1}\) is either the constant function with modulus one or is an element of \(Aut(\mathbb{D})\). In any case, \((\varphi_{1}(z_{1}),\varphi_{1}(z_{2}))\in\mathscr{D}_{2}\).
Let the statement be true for \(n=m\). Suppose \(\varphi_{m}\) is a Blaschke product of degree at most \(m\) and \((w_{1},\ldots,w_{m+1})=(\varphi_{m}(z_{1}),\ldots,\varphi_{m}(z_{m+1}))\). If the degree of \(\varphi_{m}\) is zero, the result is clear. To see the other case, consider the function \(\psi_{m}=\varphi_{w_{m+1}}\circ\varphi_{m}(z)\). Then \(\psi_{m}\) is a Blaschke product of degree at most \(m\) with a zero at \(z_{m+1}\). Hence the
function \(\psi_{m-1}(z)=\frac{\psi_{m}(z)}{\varphi_{z_{m+1}}(z)}\) is a Blaschke product of degree at most \(m-1\). By induction hypothesis \((\psi_{m-1}(z_{1}),\ldots,\psi_{m-1}(z_{m}))\in\partial\mathscr{D}_{m}\), that is,
\[\Big{(}\frac{\varphi_{w_{m+1}}(w_{1})}{\varphi_{z_{m+1}}(z_{1})},\ldots,\frac{ \varphi_{w_{m+1}}(w_{m})}{\varphi_{z_{m+1}}(z_{m})}\Big{)}\in\partial\mathscr{ D}_{m}.\]
Now Lemma 3.3 gives rest of the argument.
\((\Rightarrow)\): We again apply induction on \(n\).
For \(n=1\) and \(2\), the result is clear. Suppose that the statement is true for \(n=m\) and let \((w_{1},\ldots,w_{m+1})\in\partial\mathscr{D}_{m+1}\). If \(|w_{m+1}|=1\), the constant function \(\varphi_{m}(z)=w_{m+1}\) works. For \(|w_{m+1}|<1\), we have \((w_{1},\ldots,w_{m+1})\in\partial\mathscr{D}_{m+1}\cap\mathbb{D}^{n}\). So by Lemma 3.3 we obtain
\[\Big{(}\frac{\varphi_{w_{m+1}}(w_{1})}{\varphi_{z_{m+1}}(z_{1})},\ldots,\frac{ \varphi_{w_{m+1}}(w_{m})}{\varphi_{z_{m+1}}(z_{m})}\Big{)}\in\partial\mathscr{ D}_{m}.\]
By induction hypothesis there is a Blaschke product \(\varphi_{m-1}\) of degree at most \(m-1\) such that \(\varphi_{m-1}(z_{j})=\frac{\varphi_{w_{m+1}}(w_{j})}{\varphi_{z_{m+1}}(z_{j})},1\leq j\leq m\). We now take the Blaschke product
\[\varphi_{m}(z)=\varphi_{w_{m+1}}(\varphi_{z_{m+1}}(z)\varphi_{m-1}(z))\]
which is of degree at most \(m\) and satisfies \(\varphi_{m}(z_{j})=w_{j},1\leq j\leq m+1\).
This completes our proof.
**Theorem 3.5**.: _The solution to a solvable interpolation problem \(\mathbb{D}\ni z_{j}\mapsto w_{j}\in\mathbb{D},1\leq j\leq n,\) is unique if and only if \((w_{1},\ldots,w_{n})\in\partial\mathscr{D}_{n}\)._
Proof.: The result is clear if \(n=1\). Let the statement be true for \(n=m\).
\((\Leftarrow)\): Let \((w_{1},\ldots,w_{m+1})\in\partial\mathscr{D}_{m+1}\). If \(|w_{m+1}|=1\), then the unique solution is the constant function \(\varphi(z)=w_{m+1}\). For the other case, Lemma 3.3 gives us
\[\Big{(}\frac{\varphi_{w_{m+1}}(w_{1})}{\varphi_{z_{m+1}}(z_{1})},\ldots,\frac {\varphi_{w_{m+1}}(w_{m})}{\varphi_{z_{m+1}}(z_{m})}\Big{)}\in\partial\mathscr{ D}_{m}.\]
By induction hypothesis and Theorem 3.4, there is a unique Blaschke product \(\varphi_{m-1}\) of degree at most \(m-1\) such that \(\varphi_{m-1}\) takes \(z_{j}\) to \(\frac{\varphi_{w_{m+1}}(w_{j})}{\varphi_{z_{m+1}}(z_{j})},1\leq j\leq m\). If \(g\in H^{\infty}_{1}(\mathbb{D})\) is a solution to the problem \(z_{j}\mapsto w_{j},1\leq j\leq m+1\), then \(\frac{\varphi_{w_{m+1}}(g(z))}{\varphi_{z_{m+1}}(z)}\) sends \(z_{j}\) to \(\frac{\varphi_{w_{m+1}}(w_{j})}{\varphi_{z_{m+1}}(z_{j})},1\leq j\leq m\). Using the uniqueness of \(\varphi_{m-1}\) we find that \(g(z)=\varphi_{w_{m+1}}(\varphi_{z_{m+1}}(z)\varphi_{m-1}(z))\). So this \(g\) is unique.
\((\Rightarrow)\): Let the solution to the interpolation problem \(z_{j}\mapsto w_{j},1\leq j\leq m+1\), be unique and let \(g\) be the solution. Clearly, the problem \(z_{j}\mapsto\frac{\varphi_{w_{m+1}}(w_{j})}{\varphi_{z_{m+1}}(z_{j})},1\leq j\leq m\), is solvable. If \(h_{0}\) is a solution to this problem, then it is easy to see that the function \(h(z)=\varphi_{w_{m+1}}(\varphi_{z_{m+1}}(z)h_{0}(z))\) solves the interpolation problem \(z_{j}\mapsto w_{j},1\leq j\leq m+1\), and hence, \(h=g\). This gives us \(h_{0}(z)=\frac{\varphi_{w_{m+1}}(g(z))}{\varphi_{z_{m+1}}(z)}\) (note that \(\varphi_{w_{m+1}}(g(z))\) has a zero at \(z_{m+1}\)). So the uniqueness of \(g\) passes onto \(h_{0}\) and induction hypothesis together with Lemma 3.3 imply that \((w_{1},\ldots,w_{m+1})\in\partial\mathscr{D}_{m+1}\).
This completes the proof.
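The recursion appearing in the proofs of Theorems 3.4 and 3.5 is effectively an algorithm. The following Python sketch (ours, not part of the paper; the helper names and the numerical tolerance are arbitrary choices) builds the interpolating Blaschke product for data assumed to lie on \(\partial\mathscr{D}_{n}\):

```python
import numpy as np

def phi(u, v):
    # The disc automorphism phi_u(v) = (u - v) / (1 - conj(u) v); note phi_u(phi_u(v)) = v.
    return (u - v) / (1 - np.conj(u) * v)

def blaschke_interpolant(z, w, tol=1e-10):
    """Blaschke product of degree at most n-1 with phi(z_j) = w_j, assuming
    (w_1, ..., w_n) lies on the boundary of D_D(z_1, ..., z_n)."""
    z, w = list(z), list(w)
    if abs(abs(w[-1]) - 1.0) < tol:
        c = w[-1]                      # |w_n| = 1: the unimodular constant w_n solves the problem
        return lambda x: c
    if len(z) == 1:
        raise ValueError("boundary data with a single node must have |w_1| = 1")
    zn, wn = z[-1], w[-1]
    # Reduced data of Lemma 3.3: z_j -> phi_{w_n}(w_j) / phi_{z_n}(z_j), 1 <= j <= n-1.
    w_red = [phi(wn, wj) / phi(zn, zj) for zj, wj in zip(z[:-1], w[:-1])]
    prev = blaschke_interpolant(z[:-1], w_red, tol)
    # phi_m(x) = phi_{w_n}( phi_{z_n}(x) * phi_{m-1}(x) ), as in the proofs of Theorems 3.4 and 3.5.
    return lambda x: phi(wn, phi(zn, x) * prev(x))
```

For data in the interior of \(\mathscr{D}_{n}\) the solution is not unique (Theorem 3.5), so the recursion above is only meaningful for boundary data.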
The theorems above give us a way to relate the domain \(\mathscr{D}_{n}\) and the degree of the Blaschke product interpolating the boundary points of \(\mathscr{D}_{n}\).
**Corollary 3.6**.: _Let \((w_{1},\ldots,w_{n})\in\partial\mathscr{D}_{n}\) and \(\varphi\) a Blaschke product sending \(z_{j}\) to \(w_{j}\), \(1\leq j\leq n\). Then \(\varphi\) is of degree at most \(k-1\) if and only if \(k\) is the least positive integer for which there are \(w_{i_{1}},\ldots,w_{i_{k}}\in\{w_{1},\ldots,w_{n}\}\) such that_
\[(w_{i_{1}},\ldots,w_{i_{k}})\in\partial\mathscr{D}_{\mathbb{D}}(z_{i_{1}}, \ldots,z_{i_{k}}).\]
Proof.: \((\Rightarrow)\): Suppose \(\varphi\) has degree \(k-1\). Then using Theorem 3.4, we see that for any \(z_{i_{1}},\ldots,z_{i_{k}}\in\{z_{1},\ldots,z_{n}\}\) we have
\[(w_{i_{1}},\ldots,w_{i_{k}})=(\varphi(z_{i_{1}}),\ldots,\varphi(z_{i_{k}}))\in \partial\mathscr{D}_{\mathbb{D}}(z_{i_{1}},\ldots,z_{i_{k}}).\]
\((\Leftarrow)\): Let \(k\) be the least positive integer for which there are \(w_{i_{1}},\ldots,w_{i_{k}}\in\{w_{1},\ldots,w_{n}\}\) such that
\[(w_{i_{1}},\ldots,w_{i_{k}})\in\partial\mathscr{D}_{\mathbb{D}}(z_{i_{1}}, \ldots,z_{i_{k}}).\]
Then by Theorem 3.4, there is a Blaschke product \(\psi\) of degree at most \(k-1\) such that \(\psi(z_{i_{j}})=w_{i_{j}}\), \(1\leq j\leq k\). If \(\psi\) had degree less than \(k-1\), then Theorem 3.4 would contradict the minimality of \(k\); hence \(\psi\) has degree exactly \(k-1\). By Theorem 3.5, the solution to the sub-problem \(z_{i_{j}}\mapsto w_{i_{j}}\) is unique; since \(\varphi\) also solves this sub-problem, \(\psi=\varphi\). The stated claim therefore holds.
Lastly, let us describe the values of \(d_{\underline{\alpha}}^{\mathbb{D}}\) for arbitrary \(\underline{\alpha}\). We recall that if \(\underline{\alpha}\in\mathbb{C}^{n}\) is a nonzero element and \(t=\frac{1}{\mu_{\mathscr{D}_{n}}(\underline{\alpha})}\), then for any \(i\) and \(j\), \(d_{\underline{\alpha}}(z_{i},z_{j})=m(t\alpha_{i},t\alpha_{j})\) and \(t\cdot\underline{\alpha}\in\partial\mathscr{D}_{n}\). Once \(t\) is found, \(d_{\underline{\alpha}}\) can easily be computed. We know that an interpolation problem \(\mathbb{D}\ni z_{j}\mapsto w_{j}\in\mathbb{D},1\leq j\leq n\), is solvable if and only if the matrix
\[\mathcal{M}=\Big{(}\tfrac{1-w_{i}\overline{w_{j}}}{1-z_{i}\overline{z_{j}}} \Big{)}_{1\leq i,j\leq n}\]
is positive semidefinite. Also, the solution is unique if and only if \(det(\mathcal{M})=0\). By Theorems 3.4 and 3.5 we can say that the positive number \(t=\frac{1}{\mu_{\mathscr{D}_{n}}(\underline{\alpha})}\) is a root of the equation
\[det\left(\tfrac{1-x^{2}\alpha_{i}\overline{\alpha_{j}}}{1-z_{i}\overline{z_{j}}}\right)=0. \tag{3.3}\]
Since the left-hand side is a polynomial in the single variable \(x\), the equation can be solved. Let us now consider the following quantity
\[t=max\Big{\{}x\geq 0:x\text{ is a root of \eqref{eq:1} and }\left(\tfrac{1-x^{2}\alpha_{i}\overline{\alpha_{j}}}{1-z_{i}\overline{z_{j}}} \right)\geq\mathbf{0}\Big{\}}. \tag{3.4}\]
**Theorem 3.7**.: _The quantity \(t\) given by (3.4) is \(\frac{1}{\mu_{\mathscr{D}_{n}}(\underline{\alpha})}\)._
Proof.: We have
\[det\left(\tfrac{1-t^{2}\alpha_{i}\overline{\alpha_{j}}}{1-z_{i}\overline{z_{j }}}\right)=0\text{ and }\left(\tfrac{1-t^{2}\alpha_{i}\overline{\alpha_{j}}}{1-z_{i}\overline{z_{j}}} \right)\geq\mathbf{0}.\]
So the interpolation problem \(z_{j}\mapsto t\alpha_{j}\) is solvable and its solution is unique. Theorem 3.5 implies that \(t\cdot\underline{\alpha}\in\partial\mathscr{D}_{n}\). Since \(\partial\mathscr{D}_{n}=\{\mu_{\mathscr{D}_{n}}=1\}\), the proof follows.
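To make (3.3)-(3.4) and Theorem 3.7 computable, note that the matrix in (3.3) can be written as \(A-x^{2}B\) with \(A=\big(\tfrac{1}{1-z_{i}\overline{z_{j}}}\big)_{ij}\) and \(B=\big(\tfrac{\alpha_{i}\overline{\alpha_{j}}}{1-z_{i}\overline{z_{j}}}\big)_{ij}\), so the admissible values of \(x^{2}\) are generalized eigenvalues of the pair \((A,B)\). The following Python sketch (our reformulation, not the paper's) returns the quantity \(t\) of (3.4):

```python
import numpy as np
from scipy.linalg import eigvals, eigvalsh

def compute_t(z, alpha, tol=1e-9):
    """Largest x >= 0 with det((1 - x^2 a_i conj(a_j)) / (1 - z_i conj(z_j))) = 0
    and the matrix positive semidefinite, as in (3.4)."""
    z = np.asarray(z, dtype=complex)
    a = np.asarray(alpha, dtype=complex)
    C = 1.0 / (1.0 - np.outer(z, np.conj(z)))   # (1 / (1 - z_i conj(z_j)))_{ij}
    A = C
    B = np.outer(a, np.conj(a)) * C             # (alpha_i conj(alpha_j) / (1 - z_i conj(z_j)))_{ij}
    best = 0.0
    # det(A - s B) = 0  <=>  s is a generalized eigenvalue of (A, B); here s = x^2.
    for s in eigvals(A, B):
        if not np.isfinite(s) or abs(s.imag) > tol or s.real < 0:
            continue
        x = np.sqrt(s.real)
        M = A - s.real * B
        if eigvalsh((M + M.conj().T) / 2).min() >= -tol:   # positive semidefinite check
            best = max(best, x)
    return best
```

Once \(t\) is found, \(d_{\underline{\alpha}}(z_{i},z_{j})=m(t\alpha_{i},t\alpha_{j})\) can be read off directly, as noted above.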
**Acknowledgements:** The author's research is supported by GACR (Czech grant agency) grant 22-15012J. The author thanks Dr. Anwoy Maitra for fruitful discussions. |
2309.05107 | Nonlinear Granger Causality using Kernel Ridge Regression | I introduce a novel algorithm and accompanying Python library, named
mlcausality, designed for the identification of nonlinear Granger causal
relationships. This novel algorithm uses a flexible plug-in architecture that
enables researchers to employ any nonlinear regressor as the base prediction
model. Subsequently, I conduct a comprehensive performance analysis of
mlcausality when the prediction regressor is the kernel ridge regressor with
the radial basis function kernel. The results demonstrate that mlcausality
employing kernel ridge regression achieves competitive AUC scores across a
diverse set of simulated data. Furthermore, mlcausality with kernel ridge
regression yields more finely calibrated $p$-values in comparison to rival
algorithms. This enhancement enables mlcausality to attain superior accuracy
scores when using intuitive $p$-value-based thresholding criteria. Finally,
mlcausality with the kernel ridge regression exhibits significantly reduced
computation times compared to existing nonlinear Granger causality algorithms.
In fact, in numerous instances, this innovative approach achieves superior
solutions within computational timeframes that are an order of magnitude
shorter than those required by competing algorithms. | Wojciech "Victor" Fulmyk | 2023-09-10T18:28:48Z | http://arxiv.org/abs/2309.05107v1 | # Nonlinear Granger Causality using Kernel Ridge Regression
###### Abstract
I introduce a novel algorithm and accompanying Python library, named **mlcausality**, designed for the identification of nonlinear Granger causal relationships. This novel algorithm uses a flexible plug-in architecture that enables researchers to employ any nonlinear regressor as the base prediction model. Subsequently, I conduct a comprehensive performance analysis of **mlcausality** when the prediction regressor is the kernel ridge regressor with the radial basis function kernel. The results demonstrate that **mlcausality** employing kernel ridge regression achieves competitive AUC scores across a diverse set of simulated data. Furthermore, **mlcausality** with kernel ridge regression yields more finely calibrated \(p\)-values in comparison to rival algorithms. This enhancement enables **mlcausality** to attain superior accuracy scores when using intuitive \(p\)-value-based thresholding criteria. Finally, **mlcausality** with kernel ridge regression exhibits significantly reduced computation times compared to existing nonlinear Granger causality algorithms. In fact, in numerous instances, this innovative approach achieves superior solutions within computational timeframes that are an order of magnitude shorter than those required by competing algorithms.
**Keywords:** machine learning, Granger causality, time-series, nonlinearity, kernel ridge regression, mlcausality, synthetic network, causal discovery, nonparametric methods.
## 1 Introduction
Identifying the direction of relationships amongst simultaneously observed time-series continues to generate significant interest among researchers from many different fields. Since Nobel prize-winning economist Clive Granger first introduced the concept of Granger causality (Granger, 1969), significant theoretical contributions towards the development of Granger-causal methods have been made by computer scientists, biostatisticians, physicists, mathematicians, and many others. Much of the recent research has focused on identifying nonlinear Granger causality, possibly in the presence of other confounding time-series (Wismuller et al., 2021; Rosol et al., 2022; Lacasa et al., 2015; Gao et al., 2017; D'Souza et al., 2017). The novel approach proposed herein, which I call **mlcausality**, tackles the problem of identifying nonlinear relationships using a regressor-agnostic non-parametric test. Consequently, **mlcausality** has a plug-in architecture that allows the researcher to use any nonlinear regressor, such as a kernel ridge regressor, a support vector regressor, a random forest regressor, or a gradient boosting regressor, as the base prediction model. Due to **mlcausality**'s plug-in architecture and the large number of regressors with which it can be used, the rest of this paper focuses exclusively on analyzing **mlcausality**'s performance when the kernel ridge regressor with the radial basis function kernel is used, with an analysis of the performance of other regressors deferred to subsequent papers. For many nonlinear networks **mlcausality** with kernel ridge regression outperforms rival algorithms in terms of both network recovery performance and execution speed.
## 2 Methodology
### An introduction to Granger causality
Fundamentally, Granger causality describes predictability: given two time-series \(X\) and \(Y\), if the histories of both \(X\) and \(Y\) predict future values of \(Y\) better than the history of \(Y\) alone, then \(X\) is said to Granger cause \(Y\). Before continuing, it is important to note that Granger causality itself is a misnomer because no truly causal or mechanical link between the time-series in question is implied. In other words, if \(X\) Granger causes \(Y\), then it could be the case that \(X\) actually causes \(Y\), or it could be the case that \(X\) simply leads \(Y\) and therefore provides useful information about the future value(s) of \(Y\) without actually causing \(Y\).
Classical Granger causality is formulated as follows. Given two time-series \(X\) and \(Y\) and all available information \(U\), \(X\) Granger causes \(Y\) if the variance of the prediction error when the history of \(X\) is included is lower than the variance of the prediction error when \(X\) is excluded:
\[\sigma^{2}(Y|U)<\sigma^{2}(Y|(U-X)) \tag{1}\]
where \((U-X)\) denotes all available information except for time series \(X\).
To evaluate equation 1 two linear models on lagged variables are estimated, an "unrestricted" model that includes the lags of \(X\) and a "restricted" model that excludes the lags of \(X\). Although it is theoretically possible to include a different number of lags for each time-series, in practice an equal number of lags for all time-series is typically used. Given an equal number of lags for all variables, the classical Granger causality model evaluates:
\[Y(t)=\alpha_{r,0}+\sum_{l=1}^{L}\alpha_{r,l}Y(t-l)+E_{r} \tag{2}\] \[Y(t)=\alpha_{u,0}+\sum_{l=1}^{L}\alpha_{u,l}Y(t-l)+\sum_{l=1}^{L }\beta_{u,l}X(t-l)+E_{u} \tag{3}\]
where \(L\) is the total number of lags; \(\alpha_{m,0}\) are the intercepts for the restricted \(r\) and unrestricted \(u\) models \(m\); \(\alpha_{m,l}\) are the coefficients for lags \(l\) of \(Y\); \(\beta_{m,l}\) are the coefficients for lags \(l\) of \(X\); and \(E_{m}\) are the error terms.
Note that, under classical Granger causality, the restricted model is a nested model of the unrestricted model. For linear regression this guarantees that the fit of the unrestricted model will be no worse than that of the restricted model. In order to test whether the variance of the unrestricted model is significantly lower, in the statistical sense, than the variance of the restricted model, an F-test is performed:
\[F=\frac{(RSS_{r}-RSS_{u})/L}{RSS_{u}/(N-2L-1)} \tag{4}\]
where \(N\) is the total number of training samples for which all lag terms exist, \(RSS_{m}\) is the residual sum of squares for model \(m\), and \(L\) is the total number of lags. Under the null hypothesis, if the fit from the unrestricted model is not significantly better than that of the restricted model, the _F_-statistic from equation 4 follows an _F_-distribution with \((L,N-2L-1)\) degrees of freedom.
The primary limitation of the classical Granger causality model is that, by construction, the classical model is only able to identify Granger causality if \(X\) and \(Y\) are linearly related. Identifying nonlinear relationships using classical Granger causality would require performing nonlinear data transformations, such as by taking natural logs of the data. However, making such explicit data transformations is not always a trivial task, especially in cases where the data generating process is not well known to the researcher.
On a positive note, the addition of superfluous confounding time-series into the classical model is possible as long as the denominator degrees of freedom remain strictly positive. In other words, if there were an additional confounding time-series \(Z\) that one wanted to include in the analysis, then one could simply add \(\sum_{l=1}^{L}\gamma_{m,l}Z(t-l)\) to both the restricted and unrestricted models in equations 2 and 3 and alter the denominator degrees of freedom in equation 4 to \(N-3L-1\). The null hypothesis with confounding time-series \(Z\) is that \(\sigma^{2}(Y|X_{t-1},...,X_{t-L},Y_{t-1},...,Y_{t-L},Z_{t-1},...,Z_{t-L})\) is not less than \(\sigma^{2}(Y|Y_{t-1},...,Y_{t-L},Z_{t-1},...,Z_{t-L})\). Additional confounding time-series can be added analogously.
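For readers who want to reproduce the classical test, the following minimal numpy/scipy sketch of equations 2-4 may be helpful (our illustration, not part of the mlcausality library; it assumes `x` and `y` are one-dimensional numpy arrays of equal length):

```python
import numpy as np
from scipy.stats import f as f_dist

def linear_granger_f_test(x, y, lags):
    """Classical F-test of 'x Granger-causes y' using OLS on lagged values."""
    n_total = len(y)
    rows = n_total - lags                           # rows with all lags available
    y_t = y[lags:]                                  # target Y(t)
    y_lags = np.column_stack([y[lags - l:n_total - l] for l in range(1, lags + 1)])
    x_lags = np.column_stack([x[lags - l:n_total - l] for l in range(1, lags + 1)])
    ones = np.ones((rows, 1))
    X_r = np.hstack([ones, y_lags])                 # restricted design (eq. 2)
    X_u = np.hstack([ones, y_lags, x_lags])         # unrestricted design (eq. 3)

    def rss(X):
        beta = np.linalg.lstsq(X, y_t, rcond=None)[0]
        return np.sum((y_t - X @ beta) ** 2)

    rss_r, rss_u = rss(X_r), rss(X_u)
    df_den = rows - 2 * lags - 1
    F = ((rss_r - rss_u) / lags) / (rss_u / df_den)  # eq. 4
    p_value = f_dist.sf(F, lags, df_den)
    return F, p_value
```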
### Kernel Ridge Regression
The following provides a brief review of kernel ridge regression. For a more thorough treatment, the reader is encouraged to review Murphy (2012); Vovk (2013); and Exterkate et al. (2016).
Let \(\mathbf{x}\in\mathbb{R}^{D}\) represent a feature vector for \(D\) features, and \(\mathbf{X}\in\mathbb{R}^{N\times D}\) represent a design matrix for \(N\) observations. Given a target vector \(\mathbf{y}\in\mathbb{R}^{N}\), ridge regression minimizes the following objective function:
\[\begin{split}&\min_{w}\ ||\mathbf{y}-\mathbf{X}\mathbf{w}||_{2}^{2}+ \lambda||\mathbf{w}||_{2}^{2}\\ =&\min_{w}\ (\mathbf{y}-\mathbf{X}\mathbf{w})^{T}(\mathbf{y}-\mathbf{X} \mathbf{w})+\lambda\mathbf{w}^{T}\mathbf{w}\end{split} \tag{5}\]
where \(\mathbf{w}\in\mathbb{R}^{D}\) are the regression coefficients and \(\lambda\) is a penalty term that penalizes the magnitudes of those coefficients. Note that \(\lambda\) acts as a regularization term: if \(\lambda=0\), then objective function 5 collapses to a linear regression, while values of \(\lambda\) greater than zero encourage smaller coefficient magnitudes at the optimum.
The solution is found by taking the derivative of the objective function and setting that derivative equal to zero:
\[\mathbf{w} =(\mathbf{X}^{T}\mathbf{X}+\lambda\mathbf{I}_{D})^{-1}\mathbf{X}^{T}\mathbf{y} \tag{6}\] \[\mathbf{w} =\mathbf{X}^{T}(\mathbf{X}\mathbf{X}^{T}+\lambda\mathbf{I}_{N})^{-1}\mathbf{y} \tag{7}\]
where equation 7 is due to the Sherman-Morrison-Woodbury formula (Woodbury, 1950).
The ridge regression model, as described above, produces nothing more than a linear solution that penalizes the magnitudes of the coefficients in relation to \(\lambda\) and is therefore incapable of efficiently modelling nonlinear relationships by itself. In order to introduce nonlinearity, the ridge regression model is kernalized. Intuitively, kernel methods map feature vectors into a higher, possibly infinitely dimensional space. The goal of kernel ridge regression is to then apply (inherently linear) ridge regression on the mappings of the feature vectors in that higher dimensional space. However, from a computational perspective, it turns out that one does not need to actually map feature vectors into a different space at
all; rather, it is sufficient to just calculate relationships between all feature vectors using a kernel function. This process is commonly referred to as the "kernel trick."
Formally, define a symmetric kernel function \(\kappa(\mathbf{x},\mathbf{x}^{\prime})=\kappa(\mathbf{x}^{\prime},\mathbf{x})\in\mathbb{R}\) as a real-valued function for feature vectors \(\mathbf{x},\mathbf{x}^{\prime}\in\chi\), with \(|\chi|=N\). If the Gram matrix \(\mathbf{K}\) with elements \(\kappa(\mathbf{x}_{i},\mathbf{x}_{j})_{ij}\)\(\forall\)\(i,j\in\{1,...,N\}\) is positive definite, then by Mercer's theorem (Mercer, 1909) there exists a function \(\mathbf{\phi}\) mapping \(\mathbf{x}\) to \(\mathbb{R}^{\mathbf{M}}\) such that
\[\kappa(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{\phi}(\mathbf{x})^{T}\mathbf{\phi}(\mathbf{x}^{\prime}) \tag{8}\]
Moreover, Mercer's theorem guarantees that \(M\), the dimension of the space to which \(\mathbf{\phi}\) maps feature vectors into, can be arbitrarily large, potentially infinite. A kernel that satisfies the above Gram matrix condition is called a Mercer kernel.
Ridge regression can be kernalized as follows. Suppose that one wishes to evaluate ridge regression not on the feature vectors \(\mathbf{x}\) directly but rather on some mapping \(\mathbf{\phi}(\mathbf{x})\) into a higher, possibly infinitely dimensional, space. Moreover, let \(\mathbf{\phi}(\cdot)\) represent the multi-observational analogue of \(\mathbf{\phi}(\cdot)\) that admits \(\mathbf{X}\), the design matrix that stores feature vectors \(\mathbf{x}\) for multiple observations, as an input. Then ridge regression could be partially kernalized by replacing \(\mathbf{X}\mathbf{X}^{T}\) in equation 7 with the Gram matrix \(\mathbf{K}\) for some Mercer kernel \(\kappa(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{\phi}(\mathbf{x})^{T}\mathbf{\phi}(\mathbf{x}^{\prime})\), and by replacing \(\mathbf{X}^{T}\) with \(\mathbf{\phi}(\mathbf{X})^{T}\). The partially kernalized solution then becomes:
\[\mathbf{w}=\mathbf{\phi}(\mathbf{X})^{T}(\mathbf{K}+\lambda\mathbf{I}_{N})^{-1}\mathbf{y} \tag{9}\]
Now, let \(\mathbf{\alpha}=(\mathbf{K}+\lambda\mathbf{I}_{N})^{-1}\mathbf{y}\). Equation 9 then becomes:
\[\mathbf{w} =\mathbf{\phi}(\mathbf{X})^{T}\mathbf{\alpha} \tag{10}\] \[=\sum_{i=1}^{N}\alpha_{i}\mathbf{\phi}(\mathbf{x}_{i}) \tag{11}\]
Note that equation 10 is not yet usable because the kernalization is only partial: we still have to deal with \(\mathbf{\phi}(\mathbf{X})^{T}\), which is potentially difficult to evaluate because all feature vectors in \(\mathbf{X}\) would have to be transformed using \(\mathbf{\phi}(\cdot)\) directly. However, if just the in-sample predictions from the kernel ridge regression are needed, then for each feature vector \(\mathbf{x}\), one just has to evaluate
\[\hat{f}(\mathbf{x}) =\mathbf{w}^{T}\mathbf{\phi}(\mathbf{x}) \tag{12}\] \[=\sum_{i=1}^{N}\alpha_{i}\mathbf{\phi}(\mathbf{x}_{i})^{T}\mathbf{\phi}(\mathbf{x})\] (13) \[=\sum_{i=1}^{N}\alpha_{i}\kappa(\mathbf{x}_{i},\mathbf{x}) \tag{14}\]
where the last line completes the kernalization. It is thus possible to generate nonlinear in-sample predictions by fitting an (inherently linear) ridge regression on a mapping of the feature vectors into a higher dimensional space, without actually performing the mapping into that higher dimensional space. In order to perform out-of-sample predictions, one would
first compute \(\mathbf{\alpha}=(\mathbf{K}+\lambda\mathbf{I}_{N})^{-1}\mathbf{y}\) from the training data; out-of-sample predictions could then be made directly using equation 14 with that precomputed \(\mathbf{\alpha}\) and a previously unseen feature vector \(\mathbf{x}\) that comes from the test set.
Using kernel ridge regression in practice involves choosing a suitable kernel function and, by design, the **mlcausality** Python library does not impose any restrictions in this regard: the user is free to choose any kernel function supported by the **scikit-learn** Python library. However, all of the subsequent analysis presented herein was performed using the radial basis function (RBF) kernel, also known as the Gaussian kernel:
\[\kappa(\mathbf{x},\mathbf{x}^{\prime})=e^{-\gamma||\mathbf{x}-\mathbf{x}^{\prime}||_{2}^{2}} \tag{15}\]
where \(\gamma\in(0,\infty)\) is a hyperparameter. The RBF kernel corresponds to an infinite-dimensional feature map, as can be seen from the multinomial expansion in Shashua (2009).
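The closed form in equations 9-14 is short enough to implement directly; the following minimal numpy sketch (ours; the scikit-learn KernelRidge estimator wraps the same algebra) fits and predicts with the RBF kernel:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2), equation 15
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

class SimpleKRR:
    def __init__(self, lam=1.0, gamma=None):
        self.lam, self.gamma = lam, gamma

    def fit(self, X, y):
        self.X = X
        # default gamma = 1 / D, the convention used later in the paper
        self.g = self.gamma if self.gamma is not None else 1.0 / X.shape[1]
        K = rbf_kernel(X, X, self.g)
        # alpha = (K + lambda I)^{-1} y, equation 9
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self

    def predict(self, Xnew):
        # f(x) = sum_i alpha_i k(x_i, x), equation 14
        return rbf_kernel(Xnew, self.X, self.g) @ self.alpha
```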
### The mlcausality Algorithm
Suppose there is a set \(\mathbf{X}\) of equally-spaced time-series that share a common time index and do not contain any gaps. Suppose that each of these time series is represented by a column vector \(\mathbf{x}\in\mathbf{X}\) and that each of these vectors has length \(N+L\). Suppose that \(\mathbf{X}\) contains at least 2 such time series, that is, that \(|\mathbf{X}|=G\geq 2\). Let \(L\) represent the number of lags. Moreover, let \(\mathbf{X}_{u}\in\mathbb{R}^{N\times LG}\) be a matrix of \(L\) lags \(\forall\)\(G\) time-series in \(\mathbf{X}\), as depicted in the matrix below:
\[\mathbf{X_{u}}=\begin{bmatrix}\mathbf{x}_{1,1}&\cdots&\mathbf{x}_{1,L}& \cdots\cdots&\mathbf{x}_{G,1}&\cdots&\mathbf{x}_{G,L}\\ \mathbf{x}_{1,2}&\cdots&\mathbf{x}_{1,L+1}&\cdots\cdots&\mathbf{x}_{G,2}& \cdots&\mathbf{x}_{G,L+1}\\ \mathbf{x}_{1,3}&\cdots&\mathbf{x}_{1,L+2}&\cdots\cdots&\mathbf{x}_{G,3}& \cdots&\mathbf{x}_{G,L+2}\\ \vdots&\cdots&\vdots&\ddots&\vdots&\cdots&\vdots\\ \mathbf{x}_{1,N-2}&\cdots&\mathbf{x}_{1,N+L-3}&\cdots\cdots&\mathbf{x}_{G,N-2 }&\cdots&\mathbf{x}_{G,N+L-3}\\ \mathbf{x}_{1,N-1}&\cdots&\mathbf{x}_{1,N+L-2}&\cdots\cdots&\mathbf{x}_{G,N-1 }&\cdots&\mathbf{x}_{G,N+L-2}\\ \mathbf{x}_{1,N}&\cdots&\mathbf{x}_{1,N+L-1}&\cdots\cdots&\mathbf{x}_{G,N}& \cdots&\mathbf{x}_{G,N+L-1}\end{bmatrix}\]
where the first coordinate in the subscript of an element indicates the time-series and the second coordinate represents the time index. Note that a single row in the above matrix represents a concatenation of \(1,...,L\) lags \(\forall\)\(G\) time-series in \(\mathbf{X}\), and that only rows that have no missing lags are kept. Similarly, let \(\mathbf{X}_{r}\in\mathbb{R}^{N\times L(G-1)}\) be a matrix of \(L\) lags \(\forall\)\(G-1\) time-series in \(\mathbf{X}\) other than time-series \(\mathbf{q}\in\mathbf{X}\). Then, in order to test whether time-series \(\mathbf{q}\in\mathbf{X}\) Granger-causes time-series \(\mathbf{y}\in\mathbf{X}\) with \(\mathbf{q}\neq\mathbf{y}\), **mlcausality** evaluates
\[\hat{\mathbf{y}}_{r} =f_{r}(\mathbf{X}_{r},\mathbf{y}_{true},*_{r}) \tag{16}\] \[\hat{\mathbf{y}}_{u} =f_{u}(\mathbf{X}_{u},\mathbf{y}_{true},*_{u}) \tag{17}\]
where the subscripts \(r\) and \(u\) represent the restricted and unrestricted models, respectively; \(f\) represents any nonlinear regressor, such as the kernel ridge regressor described above; and \(*\) is a placeholder for any additional hyperparameters the chosen regressor accepts.
For kernel ridge regression with the RBF kernel \(*_{r}=(\lambda,\gamma_{r})\) and \(*_{u}=(\lambda,\gamma_{u})\). Note that, in theory, \(\lambda\) can be varied between the restricted and unrestricted models, however, different
\(\lambda\) values for the restricted and unrestricted models would make it difficult to assess whether the restricted and unrestricted models perform differently solely on the basis of the exclusion or inclusion of time-series \(\mathbf{q}\); as such, \(\lambda\) will be kept the same for both models. It is not clear which value of \(\lambda\) should be chosen for the nonlinear Granger causality identification task, although simulation results suggest that the performance of **mlcausality** with the kernel ridge regressor and the RBF kernel is not very sensitive to the choice of \(\lambda\). Consequently, all results presented herein use \(\lambda=1\), which is the **scikit-learn** Python library default value for the penalty term in kernel ridge regression.
On the other hand, \(\gamma\) is subscripted with the model type because it is natural to use \(\gamma=1/D\), where \(D\) is the number of features, i.e. the number of columns in either \(\mathbf{X}_{r}\) or \(\mathbf{X}_{\mathbf{u}}\), for the problem at hand. This is because parameter \(\gamma\) in the RBF kernel acts as a multiplier for the square of the Euclidean norm, where the number of elements in the sum of that norm is equal to the number of elements in a feature vector. Using an identical \(\gamma\) for both models would not account or correct for the greater amount of features in the unrestricted model stemming from its inclusion of time-series \(\mathbf{q}\). As such, all results presented herein use \(\gamma=1/D\) which acts as a weight that scales the squared norm in the RBF kernel by the number of features in that norm.
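Before turning to the statistical test, note that the lag matrices described above are straightforward to assemble; a small numpy sketch (ours; the column ordering is immaterial to the regressor) is:

```python
import numpy as np

def lag_matrix(series_list, lags):
    """Columns are lags 1..L of every series in series_list; rows with missing lags are dropped.
    With all G series this yields X_u; leaving out the tested series q yields X_r."""
    n_total = len(series_list[0])
    cols = []
    for s in series_list:
        s = np.asarray(s)
        for l in range(1, lags + 1):
            cols.append(s[lags - l:n_total - l])   # values at times t - l for the usable rows
    return np.column_stack(cols)

def lagged_target(y, lags):
    """Target vector y(t) aligned with the rows of the lag matrices."""
    return np.asarray(y)[lags:]
```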
#### 2.3.1 The Statistical Test
A notable distinction between classical Granger causality and **mlcausality** lies in the substitution of linear regression with a nonlinear regressor, such as the kernel ridge regressor or the support vector regressor. The task now is to identify an appropriate approach for assessing whether the forecasts generated by the unrestricted model exhibit superior performance compared to those produced by the restricted model when such a nonlinear regressor is used.
Recall that classical Granger causality relies on an \(F\)-test to determine if the variance of the unrestricted model significantly differs from that of the restricted model. Nevertheless, the incorporation of nonlinear regressors within the framework of **mlcausality** can potentially lead to suboptimal outcomes due to the \(F\)-test's sensitivity to normality, or even to an outright violation of some of the assumptions underpinning the \(F\)-test. Furthermore, depending on the specific regressor employed, it may prove challenging or even infeasible to precisely calculate the required degrees of freedom for executing the \(F\)-test. Consequently, instead of resorting to the \(F\)-test, **mlcausality** opts for the sign test (Dixon and Mood, 1946), a non-parametric test that imposes minimal assumptions.
The sign test compares the counts of positive and negative values and checks whether they follow a binomial distribution with the probability of success set to 0.5. The null hypothesis of the sign test is that the median of the distribution is equal to zero, while the one-sided alternative hypothesis is that the median is greater than zero.
The data subjected to the sign test is formulated in the following manner. First, the absolute values of the errors from the restricted and unrestricted models are calculated:
\[|\mathbf{E}_{r}| =|\hat{\mathbf{y}}_{r}-\mathbf{y}_{true}| \tag{18}\] \[|\mathbf{E}_{u}| =|\hat{\mathbf{y}}_{u}-\mathbf{y}_{true}| \tag{19}\]
where \(|\mathbf{v}|\) indicates, through an abuse of notation quite common in computer science, the absolute value of every element in vector \(\mathbf{v}\). Then, a difference between the absolute values of the errors is found: \(\mathbf{\delta}=|\mathbf{E}_{r}|-|\mathbf{E}_{u}|\). The sign test is subsequently employed on \(\mathbf{\delta}\).
There are several issues related to the usage of the sign test in this context that must be addressed. First off, note that, although the sign test does not come with very restrictive assumptions, there is one key assumption that must be met: that of independence of all elements in \(\mathbf{\delta}\). For the context presented herein independence can be assured by splitting the original set of time-series data into distinct train and test sets on the time domain. Once the split is formed, both the restricted and unrestricted models are trained on the train data only, with the difference vector \(\mathbf{\delta}\) being constructed solely from the restricted and unrestricted models' predictions of the test data. Note that the train-test split has to account for the time-series characteristics of the underlying data; in other words, the training data has to precede the test data, and there has to be a gap of at least \(L\) observations between the train and test sets in order to avoid data leakage. In order to satisfy the above, by default, the **mlcausality** Python library uses the first 70% of the observations as the training data and the remaining 30% of the observations as the testing data, with a gap of \(L\) observations between these two datasets. All results presented in this paper are with respect to this default data split, although superior results could be obtained with different splits in some cases. Note that the need to split the data into distinct train and test sets represents a departure from classical Granger causality where the train and test sets are identical and equal to all the data that is available. The main implication of this departure is that, in order for the **mlcausality** test to work as intended, the train and test data have to come from the same distribution.
Secondly, it is important to acknowledge that the sign test allocates identical weights to all time periods and does not take into account the extent to which one model outperforms the other at a specific time point. An alternative to the sign test that assigns more weight to time periods in \(|\mathbf{\delta}|\) that have greater values is the Wilcoxon signed rank test (Wilcoxon, 1945). The null hypothesis of the Wilcoxon signed rank test is that the difference between two related paired samples is symmetric around a value less than or equal to zero, while the alternative hypothesis is that the difference is symmetric around a value greater than zero. The Wilcoxon signed rank test evaluates the following test statistic:
\[T=\sum_{i=1}^{N}sgn(\delta_{i})R_{i} \tag{20}\]
where \(sgn(\delta_{i})\) is the sign of \(\delta_{i}\), and \(R_{i}\) is the rank of the magnitude of \(\delta_{i}\) in vector \(\mathbf{\delta}\), with the smallest magnitude being assigned rank one, the second smallest magnitude being assigned rank two, and so on.
The Wilcoxon signed rank test would provide greater power than the sign test if the assumptions of the Wilcoxon signed rank test were fully met; in particular, if all elements in \(\mathbf{\delta}\) were independent, and if the distribution of \(\mathbf{\delta}\) was symmetric. The independence assumption can be satisfied with a suitable train-test split, but the symmetry assumption required by the Wilcoxon signed rank test is more restrictive than the minimalist assumptions of the sign test, which only requires the satisfaction of independence. Consequently, the **mlcausality** Python library uses the sign test by default, although there are overrides that allow the
usage of the Wilcoxon signed rank test for those that wish to use it. In practice, at least for the simulated networks presented herein, the sign test tends to perform very well, and there is typically no need to depart from the sign test in favour of the Wilcoxon signed rank test.
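A compact sketch of the decision step described above (our illustration, using scipy's exact binomial test for the sign test; the library's internals may differ):

```python
import numpy as np
from scipy.stats import binomtest

def sign_test_greater(delta):
    """One-sided sign test: H0 median(delta) <= 0 vs H1 median(delta) > 0."""
    delta = np.asarray(delta)
    delta = delta[delta != 0]                 # ties are discarded before counting signs
    n_pos = int((delta > 0).sum())
    return binomtest(n_pos, n=len(delta), p=0.5, alternative='greater').pvalue

def granger_p_value(y_true_test, pred_restricted, pred_unrestricted):
    """p-value for 'q Granger-causes y' from out-of-sample predictions (eqs. 18-19)."""
    delta = np.abs(y_true_test - pred_restricted) - np.abs(y_true_test - pred_unrestricted)
    return sign_test_greater(delta)
```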
#### 2.3.2 Data Preprocessing
Prior to feeding the raw data into the **mlcausality** algorithm, as defined above, some preprocessing steps may be needed or desired.
First off, as is typical for these types of analyses, one should ensure that all time-series fed into the **mlcausality** algorithm are stationary. Moreover, depending on the regressor that is used, additional transformations may be desired. When the kernel ridge regressor is used all time-series should be, at a minimum, scaled to similar magnitudes. This is necessary because, if there are large discrepancies in the magnitudes of the features, then the effect of penalty parameter \(\lambda\) will differ substantially from one feature to the next, which could cause some features to be either under-penalized or over-penalized compared to others based on their scale alone.
In practice, for the **mlcausality** algorithm exclusively, the time-series data for analysis presented herein underwent a transformation using a quantile transformer. To be more specific, each individual time-series was partitioned into 1000 quantiles and then subjected to a transformation that mapped them onto a uniform distribution scaled within the range of 0 to 1. This particular transformation yields outstanding results when employed in conjunction with the sign test utilized by **mlcausality**. With this quantile transformation, the criterion for better prediction is one that forecasts a quantile closer to the quantile of the actual value, regardless of whether the prediction is closer in terms of the units of the outcome variable. In essence, the quantile transformation effectively compels the sign test to assess models based on their capacity to predict the quantile of the actual value, rather than their capability to predict the actual value itself.
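For reference, the transformation just described corresponds to scikit-learn's QuantileTransformer; for a single series stored in a 1-D numpy array (our snippet, with a toy series standing in for real data):

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

series = np.random.default_rng(0).standard_normal(2000)   # toy example series

# 1000 quantiles mapped onto a uniform distribution on [0, 1]; scikit-learn
# clips n_quantiles to the number of samples if the series is shorter.
qt = QuantileTransformer(n_quantiles=1000, output_distribution='uniform')
series_transformed = qt.fit_transform(series.reshape(-1, 1)).ravel()
```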
## 3 Results
In what follows, the performance of **mlcausality** (MLC in the following plots) is compared to the following three algorithms: the large-scale nonlinear Granger causality (lsNGC) algorithm (Wismuller et al., 2021); the mutual nonlinear cross-mapping methods using local models (LM) algorithm (Sugihara et al. (2012), using the software implementation provided by Javier (2021)); and the PC-momentary conditional independence (PCMCI) algorithm (Runge et al., 2019). For lsNGC, \(c_{f}\) = 25 and \(c_{g}\) = 5, the values suggested in Wismuller et al. (2021), unless the lsNGC algorithm threw an error, at which point \(c_{f}\) was progressively decreased until all simulated data ran successfully. For all other algorithms the software defaults were kept.
All models were run on a computer with an AMD Ryzen 5 3600 6-Core processor with 12 threads in total. In order to ensure that the comparison between **mlcausality** and other competing algorithms is as fair as possible, all algorithms were parallelized to run on 12 processes, one for each thread.
The results presented herein are for lag orders selected using Cao's minimum embedding dimension selection method (Cao, 1997). In particular, for every network and time-series length combination, Cao's algorithm is run on all time-series in that combination, with the
largest identified minimum embedding dimension used as the basis for constructing that combination's lag order.
### The Simulated Networks
The following briefly describes the data generating processes for the simulated networks for which performance results are presented. All networks were initialized using normally distributed white noise \(w(t)\) with mean = 0 and variance = 1. Every network was analyzed for time-series lengths of 500, 1000, 1500, and 2000 time-points. For every network and every time-series length, 50 independent sets were generated after discarding a 500 time-point burn-in. The network plots for all tested networks are available in figure 1.
_5-node linear network:_ This network was first proposed as example 3 in Baccala and Sameshima (2001). The network is generated using the following multivariate autoregressive model:
\[\begin{split} x_{1}(t)&=0.95\sqrt{2}x_{1}(t-1)-0.9025x_{1}(t-2)+w_{1}(t)\\ x_{2}(t)&=0.5x_{1}(t-2)+w_{2}(t)\\ x_{3}(t)&=-0.4x_{1}(t-3)+w_{3}(t)\\ x_{4}(t)&=-0.5x_{1}(t-2)+0.25\sqrt{2}x_{4}(t-1)+0.25\sqrt{2}x_{5}(t-1)+w_{4}(t)\\ x_{5}(t)&=-0.25\sqrt{2}x_{4}(t-1)+0.25\sqrt{2}x_{5}(t-1)+w_{5}(t)\end{split} \tag{21}\]
This network has the following causal connections: \(x_{1}\to x_{2}\), \(x_{1}\to x_{3}\), \(x_{1}\to x_{4}\), \(x_{4}\to x_{5}\), and \(x_{5}\to x_{4}\). This network structure has the potential to present several difficulties for a Granger causality recovery algorithm. For instance, although there is no direct causal link
Figure 1: Network plots
between \(x_{2}\), \(x_{3}\), and \(x_{4}\), they may all be correlated because of the causal effect of \(x_{1}\) on all of them.
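As an illustration, the 5-node linear network of equation 21 can be simulated with a few lines of numpy (our sketch; the 500 time-point burn-in follows the setup described above):

```python
import numpy as np

def simulate_5node_linear(n_points, burn_in=500, seed=0):
    rng = np.random.default_rng(seed)
    T = n_points + burn_in
    x = np.zeros((T, 5))
    w = rng.standard_normal((T, 5))            # white noise, mean 0, variance 1
    r2 = np.sqrt(2)
    for t in range(3, T):                      # start at 3 because x3 uses lag 3
        x[t, 0] = 0.95 * r2 * x[t-1, 0] - 0.9025 * x[t-2, 0] + w[t, 0]
        x[t, 1] = 0.5 * x[t-2, 0] + w[t, 1]
        x[t, 2] = -0.4 * x[t-3, 0] + w[t, 2]
        x[t, 3] = (-0.5 * x[t-2, 0] + 0.25 * r2 * x[t-1, 3]
                   + 0.25 * r2 * x[t-1, 4] + w[t, 3])
        x[t, 4] = -0.25 * r2 * x[t-1, 3] + 0.25 * r2 * x[t-1, 4] + w[t, 4]
    return x[burn_in:]                         # drop the burn-in period
```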
_5-node nonlinear network:_ This network was introduced in Wismuller et al. (2021). The causal connections in this 5-node nonlinear network are identical to those of the 5-node linear network, except some of those connections are converted into nonlinear ones:
\[\begin{split}& x_{1}(t)=0.95\sqrt{2}x_{1}(t-1)-0.9025x_{1}(t-2)+w_{ 1}(t)\\ & x_{2}(t)=0.5x_{1}^{2}(t-2)+w_{2}(t)\\ & x_{3}(t)=-0.4x_{1}(t-3)+w_{3}(t)\\ & x_{4}(t)=-0.5x_{1}^{2}(t-2)+0.5\sqrt{2}x_{4}(t-1)+0.25\sqrt{2}x _{5}(t-1)+w_{4}(t)\\ & x_{5}(t)=-0.5\sqrt{2}x_{4}(t-1)+0.5\sqrt{2}x_{5}(t-1)+w_{5}(t) \end{split} \tag{22}\]
_7-node nonlinear network:_ This network introduces many very complicated nonlinear interactions:
\[\begin{split}& x_{1}(t)=0.95\sqrt{2}x_{1}(t-1)-0.9025x_{1}(t-2)+w_{ 1}(t)\\ & x_{2}(t)=-0.04x_{1}^{3}(t-3)+0.04x_{1}^{3}(t-1)+w_{2}(t)\\ & x_{3}(t)=-0.04\sqrt{2}x_{2}^{3}(t-1)+0.04\sqrt{2}x_{2}^{3}(t-2 )+w_{3}(t)\\ & x_{4}(t)=ln(1+|x_{3}(t-1)|)*sgn(x_{3}(t-1))+0.001x_{7}^{3}(t-2) -0.001x_{7}^{3}(t-3)+w_{4}(t)\\ & x_{5}(t)=0.04*clip(w_{5a}(t),-1,1)*x_{6}^{5}(t-2)+w_{5b}(t)\\ & x_{6}(t)=0.04*x_{1}^{3}(t-2)+0.04*x_{3}^{3}(t-1)+w_{6}(t)\\ & x_{7}(t)=clip(w_{7a}(t),-0.5,0.5)*(0.04x_{1}^{3}(t-2)+0.1x_{6}^ {2}(t-1)-0.1x_{6}^{2}(t-2))+w_{7b}(t)\end{split} \tag{23}\]
where \(clip(v,a,b)\) is a function that limits \(v\) to the range \([a,b]\) and \(sgn(v)\) is the sign of \(v\). The connections for this network are as follows: \(x_{1}\to x_{2}\), \(x_{1}\to x_{6}\), \(x_{1}\to x_{7}\), \(x_{2}\to x_{3}\), \(x_{3}\to x_{4}\), \(x_{3}\to x_{6}\), \(x_{6}\to x_{5}\), \(x_{6}\to x_{7}\), and \(x_{7}\to x_{4}\).
_9-node nonlinear network:_ This network combines many nonlinear interactions with autoregressive terms for all variables:
\[\begin{split}& x_{1}(t)=0.95\sqrt{2}x_{1}(t-1)-0.9025x_{1}(t-2)+w_{ 1}(t)\\ & x_{2}(t)=0.5x_{1}^{2}(t-2)+0.5x_{2}^{2}(t-1)-0.4x_{2}^{2}(t-2)+ w_{2}(t)\\ & x_{3}(t)=-0.4x_{1}(t-3)+0.5x_{3}^{2}(t-1)-0.4x_{3}^{2}(t-2)+w_{ 3}(t)\\ & x_{4}(t)=-0.5x_{1}^{2}(t-2)+0.5x_{4}^{2}(t-1)-0.4x_{4}^{2}(t-2) +0.5\sqrt{2}x_{4}(t-1)+0.25\sqrt{2}x_{5}(t-1)+w_{4}(t)\\ & x_{5}(t)=-0.5\sqrt{2}x_{4}(t-1)+0.5\sqrt{2}x_{5}(t-1)+w_{5}(t) \\ & x_{6}(t)=sgn(x_{4}(t-1))*ln(|x_{4}(t-1)|+1)+0.5x_{6}^{2}(t-1)-0. 4x_{6}^{2}(t-2)+w_{6}(t)\\ & x_{7}(t)=0.04*clip(w_{7a}(t),-1,1)*x_{6}^{5}(t-2)+0.5x_{7}^{2}(t -1)-0.4x_{7}^{2}(t-2)+w_{7b}(t)\\ & x_{8}(t)=0.4x_{1}(t-2)+0.25x_{3}^{3}(t-1)+0.5x_{8}^{2}(t-1)-0.4 x_{8}^{2}(t-2)+w_{8}(t)\\ & x_{9}(t)=clip(w_{9a}(t),-0.5,0.5)*(0.2x_{1}(t-2)+0.1x_{8}^{2}(t -1)-0.1x_{8}^{2}(t-2))+0.5x_{9}^{2}(t-1)-0.4x_{9}^{2}(t-2)+w_{9b}(t),\end{split} \tag{24}\]
where \(clip(v,a,b)\) is a function that limits \(v\) to the range \([a,b]\) and \(sgn(v)\) is the sign of \(v\). The connections of this network are as follows: \(x_{1}\to x_{2}\), \(x_{1}\to x_{3}\), \(x_{1}\to x_{4}\), \(x_{1}\to x_{8}\), \(x_{1}\to x_{9}\), \(x_{3}\to x_{8}\), \(x_{4}\to x_{5}\), \(x_{4}\to x_{6}\), \(x_{5}\to x_{4}\), \(x_{6}\to x_{7}\), and \(x_{8}\to x_{9}\).
_11-node nonlinear network:_ Yet another network with many nonlinear interactions:
\[\begin{split} x_{1}(t)&=0.25x_{1}^{2}(t-1)-0.25x_{1}^{2 }(t-2)+w_{1}(t)\\ x_{2}(t)&=ln(1+|x_{1}(t-2)|)*sgn(x_{1}(t-2))+w_{2}(t) \\ x_{3}(t)&=-0.1x_{2}^{3}(t-3)+w_{3}(t)\\ x_{4}(t)&=-0.5x_{2}^{2}(t-2)+0.5\sqrt{2}x_{4}(t-1)+0. 25\sqrt{2}x_{5}(t-1)+w_{4}(t)\\ x_{5}(t)&=-0.5\sqrt{2}x_{4}(t-1)+0.5\sqrt{2}x_{5}(t-1 )+w_{5}(t)\\ x_{6}(t)&=ln(1+|x_{4}(t-1)|)*sgn(x_{4}(t-1))+w_{6}(t) \\ x_{7}(t)&=0.04*clip(w_{7a}(t),-1,1)*x_{6}^{5}(t-2 )+w_{7b}(t)\\ x_{8}(t)&=0.4x_{1}(t-2)+0.25x_{3}^{3}(t-1)+w_{8}(t) \\ x_{9}(t)&=clip(w_{9a}(t),-0.5,0.5)*(0.2x_{1}(t-2 )+0.1x_{8}^{2}(t-1)-0.1x_{8}^{2}(t-2))+w_{9b}(t)\\ x_{10}(t)&=0.25x_{1}^{2}(t-3)-0.01x_{2}^{2}(t-3)+0. 15x_{3}^{3}(t-3)+w_{10}(t)\\ x_{11}(t)&=0.1x_{2}^{4}(t-1)-0.1x_{2}^{4}(t-2)+0.1x_ {6}^{3}(t-3)+w_{11}(t)\end{split} \tag{25}\]
where \(clip(v,a,b)\) is a function that limits \(v\) to the range \([a,b]\) and \(sgn(v)\) is the sign of \(v\). The connections of this network are as follows: \(x_{1}\to x_{2}\), \(x_{1}\to x_{8}\), \(x_{1}\to x_{9}\), \(x_{1}\to x_{10}\), \(x_{2}\to x_{3}\), \(x_{2}\to x_{4}\), \(x_{2}\to x_{10}\), \(x_{2}\to x_{11}\), \(x_{3}\to x_{8}\), \(x_{3}\to x_{10}\), \(x_{4}\to x_{5}\), \(x_{4}\to x_{6}\), \(x_{5}\to x_{4}\), \(x_{6}\to x_{7}\), and \(x_{8}\to x_{9}\).
_34-node Zachary1 and Zachary2 networks:_ These networks are identical to the Zachary1 and Zachary2 networks in Wismuller et al. (2021) and are constructed using the undirected connections in the Zachary karate club dataset (Zachary, 1977). The Zachary dataset is a social network composed of 34 members of a karate club that lists links between pairs of members that interacted outside the club. The nodal interactions, adapted from Marinazzo et al. (2008), are as follows:
\[x_{i}(t)=\Bigg{(}1-\sum_{j=1}^{n}c_{ij}\Bigg{)}(1-ax_{i}^{2}(t-1))+\sum_{j=1} ^{n}c_{ij}(1-ax_{j}^{2}(t-1))+sw_{i}(t) \tag{26}\]
where \(c_{ij}\) indicates the coupling \(j\to i\). For the Zachary1 network, all 78 edges linking the 34 nodes in the network are assumed to be bidirectional and \(a=1.8\), \(s=0.01\), and \(c=0.025\). For the Zachary2 network, 5 of the 78 edges linking the 34 nodes in the network are randomly selected to be bidirectional, while for the rest of the links, the direction is assigned randomly. Moreover, for the Zachary2 network \(c=0.05\). Note that each of the 50 independent sets of Zachary2 networks could have a different network structure because the directions of the links are randomly assigned for each of the 50 independent sets.
### Evaluating mlcausality's Network Recovery Performance
#### 3.2.1 Non-thresholded Metrics
Figure 2 shows the area under the receiver operating characteristic curve (AUC) for all model, network, and time-series length combinations. For nonlinear networks up to around 11 nodes in size **mlcausality** with kernel ridge regression exhibits leading, joint-leading, or near-leading AUC performance, with AUC performance significantly declining for the
34-node Zachary1 and Zachary2 networks. For networks with 9 nodes or fewer, substantial performance improvements are not observed as the time-series length increases. For the 11-node nonlinear, Zachary1, and Zachary2 networks **mlcausality**'s performance steadily improves as the time-series length increases: this indicates that for networks greater than around 10 nodes in size at least 2000 time-points are needed to achieve peak performance with the default 70-30 train-test split. A similar pattern is observed for rival algorithms when recovering the Zachary1 and Zachary2 networks but not the 11-node nonlinear network. This implies that **mlcausality** is more "data hungry" than competing algorithms, a fact that should not be surprising given that **mlcausality** splits the data into separate train and test sets. It can therefore be concluded that for networks of up to 10 nodes in size that have at least 500 time-points **mlcausality** achieves leading or near-leading AUC performance. Moreover, given the data-hungry nature of **mlcausality** and the trajectory of improving AUC scores as the number of time-points increases for the 34-node Zachary1 and Zachary2 networks, it is plausible to assume that **mlcausality** might be capable of matching or exceeding the AUC performance of other algorithms given sufficiently long time-series data.
Figure 3 compares the Brier scores for all network, time-series length, and model combinations. Here, we see that **mlcausality** exhibits significantly lower Brier scores than competing algorithms. The implication of lower Brier scores is that **mlcausality**'s \(p\)-values are better calibrated than those of rival algorithms and are a truer reflection of actual probabilities. Furthermore, for most of the analyzed networks **mlcausality**'s Brier scores tend to decrease as the time-series length increases: this implies that the \(p\)-values tend to converge towards true probabilities as time-series lengths increase. The same is not true for any of the other
Figure 2: AUC boxplots for time-series of different lengths. In all cases the median of the distribution is represented by a black square; the upper and lower edges of a box represent the 25th and 75th percentiles; and the whiskers extend to the distribution maximums and minimums. MLC indicates the **mlcausality** model; lsNGC indicates the large-scale nonlinear Granger causality algorithm; LM indicates the mutual nonlinear cross-mapping methods using local models approach; and PCMCI indicates the PC-momentary conditional independence algorithm.
tested algorithms, with lsNGC in particular generally exhibiting higher Brier scores with increased time-series lengths. The above suggests that \(p\)-value-based thresholding rules are likely to perform better for **mlcausality** than competing algorithms, especially with long time-series data.
#### 3.2.2 Metrics at G-mean Optimal Thresholds
Figure 4 shows the accuracy at optimal geometric mean of sensitivity and specificity (G-mean) \(p\)-value thresholds. Specifically, for every model, network, and time-series combination, 100 equally-spaced \(p\)-value thresholds were tested, with the \(p\)-value threshold that generated the highest median G-mean for the 50 independent sets in that combination being chosen. Figure 4 indicates that **mlcausality** exhibits leading or joint-leading accuracy at optimal \(p\)-value thresholds for networks up to and including 11 nodes in size, and near-leading accuracy for the 34-node Zachary networks when the number of observations is high (2000+ time-points).
Moreover, figure 4 suggests that relying on AUC scores alone can be a somewhat misleading indicator of true model performance when a thresholding criteria must be implemented. For instance, despite near-perfect AUC scores for the lsNGC model on the 5-node linear network, at optimal G-mean thresholds, the accuracies for those same model and network combinations are well below 1. This seemingly disturbing discrepancy is not the result of an error. The lsNGC AUC scores indicate that, for nearly every independent instance of the 5-node linear model, there exists some split that leads to perfect classification. On the other hand, the accuracy scores in figure 4 are evaluated at thresholds that maximize the
Figure 3: Brier score boxplots for time-series of different lengths. In all cases the median of the distribution is represented by a black square; the upper and lower edges of a box represent the 25th and 75th percentiles; and the whiskers extend to the distribution maximums and minimums. MLC indicates the **mlcausality** model; lsNGC indicates the large-scale nonlinear Granger causality algorithm; LM indicates the mutual nonlinear cross-mapping methods using local models approach; and PCMCI indicates the PC-momentary conditional independence algorithm.
median G-mean for _all_ 50 independent sets in a network-time-series combination. Although almost every single instance of the 5-node linear network can be perfectly classified by some _p_-value split when using the lsNGC model, that split is not stable from one independent set of data to the next. Furthermore, as the number of time-points increases, the accuracy of the lsNGC model tends to decline, which is entirely consistent with the increasing Brier scores in time-series lengths observed in figure 3.
Figure 5 shows the balanced accuracy scores at the maximal G-mean threshold. Balanced accuracy differs from accuracy in that balanced accuracy measures the average of accuracy scores for both the minority and majority classes (in effect, this is equivalent to taking the average of the sensitivity and the specificity). The balanced accuracy scores confirm that **mlcausality** exhibits leading performance at optimal \(p\)-value thresholds for networks up to and including 9 nodes in size. For the 11-node and Zachary2 networks **mlcausality** achieves near-leading balanced accuracy only when the number of observations is very high (2000+ time-points). For the Zachary1 network near-leading balanced accuracy is never achieved, but **mlcausality** does exhibit rapidly rising balanced accuracy scores as the time-series length increases, which suggests that leading or near-leading performance may be possible with sufficiently long time-series data.
#### 3.2.3 Metrics at the _p_-value = 0.05 Threshold
When Granger causality algorithms are used in practice, the true nature of the Granger causal relationships in the studied networks is unknown to the researcher, and therefore the researcher will not be able to use an optimal threshold derived from unobserved sensitivity
Figure 4: Accuracy boxplots for a threshold at the G-mean max for time-series of different lengths. In all cases the median of the distribution is represented by a black square; the upper and lower edges of a box represent the 25th and 75th percentiles; and the whiskers extend to the distribution maximums and minimums. MLC indicates the **mlcausality** model; lsNGC indicates the large-scale nonlinear Granger causality algorithm; LM indicates the mutual nonlinear cross-mapping methods using local models approach; and PCMCI indicates the PC-momentary conditional independence algorithm.
and specificity such as G-mean or Youden's index. However, the researcher will have to pick and settle on some threshold anyway in order to recover the relationships and make the analysis meaningful. Figures 6 and 7 below show the accuracy and balanced accuracy scores when the \(p\)-value threshold is set to 0.05, a commonly used significance level. Comparing accuracy and balanced accuracy scores in figures 6 and 7 to their counterparts at G-mean optimal thresholds in figures 4 and 5 reveals that the 0.05 \(p\)-value threshold yields excellent, near-optimal results for the **mlcausality** algorithm for the analyzed networks. Moreover, at the 0.05 threshold, **mlcausality** achieves superior accuracy to all competing algorithms, and leading or joint-leading balanced accuracy scores for networks of 9 nodes or fewer. For networks with 11 nodes or more **mlcausality**'s balanced accuracy at the 0.05 threshold increases with time-series length, which suggests that, for sufficiently long time-series, **mlcausality** may be able to match or exceed the balanced accuracy performance of competing algorithms.
### Evaluating mlcausality's Runtime Performance
Table 1 presents the runtimes for all model, network, and time-series length combinations. In all cases, **mlcausality** with kernel ridge regression and the RBF kernel runs significantly faster than rival algorithms, in some cases more than 10 times faster than the second fastest algorithm. Moreover, **mlcausality** appears to scale much better, in terms of runtime, to increasing network sizes and time-series lengths than competing algorithms. When coupled with improving AUC, accuracy, and balanced accuracy performance as time-series length increases, **mlcausality**'s excellent runtime performance provides strong arguments in favor of its usage to handle exceptionally large and complex time-series networks.
Figure 5: Balanced accuracy boxplots for a threshold at the G-mean max for time-series of different lengths. In all cases the median of the distribution is represented by a black square; the upper and lower edges of a box represent the 25th and 75th percentiles; and the whiskers extend to the distribution maximums and minimums. MLC indicates the **mlcausality** model; lsNGC indicates the large-scale nonlinear Granger causality algorithm; LM indicates the mutual nonlinear cross-mapping methods using local models approach; and PCMCI indicates the PC-momentary conditional independence algorithm.
Figure 6: Accuracy boxplots for a threshold at \(p\)-value=0.05 for time-series of different lengths. In all cases the median of the distribution is represented by a black square; the upper and lower edges of a box represent the 25th and 75th percentiles; and the whiskers extend to the distribution maximums and minimums. MLC indicates the **mlcausality** model; lsNGC indicates the large-scale nonlinear Granger causality algorithm; LM indicates the mutual nonlinear cross-mapping methods using local models approach; and PCMCI indicates the PC-momentary conditional independence algorithm.
Figure 7: Balanced accuracy boxplots for a threshold at \(p\)-value=0.05 for time-series of different lengths. In all cases the median of the distribution is represented by a black square; the upper and lower edges of a box represent the 25th and 75th percentiles; and the whiskers extend to the distribution maximums and minimums. MLC indicates the **mlcausality** model; lsNGC indicates the large-scale nonlinear Granger causality algorithm; LM indicates the mutual nonlinear cross-mapping methods using local models approach; and PCMCI indicates the PC-momentary conditional independence algorithm.
## 4 Conclusion
In this paper I presented a new method and associated Python library, **mlcausality**, for identifying nonlinear Granger causal relationships. Although **mlcausality** contains a plug-in architecture that allows for the usage of any nonlinear regressor, the analysis presented herein focused specifically on the kernel ridge regressor with the radial basis function kernel. On simulated networks **mlcausality** with the kernel ridge regressor and the RBF kernel achieves excellent performance at runtimes that are, in many cases, up to 10 times lower than the second fastest competing algorithm. For moderately sized networks of up to 11 nodes **mlcausality** achieves leading or highly competitive recovery performance as measured by AUC, and tends to achieve superior accuracy and balanced accuracy when thresholded using \(p\)-value-based criteria. Furthermore, **mlcausality**'s improving AUC, accuracy, balanced accuracy and Brier scores with time-series length for large networks, when coupled with **mlcausality**'s excellent runtime performance, position this algorithm well to handle large networks with many time-steps. In short, **mlcausality** achieves leading nonlinear Granger causality performance at a fraction of the computational time of rival algorithms.
\begin{table}
\begin{tabular}{|c c||c c c c|} \hline \multicolumn{2}{|c||}{**Network Details**} & \multicolumn{4}{c|}{**Runtime for 50 iterations using multiprocessing (seconds)**} \\
**Network** & **Num. obs.** & **mlacausality** & **lsNGC** & **LM** & **PCMCI** \\ \hline \hline \multirow{4}{*}{_5-linear_} & _500_ & \textless{}1 & 23 & 4 & 5 \\ & _1000_ & \textless{}1 & 47 & 11 & 6 \\ & _1500_ & 2 & 72 & 22 & 7 \\ & _2000_ & 5 & 95 & 36 & 7 \\ \hline \multirow{4}{*}{_5-nonlinear_} & _500_ & \textless{}1 & 24 & 4 & 4 \\ & _1000_ & \textless{}1 & 48 & 11 & 5 \\ & _1500_ & 2 & 72 & 22 & 6 \\ & _2000_ & 5 & 95 & 35 & 6 \\ \hline \multirow{4}{*}{_7-nonlinear_} & _500_ & \textless{}1 & 36 & 9 & 9 \\ & _1000_ & 1 & 73 & 24 & 11 \\ & _1500_ & 3 & 111 & 45 & 14 \\ & _2000_ & 7 & 147 & 72 & 17 \\ \hline \multirow{4}{*}{_9-nonlinear_} & _500_ & \textless{}1 & 60 & 16 & 11 \\ & _1000_ & 1 & 120 & 42 & 13 \\ & _1500_ & 4 & 180 & 77 & 15 \\ & _2000_ & 9 & 237 & 120 & 17 \\ \hline \multirow{4}{*}{_11-nonlinear_} & _500_ & \textless{}1 & 74 & 25 & 17 \\ & _1000_ & 2 & 147 & 67 & 20 \\ & _1500_ & 6 & 220 & 124 & 23 \\ & _2000_ & 11 & 294 & 198 & 27 \\ \hline \multirow{4}{*}{_34-Zachary1_} & _500_ & 5 & 245 & 261 & 344 \\ & _1000_ & 15 & 479 & 674 & 319 \\ & _1500_ & 33 & 719 & 1262 & 403 \\ & _2000_ & 62 & 942 & 2012 & 473 \\ \hline \multirow{4}{*}{_34-Zachary2_} & _500_ & 5 & _245_ & 259 & 329 \\ & _1000_ & 16 & 492 & 680 & 438 \\ \cline{1-1} & _1500_ & 33 & 730 & 1267 & 532 \\ \cline{1-1} & _2000_ & 61 & 976 & 2006 & 632 \\ \hline \end{tabular}
\end{table}
Table 1: Runtimes (in seconds) for different models and networks for a computer with an AMD Ryzen 5 3600 6-Core dual-thread processor and 32 Gb of ram. 50 independent sets of each network were evaluated. All algorithms were parallelized to run 12 processes, one for each thread on the Ryzen processor.
## 5 Code Availability
The **mlacausality** library itself is publicly available at [https://github.com/WojtekFulmyk/mlacausality](https://github.com/WojtekFulmyk/mlacausality). Replication codes for all results in this paper are available at [https://github.com/WojtekFulmyk/mlacausality-krr-paper-replication](https://github.com/WojtekFulmyk/mlacausality-krr-paper-replication).
|
2309.11172 | Partition-A-Medical-Image: Extracting Multiple Representative
Sub-regions for Few-shot Medical Image Segmentation | Few-shot Medical Image Segmentation (FSMIS) is a more promising solution for
medical image segmentation tasks where high-quality annotations are naturally
scarce. However, current mainstream methods primarily focus on extracting
holistic representations from support images with large intra-class variations
in appearance and background, and encounter difficulties in adapting to query
images. In this work, we present an approach to extract multiple representative
sub-regions from a given support medical image, enabling fine-grained selection
over the generated image regions. Specifically, the foreground of the support
image is decomposed into distinct regions, which are subsequently used to
derive region-level representations via a designed Regional Prototypical
Learning (RPL) module. We then introduce a novel Prototypical Representation
Debiasing (PRD) module based on a two-way elimination mechanism which
suppresses the disturbance of regional representations by a self-support,
Multi-direction Self-debiasing (MS) block, and a support-query, Interactive
Debiasing (ID) block. Finally, an Assembled Prediction (AP) module is devised
to balance and integrate predictions of multiple prototypical representations
learned using stacked PRD modules. Results obtained through extensive
experiments on three publicly accessible medical imaging datasets demonstrate
consistent improvements over the leading FSMIS methods. The source code is
available at https://github.com/YazhouZhu19/PAMI. | Yazhou Zhu, Shidong Wang, Tong Xin, Zheng Zhang, Haofeng Zhang | 2023-09-20T09:31:57Z | http://arxiv.org/abs/2309.11172v1 | Partition-A-Medical-Image: Extracting Multiple Representative Sub-regions for Few-shot Medical Image Segmentation
###### Abstract
Few-shot Medical Image Segmentation (FSMIS) is a more promising solution for medical image segmentation tasks where high-quality annotations are naturally scarce. However, current mainstream methods primarily focus on extracting holistic representations from support images with large intra-class variations in appearance and background, and encounter difficulties in adapting to query images. In this work, we present an approach to extract multiple representative sub-regions from a given support medical image, enabling fine-grained selection over the generated image regions. Specifically, the foreground of the support image is decomposed into distinct regions, which are subsequently used to derive region-level representations via a designed Regional Prototypical Learning (RPL) module. We then introduce a novel Prototypical Representation Debiasing (PRD) module based on a two-way elimination mechanism which suppresses the disturbance of regional representations by a self-support, Multi-direction Self-debiasing (MS) block, and a support-query, Interactive Debiasing (ID) block. Finally, an Assembled Prediction (AP) module is devised to balance and integrate predictions of multiple prototypical representations learned using stacked PRD modules. Results obtained through extensive experiments on three publicly accessible medical imaging datasets demonstrate consistent improvements over the leading FSMIS methods. The source code is available at [https://github.com/YazhouZhu19/PAMI](https://github.com/YazhouZhu19/PAMI).
Few-shot Learning, Medical Image Segmentation, Prototype Learning, Representation Debiasing.
## I Introduction
Medical image segmentation [1] aims to identify surface properties or volume of specific anatomical structures in various medical images, including X-ray, Ultrasonography, PET/CT, and MRI scans. Deep learning-based algorithms [2, 3] are particularly adept at this task because they can generate measurements and segments from medical images without the time-consuming manual work required by traditional methods [4]. The effectiveness of deep learning algorithms depends heavily on the availability of large-scale, high-quality data that is fully annotated in a pixel-wise manner, which is naturally scarce in the field of medical image computing. Therefore, how to build a deep learning algorithm to effectively segment medical images using only a limited amount of labelled data is a critical yet challenging task.
To tackle this challenge, Few-Shot Learning (FSL) [5, 6] is introduced to enable deep learning algorithms to extract useful knowledge when annotations are scarce. Formally, standard FSL algorithms first extract representative information from a small set of annotated data (_Support Set_), and then ensure that the learned knowledge can be generalised to larger unannotated data (_Query Set_). In general, there are three common strategies in FSL scenarios: meta-learning [7], prototypical network [6] and matching network [8]. Meta-learning methods aim to learn a good initialisation or set of parameters that allow models to quickly adapt to new tasks while both prototype and matching networks focus on extracting semantic or correlated information between support and query images.
Analogous to the aforementioned FSL, Few-shot Semantic Segmentation (FSS) [9, 10] mainly follows the ideas of PrototypicalNet [6] and MatchingNet [11], both of which aim to extract and aggregate class-specific semantic representations for fast adaptation from support to query. Specifically, PrototypicalNet-based methods learn to generate the key prototypical representation by applying Masked Average Pooling (MAP) to features extracted from support and query images, for example via clustering algorithms [12, 13] and attention mechanisms [14, 15], and then refine prototypes by incorporating additional information from the background [16, 17, 18]. MatchingNet-based methods seek to establish robust associations between support and query features [11, 19]. They might also incorporate intricate pixel-wise interactions between images and masks [20] to further enhance their performance. The core of these MatchingNet-based strategies lies in extracting dense correspondences between query images and their corresponding support annotations. This process, in turn, greatly contributes to enhancing generalisation ability from support to query.
Given the advantages of few-shot learning in terms of sample size requirements, it has been naturally introduced into the field of medical image processing which is known as Few-Shot Medical Image Segmentation (FSMIS) [21]. Existing algorithms in this domain [21, 22, 23, 24, 25, 26, 27, 28, 29] can be broadly classified
into two categories: interactive methods derived from SENet [22] (shown in Fig. 1(a)) and prototypical network-based methods [27, 28, 21, 29] (shown in Fig. 1(b)). The key to leading the success of the interaction-based approach is the use of non-local attention mechanism [23, 30] and contrastive learning [25, 31] to work in parallel between the support and query arms in an interactive manner. Prototypical network-based approaches have emerged as dominant methods in FSMIS research. The core idea of some prominent examples like SSL-ALPNet [21], ADNet [28], and SR&CL [29] is to obtain semantic-level prototypes by compressing support features and subsequently produce predictions by matching them with query features.
Despite the success of the above methods, they fail to address the problem of large **intra-class variations** resulting from the inherent diversity of a specific organ, whose _size_, _shape_ and _contour_ can vary across different patients or under distinct acquisition protocols. In particular, a certain number of disparate regions (identified as _perturbing_ regions) arise between the support and query images, which can degrade the generalisation capability of the obtained prototypes. The prototypes generated by the conventional simple masked average pooling (MAP) operation, therefore, are consistently inefficient and imprecise for the FSMIS task.
To cope with this challenge, as depicted in Fig. 2, we introduce a new concept, Partition-A-Medical-Image (PAMI), which aims to learn multiple precise sub-region representations by partitioning a medical image into multiple sub-regions and mitigating the impact of the _perturbing_ sub-regions at the prototypical representation level, while refining the remaining areas. Concretely, it presents a Regional Prototypical Learning (RPL) module to first strip the perturbing regions from the support foreground using a Voronoi-based method [32, 33], and then generate multiple separated regional-level prototypical representations. We also introduce the Prototypical Representation Debiasing (PRD) module, which employs two elimination methods: a self-support approach and a support-query interactive approach, capable of re-weighting region-level prototypical representations while selecting non-perturbing prototypical representations. By stacking multiple PRD modules, it can produce debiased prototypical representations for both supports and queries, which will be assembled and used for prediction. Our contributions are summarised as follows:
* We introduce the new method, Partition-A-Medical-Image (PAMI) to alleviate the effects of intra-class variations by suppressing perturbations of regional prototypes.
* A Regional Prototypical Learning (RPL) module is designed to derive multiple regional prototypes for the support and the coarse query prototype.
* A Prototypical Representation Debiasing (PRD) module is proposed to remove the biases of regional prototypical representations under the synergistic work of a self-support, Multi-direction Self-debiasing (MS) block and a support-query, Interactive Debiasing (ID) block.
* The proposed PAMI method can achieve state-of-the-art performance on three experimental datasets commonly used in medical image segmentation tasks.
## II Related Work
### _Medical Image Segmentation_
Medical image segmentation [34, 35, 36] constitutes a vital and foundational technique in numerous clinical research endeavours and practical applications. In recent years, methods based on deep learning have showcased unparalleled performance across an extensive array of medical image segmentation tasks, encompassing diverse areas such as tissues, organs, lesions, and tumours. The most acclaimed method is U-Net [37], which employs an encoder-decoder architecture combined
Fig. 1: Comparison between previous few-shot medical image segmentation methods and our proposed method: (a) two-branch interaction-based method, (b) prototypical network-based method, (c) our proposed method which consists of a weights-shared feature encoder, a Regional Prototypical Learning (RPL) module, a stacked of Prototypical Representation Debiasing (PRD) modules and an Assembled Prediction (AP) module.
Fig. 2: Motivation: The objective is to eliminate the perturbing elements in the partial prototypical representations within multiple foreground support prototypes while preserving and refining the remaining general prototypical representations for rapid generalization to query images.
with specialised skip connections, facilitating advanced high-level structure extraction while maintaining texture fidelity in segmentation tasks. Subsequent to the development of U-Net, various modified versions have been proposed to further augment performance. Examples include U-Net++ [38] and nnUNet [39], which place emphasis on optimizing internal skip connections and granting increased consideration to data preprocessing pipelines. Additionally, the integration of self-attention mechanisms within the domain has yielded notable results, as evidenced by works such as vanilla Transformer based methods [40, 41], Swin-UNet [42], TransBTS [43], and TransFuse [44]. These approaches have demonstrated exceptional outcomes on a series of publicly accessible medical imaging datasets. However, it is important to note that the effectiveness of contemporary medical image segmentation methods remains heavily contingent upon the presence of copious manual annotations. This dependency may potentially impose constraints on the applicability of these methods within real-world clinical settings.
### _Few-Shot Semantic Segmentation_
The field of FSS has emerged as an innovative solution to address the scarcity of annotated data in semantic segmentation. Currently, FSS methodologies can be broadly delineated into two primary categories: PrototypicalNet-based approaches and MatchingNet-based approaches. PrototypicalNet-based methods primarily focus on constructing accurate and generalizable prototypical representations derived from the extracted support and query features. To achieve this objective, several cutting-edge FSS strategies suggest aggregating multiple prototypical representations at the pixel-level or region-level for distinct semantic classes by employing advanced methodologies, including clustering [45, 46], Expectation-Maximization (EM) [13], transformers [47, 48], and others. MatchingNet-based FSS approaches aim to resolve the issue by establishing correspondences between support and query features [19] and implementing a pixel-wise interaction mechanism between images and masks [20]. The foundation of MatchingNet-based approaches is to extract dense correspondences between query images and support annotations, ultimately aiming to enhance the method's generalization capabilities. Moreover, contemporary research efforts have reevaluated the FSS task from novel research perspectives, such as leveraging knowledge from non-target regions to bolster generalization ability [17, 18, 49] and re-engineering the feature extractor while addressing various FSS-related challenges [50].
### _Few-Shot Medical Image Segmentation_
In the field of medical imaging, FSMIS has garnered substantial interest among researchers due to the practical challenges associated with accessing large-scale medical imaging datasets, considering legal, ethical, and user privacy constraints. FSMIS methods can be classified into two primary research streams: those employing a two-branch interactive structure [22, 23, 24, 25, 26] and those based on prototypical network structures [27, 21, 28, 29]. Distinct characteristics of medical images, as opposed to natural images, necessitate the development of unique algorithmic approaches for processing medical images compared to those used in natural image processing. For instance, the diverse and heterogeneous textures inherent in medical images warrant greater emphasis on extracting general and generalizable features, while the considerable variance in image intensity requires methods with enhanced discriminatory capabilities. Specific challenges emerge in few-shot medical image segmentation when transferring medical support images to medical query images. The first category of FSMIS methods focuses on constructing innovative connections and interactions between support and query images using novel mechanisms, considering that organs and lesions typically occupy specific locations within the abdomen and tumors. A common practice among these FSMIS methods incorporates various attention mechanisms into the interaction block. Approaches such as SE-Net [22], MRrNet [23], and GCN-DE [24] combine attention mechanism variants with specialized architectures tailored for medical scenarios. Additionally, AAS-DCL [25] proposes implementing contrastive learning among different prototypes to elevate the performance of FSMIS methods to new heights. The second category of FSMIS methods primarily adheres to the technical principles of classical prototypical network-based FSS methods. A representative work, SSL-ALPNet [21], introduces a novel Adaptive Local Prototype pooling module (ALP) designed to augment the generalization ability of prototype representations by extracting localized object information.
## III Methodology
### _Problem Settings_
The goal of FSMIS is to accurately segment the object of unseen classes \(\mathcal{C}_{novel}\) by using a limited number of well-annotated images of known classes \(\mathcal{C}_{known}\) from the base dataset \(\mathcal{D}_{base}\), where \(\mathcal{C}_{known}\cap\mathcal{C}_{novel}=\emptyset\). Concretely, \(\mathcal{D}_{base}\) contains a collection of image-mask pairs, expressed by: \((\mathbf{I}^{j},\mathcal{M}^{j})_{j=1}^{N}\), in which \(\mathcal{M}^{j}\) is the semantic mask for the training image \(\mathbf{I}^{j}\), and \(N\) represents the total number of image-mask pairs. In the testing phase, the support image set \(S=(\mathbf{I}^{i}_{s},\mathcal{M}^{i}_{s})_{i=1}^{k}\in\mathcal{C}_{novel}\) is introduced into the task, where \(\mathbf{I}^{i}_{s}\) denotes the support image and \(\mathcal{M}^{i}_{s}\) is the corresponding mask for foreground object in \(\mathbf{I}^{i}_{s}\), and \(k\) is the number of image-mask pairs within the support set which is usually set as 1 for 1-shot or 5 for 5-shot. The evaluation of FSMIS methods is conducted on the query set: \(Q=(\mathbf{I}_{q},\mathcal{M}_{q})\in\mathcal{C}_{novel}\), where \(\mathbf{I}_{q}\) denotes the query image and \(\mathcal{M}_{q}\) is the corresponding ground-truth mask. In essence, FSMIS methods employ the support set \(S\) to generate a predicted segmentation mask \(\tilde{\mathcal{M}}_{f}\) for each image \(\mathbf{I}_{q}\) within the query set \(Q\).
### _Architecture Overview_
The overall workflow of the proposed method is depicted in Fig. 1(c), which contains four key components: (a) a weights-shared encoder for feature extraction \(\mathbf{F}=f_{\theta}(\mathbf{I})\), where \(\theta\) denotes the model parameters; (b) a Regional Prototypical Learning (RPL) module for disassembling the foreground region and bringing in auxiliary query information; (c) a stack of
multiple Prototypical Representation Debiasing (PRD) modules for prototypical feature debiasing, fusion and regeneration; (d) An Assembled Prediction (AP) module for the query mask prediction. Details of the proposed framework are illustrated in Fig. 3. Following the experimental settings in [28], the ResNet101 [3] is chosen as the backbone of \(f_{\theta}\) which has been pre-trained on the MS-COCO dataset. Then, the feature \(\mathbf{F}_{s}\) and \(\mathbf{F}_{q}\) extracted from support image and query image by extractor \(f_{\theta}\) are fed into the RPL module for generating multiple regional prototypical representations \(P_{s,enhanced}\), which will be rectified and debiased by following stacked PRD modules to produce the optimal prototypical representation. Notably, the input data is first processed using a 3D-based supervoxel clustering algorithm [21, 28] to generate pseudo-masks, where the generated masks are treated as supervision to later implement few-shot learning in a meta-learning-based episodic training manner.
### _Regional Prototype Learning_
The Regional Prototype Learning (RPL) module is proposed based on the fact that not all sub-regions extracted according to the support foreground are firmly related to the query image. To this end, we attempt to partition the support foreground using the Voronoi-based method to produce multiple region prototypes, where the perturbation information in the corresponding prototypical representations can be debiased by subsequent operations.
As shown in Fig. 3, the RPL module consists of two branches computing in parallel: the regional prototypes computation branch and the coarse prototype computation branch. Concretely, the regional prototypes computation branch is responsible for producing the foreground region of the support image \(R_{f}\) by taking the product of a given support image \(\mathbf{I}_{s}\in\mathbb{R}^{H\times W}\) and the corresponding foreground mask \(\mathcal{M}^{f}\in\mathbb{R}^{H\times W}\). The generated foreground region is then partitioned by the Voronoi-based method [32, 33] to produce \(N_{f}\) partitioned regions \(S=\left\{\mathcal{R}_{n}\right\}_{n=1}^{N_{f}}\) and a set of regional masks \(\left\{\mathcal{V}_{n}\right\}_{n=1}^{N_{f}},\mathcal{V}_{n}\in\mathbb{R}^{H\times W}\), where \(N_{f}\) is set to 64 in our method. The workflow of this step is illustrated in Fig. 4, and the details of how this value is determined are given in Section IV. With the available region masks \(\left\{\mathcal{V}_{n}\right\}_{n=1}^{N_{f}}\), we can compute the set of initial regional support prototypical representations \(P_{s,initial}=\left\{\mathbf{P}_{n}\right\}_{n=1}^{N_{f}},\mathbf{P}_{n}\in\mathbb{R}^{1\times C}\) by:
\[\mathbf{P}_{n}=\mathrm{MAP}(\mathbf{F}_{s},\mathcal{V}_{n})=\frac{1}{\left| \mathcal{V}_{n}\right|}\sum_{i=1}^{HW}\mathbf{F}_{s,i}\mathcal{V}_{n,i}, \tag{1}\]
where \(\mathbf{P}_{n}\) denotes one regional support prototypical representation, \(\mathbf{F}_{s}\in\mathbb{R}^{C\times h\times w}\) is the support feature extracted by the encoder \(f_{\theta}\) and \(\mathbf{F}_{s}\) is up-sampled into shape \((C,H,W)\), \(\mathcal{V}_{n,i}\) denotes the \(i_{th}\) regional mask and \(\mathrm{MAP}(\cdot)\) is the masked average pooling operation.
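The partition and pooling steps above can be sketched compactly. The following is a minimal PyTorch sketch, not the authors' implementation: it assumes the Voronoi seeds are drawn uniformly at random from the foreground pixels and that the support feature has already been up-sampled to the image resolution.

```python
import torch

def voronoi_region_masks(fg_mask, n_regions=64, generator=None):
    """Split a binary (H, W) foreground mask into Voronoi region masks."""
    ys, xs = torch.nonzero(fg_mask, as_tuple=True)            # foreground pixels
    idx = torch.randperm(len(ys), generator=generator)[:n_regions]
    seeds = torch.stack([ys[idx], xs[idx]], dim=1).float()    # (N_f, 2) seed points
    pix = torch.stack([ys, xs], dim=1).float()                # (P, 2) all fg pixels
    nearest = torch.cdist(pix, seeds).argmin(dim=1)           # nearest seed per pixel
    masks = torch.zeros(len(seeds), *fg_mask.shape)
    masks[nearest, ys, xs] = 1.0                              # one binary mask per region
    return masks                                              # (N_f, H, W)

def masked_average_pooling(feat, masks):
    """Eq. (1): P_n = sum_i F_i * V_{n,i} / |V_n| over spatial positions."""
    # feat: (C, H, W) support feature up-sampled to image size; masks: (N_f, H, W)
    num = torch.einsum("chw,nhw->nc", feat, masks)
    den = masks.sum(dim=(1, 2)).clamp(min=1.0).unsqueeze(1)
    return num / den                                          # (N_f, C)

fg = torch.zeros(256, 256); fg[80:180, 90:200] = 1            # toy foreground mask
V = voronoi_region_masks(fg, n_regions=64)
F_s = torch.randn(256, 256, 256)                              # toy feature, C = 256
P_initial = masked_average_pooling(F_s, V)
print(P_initial.shape)                                        # torch.Size([64, 256])
```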
The coarse query prototype computation branch operates in parallel with the regional prototypes computation branch in order to enhance the information capability of the query image. Specifically, it first calculates the intermediate support prototype \(\hat{\mathbf{P}}\in\mathbb{R}^{1\times C}\) on the support feature: \(\hat{\mathbf{P}}=\mathrm{MAP}(\mathbf{F}_{s},\mathcal{M}^{f})\) using the MAP, and then produces the coarse query prototype
Fig. 4: Workflow for generating region masks. The support foreground region is computed by multiplying the support image with its mask. Next, the partitioned regions of the support image can be obtained by performing a Voronoi-based method on the foreground region and then resulting in the region masks.
Fig. 3: Details of the proposed Region Prototype Learning (RPL) module and the internal structures of stacked Prototypical Representation Debiasing (PRD) modules. The RPL module contains two parallel calculating pathways: the regional prototypes computation branch (top), and the coarse prototype computation branch (bottom). Each PRD module has three functional blocks: a Multi-direction Self-debiasing (MS) block, an Interactive Debiasing (ID) block and a Prototype Regeneration (PR) block.
\(\tilde{\textbf{P}}_{q}\in\mathbb{R}^{1\times C}\) by feeding \(\hat{\textbf{P}}\) into the Query Prototype Generation (QPG) module.
As illustrated in Fig. 5, the coarse query foreground mask \(\widetilde{\mathcal{M}}_{q}^{f}\) is first calculated by:
\[\widehat{\mathcal{M}}_{q}^{f}=1-\sigma(S(\textbf{F}_{q},\tilde{\textbf{P}})- \tau), \tag{2}\]
where \(\textbf{F}_{q}\in\mathbb{R}^{C\times h\times w}\) is the feature extracted from the query image \(\textbf{I}_{q}\in\mathbb{R}^{H\times W}\) by using the feature extractor \(f_{\theta}\), \(S(a,b)=-\alpha\cos(a,b)\) is the negative cosine similarity with a fixed scaling factor \(\alpha=20\), \(\sigma\) denotes the _Sigmoid_ activation function, and \(\tau\) denotes a learnable threshold which can be derived by applying a single average pooling operation and a function \(\mathrm{FC}(\cdot)\) containing two fully-connected layers to the query feature, expressed as \(\tau=\mathrm{FC}(\textbf{F}_{q})\). As a consequence of this, the coarse query prototype \(\tilde{\textbf{P}}_{q}\in\mathbb{R}^{1\times C}\) can be obtained by the masked average pooling operation, written as:
\[\tilde{\textbf{P}}_{q}=\mathrm{MAP}(\textbf{F}_{q},\widetilde{\mathcal{M}}_{ q}^{f}). \tag{3}\]
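A minimal PyTorch sketch of this Query Prototype Generation step (Eqs. 2-3) is given below; it is not the authors' code, and the width of the two fully-connected layers producing \(\tau\) is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryPrototypeGeneration(nn.Module):
    def __init__(self, channels=256, alpha=20.0):
        super().__init__()
        self.alpha = alpha
        # tau = FC(AvgPool(F_q)): two fully-connected layers on the pooled feature
        self.fc = nn.Sequential(nn.Linear(channels, channels), nn.ReLU(),
                                nn.Linear(channels, 1))

    def forward(self, feat_q, prototype):
        # feat_q: (C, H, W); prototype: (1, C)
        sim = F.cosine_similarity(feat_q.unsqueeze(0),                 # (1, H, W)
                                  prototype.view(1, -1, 1, 1), dim=1)
        s = -self.alpha * sim                                          # S(F_q, P)
        tau = self.fc(feat_q.mean(dim=(1, 2)))                         # learnable threshold
        mask_q = 1.0 - torch.sigmoid(s - tau)                          # Eq. (2)
        # Eq. (3): coarse query prototype via masked average pooling
        proto_q = (feat_q * mask_q).sum(dim=(1, 2)) / mask_q.sum().clamp(min=1e-6)
        return proto_q.unsqueeze(0), mask_q                            # (1, C), (1, H, W)

qpg = QueryPrototypeGeneration(channels=256)
proto_q, mask_q = qpg(torch.randn(256, 64, 64), torch.randn(1, 256))
print(proto_q.shape, mask_q.shape)
```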
Moreover, we also merge the coarse query prototype and obtained support prototypical representations to aggregate the query and the support information, which will lead to a set of enhanced prototypical representations of the query \(P_{s,enhanced}=\left\{\textbf{P}_{n}^{*}\right\}_{n=1}^{N}\), where \(\textbf{P}_{n}^{*}=\textbf{P}_{n}+\tilde{\textbf{P}}_{q}\), \(\textbf{P}_{n}^{*}\in\mathbb{R}^{1\times C}\).
### _Stacked Prototypical Representation Debiasing Modules_
The purpose of introducing the stacked Prototypical Representation Debiasing (PRD) Modules is to filter the perturbation information present in \(P_{s,enhanced}\) to the greatest extent. The number of PRD modules, \(M\) is experimentally set to 5 for optimal performance (see Section IV for details). Each PRD module contains three key components, including the Multi-direction Self-debiasing (MS) block, the Interactive Debiasing (ID) block, and the Prototype Regeneration (PR) block. Taking the first PRD module as an example, the MS block takes the support prototypes \(P_{s,enhanced}\) as the input to achieve the debiased prototype representations \(\textbf{P}_{\alpha},\textbf{P}_{\beta}\) in a self-biasing manner including inter-prototype debiasing and intra-prototype debiasing. Meanwhile, the coarse query prototype \(\tilde{\textbf{P}}_{q}\) is fed into the ID block together with \(P_{s,enhanced}\) to calculate the affinity map which can realise prototype representation debiasing based on the self-selection mechanism, and result in the representation \(\textbf{P}_{\gamma}\). Afterwards, the PR block fuses the representations of \(\textbf{P}_{\alpha},\textbf{P}_{\beta}\) and \(\textbf{P}_{\gamma}\) and regenerates them into new prototypical presentations \(\textbf{P}_{s}^{{}^{\prime}}\) and \(\textbf{P}_{q}^{{}^{\prime}}\) as input to the next PRD module.
#### Iii-D1 Multi-direction Self-debiasing
The input to the MS block is the support prototypical representation \(\tilde{\textbf{P}}_{s}\in\mathbb{R}^{N\times C}\), which is obtained by reshaping and rearranging all elements in \(P_{s,enhanced}\). In particular, the support prototypical representation \(\textbf{P}_{s}\) can be re-weighted through two-way debiasing operations, which are the inter-prototype (element dimension) and intra-prototype (channel dimension).
In the inter-prototype self-debiasing pathway, the given \(\tilde{\textbf{P}}_{s}\) is initially reconstructed into a representation \(\textbf{P}_{r}\in\mathbb{R}^{N\times C}\) using a vanilla transformer encoder [51], \(\textbf{P}_{r}=\mathrm{E}_{t}(\tilde{\textbf{P}}_{s})\), where \(\mathrm{E}_{t}\) consists of two sub-blocks: a multi-head attention (MHA) block and a multilayer perceptron (MLP) block. Formally, this process is denoted by:
\[\begin{split}\textbf{P}^{i}=&\,\mathrm{LN}(\mathrm{ MHA}(\tilde{\textbf{P}}_{s},\tilde{\textbf{P}}_{s},\tilde{\textbf{P}}_{s})+ \tilde{\textbf{P}}_{s}),\\ \textbf{P}_{r}=&\,\mathrm{LN}(\mathrm{MLP}(\textbf{P} ^{i})+\textbf{P}^{i}),\end{split} \tag{4}\]
where \(\textbf{P}^{i}\in\mathbb{R}^{N\times C}\) is the generated intermediate prototype, \(\mathrm{LN}(\cdot)\) corresponds to the layer normalization, \(\mathrm{MLP}(\cdot)\) represents the multilayer perception and \(\mathrm{MHA}(\cdot)\) signifies the multi-head attention layer. We then conduct average-pooling and max-pooling over the representation \(\textbf{P}_{r}\) in element dimension to preserve the most useful information which leads to new features \(\mathcal{I}_{avg}^{inter},\mathcal{I}_{max}^{inter}\in\mathbb{R}^{N\times 1}\). This can be simply expressed as:
\[\begin{cases}\mathcal{I}_{avg}^{inter}=\mathrm{AvgPool}(\textbf{P}_{r})\\ \mathcal{I}_{max}^{inter}=\mathrm{MaxPool}(\textbf{P}_{r}).\end{cases} \tag{5}\]
The obtained features are concatenated along the channel dimension, enabling the multilayer perception block to be exploited to project the outcome into a single-channel feature map \(\mathcal{I}^{inter}\), denoted as:
\[\mathcal{I}^{inter}=\mathrm{MLP}(\mathcal{I}_{avg}^{inter}\oplus\mathcal{I}_{ max}^{inter}). \tag{6}\]
where \(\mathcal{I}^{inter}\in\mathbb{R}^{N\times 1}\) will be activated by the sigmoid function and then multiplied by \(\textbf{P}_{r}\) to produce the initial calibrated prototypical representation \(\tilde{\textbf{P}}_{\alpha}\in\mathbb{R}^{N\times C}\). Formally, it is expressed as:
\[\tilde{\textbf{P}}_{\alpha}=\sigma(\mathcal{I}^{inter})\odot\textbf{P}_{r}, \tag{7}\]
Fig. 5: The Query Prototype Generation (QPG) block.
Fig. 6: The Multi-direction Self-debiasing (MS) block is a two-way, self-debiasing block: an inter-prototype way and an intra-prototype way. Through these two self-debiasing methods, we can finally obtain the debiased prototypical representations \(\textbf{P}_{\alpha}\) and \(\textbf{P}_{\beta}\), respectively.
where \(\odot\) denotes the element-wise multiplication broadcast along the channel dimension, and \(\sigma\) denotes the sigmoid activation function. The final output processed by the inter-prototype self-debiasing pathway is denoted as \(\textbf{P}_{\alpha}=\mathrm{E}_{t}(\hat{\textbf{P}}_{\alpha}),\textbf{P}_{\alpha} \in\mathbb{R}^{N\times C}\), where \(\mathrm{E}_{t}\) is the transformer encoder for enhancing the perceptive ability of \(\hat{\textbf{P}}_{\alpha}\).
The intra-prototype self-debiasing pathway is designed to run in parallel with the inter-prototype route and work in a similar manner. Formally,
\[\mathcal{I}^{intra}=\mathrm{MLP}(\mathcal{I}^{intra}_{avg} \oplus\mathcal{I}^{intra}_{max}), \tag{8}\] \[\hat{\textbf{P}}_{\beta}=\sigma(\mathcal{I}^{intra})\odot \textbf{P}_{r},\]
where \(\odot\) denotes the element-wise multiplication broadcast along the spatial dimension. The output of the intra-prototype self-debiasing is \(\textbf{P}_{\beta}=\mathrm{E}_{t}(\hat{\textbf{P}}_{\beta}),\textbf{P}_{\beta} \in\mathbb{R}^{N\times C}\).
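The two self-debiasing pathways can be sketched as follows. This is a minimal PyTorch sketch rather than the authors' implementation; the transformer-encoder hyperparameters (number of heads, feed-forward width) and the hidden width of the pooling MLPs are assumptions.

```python
import torch
import torch.nn as nn

class MSBlock(nn.Module):
    def __init__(self, channels=256, heads=4):
        super().__init__()
        enc = lambda: nn.TransformerEncoderLayer(d_model=channels, nhead=heads,
                                                 dim_feedforward=2 * channels,
                                                 batch_first=True)
        self.enc_in, self.enc_a, self.enc_b = enc(), enc(), enc()
        self.mlp_inter = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
        self.mlp_intra = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))

    def forward(self, protos):                                   # protos: (N, C)
        p_r = self.enc_in(protos.unsqueeze(0)).squeeze(0)        # Eq. (4)
        # Inter-prototype pathway: pool over channels -> (N, 1) weights
        inter = torch.stack([p_r.mean(dim=1), p_r.max(dim=1).values], dim=1)
        w_inter = torch.sigmoid(self.mlp_inter(inter))           # Eqs. (5)-(7)
        p_alpha = self.enc_a((w_inter * p_r).unsqueeze(0)).squeeze(0)
        # Intra-prototype pathway: pool over elements -> (C,) weights
        intra = torch.stack([p_r.mean(dim=0), p_r.max(dim=0).values], dim=1)
        w_intra = torch.sigmoid(self.mlp_intra(intra)).squeeze(1)  # Eq. (8)
        p_beta = self.enc_b((w_intra * p_r).unsqueeze(0)).squeeze(0)
        return p_alpha, p_beta                                   # both (N, C)

ms = MSBlock(channels=256)
p_alpha, p_beta = ms(torch.randn(64, 256))
print(p_alpha.shape, p_beta.shape)
```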
#### Iii-B2 Interactive Debiasing
We propose a novel Interactive Debiasing (ID) module which introduces the coarse query prototypes \(\tilde{\textbf{P}}_{q}\in\mathbb{R}^{1\times C}\) to interactively assist the support prototypes debiasing.
As illustrated in Fig. 7, the prototypical representation \(\tilde{\textbf{P}}_{s}\) (originating from \(P_{s,enhanced}\)) is first mapped into \(\textbf{P}_{r}\in\mathbb{R}^{N\times C}\) by using the transformer encoder \(\mathrm{E}_{t}\). Given the coarse prototype \(\tilde{\textbf{P}}_{q}\) generated from the QPG module, an affinity map \(\mathcal{A}=\textbf{P}_{r}\tilde{\textbf{P}}_{q}^{\top}\), where \(\mathcal{A}\in\mathbb{R}^{N\times 1}\), is calculated to measure the correlations between the query and support representations. With the available affinity map \(\mathcal{A}\), a self-selection mechanism is applied and results in the feature map \(\mathcal{S}\in\mathbb{R}^{N\times 1}\). This can be written as:
\[\mathcal{S}_{i}(\mathcal{A}_{i})=\begin{cases}0&\text{if }\mathcal{A}_{i}>= \xi\\ -\infty&otherwise\end{cases},i\in\{0,1,...,N\}\,, \tag{9}\]
where \(\xi\) is a threshold that can be determined by using \(\xi=(min(\mathcal{A})+mean(\mathcal{A}))/2\), \(\mathcal{S}\) indicates the chosen regions from multiple regional prototypical representations that can be integrated with the query prototype. With the use of \(\mathcal{S}\), the heterogeneous or perturbing regions of the support foreground will be suppressed by the _Softmax_ function to yield the prototypical representation \(\textbf{P}_{r}^{q}\in\mathbb{R}^{N\times C}\):
\[\textbf{P}_{r}^{q}=\mathrm{softmax}(\mathcal{A}+\mathcal{S})\tilde{\textbf{P} }_{r}, \tag{10}\]
where \(\tilde{\textbf{P}}_{r}\) denotes the global prototype delivered by using the global pooling \(\tilde{\textbf{P}}_{r}=\mathrm{GlobalPool}(\textbf{P}_{r}),\tilde{\textbf{P} }_{r}\in\mathbb{R}^{1\times C}\). Then, \(\tilde{\textbf{P}}_{r}^{q}\) will be added to \(\textbf{P}_{r}\) together with \(\tilde{\textbf{P}}_{q}\) for further query information interaction which will produce the representation \(\tilde{\textbf{P}}_{r}\in\mathbb{R}^{N\times C}\):
\[\tilde{\textbf{P}}_{r}=\mathrm{LN}(\textbf{P}_{r}^{q}+\textbf{P}_{r}^{{}^{ \prime}}), \tag{11}\]
where \(\textbf{P}_{r}^{{}^{\prime}}=\textbf{P}_{r}+\mathrm{repeat}(\tilde{\textbf{P}}_{q})\), \(\textbf{P}_{r}^{{}^{\prime}}\in\mathbb{R}^{N\times C}\), and \(\mathrm{repeat}(\cdot)\) denotes repeating \(\tilde{\textbf{P}}_{q}\) \(N\) times to form an \((N,C)\) tensor. The interactive debiasing prototypical representation \(\textbf{P}_{\gamma}\in\mathbb{R}^{N\times C}\) can be obtained by:
\[\textbf{P}^{ii}=\mathrm{LN}(\mathrm{MHA}(\tilde{\textbf{P}}_{r}, \textbf{P}^{{}^{\prime}},\textbf{P}^{{}^{\prime}})+\tilde{\textbf{P}}_{r}), \tag{12}\] \[\textbf{P}_{\gamma}=\mathrm{LN}(\mathrm{MLP}(\textbf{P}^{ii})+ \textbf{P}^{ii}).\]
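A minimal PyTorch sketch of the ID block (Eqs. 9-12) is given below; it is not the authors' code, and the attention hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class IDBlock(nn.Module):
    def __init__(self, channels=256, heads=4):
        super().__init__()
        self.enc = nn.TransformerEncoderLayer(d_model=channels, nhead=heads,
                                              dim_feedforward=2 * channels,
                                              batch_first=True)
        self.mha = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(channels, channels), nn.ReLU(),
                                 nn.Linear(channels, channels))
        self.ln1, self.ln2, self.ln3 = (nn.LayerNorm(channels) for _ in range(3))

    def forward(self, protos, proto_q):                     # (N, C), (1, C)
        p_r = self.enc(protos.unsqueeze(0)).squeeze(0)
        affinity = p_r @ proto_q.t()                        # (N, 1) affinity map
        xi = (affinity.min() + affinity.mean()) / 2
        select = torch.where(affinity >= xi,                # Eq. (9): self-selection
                             torch.zeros_like(affinity),
                             torch.full_like(affinity, float("-inf")))
        p_global = p_r.mean(dim=0, keepdim=True)            # GlobalPool(P_r), (1, C)
        p_rq = torch.softmax(affinity + select, dim=0) @ p_global   # Eq. (10)
        p_prime = p_r + proto_q.expand_as(p_r)              # repeat query prototype
        p_tilde = self.ln1(p_rq + p_prime)                  # Eq. (11)
        attn, _ = self.mha(p_tilde.unsqueeze(0), p_prime.unsqueeze(0),
                           p_prime.unsqueeze(0))
        p_ii = self.ln2(attn.squeeze(0) + p_tilde)          # Eq. (12)
        return self.ln3(self.mlp(p_ii) + p_ii)              # P_gamma, (N, C)

idb = IDBlock(channels=256)
p_gamma = idb(torch.randn(64, 256), torch.randn(1, 256))
print(p_gamma.shape)
```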
#### Iii-B3 Prototype Regeneration
Given debiased prototypical representations \(\textbf{P}_{\alpha}\), \(\textbf{P}_{\beta}\) and \(\textbf{P}_{\gamma}\), support prototypical representation \(\textbf{P}_{s}^{{}^{\prime}}\in\mathbb{R}^{N\times C}\) and coarse query prototype \(\textbf{P}_{q}^{{}^{\prime}}\in\mathbb{R}^{1\times C}\) for the next PRD module are calculated by using the Prototype Regeneration (PR) module. As illustrated in Fig. 8, the PRG module has two main components: a fusion step and a regeneration step.
For the fusion step, we first concatenate the prototypical representations \(\textbf{P}_{\alpha},\textbf{P}_{\beta}\in\mathbb{R}^{N\times C}\) generated by the MS block to obtain the first fused representation \(\textbf{P}_{f}^{1}\in\mathbb{R}^{2N\times C}\), which is calculated as follows:
\[\textbf{P}_{f}^{1}=\textbf{P}_{\alpha}\oplus\textbf{P}_{\beta}. \tag{13}\]
Subsequently, the prototypical representation \(\textbf{P}_{\gamma}\in\mathbb{R}^{N\times C}\) generated from the ID block is concatenated with \(\textbf{P}_{f}^{1}\) to get the second fused representation \(\textbf{P}_{f}^{2}\in\mathbb{R}^{3N\times C}\), written as:
\[\textbf{P}_{f}^{2}=\textbf{P}_{f}^{1}\oplus\textbf{P}_{\gamma}. \tag{14}\]
The coarse fused prototypical representation \(\textbf{P}_{f,coarse}\in\mathbb{R}^{N\times C}\) simply aggregates the projected \(\mathrm{MLP}(\textbf{P}_{f}^{1})\) and
Fig. 8: The Prototype Regeneration (PR) block consists of a _Fusion_ step and a _Regeneration_ step. Three types of prototypical representations \(\textbf{P}_{\alpha}\), \(\textbf{P}_{\beta}\) and \(\textbf{P}_{\gamma}\) are first fused into an intermediate representation \(\textbf{P}_{f}\) and then starts regeneration to produce the support prototypical representation \(\textbf{P}_{s}^{{}^{\prime}}\) and the query prototype \(\textbf{P}_{q}^{{}^{\prime}}\) used for the next PRD module.
Fig. 7: Details of the Interactive Debiasing (ID) block. An affinity map \(\mathcal{A}\) is calculated between the support prototypical representation and the query prototype. A self-selection mechanism is then applied to \(\mathcal{A}\) to deliver the feature map \(\mathcal{S}\) which is beneficial to suppress perturbing parts and reconstruct the representations.
\(\mathrm{MLP}(\textbf{P}_{f}^{2})\) where \(\tilde{\textbf{P}}_{f}^{1}\) and \(\tilde{\textbf{P}}_{f}^{2}\) are the mapped intermediate fused prototypical representations. Formally,
\[\begin{split}\textbf{P}_{f,coarse}&=\tilde{\textbf{P} }_{f}^{1}+\tilde{\textbf{P}}_{f}^{2}\\ &=\mathrm{MLP}(\textbf{P}_{f}^{1})+\mathrm{MLP}(\textbf{P}_{f}^{2 }).\end{split} \tag{15}\]
The coarse representation \(\textbf{P}_{f,coarse}\) is then used to produce the final fused prototypical representation \(\textbf{P}_{f}\) by using the vanilla transformer encoder \(\mathrm{E}_{t}\), represented as:
\[\textbf{P}_{f}=\mathrm{E}_{t}(\textbf{P}_{f,coarse}),\textbf{P}_{f}\in\mathbb{ R}^{N\times C}. \tag{16}\]
The regeneration step aims to produce the query prototype \(\textbf{P}_{q}^{{}^{\prime}}\) with the QPG module introduced in Section III-C, which can be computed by:
\[\textbf{P}_{q}^{{}^{\prime}}=\mathrm{QPG}(\textbf{F}_{q},\tilde{\textbf{P}}_{ f}), \tag{17}\]
where \(\tilde{\textbf{P}}_{f}=\mathrm{GAP}(\textbf{P}_{f})\), \(\tilde{\textbf{P}}_{f}\in\mathbb{R}^{1\times C}\), is obtained by applying the global average pooling operation over \(\textbf{P}_{f}\), and the support prototypical representation for the next PRD module can be obtained as:
\[\textbf{P}_{s}^{{}^{\prime}}=\textbf{P}_{f}+\mathrm{repeat}(\textbf{P}_{q}^ {{}^{\prime}}). \tag{18}\]
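A minimal PyTorch sketch of the PR block is given below; it is not the authors' code. In particular, the assumption that the two MLPs project the stacked \(2N\) and \(3N\) prototypes back to \(N\) along the element dimension is one reading consistent with \(\textbf{P}_{f,coarse}\in\mathbb{R}^{N\times C}\), and the regeneration step re-uses a simplified QPG computation with a fixed (non-learnable) threshold.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PRBlock(nn.Module):
    def __init__(self, n_protos=64, channels=256, heads=4, alpha=20.0):
        super().__init__()
        self.alpha = alpha
        self.proj1 = nn.Linear(2 * n_protos, n_protos)   # maps P_f^1 (2N, C) -> (N, C)
        self.proj2 = nn.Linear(3 * n_protos, n_protos)   # maps P_f^2 (3N, C) -> (N, C)
        self.enc = nn.TransformerEncoderLayer(d_model=channels, nhead=heads,
                                              dim_feedforward=2 * channels,
                                              batch_first=True)

    def forward(self, p_alpha, p_beta, p_gamma, feat_q, tau=0.0):
        p_f1 = torch.cat([p_alpha, p_beta], dim=0)                    # Eq. (13)
        p_f2 = torch.cat([p_f1, p_gamma], dim=0)                      # Eq. (14)
        p_coarse = self.proj1(p_f1.t()).t() + self.proj2(p_f2.t()).t()  # Eq. (15)
        p_f = self.enc(p_coarse.unsqueeze(0)).squeeze(0)              # Eq. (16)
        # Eq. (17): regenerate the query prototype from the pooled fused prototype.
        pooled = p_f.mean(dim=0, keepdim=True)                        # GAP(P_f), (1, C)
        sim = F.cosine_similarity(feat_q, pooled.view(-1, 1, 1), dim=0)
        mask_q = 1.0 - torch.sigmoid(-self.alpha * sim - tau)
        proto_q = (feat_q * mask_q).sum(dim=(1, 2)) / mask_q.sum().clamp(min=1e-6)
        proto_q = proto_q.unsqueeze(0)                                # (1, C)
        p_s_next = p_f + proto_q.expand_as(p_f)                       # Eq. (18)
        return p_s_next, proto_q

pr = PRBlock(n_protos=64, channels=256)
p_s, p_q = pr(torch.randn(64, 256), torch.randn(64, 256), torch.randn(64, 256),
              torch.randn(256, 64, 64))
print(p_s.shape, p_q.shape)
```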
### _Assembled Prediction_
The debiased support prototypical representation \(\textbf{P}_{s}\) and the regenerated query prototype \(\textbf{P}_{q}\) can be achieved by processing the \(\tilde{\textbf{P}}_{s}\) and \(\tilde{\textbf{P}}_{q}\) with \(M\) stacked PRD modules. Formally, this can be denoted as:
\[\{\textbf{P}_{s},\textbf{P}_{q}\}=\mathrm{PRD}^{M}(\tilde{\textbf{P}}_{s}, \tilde{\textbf{P}}_{q}). \tag{19}\]
We can then infer the predictions for the query by:
\[\begin{cases}\mathcal{M}_{s}^{f}=1-\sigma(S(\textbf{F}_{s},\mathrm{GlobalPool }(\textbf{P}_{s}))-\tau);\\ \mathcal{M}_{q}^{f}=1-\sigma(S(\textbf{F}_{q},\textbf{P}_{q})-\tau),\end{cases} \tag{20}\]
and the final predicted foreground can be obtained by:
\[\mathcal{M}^{f}=\lambda\mathcal{M}_{s}^{f}+(1-\lambda)\mathcal{M}_{q}^{f}, \tag{21}\]
where \(\lambda\) is the assembling coefficient, and is set to 0.7 (more details can be found in the experiment section).
The binary cross-entropy loss \(\mathcal{L}_{ce}\) is adopted to determine the error between the predicted masks \((\mathcal{M}^{f},\mathcal{M}^{b})\) and the given ground-truth \((\tilde{\mathcal{M}}^{f},\tilde{\mathcal{M}}^{b})\). Formally,
\[\mathcal{L}_{ce}=-\frac{1}{HW}\sum_{i,j}\tilde{\mathcal{M}}_{(i,j)}^{f}log( \mathcal{M}_{(i,j)}^{f})+\tilde{\mathcal{M}}_{(i,j)}^{b}log(\mathcal{M}_{(i,j) }^{b}). \tag{22}\]
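A minimal PyTorch sketch of the assembled prediction and the loss (Eqs. 20-22) follows; it is not the authors' code, and it assumes both feature maps share one spatial size and treats \(\tau\) as a fixed scalar for brevity.

```python
import torch
import torch.nn.functional as F

def foreground_map(feat, prototype, tau=0.0, alpha=20.0):
    """1 - sigmoid(S(feat, prototype) - tau), with S the negative scaled cosine."""
    sim = F.cosine_similarity(feat, prototype.view(-1, 1, 1), dim=0)
    return 1.0 - torch.sigmoid(-alpha * sim - tau)

def assembled_prediction(feat_s, feat_q, p_s, p_q, lam=0.7, tau=0.0):
    m_s = foreground_map(feat_s, p_s.mean(dim=0), tau)      # Eq. (20), support branch
    m_q = foreground_map(feat_q, p_q.squeeze(0), tau)       # Eq. (20), query branch
    return lam * m_s + (1.0 - lam) * m_q                    # Eq. (21)

def segmentation_loss(fg_pred, fg_gt, eps=1e-6):
    """Eq. (22): binary cross-entropy over foreground and background maps."""
    bg_pred, bg_gt = 1.0 - fg_pred, 1.0 - fg_gt
    return -(fg_gt * torch.log(fg_pred + eps) +
             bg_gt * torch.log(bg_pred + eps)).mean()

feat_s, feat_q = torch.rand(256, 64, 64), torch.rand(256, 64, 64)
p_s, p_q = torch.randn(64, 256), torch.randn(1, 256)
pred = assembled_prediction(feat_s, feat_q, p_s, p_q)
loss = segmentation_loss(pred, (torch.rand(64, 64) > 0.5).float())
print(pred.shape, loss.item())
```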
## IV Experiments
### _Experimental Setting_
#### Iv-A1 Datasets
We comprehensively evaluate the proposed method on three publicly available datasets with different modalities and anatomical structures, including (a) **CHAOS**: an abdominal MRI dataset published in ISBI 2019 Combined Healthy Abdominal Organ Segmentation Challenge [52], (b) **SABS**: an abdominal CT dataset from MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge [53] and (c) **CMR**: a cardiac MRI dataset from MICCAI 2019 Multi-Sequence Cardiac MRI Segmentation Challenge [54]. **CHAOS** and **SABS** share the same categories of labels which are _liver_, _spleen_, _left kidney_ (LK) and _right kidney_ (RK). **CMR** dataset has three categories of labels including _left ventricular myocardium_ (LV-MYO), _right ventricle blood pool_ (LV-BP) and _right ventricle_ (RV). Considering the efficiency of method training, we here reformat the 3D scans of these datasets into 2D axial and 2D short-axis slices with size of \(256\times 256\) to segment 3D images using a 2D method.
#### Iv-A2 Implementation Details
Before training, the pseudo masks for the training scans are generated using the super-voxel clustering method according to [28]. To conduct episodic training for the meta-learning method, we select the support and query slices based on the strategy in [22]. For each 3D image in both query and support sets, we systematically subdivide the region-of-interest into 3 equally-sized segments. Each query segment's corresponding support samples consist of central slices from matching segments across all support scans. Five-fold cross-validation is conducted following the protocol in [28].
Our method is implemented using PyTorch (v1.10.2) based on the SSL-ALPNet [21] and ADNet [28] implementations. The ResNet101 [3] pretrained on part of the MS-COCO dataset is employed as the backbone of the feature extractor \(f_{\theta}\). The data pre-processing pipeline is based on [21, 28]. Each single-channel 2D slice is repeated three times and concatenated along the channel dimension to match the input format of the convolutional network. The method is trained using a 1-way 1-shot configuration for over 50K iterations with the SGD optimizer. During the training phase, the initial learning rate is set to \(1\times 10^{-3}\) with a step decay of 0.8 every 1000 iterations.
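The stated optimization schedule corresponds, in PyTorch terms, to roughly the following minimal sketch; the momentum value and the stand-in model and loss are assumptions used only to make the snippet runnable.

```python
import torch

model = torch.nn.Conv2d(3, 1, 3, padding=1)      # stand-in for the full network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.8)

for iteration in range(50_000):
    optimizer.zero_grad()
    x = torch.randn(1, 3, 256, 256)              # one toy 1-way 1-shot input
    loss = model(x).mean() ** 2                  # placeholder for the episode loss
    loss.backward()
    optimizer.step()
    scheduler.step()                             # lr *= 0.8 every 1000 iterations
    if iteration == 3:                           # keep the toy run short
        break
```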
#### Iv-A3 Evaluation Protocol
For evaluation purposes, the Dice Similarity Coefficient (DSC) is employed to compare the predictions given by the method with the ground-truth segmentations. Furthermore, we employ two settings to challenge the proposed method in terms of its ability to generalize to new data. Specifically, the two settings are: **Setting 1**: some of the slices containing the test classes may appear during the training phase; **Setting 2**: the slices containing test classes are completely removed during the training phase. It is worth noting that Setting 2 cannot be implemented on scans from the CMR dataset, since all organ classes are likely to be present on one slice simultaneously.
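For reference, the DSC used for evaluation can be computed as in the following minimal sketch (assuming binary masks).

```python
import torch

def dice_score(pred_mask, gt_mask, eps=1e-6):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth)."""
    pred, gt = pred_mask.float().flatten(), gt_mask.float().flatten()
    intersection = (pred * gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

pred = (torch.rand(256, 256) > 0.5).float()
gt = (torch.rand(256, 256) > 0.5).float()
print(dice_score(pred, gt).item())
```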
### _Comparison with State-of-the-Arts_
#### Iv-B1 Quantitative Results
As shown in Table I and Table II, we compare the performance of our proposed method with baseline methods including the vanilla PANet [10], SENet [22], SSL-ALPNet [21], ADNet [28], AAS-DCL [25], SR&CL [29], CRAPNet [26] and Q-Net [55] under two experimental settings on the three datasets **CHAOS**, **SABS** and **CMR**. In Table I, we report the mean DSC score over five cross-validation folds for each of the four organs (LK, RK, Spleen and Liver), as well as the mean DSC score over these four organs, under the two experimental settings (Settings 1 and 2). Our proposed method outperforms all other methods in the mean DSC score over the four organs under both experimental settings. Specifically, the proposed method achieves the highest
mean DSC scores of 82.38% and 79.53% on the CHAOS dataset under Setting 1 and Setting 2, surpassing the second-best methods (Q-Net and SR&CL) by 1.36% and 1.88%, respectively. On the SABS dataset, the proposed method also attains the best DSC scores on the right kidney (RK), left kidney (LK) and spleen. Its mean DSC score on the SABS dataset exceeds the second-best result by 0.45% and 1.77% under Setting 1 and Setting 2, respectively. Additionally, on the CMR dataset, our method also exhibits remarkable improvements on the three regions (LV-BP, LV-MYO and RV). As shown in Table II, the proposed method obtains DSC scores of 89.57%, 66.82% and 80.17% on LV-BP, LV-MYO and RV, achieving improvements of 0.08% and 0.21% on LV-MYO and RV, respectively, with only a 0.68% degradation on LV-BP compared to Q-Net. Nevertheless, our method achieves a 0.7% improvement in the mean DSC score.
#### Iv-B2 Qualitative Results
To analyze the segmentation performance of our method intuitively, we present the qualitative
Fig. 9: The qualitative results of our method and other baseline methods on the **CHAOS** dataset and the **SABS** dataset.
results of our method and other baseline methods on the three datasets CHAOS, SABS and CMR in Fig. 9 and Fig. 10. We compare our segmentation results with PANet, SSL-ALPNet and ADNet, which provide complete code repositories. On the CHAOS dataset, the segmentation results of our method have better border accuracy on the left kidney and spleen. In particular, only our method segments the entire region boundary of the liver, in contrast to the other segmentation methods. For left and right kidney segmentation, our method produces finer edge details. On the SABS dataset, the proposed method significantly improves the segmentation of both the spleen and the liver, providing more refined organ boundaries and clearer textures compared to the other baseline methods. Overall, these qualitative results provide evidence for the effectiveness of the proposed method and highlight its potential as a valuable tool in clinical diagnosis and treatment under limited-data scenarios.
### _Ablation Study_
In this section, we conduct ablation studies to analyze the effectiveness of each component and hyperparameter setting in our method. All ablation experiments are conducted on the CHAOS dataset under Setting 2.
#### Iv-C1 Analysis of Partitioned Regions \(N_{f}\)
We analyze the performance of the proposed method under different settings of the number of partitioned regions \(N_{f}\) in Fig. 11. We evaluate the DSC scores on four organs (LK, RK, spleen and liver), as well as their mean, under a series of \(N_{f}\) settings. As shown in Fig. 11, the DSC scores of LK, RK, spleen and liver increase with the number of partitioned regions \(N_{f}\). In detail, the DSC scores grow rapidly from \(N_{f}=4\) to \(N_{f}=56\), then tend to flatten out and reach their highest values around \(N_{f}=64\). Beyond this point, the DSC score shows no further growth as \(N_{f}\) increases, and the DSC score of the spleen even shows a noticeable decline when \(N_{f}\) increases to 200. This phenomenon reveals that more partitions can produce more representative sub-regions, thus purifying the learned prototype by eliminating the perturbing sub-regions. However, too many sub-regions imply excessive partitioning, which may introduce noise and lead to performance degradation.
#### Iv-C2 Effect of Prototypical Representation Debiasing Module
In our proposed method, the core debiasing operation employs a stack of \(M\) PRD modules. To analyze its effectiveness, we adopt the mean DSC score of each cross-validation fold as the evaluation protocol, and the results of different settings of \(M\) are presented in Fig. 12 and Table III. By observing the box plot, we can analyze the
Fig. 11: Analysis of partitioned regions settings \(N_{f}\). The DSC score of each organs (LK, RK, Spleen and Liver) and the mean DSC score of them under a series of \(N_{f}\) settings are illustrated.
Fig. 12: The box-plot for mean DSC scores of five cross-validation folds under different settings of \(M(M=\{1,3,5,7,9,15\})\).
Fig. 10: The qualitative results of our method and other baseline methods on the **CMR** dataset.
distribution and variability of the DSC scores of each fold under different settings of \(M\). Specifically, from \(M=1\) to \(M=5\), the DSC scores show a continuous increase in both median and mean, which demonstrates the effect of the PRD module in improving performance. The concentration of the distribution gradually improves, and the median DSC score of the five cross-validation folds increases from around 78.8% to around 79.48%. After that, as the number of PRD modules increases (from \(M=7\) to \(M=15\)), the mean DSC score of the five cross-validation folds fluctuates around 79.50% with no obvious sign of further growth. Additionally, the DSC score decreases by almost 0.36% at \(M=15\), which suggests that the score has converged around \(M=5\) and that stacking more PRD modules leads to overfitting. The box plot shows that, from \(M=5\) to \(M=9\), the distribution concentrations of these settings differ little and the medians are all close to the mean DSC scores. Moreover, the setting \(M=5\) attains the highest DSC scores in mean (79.532%), fold-1 (74.838%), fold-2 (84.941%), fold-3 (79.376%) and fold-4 (78.399%), together with the best distribution concentration. Additionally, the qualitative results of liver segmentation under different \(M\) settings are shown in Fig. 13 for a more intuitive comparison.
#### Iv-B3 Analysis of Assembling Coefficient \(\lambda\)
In this section, we discuss the effectiveness of different settings of the assembling coefficient \(\lambda\) used in the AP module. As illustrated in Fig. 14, the mean DSC score of the four segmented organs on the CHAOS dataset grows significantly with increasing \(\lambda\), which means that the prediction map \(\mathcal{M}_{q}^{f}\) based only on the query prototype is not the optimal choice, and that additional information from the support prototypical representations improves the accuracy. Based on this tendency, the mean DSC score exceeds 79.5% once \(\lambda\geq 0.5\), and the best mean DSC score of 79.53% is achieved around \(\lambda=0.7\). As \(\lambda\) continues to increase, the mean DSC score shows no corresponding growth even though more support prototypical information is included. In addition, we also present the spleen segmentation results under \(\lambda=\{0,0.2,0.5,0.7,0.8,1\}\) in Fig. 15, which shows the details of the segmented region under different settings.
## V Conclusion And Discussion
In this work, we focus on the challenging intra-class variation problem in the few-shot medical image segmentation task. We propose to partition the image into multiple sub-regions, suppress the perturbing sub-regions of the support-image foreground at the prototypical level, and then refine the remaining information into ideal prototypical representations for fast adaptation. Rigorous experimentation and comprehensive analysis on three distinct datasets indicate that our proposed method surpasses the current state-of-the-art techniques, which testifies to its strong generalization ability as well as its reliability and robustness under diverse circumstances.
While the proposed method has shown promise, it does possess certain inherent limitations that warrant further investigation. First, the use of a self-supervised super-voxel technique for seen data annotation may inadvertently compromise segmentation accuracy to some degree, signifying a potential area for improvement through the incorporation of a more accurate automatic annotation method. Second, there's a need for a stronger pre-trained feature extractor to effectively boost its core generalization capability rather than overly focusing on feature alignment and refinement operations. Lastly, to construct a more discriminative prototype, it is highly recommended to extract and introduce supplementary information from the background region, thereby enriching the context and improving overall performance.
Fig. 14: The curve of mean DSC score of four segmentation organs under different settings of coefficient \(\lambda\).
Fig. 13: The results of liver segmentation on the **CHAOS** dataset under the setting of \(M=\{1,3,5,7,9,15\}\).
Fig. 15: The results of spleen segmentation in **CHAOS** dataset under the settings of \(\lambda=\{0,0.2,0.5,0.7,0.8,1\}\). |
2309.07710 | Tolerance and breakdown of topological protection in a disordered
waveguide | We consider a disordered waveguide consisting of trivial dielectric and
non-trivial magnetically anisotropic material. A topologically-protected edge
mode appears owing to the broken time-reversal symmetry of the non-trivial
lattice. While the edge mode maintains under other position and radius
disorders, the protection is immediately broken by applying a radius disorder
to the non-trivial lattice. This breakdown originates from donor and acceptor
modes occupying the topological bandgap. Furthermore, via the calculation of
the Bott index, we show that Anderson localization occurs as a metal conducting
gap changes to a topological gap along with increasing disorders. | Kiyanoush Goudarzi, Moonjoo Lee | 2023-09-14T13:44:44Z | http://arxiv.org/abs/2309.07710v1 | # Tolerance and breakdown of topological protection in a disordered waveguide
###### Abstract
We consider a disordered waveguide consisting of trivial dielectric and non-trivial magnetically anisotropic material. A topologically-protected edge mode appears owing to the broken time-reversal symmetry of the non-trivial lattice. While the edge mode maintains under other position and radius disorders, the protection is immediately broken by applying a radius disorder to the non-trivial lattice. This breakdown originates from donor and acceptor modes occupying the topological bandgap. Furthermore, via the calculation of the Bott index, we show that Anderson localization occurs as a metal conducting gap changes to a topological gap along with increasing disorders.
Topologically protected state is generally known to be stable and robust against perturbations. This topological protection (TP) holds significant importance in various disciplines, including photonics [1; 2], condensed matter [3; 4], atomic physics [5; 6], and acoustics [7]. In particular, TP provides a solution for certain problems in photonics through the unique transport property of the topological edge mode. For instance, in a nanophotonic device, the transmission of an electromagnetic (EM) wave is often affected by backreflection and scattering losses. The topological waveguide enables the transmission of reflection-free waves, even in the presence of substantial structural disorder [8; 9]. Besides, photonic systems with TP have been excellent testbeds for exploring non-Hermitian physics [10; 11], non-linear optics [12; 13], higher-order band topology [14], and Floquet physics [15; 16].
This TP, however, can be influenced or even broken by specific environmental conditions. In condensed matter physics, extensive studies were performed regarding the breakdown of the TP. For example, the breakdown was observed in graphene edges due to antidots and upstream modes in an electrostatic regime [17; 18]. The TP breakdown also occurred in a two-dimensional electron gas by enhanced vacuum fluctuations [19], and a smooth edge potential broke the TP via edge reconstruction [20]. In contrast, such investigations in photonics are relatively scarce. One study indicated TP breaking by transitioning from the amorphous to crystalline structural state of the constitutive material Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{2}\)[21]. Another study revealed a TP breakdown using rotational symmetry in the unit cell of periodic Kekule patterns [22].
Here, we explore the tolerance and breakdown of TP of an edge mode in a disordered waveguide. Our topological waveguide (TW) consists of a trivial lattice interfaced with a non-trivial lattice. When both the position and radius disorders are applied to the trivial lattice, the unidirectionality of the edge mode sustains, exhibiting the TP nature of the mode. The unidirectionality also maintains under the position disorder of the non-trivial lattice. However, when the radius disorder is applied to the non-trivial lattice, all different localized defect modes prevent the formation of the topological bandgap, which breaks the edge mode. Moreover, as the radius disorder increases, a frequency domain of bulk modes becomes a non-trivial bandgap, accompanying Anderson localization (AL) of EM waves in the topological regime. Such topological nature is characterized with the Bott index (BI), a topological invariant obtained from the real-space wavefunctions.
Our TW includes a trivial and a non-trivial mirror. The trivial mirror contains dielectric rods with a relative permittivity of \(\epsilon_{\rm t}=16\), a relative permeability of \(\mu_{\rm t}=1\), and a radius of \(R_{\rm t}=0.30a\) in air where \(a\) is the lattice constant. The trivial lattice corresponds to the upper half of Figs. 1(a) and (b). The non-trivial lattice consists of the magneto-optical material of yttrium-iron-garnet (YIG) ferrite rods with a relative permittivity of \(\epsilon_{\rm n}=15\) and a permeability of \(\mu_{\rm n}=\mathbf{\mu}\) with a radius of \(R_{\rm n}=0.11a\) in air (lower half of Figs. 1(a) and (b)). Near an operating frequency of 4.5 GHz, the anisotropic permeability \(\mathbf{\mu}\) under applying a static magnetic field is
\[\mathbf{\mu}=\left[\begin{array}{ccc}\mu&i\kappa&0\\ -i\kappa&\mu&0\\ 0&0&\mu_{0}\end{array}\right], \tag{1}\]
where \(\mu=14\mu_{0}\) and \(\kappa=12.4\mu_{0}\), where \(\mu_{0}\) is the vacuum permeability [8].
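As a quick numerical check (a minimal numpy sketch, not part of the simulation workflow), the tensor of Eq. (1) can be written out and its asymmetry \(\mathbf{\mu}\neq\mathbf{\mu}^{T}\), which is what breaks time-reversal symmetry in the non-trivial lattice, verified directly.

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7                      # vacuum permeability (H/m)
mu, kappa = 14 * mu0, 12.4 * mu0
mu_tensor = np.array([[mu,          1j * kappa, 0],
                      [-1j * kappa, mu,         0],
                      [0,           0,          mu0]])

print(np.allclose(mu_tensor, mu_tensor.T))          # False: mu != mu^T
print(np.allclose(mu_tensor, mu_tensor.conj().T))   # True: the tensor is Hermitian
```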
Fig. 1(c) shows the dispersion diagram of the trivial mirror with the three trivial TM bandgaps of BG\({}_{\rm t1}\), BG\({}_{\rm t2}\), and BG\({}_{\rm t3}\). The first two gaps of BG\({}_{\rm t1}\) and BG\({}_{\rm t2}\) are Mie bandgaps with TM\({}_{01}\) and TM\({}_{11}\) modes, respectively. The TM\({}_{01}\) and TM\({}_{11}\) modes constitute pure Mie bandgaps that exhibit high tolerance to the position and radius disorderings, and thus the penetration of EM waves into the trivial mirror is well suppressed. Differently from BG\({}_{\rm t1}\) and BG\({}_{\rm t2}\), BG\({}_{\rm t3}\) is generated by both Mie and Bragg scattering with the TM\({}_{21}\) mode and is less tolerant to the disorderings, so the EM waves penetrate slightly deeper into the trivial lattice than for BG\({}_{\rm t1}\) and BG\({}_{\rm t2}\)[23].
In the non-trivial mirror, we find three bandgaps, \(\rm BG_{n1}\), \(\rm BG_{n2}\), and \(\rm BG_{n3}\), in the dispersion diagram of Fig. 1(d). The lowest bandgap, \(\rm BG_{n1}\), is a trivial Mie bandgap with the \(\rm TM_{01}\) mode. The second and third bandgaps are non-trivial topological gaps. Due to the broken time-reversal symmetry in the non-trivial lattice, the degeneracies at the high-symmetry points M and \(\Gamma\) of the irreducible Brillouin zone are lifted, resulting in the creation of \(\rm BG_{n2}\) and \(\rm BG_{n3}\), respectively [8, 24]. The breaking of time-reversal symmetry originates from the fact that the anisotropic YIG material has imaginary off-diagonal elements in the permeability tensor, so that \(\mathbf{\mu}\neq\mathbf{\mu}^{T}\), where \(T\) denotes the transpose [25]. The calculations of the Chern number and dispersion diagram are elaborated in Ref. [26].
We proceed to the calculation of Chern numbers in both the trivial and non-trivial lattices. The Chern numbers over the first Brillouin zone are zero for all bands of the trivial lattice; this zero value follows from the preserved time-reversal symmetry [8, 24, 27]. In contrast, we obtain Chern numbers of \(0,1,-2\), and \(-1\) for the first four bands, from low to high frequency, in the non-trivial lattice [26].
The Chern numbers of the bands determine the Chern number of each bandgap: the gap Chern number (\(\rm C_{g}\)) is defined as the sum of the Chern numbers of all bands below the gap [8, 24, 27]. As a consequence, we obtain \(\rm C_{g}=0,1\), and \(-1\) for \(\rm BG_{n1}\), \(\rm BG_{n2}\), and \(\rm BG_{n3}\), respectively. The overlap between \(\rm BG_{t2}\) and \(\rm BG_{n1}\) gives rise to the bandgap \(\rm BG_{w}\) at normalized frequencies \(0.353<a/\lambda<0.451\) in the dispersion diagram of the TW; no edge mode appears there because the difference between the \(\rm C_{g}\) of \(\rm BG_{n1}\) and that of \(\rm BG_{t2}\) is zero. Differently from \(\rm BG_{w}\), the overlap between \(\rm BG_{t3}\) and the two bandgaps \(\rm BG_{n2}\) and \(\rm BG_{n3}\) creates two unidirectional edge modes at \(0.556<a/\lambda<0.578\) and \(0.613<a/\lambda<0.637\) with positive and negative group velocities, respectively, as shown in Fig. 1(e). The edge modes are visualized in Figs. 1(a) and (b): the out-of-plane electric fields, \(E_{z}(x,y)\), at \(a/\lambda=0.570\) and \(0.630\) represent the propagation of EM waves to the right and to the left, respectively.
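The bookkeeping from band Chern numbers to gap Chern numbers described above amounts to a cumulative sum; a minimal sketch using the quoted values is given below.

```python
from itertools import accumulate

band_chern = [0, 1, -2, -1]               # first four bands of the non-trivial lattice
gap_chern = list(accumulate(band_chern))  # C_g: sum of band Chern numbers below each gap
print(dict(zip(["BG_n1", "BG_n2", "BG_n3"], gap_chern)))  # {'BG_n1': 0, 'BG_n2': 1, 'BG_n3': -1}

# All gap Chern numbers of the trivial mirror are zero, so the differences across the
# interface are 0 for BG_w (no edge mode) and +1 / -1 for the two edge modes created by
# the overlap of BG_t3 with BG_n2 and BG_n3.
```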
Next, we explore the impact of disorder on the unidirectionality of the edge modes. We consider two types of disorder, in the position and in the radius of the rods. The disordered position of rod \(i\) is defined as \((x^{i},y^{i})\), where \(x^{i}=x_{0}^{i}+\sigma_{\rm P}F_{x}^{i}\) and \(y^{i}=y_{0}^{i}+\sigma_{\rm P}F_{y}^{i}\), with \((x_{0}^{i},y_{0}^{i})\) the original position, \(\sigma_{\rm P}\) the strength of the position disorder, and \(F_{x}^{i}\) and \(F_{y}^{i}\) random variables uniformly distributed between \(-1\) and \(1\) for the \(i\)th rod along the \(x\) and \(y\) directions. The position disorder parameter is defined as \(\eta_{\rm P}=\sigma_{\rm P}/a\). In a similar way, the disordered radius of rod \(i\) in the trivial (non-trivial) lattice is \(R_{\rm t(n)}^{i}=R_{\rm t(n),0}+\sigma_{\rm R}F_{\rm R}^{i}\), where \(R_{\rm t(n),0}\), \(\sigma_{\rm R}\), and \(F_{\rm R}^{i}\) stand for the original radius of the rod in the trivial (non-trivial) lattice, the strength of the radius disorder, and a random parameter uniformly distributed over the interval \([-1,1]\). We define the radius disorder parameter as \(\eta_{\rm R}=\sigma_{\rm R}/R_{\rm t(n),0}\).
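The disorder defined above is straightforward to generate numerically; the following minimal sketch (Python/NumPy assumed) illustrates the sampling, with the patch size and lattice constant being placeholder values rather than parameters of the actual simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0                      # lattice constant (placeholder units)
R0 = 0.11 * a                # nominal rod radius of the non-trivial lattice
eta_P, eta_R = 0.40, 0.40    # disorder parameters: eta_P = sigma_P / a, eta_R = sigma_R / R0

# nominal rod positions on a small rectangular patch (size chosen only for illustration)
xy0 = np.stack(np.meshgrid(np.arange(5) * a, np.arange(8) * a), axis=-1).reshape(-1, 2)

# position disorder: (x_i, y_i) = (x0_i, y0_i) + sigma_P * (F_x, F_y), F uniform in [-1, 1]
sigma_P = eta_P * a
xy = xy0 + sigma_P * rng.uniform(-1.0, 1.0, size=xy0.shape)

# radius disorder: R_i = R0 + sigma_R * F_R, F_R uniform in [-1, 1]
sigma_R = eta_R * R0
R = R0 + sigma_R * rng.uniform(-1.0, 1.0, size=len(xy0))
```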
We first obtain \(E_{z}(x,y)\) under the influence of the position disorder. The parameters of the trivial mirror are \(R_{\rm t,0}=0.30a\), \(\epsilon_{\rm t}=18\), and \(\mu_{\rm t}=1\), and those of the non-trivial mirror are \(R_{\rm n,0}=0.11a\), \(\epsilon_{\rm n}=15\), and \(\mu_{\rm n}=\mathbf{\mu}\). The normalized frequency of the TM-polarized dipole source is \(a/\lambda=0.570\), at which the edge mode with a positive group velocity appears. Figures 2(a), (b), (e), and (f) show the results when a disorder of \(\eta_{\rm P}=40\%\) is applied to either the trivial or the non-trivial mirror: the unidirectionality of the EM waves is preserved at this substantial disorder strength.
We explore the unidirectionality more quantitatively by calculating the transmittance \(T_{\rm LR}\) of the edge mode. We judge that the EM wave exhibits unidirectional transmission when \(T_{\rm LR}>0.5\). Figs. 2(i) and (j) show \(T_{\rm LR}\) for several disorder strengths and values of \(\epsilon_{\rm t}\). Each transmittance is obtained by averaging the results of 100 numerical simulations. When the position disorder is applied to either the trivial or the non-trivial mirror, the unidirectionality is preserved and \(T_{\rm LR}\) decreases only moderately as the disorder increases from \(\eta_{\rm P}=0\) to 50%.
Similar behavior is observed when the radius disorder
Figure 1: Normalized \(E_{z}(x,y)\) at \(a/\lambda=0.570\) in (a) and \(0.630\) in (b). Black arrows denote the location of the light source. (c)–(d) Dispersion diagrams of the trivial and non-trivial lattices. Green regions show photonic bandgaps, and \(\Gamma\), M, and X represent vertices of the irreducible Brillouin zone. The lattice constant, wavelength, and Chern number are referred to as \(a\), \(\lambda\), and C, respectively. (e) Dispersion diagram of the TW. Red and black curves show edge modes with positive and negative group velocities. Red and black arrows indicate \(a/\lambda=0.570\) and \(0.630\).
is applied to the trivial lattice (Figs. 2(c), (g), and (k)). However, the result is entirely different when the radius of the non-trivial lattice is disordered. Figs. 2(d) and (h) show that the EM waves propagate in the \(+x\), \(-x\), and \(-y\) directions, while the penetration into the trivial lattice (\(+y\) direction) is suppressed. \(T_{\rm LR}\) abruptly decreases from 1 to 0.4 at \(\eta_{\rm R}=10\%\), and to \(\sim 0\) at \(\eta_{\rm R}=20\%\)--the unidirectionality is broken completely (Fig. 2(l)).
We describe the origin of the tolerance and breakdown of the edge mode. Fig. 3 shows the dispersion diagrams of the TW, containing a supercell of \(5\times 8\) dielectric rods with \(\epsilon_{\rm t}=18\) and \(5\times 8\) YIG rods with \(\epsilon_{\rm n}=15\) and permeability \(\mathbf{\mu}\)[26]. When the position disorder \(\eta_{\rm P}=40\%\) is applied to either the trivial or non-trivial lattice (Figs. 3(a) and (b)), the edge mode with a positive group velocity still exists, and thus the unidirectionality of the EM wave is maintained. This behavior is associated with the mode profile in the rods and the interference of the EM waves. In Figs. 2(e) and (f), we identify that the localized modes inside each dielectric and YIG rod under the position disorder are TM\({}_{21}\) and TM\({}_{11}\), respectively. The unidirectionality in the \(+x\) direction is a result of preserving the localized mode inside each rod in the non-trivial lattice. The gentle penetration of EM waves in the \(+y\) (Figs. 2(a) and (e)) and \(-y\) (Figs. 2(b) and (f)) directions is due to the random increase in the distances between some rods. The increased distances reduce the coupling between quasi-bound states, which allows the EM waves to penetrate between the rods [28]. However, the identical TM\({}_{11}\) modes in all YIG rods sustain the slope of the edge mode.
In the case of radius disorder, defect modes affect the directionality of the edge mode. In this configuration, each disordered rod corresponds to a point defect of the waveguide. Point defects with larger (smaller) radii increase (decrease) the effective refractive index, resulting in the appearance of donor (acceptor) modes. These donor (acceptor) defect modes fall into the adjacent bandgap from its upper (lower) edge, gradually filling the gap. In other words, donor and acceptor modes experience red and blue shifts, respectively; the acceptor modes between BG\({}_{\rm t2}\) and BG\({}_{\rm t3}\) (radius disorder in the trivial lattice), the acceptor modes between BG\({}_{\rm n1}\) and BG\({}_{\rm n2}\) (radius disorder in the non-trivial lattice), and the donor modes between BG\({}_{\rm n2}\) and BG\({}_{\rm n3}\) (radius disorder in the non-trivial lattice) generate the bulk modes that occupy the gap.
Note that this description holds for weak or moderate radius disorder. As the disorder increases, the impact of the donor modes becomes more dominant than that of the acceptor modes, because more modes can exist in a rod of larger radius (see the explanation for Fig. 4(b) below);
Figure 2: (a)–(h) Normalized \(E_{\rm z}(x,y)\) under position and radius disorders. (a), (e) Trivial mirror with \(\eta_{\rm P}=40\%\), (b), (f) non-trivial mirror with \(\eta_{\rm P}=40\%\), (c), (g) trivial mirror with \(\eta_{\rm R}=40\%\), and (d), (h) non-trivial mirror with \(\eta_{\rm R}=40\%\). TM polarized dipole sources are denoted as black arrows at the interface of the mirrors with a normalized frequency \(a/\lambda=0.570\). Transmission \(T_{\rm LR}\) when position disorder is applied to (i) trivial and (j) non-trivial lattices, and radius disorder to (k) trivial and (l) non-trivial lattices for several \(\epsilon_{\rm t}\). \(a/\lambda=0.570\) for \(\epsilon_{\rm t}=16,18\), and \(20\) and \(a/\lambda=0.550\) for \(\epsilon_{\rm t}=22\).
as the radius disorder becomes strong, the bandgap shifts to red.
The impact of the defect modes differs between the trivial and the non-trivial lattice. For radius disorder in the trivial lattice, the acceptor modes between \(\text{BG}_{\text{t2}}\) and \(\text{BG}_{\text{t3}}\) fill up the gap at \(0.556<a/\lambda<0.578\). However, the localized eigenmode in the YIG rods does not change, which preserves the directionality of the edge mode. Identical mode profiles in all YIG rods are identified in Figs. 2(c) and (g), and accordingly the positive sign of the group velocity is preserved, as shown in Fig. 3(c).
In contrast, when the radius disorder is applied to the non-trivial lattice, the topological nature is completely broken. As shown in Fig. 3(d), both the acceptor modes from the frequencies between \(\text{BG}_{\text{n1}}\) and \(\text{BG}_{\text{n2}}\) and the donor modes between \(\text{BG}_{\text{n2}}\) and \(\text{BG}_{\text{n3}}\) occupy the gap, and the resulting bulk modes exhibit no directionality. The underlying reason for this breakdown is that the eigenmodes in the YIG rods all become different under the radius disorder. These mutually distinct modes disturb the formation of the topological bandgap of the non-trivial lattice, which causes the TP breakdown. This feature is also found in Figs. 2(d) and (h), showing that a different mode is localized in every YIG rod.
We further investigate this breakdown of the TP with another topological variable, the BI [29, 30, 31, 32]. The BI, obtained from the real-space electric-field distribution, is particularly useful for characterizing disordered structures where the Chern number cannot be defined. The BI and the Chern number manifest the topological nature of a band with nonzero values, having the same absolute value but the opposite sign. More details of the BI calculation are provided in Ref. [26].
The calculation results of the BI are presented in Fig. 4. We consider a supercell of \(6\times 6\) YIG rods in air under position and radius disorder over the frequency interval \(0.430\leq a/\lambda\leq 0.660\). The BI at each frequency and disorder strength is obtained by averaging the results of 50 simulations. In the case of position disorder, we find two topological bandgaps, \(\text{BG}_{\text{n2}}\) and \(\text{BG}_{\text{n3}}\), with BI values of \(-1\) and \(+1\), respectively, which do not change as the position disorder increases (Fig. 4(a)). This shows that the edge mode is not influenced by the position disorder of the trivial and non-trivial lattices, in agreement with the results in Figs. 2(a) and (b) and Figs. 3(a) and (b). The localization of the \(\text{TM}_{11}\) mode in all YIG rods gives the same BI, independent of the position disorder, which supports the unidirectionality of the edge mode.
As shown in Fig. 4(b), the behavior of the topological bandgaps under the radius disorder of the non-trivial lattice is very different from that under the position disorder: both \(\text{BG}_{\text{n2}}\) and \(\text{BG}_{\text{n3}}\) undergo a red shift as \(\eta_{\text{R}}\) increases. This frequency shift is attributed to the donor modes in the point defects of larger radii. Under the radius disorder, defects with both larger and smaller radii are present. While many bulk modes can appear in the rods of larger radii, the number of modes that can exist in smaller defects is much lower; for instance, certain modes cannot survive if the size of a rod is smaller than a critical radius. Because the impact of defects with larger radii dominates, the effective refractive index of the lattice increases, causing a red shift of the defect modes overall--the bandgaps shift to red as well. A more quantitative explanation is offered with the calculations of \(E_{z}(x,y)\) and the mode frequencies in Ref. [26].
This red shift brings about two phenomena: the breakdown of the TP and the emergence of AL [33, 34]. First, we consider an edge mode at \(a/\lambda=0.570\). As \(\eta_{\text{R}}\) increases, the topological bandgap becomes occupied by bulk modes, resulting in the disappearance of the edge mode (the BI changes from \(-1\) to \(0\)); this is the reason for the breakdown of the unidirectionality discussed above. Second, we concentrate on a region of
Figure 3: Dispersion diagrams of TW with \(5\times 8\) dielectric rods and \(5\times 8\) YIG rods. (a) \(\eta_{\text{P}}=40\%\) to trivial lattice, (b) \(\eta_{\text{P}}=40\%\) to non-trivial lattice, (c) \(\eta_{\text{R}}=40\%\) to trivial lattice, and (d) \(\eta_{\text{R}}=40\%\) to non-trivial lattice. Blue regions are bulk modes. Yellow arrow indicates \(a/\lambda=0.570\).
\(0.430<a/\lambda<0.528\) in Fig. 4(b). While this domain is occupied by bulk modes at \(\eta_{\rm R}=0\), these modes gradually disappear and a topological bandgap emerges as \(\eta_{\rm R}\) grows. Accordingly, an edge mode appears in this frequency region, accompanied not only by unidirectionality but also by localization of the EM waves under the disorder, which corresponds to AL. The behavior of AL is revealed in Fig. 4(e), where \(|E_{z}(a,y)|\) and \(|E_{z}(3a,y)|\) are fitted with the exponential decay function \(E_{0}\cdot\exp{(-|y|/\xi)}\), where \(\xi\) is the localization length. From the fitting, we obtain \(\xi\simeq 3.3a\) for \(|E_{z}(a,y)|\) and \(\xi\simeq 3.8a\) for \(|E_{z}(3a,y)|\); this exponentially decaying profile confirms the genuine nature of AL. Summarizing, as the disorder increases, a frequency region of conducting bulk modes turns into a non-trivial bandgap, in tandem with a disorder-induced emergence of AL in this topological regime.
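For reference, the localization lengths quoted above follow from a two-parameter fit of \(E_{0}\cdot\exp{(-|y|/\xi)}\) to the field profile; the sketch below illustrates such a fit on a synthetic profile (the real profiles come from the full-wave simulations and are not reproduced here).

```python
import numpy as np
from scipy.optimize import curve_fit

a = 1.0  # lattice constant (placeholder units)

def decay(y, E0, xi):
    return E0 * np.exp(-np.abs(y) / xi)

# |E_z| sampled along y at a fixed x cut; a synthetic profile stands in for the simulated field
y = np.linspace(-6 * a, 6 * a, 121)
Ez_abs = decay(y, 1.0, 3.3 * a) * (1 + 0.05 * np.random.default_rng(1).normal(size=y.size))

(E0_fit, xi_fit), _ = curve_fit(decay, y, Ez_abs, p0=[1.0, a])
print(f"localization length xi = {xi_fit / a:.2f} a")   # close to 3.3 a for this synthetic profile
```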
We finally remark on two points regarding our work. First, as far as we are aware, our work is the first to explore the impact of radius disorder in a topological waveguide. While previous studies focused on position disorder [35, 36, 37], we study how radius disorder affects the edge mode, which makes it possible to manifest the TP breakdown. Second, we envision the possibility of a similar study in the optical and infrared (IR) frequency domains. The deployed YIG material exhibits broken time-reversal symmetry near 4.5 GHz [8, 9]. To investigate such topological behavior in the optical domain, one can utilize a honeycomb lattice of dielectric helical waveguides in an ambient medium with a refractive index of 1.45 [15]. The symmetry breaking along the helical direction acts like the time-reversal symmetry breaking in YIG, leading to unidirectional EM waves around the lattice at optical frequencies. In the IR region, one can make use of dielectric rods with a refractive index of 3.42 in a honeycomb pattern in air, which preserves a pseudo-time-reversal symmetry. This would result in the generation of unidirectional chiral edge states in the IR domain [38].
In conclusion, we have studied the tolerance and breakdown of the TP in a disordered waveguide. Both position and radius disorders are applied to either the trivial or the non-trivial lattice. The edge mode disappears under the radius disorder of the non-trivial lattice, because the mutually distinct donor and acceptor modes prevent the creation of the topological gap. Moreover, through the calculation of a topological variable, the BI, we show that EM waves are localized in a certain frequency region, which is an AL effect associated with the topological gap. Our work offers a new understanding of one-way light propagation in nanophotonics and gives insight into the development of topological optical circuits and integrated photonic devices.
We thank H. G. Maragheh for helpful discussions. We acknowledge the support from BK21 FOUR program and Educational Institute for Intelligent Information Integration, National Research Foundation (Grant No. 2019R1A5A1027055), Samsung Electronics Co., Ltd (IO201211-08121-01), and Samsung Science and Technology Foundation (SRFC-TC2103-01).
Our data are available at [https://doi.org/10.5281/zenodo.8339612](https://doi.org/10.5281/zenodo.8339612).
|
2309.08423 | A Simple Method for the Performance Analysis of Fluid Antenna Systems
under Correlated Nakagami-$m$ Fading | By recognizing the tremendous flexibility of the emerging fluid antenna
system (FAS), which allows dynamic reconfigurability of the location of the
antenna within a given space, this paper investigates the performance of a
single-antenna FAS over spatially correlated Nakagami-$m$ fading channels.
Specifically, simple and highly accurate closed-form approximations for the
cumulative density function of the FAS channel and the outage probability of
the proposed system are obtained by employing a novel asymptotic matching
method, which is an improved version of the well-known moment matching. With
this method, the outage probability can be computed simply without incurring
complex multi-fold integrals, thus requiring negligible computational effort.
Finally, the accuracy of the proposed approximations is validated, and it is
shown that the FAS can meet or even exceed the performance attained by the
conventional maximal ratio combining (MRC) technique. | José~David~Vega-Sánchez, Luis~Urquiza-Aguiar, Martha Cecilia Paredes Paredes, Diana~Pamela~Moya~Osorio | 2023-09-15T14:27:43Z | http://arxiv.org/abs/2309.08423v1 | A Simple Method for the Performance Analysis of Fluid Antenna Systems under Correlated Nakagami-\(m\) Fading
###### Abstract
By recognizing the tremendous flexibility of the emerging fluid antenna system (FAS), which allows dynamic reconfigurability of the location of the antenna within a given space, this paper investigates the performance of a single-antenna FAS over spatially correlated Nakagami-\(m\) fading channels. Specifically, simple and highly accurate closed-form approximations for the cumulative density function of the FAS channel and the outage probability of the proposed system are obtained by employing a novel asymptotic matching method, which is an improved version of the well-known moment matching. With this method, the outage probability can be computed simply without incurring complex multi-fold integrals, thus requiring negligible computational effort. Finally, the accuracy of the proposed approximations is validated, and it is shown that the FAS can meet or even exceed the performance attained by the conventional maximal ratio combining (MRC) technique.
Asymptotic matching, fluid antenna system, nakagami-\(m\) fading, spatial correlation, outage probability.
## I Introduction
The fifth-generation (5G) of wireless mobile networks has recently been deployed worldwide, so industry and academia have already started the race to define the shape of the future sixth-generation (6G). Very recently, a technology that has been gaining momentum is the fluid antenna system (FAS), which is a new paradigm of antenna systems where antennas are equipped with a software-controllable fluid structure (e.g., Eutectic Gallium-Indium, Mercury, Galinstan, etc.) that allows dynamic reconfigurability of its position and shape within a given space1. Particularly, FAS may help overcome the practical limitations of using multiple antennas in size-constrained devices, as well as the cost of radio frequency (RF) chains [2, 3]. The fundamental single fluid antenna is built of one RF chain and \(N\) fixed locations, so-called ports, distributed in a linear space. Unlike conventional spatial diversity techniques (e.g., maximal ratio combining (MRC)), FAS allows an antenna to freely switch its position among the ports to obtain a more robust channel gain or lower interference, thus providing remarkable gains in diversity, multiplexing, and interference-free communications [4].
Footnote 1: Interested readers can refer to [1] for information on fluid antenna prototypes.
A plethora of works have recently focused on investigating the performance of FAS in wireless communications, where metrics such as the ergodic capacity and the outage probability (OP) have been investigated in different settings. For instance, in [5], Wong et al. demonstrated that a single-antenna FAS outperforms the traditional MRC in terms of the OP when the number of ports at the fluid antenna is large enough. Also, in [2, 6], Wong et al. studied the achievable performance of FAS in arbitrarily correlated Rayleigh fading channels. Khammassi et al. proposed an approximate expression for the FAS relative channel distribution in [7], based on a two-stage approach. The first stage reduces the number of multi-fold integrals of the OP, while the second represents the OP in a single-integral form by assuming correlated Rayleigh fading channels. Tlebaldiyeva et al. considered a more general small-scale fading channel model for the FAS in [8], where the OP was found in a single-integral form for a single-antenna \(N\)-port FAS over spatially correlated Nakagami-\(m\) channels. In [9], by taking advantage of stochastic geometry tools, Skouroumounis and Krikidis derived a closed-form expression for the OP of fluid antennas in large-scale cellular networks. Moreover, Ghadi et al. derived a closed-form formulation of the OP performance in [10] by adopting copula theory to characterize the correlation model (e.g., Frank, Clayton, and Gumbel) between fading channel coefficients. Very recently, the OP behavior of FAS-aided Terahertz communication networks under correlated \(\alpha\)-\(\mu\) fading channels for non-diversity and diversity FAS receivers was investigated by Tlebaldiyeva et al. in [11]. Therein, as in [8], the OP was derived in single-integral form due to the mathematical intractability of both the \(\alpha\)-\(\mu\) channel model and the underlying diversity FAS system.
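To appreciate why closed-form approximations are attractive here, note that the exact OP of a single-antenna \(N\)-port FAS involves the maximum of \(N\) spatially correlated Nakagami-\(m\) port gains. The sketch below is only an illustrative Monte Carlo baseline for that quantity; it assumes integer \(m\) and a simple constant-correlation construction of the underlying Gaussians, which is not necessarily the correlation model adopted in the works cited above.

```python
import numpy as np

def fas_outage_mc(N=10, m=2, omega=1.0, rho=0.9, snr_db=10.0, rate=1.0, trials=100_000, seed=0):
    """Monte Carlo OP of an N-port FAS: outage if even the best port cannot support `rate`."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    gamma_th = 2.0 ** rate - 1.0                # SNR threshold for the target rate
    w = np.sqrt(rho)                            # weight of the shared (correlating) component

    # integer-m Nakagami construction: m correlated complex-Gaussian branches per port
    shared = (rng.standard_normal((trials, m, 1)) + 1j * rng.standard_normal((trials, m, 1))) / np.sqrt(2)
    local = (rng.standard_normal((trials, m, N)) + 1j * rng.standard_normal((trials, m, N))) / np.sqrt(2)
    h = w * shared + np.sqrt(1.0 - rho) * local
    gain = omega / m * np.sum(np.abs(h) ** 2, axis=1)   # per-port power gain, E[gain] = omega

    best = gain.max(axis=1)                     # the FAS switches to its strongest port
    return float(np.mean(snr * best < gamma_th))

print(fas_outage_mc())
```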
Based on the above considerations and motivated by the potential of FAS to provide remarkable diversity and capacity benefits for forthcoming networks, this work exploits the advantages of a novel asymptotic matching method to approximate the FAS channel distribution within a single, unified approach. Despite the intricacy of the FAS, the authors aim to provide analytically tractable expressions for the outage metric without incurring the prohibitive complexity of special functions or multi/single-fold integrals, which have been incurred in previous works. Specifically, a FAS that experiences correlated Nakagami-\(m\) fading channels is considered, where we propose to approximate the equivalent cumulative density function |
2309.17265 | Effect of structure-based training on 3D localization precision and
quality | This study introduces a structural-based training approach for CNN-based
algorithms in single-molecule localization microscopy (SMLM) and 3D object
reconstruction. We compare this approach with the traditional random-based
training method, utilizing the LUENN package as our AI pipeline. The
quantitative evaluation demonstrates significant improvements in detection rate
and localization precision with the structural-based training approach,
particularly in varying signal-to-noise ratios (SNRs). Moreover, the method
effectively removes checkerboard artifacts, ensuring more accurate 3D
reconstructions. Our findings highlight the potential of the structural-based
training approach to advance super-resolution microscopy and deepen our
understanding of complex biological systems at the nanoscale. | Armin Abdehkakha, Craig Snoeyink | 2023-09-29T14:17:31Z | http://arxiv.org/abs/2309.17265v1 | # Effect of structure-based training on 3D localization precision and quality
###### Abstract
This study introduces a structural-based training approach for CNN-based algorithms in single-molecule localization microscopy (SMLM) and 3D object reconstruction. We compare this approach with the traditional random-based training method, utilizing the LUENN package as our AI pipeline. The quantitative evaluation demonstrates significant improvements in detection rate and localization precision with the structural-based training approach, particularly in varying signal-to-noise ratios (SNRs). Moreover, the method effectively removes checkerboard artifacts, ensuring more accurate 3D reconstructions. Our findings highlight the potential of the structural-based training approach to advance super-resolution microscopy and deepen our understanding of complex biological systems at the nanoscale.
**Keywords: Super-resolution Microscopy, structure-based training, Deep Convolutional Neural Network, Localization, 3D Reconstruction**
## Introduction
Single Molecule Localization Microscopy (SMLM) is a revolutionary technique in Super-resolution Microscopy that surpasses the diffraction limit, providing enhanced imaging resolution [1]. This breakthrough allows researchers to study cellular functions relevant to both health and disease with unprecedented detail. Among various super-resolution imaging techniques, SMLM stands out by offering the highest achievable resolution, ranging from 20 to 30 nm, using relatively simple experimental equipment [2].
Despite its exceptional resolution, SMLM comes with challenges in terms of data analysis and imaging speed compared to other super-resolution techniques like STED microscopy [3], SIM [4], and NSOM [5]. SMLM data analysis is complex and time-consuming, and the imaging speed is relatively slower, requiring several minutes to capture a complete dataset.
In SMLM, localization precision, and data interpretation heavily rely on the sparsity of fluorophores, ensuring well-separated point spread functions (PSFs). Achieving this requires time-separating frames and activating individual fluorophores. While increasing the fluorophore density per frame can enhance acquisition speed, it also introduces limitations. Higher fluorophore density can lead to PSF overlap, resulting in reduced detection accuracy and localization precision [6]. Additionally, high-density frames may produce artifacts like false structures, artificial sharpening, and checkerboard artifacts [6, 7].
Given the demand for fast and accurate analysis methods that eliminate artifacts, especially in high-density frames, the development of an improved algorithm is highly sought after [7]. This advancement would enable precise localization in ultra-high densities, facilitating more accurate quantification and analysis of dynamic events, such as protein interactions in membrane fluidity analysis [8] within sub-cellular structures. Moreover, this progress in SMLM technology would have broad applications, including drug discovery, where a deeper understanding of protein interactions is essential for designing effective therapies.
Traditional mathematical-based localization algorithms, such as Maximum Likelihood Estimation (MLE) [9] and non-linear least squares (LS) [10], treat each point spread function (PSF)
independently without considering their surroundings. While effective for sparse emitters with non-overlapping PSFs, these methods suffer from reduced precision when PSFs overlap, as they are unable to extract patterns from overlapping PSFs and leverage combined information for improved accuracy.
Recent advances in SMLM have seen significant progress in addressing these limitations by utilizing Convolutional Neural Networks (CNNs). State-of-the-art CNN-based algorithms, such as DeepLoco[11], DeepSTORM3D[12], DECODE[6], and LUENN[13], have achieved remarkable improvements in analysis times and localization accuracy. These deep-learning approaches excel at handling large numbers of emitters without a significant increase in computational time. However, highly overlapped patterns still pose challenges and can compromise reconstruction quality in scenarios with dense emitter distributions.
The performance of a CNN-based localization algorithm is heavily reliant on the training method used. While supervised learning is a common approach for robust training, it necessitates a large training dataset to avoid overfitting. However, in SMLM, obtaining real experimental frames with corresponding ground-truth data is limited, posing a challenge for training. To address this challenge, researchers often employ a reasonable frame generative model capable of reproducing data as closely as possible to real frames[6, 11, 12]. The success of the AI model greatly depends on the accuracy of this generative model, and any mismatch between the simulated and experimental data could lead to reduced performance.
Frame simulation is a crucial three-step process in training a model for SMLM and 3D reconstruction of biological samples. Firstly, candidate seeds are randomly activated in a 3D grid domain, and their locations are selected within a confined domain both laterally (within the frame size) and axially (within the depth range). Next, the generative model employs the Point Spread Function (PSF) model to fill the frame with PSF distributions. Finally, camera noise is applied to simulate realistic frames that closely resemble experimental data.
While recent works on generative models have performed satisfactorily in PSF engineering modeling and camera noise estimation, less attention has been given to the sampling
methods. Improving the sampling methods in frame simulation is an area of potential advancement to enhance the overall performance and accuracy of CNN-based localization algorithms in SMLM. In the context of seed sampling for frame simulation, the traditional approach adopted by researchers is the Complete Spatial Randomness (CSR) hypothesis. This method assumes that the points in the dataset are distributed randomly and independently throughout the study area, without any specific spatial patterns or interactions. Essentially, the points follow a homogeneous Poisson process, where the probability of finding a point at any location within the study area remains uniform and is not influenced by the presence or absence of other points.
The sampling method based on the CSR hypothesis is widely used as a baseline reference for evaluating deviations from randomness and uncovering any underlying spatial structures or dependencies present in real-world point datasets. By comparing the performance of other sampling methods to CSR, researchers can assess the effectiveness and efficiency of their frame simulation approaches in accurately representing and reproducing experimental data.
In the context of Single-Molecule Localization Microscopy (SMLM) and 3D reconstruction, the primary objective is to achieve rapid and accurate reconstruction of 3D objects. Given the specific goal of accurate 3D reconstruction, the CSR method, which assumes random and independent distributions, does not capture the intricacies and spatial relationships present in real 3D structures. Instead, specialized sampling methods that consider the actual 3D structure and interactions between points are necessary to produce realistic frame simulations. This is because SMLM datasets consist of collections of 3D points that are correlated in location due to the surface or geometry of an object or environment.
In our research, we propose a structure-based training approach to enhance the performance of CNN-based algorithms, especially in ultra-high emitter densities. Structure-based training involves considering the underlying structure and relationships between neighboring PSFs, instead of treating data as independent samples. By leveraging the contextual
information and correlations between data points, the algorithm can make more informed predictions and capture dependencies present in the structured data. Sampling from cloud points, where overlapping patterns have meaningful relationships, allows the CNN to reconstruct a more accurate 3D view of the structure.
Incorporating structure-based training can provide several advantages, including improved accuracy, robustness, and generalization performance of the trained model. By advancing the localization precision of CNN-based algorithms in SMLM, we can unlock the full potential of SMLM and enhance our understanding of molecular interactions in biological systems. This research has the potential to significantly impact the field of super-resolution microscopy and contribute to breakthroughs in various biological and medical research areas.
## 2 Methods
In our study, we aimed to compare the effects of two different seed sampling methods on the performance of a CNN-based localization algorithm. To conduct the experiments, we utilized the LUENN package, which served as our AI pipeline, guiding us through each step of the process, including emitter sampling, frame simulation, model selection, model training, and performance evaluation. The overall process is illustrated in Figure 1.
We utilized the Astigmatism modality in both approaches and trained the model at three distinct signal-to-noise ratios (SNRs): high, medium, and low. These three refer to mean photon counts of 1000, 5000, and 20,000 with background levels of 10, 50, and 200 photons per pixel, respectively. The frame generative model used in LUENN was based on the method presented by [6], and the training procedure was comprehensively explained in our previous work [13].
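For illustration only, a stripped-down frame generator in the spirit described above is sketched below; the pixel size and PSF width are assumed values, and the actual generative model of [6] used in LUENN includes the astigmatic 3D PSF and a detailed camera noise model that are omitted here.

```python
import numpy as np

def simulate_frame(xy, photons=5000, bg=50, size=64, px=100.0, sigma=130.0, seed=0):
    """Toy frame generator: 2D Gaussian PSFs + uniform background + Poisson shot noise.

    xy: (K, 2) emitter positions in nm; px: pixel size in nm; sigma: lateral PSF width in nm.
    px and sigma are illustrative placeholders, not parameters from the paper.
    """
    rng = np.random.default_rng(seed)
    centers = (np.arange(size) + 0.5) * px
    gx, gy = np.meshgrid(centers, centers, indexing="xy")
    frame = np.full((size, size), float(bg))
    for x, y in xy:
        psf = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * sigma ** 2))
        frame += photons * psf / psf.sum()
    return rng.poisson(frame)

frame = simulate_frame(np.array([[2000.0, 3200.0], [2500.0, 3300.0]]), photons=1000, bg=10)
```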
The primary focus of our investigation was to compare two different approaches in the emitters sampling step. In the first approach, we trained LUENN by sampling emitters based on the Complete Spatial Randomness (CSR) hypothesis. In the second approach,
emitters were sampled from randomly generated point clouds, which aim to represent real 3D structures.
After completing the full training of both models, we proceeded to compare their reconstruction and localization performance using artificially generated framesets of Microtubules, utilizing the ground-truth data from the MT0 datasets provided by the SMLM Challenge 2016. The ground-truth emitter data can be accessed through the following LINK. However, it's worth noting that the challenge only provided framesets for two specific frame densities, limiting the scope of our comparison.
To overcome this limitation, we generated an additional nine sets of frames with varying nominal densities, ranging from 0.38 to 13.0 emitters per frame. It is important to mention that these nominal densities are not the actual density, defined as the number of emitters per unit area in the frame. In frames where the point distribution follows a specific structure, such as a line or a surface, the actual density may not accurately reflect the
Figure 1: **Comparison Workflow**. Contrasting two training methodologies rooted in random emitter sampling (Approach 1) and structure-based sampling (Approach 2). In both approaches, the LUENN pipeline is employed for frame simulation, incorporating Astigmatism modality, model training, and evaluation, with calculations performed for Jaccardian Index and precision assessments.
difficulty of localization.
To address this, we employed Ripley's method [14] to calculate the average minimum distance between the emitters in the simulated frames. Using cross-correlation, we then identified the density of uniformly distributed seeds that yields a similar average minimum distance. This corresponding frame density was considered the "nominal density" of the frame, providing a more accurate and broadly applicable quantification metric.
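A minimal sketch of this nominal-density idea is given below; instead of the cross-correlation matching against simulated uniform frames used in our analysis, it inverts the standard closed-form mean nearest-neighbour distance of a 2D uniform (CSR) pattern, which is an additional simplifying assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(points):
    """Average distance from each emitter to its nearest neighbour."""
    d, _ = cKDTree(points).query(points, k=2)   # k=2: the first hit is the point itself
    return d[:, 1].mean()

def nominal_density(points, frame_area):
    """Uniform (CSR) emitter count per frame whose expected mean NN distance matches the sample."""
    d_bar = mean_nn_distance(points)
    lam = (1.0 / (2.0 * d_bar)) ** 2            # CSR in 2D: E[d_NN] = 1 / (2 * sqrt(lambda))
    return lam * frame_area

pts = np.random.default_rng(0).uniform(0.0, 6400.0, size=(50, 2))   # toy lateral positions (nm)
print(nominal_density(pts, frame_area=6400.0 ** 2))
```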
Both models followed the same on-the-fly training procedure, where training frames were generated randomly during the training process. For training LUENN with the CSR method, the seeds were randomly sampled across the entire 3D domain without any constraints. On the other hand, for the structure-based training, we created three-helix microtubules that spanned the 3D domain, each containing 5000 seeds. From these sampled point clouds, we selected candidate emitters, considering the challenges posed by overlapping PSFs along a spiral curve. It's important to note that the sampled microtubules were only used for one frame of the training data and the microtubule structures were continuously changed throughout the training process.
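The following sketch illustrates structure-based seed sampling in this spirit: seeds are drawn along randomly oriented helical filaments and a per-frame subset is selected as active emitters. The helix radius, pitch, and jitter values below are illustrative assumptions, not the parameters used for training.

```python
import numpy as np

def sample_helix(n_seeds=5000, length=6400.0, radius=12.5, pitch=80.0, jitter=5.0, rng=None):
    """Sample seeds (nm) along one helical filament whose axis spans the lateral domain."""
    rng = rng if rng is not None else np.random.default_rng()
    t = np.sort(rng.uniform(0.0, length, n_seeds))
    theta = 2.0 * np.pi * t / pitch + rng.uniform(0.0, 2.0 * np.pi)
    pts = np.stack([t, radius * np.cos(theta), radius * np.sin(theta)], axis=1)
    # random in-plane rotation and lateral offset so that every training structure differs
    phi = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                    [np.sin(phi),  np.cos(phi), 0.0],
                    [0.0,          0.0,         1.0]])
    pts = pts @ rot.T + np.array([0.0, rng.uniform(0.0, length), 0.0])
    return pts + rng.normal(scale=jitter, size=pts.shape)

structure = np.vstack([sample_helix(rng=np.random.default_rng(s)) for s in range(3)])  # three filaments
active = structure[np.random.default_rng(0).choice(len(structure), size=20, replace=False)]
```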
In Figure 2, we provide two examples of training frames along with their corresponding labeled frames. These examples showcase the effectiveness of the structure-based training approach in addressing the challenges of emitter localization, particularly in cases with overlapping PSFs, thus contributing to the improved performance of the CNN-based localization algorithm.
## Results and discussion
For the quantitative evaluation, we employed two metrics, namely the Jaccardian index (JI) and the root-mean-square error (RMSE), to assess the detection rate and the localization accuracy in the X, Y, and Z directions when reconstructing the MT0 dataset. Figures 3a to c present the Jaccardian index and the volumetric RMSE plotted against the frame densities.
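For reference, both metrics can be computed from matched prediction-to-ground-truth pairs as sketched below; a simple greedy nearest-neighbour matching with an assumed tolerance is used here, whereas the challenge evaluation defines its own matching procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_and_score(pred, gt, tol=250.0):
    """Greedy matching of predictions to ground truth within `tol` (nm);
    returns the Jaccardian index and the volumetric RMSE of matched pairs."""
    tree = cKDTree(gt)
    matched, used = [], set()
    for i, p in enumerate(pred):
        d, j = tree.query(p)
        if d <= tol and j not in used:
            used.add(j)
            matched.append((i, j))
    tp = len(matched)
    fp, fn = len(pred) - tp, len(gt) - tp
    ji = tp / (tp + fp + fn) if tp else 0.0
    if tp:
        err = pred[[i for i, _ in matched]] - gt[[j for _, j in matched]]
        rmse = float(np.sqrt((err ** 2).sum(axis=1).mean()))
    else:
        rmse = float("nan")
    return ji, rmse

print(match_and_score(np.array([[0.0, 0.0, 0.0]]), np.array([[30.0, -20.0, 50.0]])))
```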
The results demonstrate a significant improvement in both detection rate and localization precision when using the structural-based training approach. In high signal-to-noise ratio (SNR) conditions, the structural-based model exhibited an average of 20% improvement in the detection rate and 35% improvement in RMSE compared to the traditional random-based training model.
Similar improvements are observed in medium and low SNR scenarios, as shown in Figures 3b and 3c. In these cases, the structural-based trained model outperformed the random-based trained model with 7.2% and 2.2% higher detection rates, respectively. Additionally, on average, the volumetric RMSE was 28.3 nm and 30.7 nm lower in the structural-based training approach for medium and low SNRs, respectively. These results demonstrate the robustness and superior performance of the structural-based training approach in various SNR conditions, making it a promising method for enhancing the accuracy and precision of single-molecule localization microscopy (SMLM) in 3D object reconstruction.
Figures 3d to f present the lateral and axial localization RMSE for high, medium, and
Figure 2: Investigating two sampling approaches on the performance of the LUENN on 3D reconstruction. Approach 1: sampling emitters randomly without prior information of the real object. Approach 2: Cloud point sampling to give the model rationale behind the overlapping PSFs. both approaches share the same method for frame simulation, model training, and performance evaluation that all are part of the LUENN pipeline.
low SNRs, respectively. These results underscore the substantial improvement in localization precision, particularly in the depth (Z) direction, further validating the robustness of the structural-based training approach in achieving accurate 3D object reconstruction, which is the primary goal of SMLM. An interesting trend in the results is that as the nominal density increases, our new training method exhibits even better results, as indicated by the increasing difference between the plots for both lateral and axial localization RMSE. This finding highlights the effectiveness of the structural-based training approach in handling varying emitter densities and its potential to provide superior localization accuracy across a wide range of imaging conditions.
Figures 3g to i illustrate the 3D efficiency of the two methods for high, medium, and low SNRs, respectively. The results clearly demonstrate a significant improvement in 3D efficiency by adopting the structural-based training approach. On average, the performance of LUENN improved by 20% across all three levels of SNRs. This enhancement in 3D efficiency showcases the superiority of the structural-based training method in achieving more accurate and reliable 3D reconstructions in single-molecule localization microscopy.
To visually assess the quality of the reconstructed Microtubules, we have provided reconstruction of low density frames, nominal density equal to 0.38 emitters per frame, at high, medium, and low SNRs in Figures 4 a to f. Notably, the structural-based training approach has demonstrated its effectiveness in removing checkerboard artifacts, which are pixel-level biases resulting in grid-like patterns in the reconstructed structures. These artifacts can significantly impact the accuracy of 3D reconstructions and may lead to misleading interpretations of biological structures.
By successfully mitigating these checkerboard artifacts, the structural-based training approach ensures a more reliable and accurate reconstruction of 3D structures, thereby advancing the capabilities of SMLM in studying complex biological systems. This improvement in reconstruction quality has important implications for understanding the spatial organization of cellular components and interactions within subcellular structures, contributing to further
breakthroughs in the field of super-resolution microscopy and its applications in biological and medical research.
Finally, structure-based training does not negatively impact the localization of isolated emitters. As Figure 3 shows, even at low emitter densities the structure-based training method demonstrates significant improvement in both localization accuracy and precision.
Figure 3: Quantitative comparison of training methods, encompassing both random and structural-based approaches, across varying signal-to-noise ratios (SNRs) – high, medium, and low. **a-c**. Depicting the Jaccardian index and volumetric RMSE as functions of frame densities. **d-f**. Illustrating the lateral and axial RMSE trends relative to frame densities. **g-i**. Presenting the 3D efficiency trends in relation to frame densities.
Figure 4: Visual evaluation of reconstruction quality under low-density frame conditions (with a nominal density of 0.38 emitters per frame) across diverse signal-to-noise ratios (SNRs), encompassing high, medium, and low levels. Panels **a-b** illustrate the reconstructions under high SNR, **c-d** depict the reconstructions under medium SNR, and **e-f** showcase the reconstructions under low SNR.
This means that even in the absence of multiple emitters in an image, i.e. when the neural network can't utilize additional information about structure, the neural network performs better at localization.
## Conclusion
In conclusion, our study has demonstrated the efficacy of the structural-based training approach in significantly improving the performance of CNN-based algorithms for single-molecule localization microscopy (SMLM) and 3D object reconstruction. By utilizing two key metrics, the Jaccardian index (JI) as a measure of detection accuracy and the root-mean-square error (RMSE) as a measure of localization precision, we have quantitatively shown the superiority of the structural-based model over the traditional random-based training model.
The structural-based training approach exhibited remarkable improvements in both detection rate and localization precision across a wide range of signal-to-noise ratios (SNRs), outperforming the random-based trained model in high, medium, and low SNR conditions. With an average of 20% higher detection rate and 35% lower RMSE in high SNR, and 7.2% and 5.6% higher detection rates in medium and low SNRs, respectively, the structural-based model showcases its robustness and versatility. These advantages exist even at very low emitter densities where the structure is not obvious in every image.
Notably, the structural-based training approach excelled in achieving superior localization precision, especially in the depth (Z) direction, which is crucial for accurate 3D object reconstruction. The increasing difference between lateral and axial localization RMSE plots with higher nominal density further highlights the method's effectiveness in handling varying emitter densities and enhancing localization accuracy.
The visual assessment of reconstructed Microtubules demonstrated that the structural-based training approach effectively eliminated checkerboard artifacts, a common issue affecting the accuracy of 3D reconstructions. This improvement ensures more reliable and
accurate representations of biological structures, enhancing the capabilities of SMLM in studying complex subcellular systems.
Overall, our findings suggest that the structural-based training approach holds great promise for advancing the field of super-resolution microscopy and its applications in biological and medical research. By improving localization precision and reconstruction quality, this novel approach can lead to deeper insights into the spatial organization of cellular components and interactions, ultimately contributing to significant breakthroughs in understanding complex biological processes. As a result, the structural-based training approach can have a transformative impact on the study of biological systems at the nanoscale, offering new avenues for exploration and discoveries.
## Acknowledgement
We acknowledge the Center for Computational Research at the University at Buffalo for providing the computational support needed for training the models and for supplying the results of this study.
|
2309.08345 | Data Distribution Bottlenecks in Grounding Language Models to Knowledge
Bases | Language models (LMs) have already demonstrated remarkable abilities in
understanding and generating both natural and formal language. Despite these
advances, their integration with real-world environments such as large-scale
knowledge bases (KBs) remains an underdeveloped area, affecting applications
such as semantic parsing and indulging in "hallucinated" information. This
paper is an experimental investigation aimed at uncovering the robustness
challenges that LMs encounter when tasked with knowledge base question
answering (KBQA). The investigation covers scenarios with inconsistent data
distribution between training and inference, such as generalization to unseen
domains, adaptation to various language variations, and transferability across
different datasets. Our comprehensive experiments reveal that even when
employed with our proposed data augmentation techniques, advanced small and
large language models exhibit poor performance in various dimensions. While the
LM is a promising technology, the robustness of the current form in dealing
with complex environments is fragile and of limited practicality because of the
data distribution issue. This calls for future research on data collection and
LM learning paradigms. | Yiheng Shu, Zhiwei Yu | 2023-09-15T12:06:45Z | [http://arxiv.org/abs/2309.08345v3](http://arxiv.org/abs/2309.08345v3) | # Data Distribution Bottlenecks in Grounding Language Models to Knowledge Bases
###### Abstract
Language models (LMs) have already demonstrated remarkable abilities in understanding and generating both natural language and formal language. Despite these advances, their integration with real-world environments such as large-scale knowledge bases (KBs) remains an underdeveloped area, affecting applications such as semantic parsing and indulging in "hal-lucinated" information. This paper is an experimental investigation aimed at uncovering the robustness challenges that LMs encounter when tasked with knowledge base question-answering (KBQA). The investigation covers scenarios with inconsistent data distribution between training and inference, such as generalization to unseen domains, adaptation to various language variations, and transferability across different datasets. Our comprehensive experiments reveal that even when employed with our proposed data augmentation techniques, advanced small and large language models exhibit poor performance in various dimensions. While the LM is a promising technology, the robustness of the current form in dealing with complex environments is fragile and of limited practicality because of the data distribution issue. This calls for future research on data collection and LM learning paradigms1.
Footnote 1: Code and data will be public upon acceptance.
## 1 Introduction
Language models (LMs), such as BERT (Devlin et al., 2019), T5 (Raffel et al., 2020), and the GPT series (Ouyang et al., 2022; OpenAI, 2023), have demonstrated impressive capabilities in understanding and generating natural or formal language, highlighting the potential for artificial general intelligence (AGI). However, several obstacles must be overcome to achieve this goal. For example, to mitigate the issue of "hallucinated" information and expand the range of LM applications, researchers have focused on grounding LMs to real-world environments (e.g., Web, database, knowledge base) (Nakano et al., 2021; Menick et al., 2022), which enables real-time fact-checking, data validation and thereby improve the reliability of model responses. Knowledge base (KB) is one underdeveloped environment among them. The task of Knowledge Base Question Answering (KBQA) aims to parse natural language queries using KBs, such as Freebase (Bollacker et al., 2008), and Wikidata (Vrandecic and Krotzsch, 2014). Now, numerous LM-driven models (Das et al., 2021; Hu et al., 2022) continue to achieve higher scores on KBQA benchmarks, but grounding LMs to KBs has not been fully examined for robustness and reliability. A few critical gaps in existing works prompt the necessity for this work. First, while the evaluation of LMs typically occurs in natural language tasks, the challenge escalates when grounding these models to real-world environments like KBs (Liu et al., 2023), where the data contains structured data instead of all unstructured natural language. Second, the metrics of the KBQA benchmark are often shallow and the robustness of the model is not adequately evaluated. Finally, recent surveys on KBQA (Lan et al., 2022; Gu et al., 2022) have largely omitted the strides made in the development and application of LMs, especially large language models (LLMs). As a result, there remains a clear gap in understanding the robustness challenges and solutions specific to grounding LMs to KBs.
For deep learning models, robustness is closely related to data distribution (Hendrycks et al., 2020). The efficacy of language models (LMs) is not just a product of their architecture, but also a reflection of the data on which they are trained. In simpler tasks and well-defined domains, large-scale corpora have been collected and used for effective training (Touvron et al., 2023). However, real-world environments are rarely so accommodating, e.g., large KBs
contain complex structures and schema items, and building a representative corpus is more challenging. Inconsistency in data distribution during training and inference, as shown in Figure 1, may negatively impact the performance and robustness of LMs. Given this backdrop, it becomes essential to approach the task of grounding LMs from a multifaceted standpoint. Specifically, we believe that a detailed understanding of this grounding task requires exploring **environmental**, **linguistic**, and **model learning** aspects.
Through these explorations, this paper aims to provide a more holistic understanding of the challenges and opportunities that arise in the LM grounding and KBQA benchmarking processes. We review existing works and identify several challenges, including: 1) generalization to unseen domains at the schema level Gu et al. (2021), 2) adaptation to paraphrases featuring diverse language variations Su et al. (2016), 3) transferability across datasets with novel schema items, query patterns, and linguistic styles Cao et al. (2022), and 4) few-shot in-context learning capabilities of grounding LLMs Li et al. (2023). These aspects allow us to dissect and critique the robustness and applicability of LMs when operating in complex real-world scenarios, thereby fulfilling our objective of effective interaction between LMs and environments.
To substantiate the presence of these challenges and obtain additional empirical insights, we undertake a comprehensive experimental investigation of the above aspects. To mitigate the negative effects of inconsistent data distributions in our experiments, we propose a data augmentation method for any LM and a retrieval augmentation method for any LLM. Our findings reveal that even when equipped with such techniques (the highest EM score is achieved on the GrailQA benchmark), advanced small and large LMs still fall short of effectively tackling the majority of these challenges. A striking example is the large gap between the best practice without WebQSP Yih et al. (2016) fine-tuning (F1 43.0%) and the fine-tuned state-of-the-art (F1 79.6%) (§6), suggesting the weak robustness of LM-driven KBQA models on an unseen dataset. Such observations highlight an urgent need for future research in data collection methodologies and LM learning paradigms. Meanwhile, we expect the evaluation approaches in this paper to provide a reference for future benchmark construction, developing metrics that take robustness into account (§3).
## 2 Preliminary
In this paper, the knowledge base (KB) refers to an RDF2 graph, consisting of triples \((s,r,o)\), where \(s\) is a subject, \(r\) is a relation, and \(o\) is an object. **Logical form** refers to a formal query that can be executed on a KB, such as a SPARQL query or an S-expression Gu et al. (2021). **Schema** refers to the rdfs:Class (class) and rdf:Property (relation) in the RDF Schema3.
Footnote 2: Resource Description Framework by W3C standard.
Footnote 3: [https://www.w3.org/TR/rdf12-schema/](https://www.w3.org/TR/rdf12-schema/)
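To make the notion of a logical form above concrete, the snippet below pairs an illustrative natural-language question with an S-expression and a roughly equivalent SPARQL query; the prefix and all schema identifiers are invented for illustration and are not drawn from Freebase or from any of the benchmarks discussed below.

```python
example = {
    "question": "which rivers flow through the country whose capital is berlin?",
    # S-expression-style logical form (syntax modeled on GrailQA's S-expressions)
    "s_expression": "(AND geography.river (JOIN geography.river.flows_through "
                    "(JOIN (R location.country.capital) m.berlin)))",
    # A roughly equivalent SPARQL query; the prefix and schema items are placeholders
    "sparql": (
        "PREFIX ns: <http://example.org/> "
        "SELECT DISTINCT ?x WHERE { "
        "?c ns:location.country.capital ns:m.berlin . "
        "?x ns:geography.river.flows_through ?c . "
        "?x ns:type.object.type ns:geography.river . }"
    ),
}
```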
## 3 Robustness Challenge
In this paper, the robustness of a KBQA model refers to its ability to adapt to various natural language inputs and maintain consistent performance when data distribution shifts. Due to the data discrepancy between the training corpus of LMs and
Figure 1: Inconsistent KBQA data distribution between training and inference.
KB environments, LMs face challenges from environmental, linguistic, and model learning aspects.
### Environmental Aspect
One of the principal challenges from an environmental perspective is **schema-level generalization**. The RDF Schema offers a data-modeling vocabulary, which is essential for querying a KB. As shown in Table 1, most KBQA benchmarks operate under the assumption that the schema distribution is identically and independently distributed (i.i.d.) between training and testing scenarios. This means that the schema items encountered during testing are already familiar to the model from the training data. However, this assumption often falls short in large-scale KBs, which contain thousands of schema items. Currently, only a limited number of benchmarks aim to evaluate the ability to generalize at the schema level, especially when it comes to handling non-i.i.d. schema items. The SimpleQuestions-Balance dataset Wu et al. (2019) reconfigures the i.i.d. SimpleQuestions dataset Petrochuk and Zettlemoyer (2018) to ensure that half of the questions in the testing (or validation) set are mapped to KB relations not present in the training data. GrailQA Gu et al. (2021) is structured to evaluate three levels of schema-level generalization: i.i.d. (25%), compositional generalization (25%), and zero-shot generalization (50%). GraphQuestions Su et al. (2016) presents an even more rigorous test, as its testing set features schema items seldom encountered in the training data. Given the impracticality of maintaining the i.i.d. assumption in real-world applications, these datasets offer a more realistic portrayal of the generalization challenges that QA systems face in practical settings. Despite recent advances Shu et al. (2022); Gu et al. (2022), challenges in compositional and zero-shot generalization are far from solved.
### Linguistic Aspect
Natural language is variable in form, making question understanding challenging for KBQA models. One common way this variety shows up is through paraphrasing. In this paper, a paraphrase set denotes different ways to express the same logical form, as illustrated in Table 14. To gauge how well a KBQA model can adapt to these different forms of natural language, a straightforward approach is to test if the model can accurately answer paraphrased questions that it has already answered correctly before. Unfortunately, as shown in Table 1, many KBQA benchmarks do not account for paraphrasing with only one utterance for each logical form. Exceptionally, some datasets Su et al. (2016); Dubey et al. (2019); Gu et al. (2021) are based on automatically generated logical forms and include multiple natural language expressions for the same logical form (template). These data characteristics highlight the difficulties in adapting to paraphrased questions.
### Integrated Aspect
KBQA evaluation often hinges on a single benchmark dataset, which complicates ascertaining whether model performance remains consistent across novel scenarios. This form of robustness, termed **cross-dataset transfer** in this paper,
| **Benchmark** | **KB** | **Size** | **LF** | **Generalization** | **Para.** |
|---|---|---|---|---|---|
| WebQuestions Berant et al. (2013) | Freebase | 5,810 | N/A | i.i.d. | ✗ |
| SimpleQuestions Bordes et al. (2015) | Freebase | 108,442 | N/A | i.i.d. | ✗ |
| WebQuestionsSP Yih et al. (2016) | Freebase | 4,737 | SPARQL | i.i.d. | ✗ |
| GraphQuestions Su et al. (2016) | Freebase | 5,166 | Graph query | comp.+zero | ✓ |
| LC-QuAD Trivedi et al. (2017) | DBpedia | 5,000 | SPARQL | i.i.d. | ✗ |
| CWQ Talmor and Berant (2018) | Freebase | 34,689 | SPARQL | i.i.d. | ✗ |
| LC-QuAD 2.0 Dubey et al. (2019) | Wikidata | 30,000 | SPARQL | i.i.d. | ✓ |
| SQB Wu et al. (2019) | Freebase | 108,443 | N/A | i.i.d.+zero | ✗ |
| CFQ Keysers et al. (2020) | Freebase | 239,357 | SPARQL | comp. | ✗ |
| GrailQA Gu et al. (2021) | Freebase | 64,331 | S-expression | i.i.d.+comp.+zero | ✓ |
| KQA Pro Cao et al. (2022) | Wikidata | 117,970 | KoPL | i.i.d. | ✗ |
| QALD series Perevalov et al. (2022) | DBpedia | 558 | SPARQL | comp. | ✗ |

Table 1: Selected KBQA benchmarks. LF: logical forms. Generalization settings follow Gu et al. (2021). _i.i.d._ denotes that the schema distribution in the test set is the same as the training set. _comp._ and _zero_ denote compositional and zero-shot generalization, respectively. _Para._ denotes paraphrases, i.e., questions containing the same semantics (machine-generated paraphrases are not included).
combines both the environmental and linguistic aspects discussed earlier and is more difficult to achieve. This is because construction methods vary across datasets, as do schema distributions and natural language expressions. Specifically, KBQA dataset construction generally falls into two distinct categories: 1) Graph Search and Crowdsourcing: in this approach, logical forms or triples are initially extracted from a KB, where structures or operators of logical form are usually finite. Subsequently, these are converted into natural language utterances through crowdsourcing techniques Bordes et al. (2015); Trivedi et al. (2017). 2) Human Curation and Parsing: logical forms are labeled directly from human-provided utterances Berant et al. (2013); Perevalov et al. (2022). Existing works Gu et al. (2021); Cao et al. (2022) suggest that models pre-trained on large-scale datasets can adapt reasonably well to other target datasets, such as WebQuestionsSP Yih et al. (2016). However, the necessity for fine-tuning these pre-trained models on the intended target dataset remains imperative for achieving optimal performance. That is, despite the advantages offered by pre-training on expansive KBQA datasets, models still encounter challenges in transferring directly to previously unseen target datasets while sustaining high performance.
### Learning Aspect
Aside from considering environmental and linguistic factors, it is crucial to focus on the model learning process. Recently, LLMs like the GPT series OpenAI (2023) have demonstrated exceptional capabilities across a variety of tasks, outperforming smaller yet potent LMs such as BERT Devlin et al. (2019) and T5 Raffel et al. (2020). Despite these advancements, these LLMs face substantial challenges when interacting with environments. One notable issue is their predominant reliance on an in-context **learning paradigm** as opposed to fine-tuning, as a trade-off between computational cost and model efficiency. In comparison to fine-tuning, in-context learning offers the advantage of reduced training costs but at the expense of a weaker perception of the environment. Inconsistent data distribution between natural language pre-training and reasoning over structured knowledge contexts leads to poor performance. For instance, a discernible performance gap exists between KBQA models that employ in-context learning with Codex Chen et al. (2021) and those built on fine-tuned LMs Gu et al. (2022); Li et al. (2023). Therefore, it is crucial to consider the limitations of the commonly used in-context learning paradigm and improve the grounding approaches.
## 4 Approach
The goal of our proposed approaches is to reduce the negative effects of inconsistent data distributions and to maximize the potential of LMs in our experimental investigations. We introduce both data augmentation and retrieval augmentation techniques.
### Data Augmentation for LMs
Off-the-shelf datasets of limited size may make an LM easily overfit and not adaptable to large KBs. To address the problem that many domains in the KB are often not collected as training data, we propose a data augmentation method named **G**raph se**A**rch and quest**I**on generatio**N** (**GAIN**). GAIN applies to KBQA data based on either logical forms or triples, and scales data volume and distribution through four steps (a sketch is given below): 1) Graph search: sampling logical forms or triples from arbitrary domains in the KB, without being restricted to any particular KBQA dataset. 2) Training a question generator on existing KBQA datasets, i.e., learning to convert logical forms or triples into natural language questions. 3) Verbalization: using the question generator from step 2 to verbalize the sampled logical forms or triples from step 1, thus creating synthetic questions. 4) Training data expansion: before fine-tuning any neural models on KBQA datasets, GAIN-synthetic data can be used to train these models or to expand the corpus of in-context samples for LLMs. That is, as a data augmentation method, GAIN is not a KBQA model, but it is used to augment a base KBQA model.
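The following is a minimal sketch of this loop. The toy KB, the `graph_search` routine and the stubbed question generator are illustrative stand-ins (in our experiments the generator is a fine-tuned T5 model, see SS5), not the released implementation.

```python
import random
from collections import defaultdict

def graph_search(triples, n_samples, max_hops=2):
    """Step 1: sample 1- or 2-hop paths from arbitrary domains of the KB."""
    out_edges = defaultdict(list)
    for s, r, o in triples:
        out_edges[s].append((r, o))
    samples = []
    for _ in range(n_samples):
        node = random.choice(list(out_edges))
        path = []
        for _ in range(random.randint(1, max_hops)):
            if not out_edges[node]:
                break
            r, o = random.choice(out_edges[node])
            path.append((node, r, o))
            node = o
        if path:
            samples.append(path)
    return samples

def verbalize(path, question_generator):
    """Step 3: turn a sampled path into a synthetic question.

    Step 2 (not shown) trains `question_generator` on existing KBQA datasets."""
    return question_generator(path)

# toy KB and a stub standing in for the fine-tuned T5 question generator
toy_kb = [("ent.dune", "book.written_work.author", "ent.frank_herbert"),
          ("ent.frank_herbert", "people.person.place_of_birth", "ent.tacoma")]
stub_generator = lambda path: f"synthetic question about {path[-1][1]}?"
synthetic = [(p, verbalize(p, stub_generator)) for p in graph_search(toy_kb, n_samples=3)]
# Step 4: `synthetic` pairs would pre-train a base KBQA model (or extend the
# in-context sample pool for an LLM) before fine-tuning on a target dataset.
```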
### Retrieval Augmentation for LLMs
As a trade-off between cost and effectiveness, we experiment with the prevalent in-context learning paradigm but attempt to improve the quality of in-context samples. We use advanced retrieval methods based on smaller LMs as plug-ins to augment the LLM, similar to the SuperICL approach Xu et al. (2023). Specifically, our steps to generate an LLM prompt for each question are as follows (a sketch is given below). 1) Given an input question, we retrieve \(k\) questions (\(k\)-shot) with BM25 Robertson et al. (2009) from the corpus (the combination of the KBQA training set and the GAIN-synthetic dataset). 2) To assist with grounding the LLM, we retrieve KB contexts with off-the-shelf retrievers for the \(k\) samples and the input question4; the value of retrieval augmentation for KB environments has already been shown with fine-tuned LMs (Shu et al., 2022).
Footnote 4: The prompt example is demonstrated in Appendix A.
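A minimal sketch of this prompt-construction step is given below, using the rank_bm25 package for question retrieval. The corpus entries, the prompt wording, and the omitted KB-context retrieval are simplified placeholders for the actual TIARA+GAIN retrievers.

```python
from rank_bm25 import BM25Okapi

# training-set + GAIN-synthetic (question, logical form) pairs; contents are illustrative
corpus = [
    {"question": "what team does LeBron James play for", "s_expression": "(JOIN ... )"},
    {"question": "who is the author of the novel Dune", "s_expression": "(JOIN ... )"},
]
bm25 = BM25Okapi([c["question"].lower().split() for c in corpus])

def build_prompt(question, k=5, kb_contexts=""):
    """Retrieve k similar questions (step 1) and assemble the few-shot prompt (step 2)."""
    shots = bm25.get_top_n(question.lower().split(), corpus, n=k)
    demo = "\n\n".join(f"Q: {c['question']}\nS-expression: {c['s_expression']}" for c in shots)
    # `kb_contexts` would carry retrieved entities, schema items and exemplary logical forms
    return (f"Translate the question into an S-expression.\n\n{demo}\n\n"
            f"{kb_contexts}\nQ: {question}\nS-expression:")

print(build_prompt("which team did Michael Jordan play for", k=2))
```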
## 5 Experiments
### Setup
DatasetsWe use GrailQA (Gu et al., 2021) and GraphQuestions (Su et al., 2016) for schema-level generalization and paraphrase settings. We also study generalization on SimpleQuestions-Balance (SQB) (Wu et al., 2019) with unseen relations. WebQuestionsSP (WebQSP) (Yih et al., 2016) is employed for cross-dataset transfer experiments as it is based on practical human-curated search logs. All experiments use S-expression as the logical form due to its clear and concise structure. Entity linking results are taken from TIARA (Shu et al., 2022) for GrailQA and WebQSP, and ArcaneQA (Gu and Su, 2022) for GraphQuestions.
ModelWe report the performance of comparison models from their papers. For the relation linking task on SQB, we use BERT (Devlin et al., 2019) as the base model for GAIN. For KBQA tasks, we use the open-source advanced model TIARA (Shu et al., 2022) as the base model for GAIN, due to its strong performance on zero-shot schema items5. TIARA is composed of multi-grained retrievers and a generator, with the retrievers providing KB contexts6 for the generator. The term "TIARA+GAIN" represents a model (both the retrievers and the generator) that is first tuned using GAIN synthetic data and subsequently fine-tuned on a target dataset. For LLM evaluation, we use the latest gpt-3.5-turbo-06137 model, and the few-shot contexts are retrieved from the combination of GrailQA training set and synthetic dataset using the TIARA+GAIN retrievers.
Footnote 5: Pangu (Gu et al., 2022) also uses entity linking results from TIARA.
Footnote 6: Entities, exemplary logical forms (ELF) and schema items are retrieved.
MetricsFollowing previous works, we use Exact Match (EM), F1, and Hits@1 to measure the performance of KBQA models. To evaluate adaptability to paraphrases (SS3.2), we calculate the standard deviation (std) of EM/F1 for questions of each logical form template. As shown in Equation 1, suppose there are \(n\) sets of paraphrases in the dataset, each set of paraphrases corresponds to a logical form template with \(m\) natural language expressions, and the F1 score obtained by the KBQA model on the \(j\)-th question of the \(i\)-th set of paraphrases is \(F1_{i,j}\). The metric \(Std_{F1}\) first calculates the standard deviation of the F1 scores obtained by the model on the \(m\) questions for each set of paraphrases and then calculates the average of the \(n\) standard deviations. This metric is used to measure the robustness of the model to different representations of the same semantics, i.e., whether it can cope with diverse natural language expressions. A lower standard deviation indicates that the model is more adaptive to different expressions. \(Std_{EM}\) is calculated in the same way.
\[Std_{F1}=\frac{1}{n}\sum_{i=1}^{n}\sqrt{\frac{\sum_{j=1}^{m}\left(F1_{i,j}-\overline{F1}_{i}\right)^{2}}{m}} \tag{1}\]
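For reference, a small numpy sketch of equation (1) is given below; the toy scores are illustrative.

```python
import numpy as np

def std_f1(f1_by_template):
    """Equation (1): mean over paraphrase sets of the (population) std of per-question F1."""
    return float(np.mean([np.std(scores) for scores in f1_by_template]))

# two paraphrase sets (logical form templates), three surface forms each
print(std_f1([[1.0, 1.0, 0.0], [0.5, 0.5, 0.5]]))  # ~0.236: unstable on the first template
```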
### Implementation Details
Our experiments are run on a machine with an NVIDIA A100 GPU and up to 504GB of RAM. We implement our models using PyTorch (Paszke et al., 2019) and Hugging Face8. For TIARA+GAIN (T5-3B), training the logical form generator on the synthetic dataset takes about 100 hours.
Footnote 8: [https://huggingface.co/](https://huggingface.co/)
Training Question GeneratorWe fine-tune the T5-base model (Raffel et al., 2020) to convert S-expressions or triples into natural language questions.
| Statistic | Count |
|---|---|
| #question | 127,329 |
| #one-hop | 78,668 |
| #two-hop | 48,661 |
| #domain | 759 |
| #none | 115,221 |
| #count | 7,115 |
| #comparatives | 1,874 |
| #superlatives | 3,119 |
| #class | 5,078 |
| #relation | 12,942 |
| #entity | 46,645 |

Table 2: Statistics for the synthetic dataset of logical forms. _none_ denotes no function.
| Statistic | Count |
|---|---|
| #question | 162,557 |
| #relation | 7,349 |
| #subject | 108,804 |
| #domain | 673 |

Table 3: Statistics for the synthetic dataset of triples. _Subject_ denotes subject entities.
We set the beam size to 10, the learning rate to 3e-5, the number of epochs to 10, and the batch size to 8.
Training TIARAThe training of the TIARA model [20] follows its original settings, including the setting of hyperparameters and the calculation of metrics. Note that Hits@1 on TIARA is obtained by randomly selecting one answer for each question 100 times. Both the schema retriever and generator of TIARA are pre-trained on synthetic data and then fine-tuned on KBQA datasets. Since GraphQuestions has no official training-valid split, we randomly take 200 questions from the original training set as the valid set.
Training the Relation Linking ModelWe use the BERT-base-uncased model [10] to rank candidate relations for SQB, and the input form is the same as the schema retriever of TIARA. We set the learning rate to 3e-5, the batch size to 256, and the maximum number of epochs to 3 with early stopping.
### Data Augmentation
The statistics of the GAIN-synthetic datasets for both logical forms and triples are shown in Tables 2 and 3, respectively9. Note that, in principle, the sampling of the GAIN method is not limited to the scale of the synthetic data we use here.
Footnote 9: Details of synthetic data are shown in Appendix B.
## 6 Analysis
Schema-Level GeneralizationAs shown in Tables 4 and 5, the models perform significantly better on i.i.d. than on compositional and zero-shot generalization, with zero-shot generalization being the most challenging. Besides, an increased number of model parameters, combined with richer data from GAIN, significantly enhances the generalization capabilities of T5 models. TIARA+GAIN achieves the highest EM scores, including on zero-shot questions. This points to promising directions for further improving LM generalization capabilities, i.e., the positive effect of synthetic data and parameter scale on training LMs. However, it is important to note that fine-tuned models consistently outperform
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Overall**} & \multicolumn{2}{c}{**I.1D.**} & \multicolumn{2}{c}{**Compositional**} & \multicolumn{2}{c}{**Zero-shot**} \\ \cline{2-10}
**Model on GrailQA Test Set** & **EM** & **F1** & **EM** & **F1** & **EM** & **F1** & **EM** & **F1** \\ \hline \hline \multicolumn{10}{c}{_Fine-tuned Models_} \\ BERT + Ranking [20] & 50.6 & 58.0 & 59.9 & 67.0 & 45.5 & 53.9 & 48.6 & 55.7 \\ Rn-KBQA [20] & 68.8 & 74.4 & 86.2 & 89.0 & 63.8 & 71.2 & 63.0 & 69.2 \\ TIARA (T5-base) [20] & 73.0 & 78.5 & 87.8 & 90.6 & 69.2 & 76.5 & 68.0 & 73.9 \\ DecAF (FiD-3B) [20] & 68.4 & 78.8 & 84.8 & 89.9 & 73.4 & 81.8 & 58.6 & 72.3 \\ Pangu (BERT-base) [20] & 73.7 & 79.9 & 82.6 & 87.1 & 74.9 & 81.2 & 69.1 & 76.1 \\ Pangu (T5-large) [20] & 74.8 & 81.4 & 82.5 & 87.3 & **75.2** & **82.2** & 71.0 & 78.4 \\ Pangu (T5-3B) [20] & 75.4 & **81.7** & 84.4 & 88.8 & 74.6 & 81.5 & 71.6 & **78.5** \\ \hline \hline \multicolumn{10}{c}{_Codex-driven Models_} \\ B-BINDER (6)-R [19] & 53.2 & 58.5 & 72.5 & 77.4 & 51.8 & 58.3 & 45.0 & 49.9 \\ Pangu (Codex) [20] & 56.4 & 65.0 & 67.5 & 73.7 & 58.2 & 64.9 & 50.7 & 61.1 \\ \hline \hline \multicolumn{10}{c}{_GAIN-augmented Models_} \\ TIARA + **GAIN** (T5-base) & 75.1 & 80.6 & 88.3 & 91.0 & 73.0 & 79.6 & 69.9 & 76.4 \\ TIARA + **GAIN** (T5-3B) & **76.3** & 81.5 & **88.5** & **91.2** & 73.7 & 80.0 & **71.8** & 77.8 \\ GPT-3.5-turbo (5-shot) & 66.6 & 71.4 & 82.7 & 85.3 & 60.5 & 66.3 & 61.9 & 67.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: EM and F1 scores (%) on the hidden test set of GrailQA.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model on GraphQuestions** & **F1**(\(\uparrow\)) & **Std**(\(\downarrow\)) \\ \hline \hline \multicolumn{3}{c}{_GraphQuestions on Freebase 2013-07_} \\ UDepLambda [21] & 17.7 & - \\ PARA4QA [22] & 20.4 & - \\ SPARQA [20] & 21.5 & - \\ BERT + Ranking [20] & 25.0 & - \\ ATEAQA [20] & 31.8 & - \\ TIARA\({}^{\clubsuit}\)[20] & 37.9 & **0.141** \\ KB-BINDER (6) [19] & 39.5 & - \\ TIARA + **GAIN** (T5-base) & 45.5 & 0.153 \\ TIARA + **GAIN** (T5-3B) & **48.7** & 0.180 \\ \hline \multicolumn{3}{c}{_GraphQuestions on Freebase 2015-08-09_} \\ BERT + Ranking [20] & 27.0 & - \\ ATEAQA [20] & 34.3 & - \\ TIARA\({}^{\clubsuit}\)[20] (T5-base) & 41.2 & **0.157** \\ Pangu (Codex) [20] & 44.3 & - \\ PANG (T5-3B) [20] & **62.2** & - \\ TIARA + **GAIN** (T5-base) & 49.5 & 0.170 \\ TIARA + **GAIN** (T5-3B) & 53.0 & 0.200 \\ \hline \hline \end{tabular}
\end{table}
Table 5: F1 scores (%) and average standard deviation (std) of F1 scores for each set of paraphrases on the test set of GraphQuestions. The setting for Freebase 2015-08-09 is described by Gu and Su (2022). \({}^{\clubsuit}\) denotes our replication results.
few-shot learning models, regardless of whether the schema is seen or not. Given the training and inference costs of LLMs, their performance has yet to show any superiority in this semantic parsing task.
Cross-Dataset TransferTo emulate an unseen real-world scenario, we evaluate the performance of pre-trained models on the human-curated WebQSP dataset without fine-tuning, as depicted in Table 8. BERT+Ranking (Gu et al., 2021) and TIARA+GAIN (Shu et al., 2022) are trained on the large-scale GrailQA dataset. We compare these results to the state-of-the-art Pangu (Gu et al., 2022), which is fine-tuned on WebQSP and achieves an F1 score of 79.6%. Although GAIN and larger models offer some advantages, the performance of these pre-trained models without fine-tuning is still considerably lower than Pangu's. We attribute this to the significant differences between training and test data, as shown in Table 9. The question length, the difficulty of entity/relation linking10, and the proportion of unseen schema vary dramatically across KBQA datasets. These discrepancies arise from the dataset construction process: WebQSP is an annotation of search logs, whereas the remaining datasets are derived from graph search and crowdsourcing. To further enhance robustness in cross-dataset transfer, we believe that better data collection methods are required to obtain diverse and balanced training data. Additionally, the representation of the logical form increases the transfer difficulty, as the S-expression used in the GrailQA dataset cannot express all queries in WebQSP.
Footnote 10: Measured by literal similarity: [https://anhaidgroup.github.io/py_stringmatching/v0.3.x/PartialRatio](https://anhaidgroup.github.io/py_stringmatching/v0.3.x/PartialRatio).
LLM with In-context LearningWe evaluate the performance of GPT-3.5 with in-context learning on the GrailQA dataset. In the prompt, we provide the task description and the few-shot KB contexts. As illustrated in Table 10, when provided with contexts from the TIARA+GAIN retrievers, GPT-3.5 outperforms the ELF-only baselines but falls short compared to the T5 generators. Among the GPT-3.5 predictions, 79.62% are taken directly from substrings of the corresponding prompts, achieving an average F1 score of 86.19% for this portion. However, the remaining predictions are not part of their prompts and are entirely new predictions generated by GPT-3.5, with an average F1 score of merely 30.29%. Although a baseline level is attained, these results suggest that GPT-3.5 cannot be accurately grounded to the KB environment when it does not directly utilize the retrievers' contexts. This also reflects a disconnect between natural language pre-training and KB contexts for the LLM, and the faithfulness and controllability of grounding LLMs are not yet guaranteed under the current approach (Gu et al., 2022). To mitigate this problem, alternative paradigms should be explored, such as tool learning (Schick et al., 2023) and multi-step planning (Liu et al., 2023), which enable more refined access and control over environments.
## 7 Conclusion
We find that, despite the voluminous training corpora available for LMs, the data does not yet adequately capture the intricate complexities of environments such as large-scale KBs. This underscores the importance of focusing on the robustness challenges. Notably, LLMs sometimes blindly follow the provided prompt, which indicates that existing methodologies for grounding LLMs are yet to prove their efficacy and superiority. Future research issues include collecting more balanced environment-specific corpora and improving the LLM learning paradigm. For the corpus collection problem, our experiments show some potential for data augmentation techniques.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Overall**} & \multicolumn{2}{c}{**I.I.D.**} & \multicolumn{2}{c}{**Compositional**} & \multicolumn{2}{c}{**Zero-shot**} \\ \cline{2-9}
**Model on GrailQA Valid Set** & **EM** & **F1** & **EM** & **F1** & **EM** & **F1** & **EM** & **F1** \\ \hline BERT + Ranking (Gu et al., 2021) & 51.0 & 58.4 & 58.6 & 66.1 & 40.9 & 48.1 & 51.8 & 59.2 \\ TIARA ELF only (Shu et al., 2022) & 67.2 & 72.9 & 72.8 & 76.7 & 55.3 & 60.7 & 69.7 & 76.3 \\ RnG-KBQA (Ye et al., 2022) & 71.4 & 76.8 & 86.7 & 89.0 & 61.7 & 68.9 & 68.8 & 74.7 \\ DecAF (FiD-3B) (Yu et al., 2022) & - & 81.4 & - & 89.7 & - & **80.1** & - & 78.4 \\ TIARA (T5-base) (Shu et al., 2022) & 75.3 & 81.9 & 88.4 & 91.2 & 66.4 & 74.8 & 73.3 & 80.7 \\ Pangu (T5-3B) (Gu et al., 2022) & 75.8 & 83.4 & - & - & - & - & - & - \\ \hline TIARA + **GAIN** ELF only & 67.4 & 73.6 & 72.7 & 77.5 & 54.7 & 60.5 & 70.3 & 77.3 \\ TIARA + **GAIN** (T5-base) & 77.1 & 83.5 & 89.0 & 91.9 & 68.6 & 75.5 & 75.4 & 83.2 \\ TIARA + **GAIN** (T5-3B) & **77.1** & **83.8** & **89.0** & **92.1** & **68.8** & 76.1 & **75.4** & **83.4** \\ GPT-3.5-turbo (5-shot) & 69.7 & 74.8 & 83.0 & 85.5 & 58.7 & 64.6 & 68.6 & 74.4 \\ \hline \hline \end{tabular}
\end{table}
Table 10: EM and F1 scores (%) on the GrailQA valid set. ELF denotes exemplary logical form (Shu et al., 2022).
## Limitations
For question generation, the verbalization process of the GAIN method relies heavily on large-scale KBQA annotations. The generated questions may be similar to the training data, and overly complex logical forms (e.g., with three or more hops) are difficult to convert into natural language questions. Besides, synthetic data is less diverse and natural than human annotations, though it improves generalization performance.
## Ethics Statement
The proposed approach GAIN could be used on any KB for data augmentation. The Freebase Bollacker et al. (2008) used in this work is a KB that has been publicly released and manually reviewed. For uncensored KBs, harmful information collected during graph search could propagate into the synthetic data and lead LMs to generate harmful answers.
|
2301.13621 | Spectral response between particle and fluid kinetic energy in decaying
homogeneous isotropic turbulence | In particle-laden turbulence, the Fourier Lagrangian spectrum of each phase
is regularly computed, and analytically derived response functions relate the
Lagrangian spectrum of the fluid- and the particle phase. However, due to the
periodic nature of the Fourier basis, the analysis is restricted to
statistically stationary flows. In the present work, utilizing the bases of
time-focalized proper orthogonal decomposition (POD), this analysis is extended
to temporally non-stationary turbulence. Studying two-way coupled
particle-laden decaying homogeneous isotropic turbulence for various Stokes
numbers, it is demonstrated that the temporal POD modes extracted from the
dispersed phase may be used for the expansion of both fluid- and particle
velocities. The POD Lagrangian spectrum of each phase may thus be computed from
the same set of modal building blocks, allowing the evaluation of response
functions in a POD frame of reference. Based on empirical evaluations, a model
for response functions in non-stationary flows is proposed. The related
energies of the two phases are well approximated by simple analytical
expressions dependent on the particle Stokes number. It is found that the
analytical expressions closely resemble those derived through Fourier analysis
of statistically stationary flows. These results suggest the existence of an
inherent spectral symmetry underlying the dynamical systems consisting of
particle-laden turbulence, a symmetry which spans across
stationary/non-stationary particle-laden flow states. | Martin Schiødt, Azur Hodzic, Fabien Evrard, Max Hausmann, Berend Van Wachem, Clara M. Velte | 2023-01-31T13:25:14Z | http://arxiv.org/abs/2301.13621v1 | Spectral response between particle and fluid kinetic energy in decaying homogeneous isotropic turbulence
###### Abstract
In particle-laden turbulence, the Fourier Lagrangian spectrum of each phase is regularly computed, and analytically derived response functions relate the Lagrangian spectrum of the fluid- and the particle phase. However, due to the periodic nature of the Fourier basis, the analysis is restricted to statistically stationary flows. In the present work, utilizing the bases of time-focalized proper orthogonal decomposition (POD), this analysis is extended to temporally non-stationary turbulence. Studying two-way coupled particle-laden decaying homogeneous isotropic turbulence for various Stokes numbers, it is demonstrated that the temporal POD modes extracted from the dispersed phase may be used for the expansion of both fluid- and particle velocities. The POD Lagrangian spectrum of each phase may thus be computed from the same set of modal building blocks, allowing the evaluation of response functions in a POD frame of reference. Based on empirical evaluations, a model for response functions in non-stationary flows is proposed. The related energies of the two phases is well approximated by simple analytical expressions dependent on the particle Stokes number. It is found that the analytical expressions closely resemble those derived through Fourier analysis of statistically stationary flows. These results suggest the existence of an inherent spectral symmetry underlying the dynamical systems consisting of particle-laden turbulence, a symmetry which spans across stationary/non-stationary particle-laden flow states.
Footnote †: Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, United States
## I Introduction
Recent years have seen renewed attention directed towards particle-laden turbulence, due to its relevance in numerous engineering and natural settings (Brandt and Coletti (2022)). Theoretical models and improved experimental and numerical methods have led to advancements in our understanding of particle dynamics, herein counting acceleration statistics, preferential sampling and particle clustering to name a few (Toschi and Bodenschatz (2009); Gustavsson and Mehlig (2016); Maxey (2017)).
One focus of study has been the modulation of turbulence induced by two-way coupling (Druzhinin and Elghobashi (1999); Ferrante and Elghobashi (2003)). Here, the presence of particles in flows under zero gravity conditions has been shown to attenuate turbulent kinetic energy (TKE) at low wavenumbers and augment it at higher wavenumbers, leading to an increase in dissipation (Squires and Eaton (1994)). Inertial particles may, however, also act as sources of increased turbulence energy, and the total TKE may be either augmented or attenuated by the presence of a dispersed phase (Ferrante and Elghobashi (2003)). A key parameter identified in this regard is the particle Stokes number. Letournel _et al._ (2020) investigated TKE totals as a function of the Stokes number, and found an approximate threshold below which turbulence was augmented, and above which it was attenuated. Nevertheless, the same authors underlined the lack of consensus on a unique criterion for turbulence modulation by particles.
Ireland, Bragg, and Collins (2016a) investigated the large scale single-particle velocity statistics of inertial particles in homogeneous isotropic turbulence (HIT). Driven by the effects of inertial filtering and preferential sampling, the average particle kinetic energy normalized by the average fluid kinetic energy was shown to approximately follow a simple relation dependent on the Stokes number. Similar studies were conducted under gravity conditions by Good _et al._ (2014) and Ireland, Bragg, and Collins (2016b).
Although the study of particle-laden turbulence has rapidly progressed over the past decade, new theoretical tools are still needed in order to gain further insights into the dynamics (Brandt and Coletti (2022)). One such tool may be the particle proper orthogonal decomposition (PPOD) formulated by Schiødt _et al._ (2022), where Lagrangian particle velocities are decomposed into a set of modes that represent temporal particle dynamics. This tool is utilized in the present study, where the extracted modes are compared to those extracted for the fluid measured at fixed Eulerian mesh points using the temporal formulation of POD introduced by Aubry, Guyonnet, and Lima (1991). Both formulations of POD are briefly outlined in section II, and the constraints required for direct comparisons of fluid- and particle POD modes are listed.

Modal decomposition of fluid- and particle temporal dynamics allows for the evaluation of the Lagrangian spectrum of both phases in a POD frame of reference. In the current work, this leads to formulations of POD-based response functions that relate the energy of the two phases on a modal level. Although response functions based on the Fourier decomposition have previously been studied in stationary flows (Csanady (1963); Zhang, Legendre, and Zamansky (2019); Berk and Coletti (2021)), the advantage of the POD-based approach is that stationarity is not required, and the present study
is therefore focused on the analysis of various simulations of two-way coupled particle-laden decaying HIT. The analysis culminates in analytic expressions of POD-based response functions closely resembling those derived through Fourier analysis of stationary flows.
Section II gives a brief outline of the formulation of POD and the structure of the ensembles that will produce temporal modes representing fluid- and particle dynamics. A summary of the simulation setup is given in section III, which is followed by a presentation and discussion of results in section IV. Finally, our conclusions are given in section V.
## II Proper orthogonal decomposition
The main objective of POD is to extract a set of empirical basis functions \(\varphi=\{\varphi_{\alpha}\}_{\alpha=1}^{M}\) that represent dominating features of the studied dynamical system. The basis functions, also known as modes, are extracted by solving the eigenvalue problem
\[\mathcal{R}\varphi_{\alpha}=\lambda_{\alpha}\varphi_{\alpha},\quad\alpha\in[ 1:M], \tag{1}\]
where \(\mathbf{\lambda}=\{\lambda_{\alpha}\}_{\alpha=1}^{M}\) are the eigenvalues connected to each mode, and for the cases we study, these are real and sorted such that \(\lambda_{1}\geq\cdots\geq\lambda_{M}\geq 0\). The operator \(\mathcal{R}:\mathcal{H}\rightarrow\mathcal{H}\) is defined from the ensemble of empirical data \(\mathbf{u}=\{u^{(i)}\}_{i=1}^{N_{e}}\), and is dependent on the definition of the Hilbert space \(\mathcal{H}\) for which \(\varphi\) serves as an empirical orthonormal basis. Though the basis is not necessarily complete in \(\mathcal{H}\), each ensemble member may be decomposed into a weighted sum of modes, thus
\[u^{(i)}=\sum_{\alpha=1}^{M}c_{\alpha}^{(i)}\varphi_{\alpha},\quad i\in[1:N_{e}], \tag{2}\]
where the weights \(c_{\alpha}^{(i)}\) are known as the projection coefficients given by
\[c_{\alpha}^{(i)}=(u^{(i)},\varphi_{\alpha}). \tag{3}\]
Here \((\cdot,\cdot)\) denotes the inner product of \(\mathcal{H}\). The projection coefficients are connected to the eigenvalues \(\mathbf{\lambda}\) by the relation
\[\lambda_{\alpha}=\left\langle\left\{c_{\alpha}^{(i)}c_{\alpha}^{(i)*}\right\}_{i=1}^{N_{e}}\right\rangle,\quad\alpha\in[1:M], \tag{4}\]
where \((^{*})\) denotes both the complex conjugate transpose for a scalar and Hermitian transpose for a vector, and \(\left\langle\{\cdot\}_{i=1}^{N_{e}}\right\rangle\) is the ensemble average operator.
The definition of \(\mathcal{H}\) and what constitutes an ensemble member determines the interpretation of \(\varphi\) and \(\mathbf{\lambda}\). In section II.1 and section II.2 we briefly outline the discrete formulations of the Eulerian- and the Lagrangian (particle) POD, respectively, and show their dependency on the definition of \(\mathbf{u}\).
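As an illustration of equations (1)-(4) in the discrete setting, the following numpy sketch builds \(\mathcal{R}\) from an ensemble matrix and extracts modes, eigenvalues and projection coefficients. It is a generic implementation for moderate \(N\), not the authors' code.

```python
import numpy as np

def discrete_pod(U):
    """POD of an ensemble matrix U (rows are ensemble members u^(i) of length N).

    Returns eigenvalues (descending), modes (columns of Phi) and coefficients C = U Phi."""
    U = np.asarray(U, dtype=float)
    R = U.T @ U / U.shape[0]            # R = < u u* >, cf. equation (6)
    lam, Phi = np.linalg.eigh(R)        # R is symmetric, so the spectrum is real
    order = np.argsort(lam)[::-1]
    lam, Phi = lam[order], Phi[:, order]
    C = U @ Phi                         # projection coefficients, equation (3)
    return lam, Phi, C

# sanity check of equation (4): lambda_alpha = < c_alpha c_alpha* >
U = np.random.default_rng(1).standard_normal((4096, 400))
lam, Phi, C = discrete_pod(U)
assert np.allclose(lam, (C**2).mean(axis=0))
```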
### Eulerian POD
The most common application of POD is based on the fluid velocity \(\mathbf{u}_{f}(\mathbf{x},t)\in\mathbb{R}^{D}\) measured at fixed mesh points in a Eulerian grid at equidistant sample times. Following the classical interpretation of POD (Lumley (1967)) an ensemble member may in this discrete case be formed by
\[\mathbf{u}^{(i)}=\left[\mathbf{u}_{f}^{(i)}(\mathbf{x}_{1},t_{0})^{*}\ \cdots\ \mathbf{u}_{f}^{(i)}(\mathbf{x}_{N_{g}},t_{0})^{*}\ \cdots\ \mathbf{u}_{f}^{(i)}(\mathbf{x}_{N_{g}},t_{N_{t}-1})^{*}\right]^{*}. \tag{5}\]
Here \(\mathbf{u}_{f}^{(i)}\) is the \(i\)'th fluid velocity realization, \(\mathbf{x}_{g}\in\mathbb{R}^{D}\), \(g\in[1:N_{g}]\) are the Eulerian mesh points and \(t_{n}\in T\), \(n\in[0:N_{t}-1]\) are the sample times of the temporal domain \(T\). In this case \(\mathbf{u}^{(i)}\in\mathcal{H}=\mathbb{R}^{N}\), where \(N=DN_{g}N_{t}\). \(\mathcal{H}\) is equipped with the standard inner product \((\mathbf{w}_{1},\mathbf{w}_{2})=\mathbf{w}_{2}^{*}\mathbf{w}_{1}\), and the operator \(\mathcal{R}\) in equation (1) is given by
\[\mathcal{R}=\left\langle\left\{\mathbf{u}^{(i)}\mathbf{u}^{(i)*}\right\}_{i=1}^{N_{e}} \right\rangle\in\mathbb{R}^{N\times N}. \tag{6}\]
Solving equation (1) then results in a set of spatio-temporal modes that are optimal with respect to energy, where \(\mathbf{\lambda}\) represents the energy of each mode. However, the amount of data needed to generate \(\varphi\) often makes this classical approach infeasible, as several uncorrelated fluid flow realizations are needed to generate the data. Instead, an approach popularized by Sirovich (1987) and Aubry _et al._ (1988) is to extract spatially orthogonal modes, with time dependent projection coefficients. This is what Towne, Schmidt, and Colonius (2018) refers to as the _space-only_ POD, and in a statistically stationary flow an ensemble member may be given by the fluid velocity measured at all grid points at a single sample time. From one fluid realization several ensemble members may thus be generated, and the ensemble average operator reduces to a temporal average.
In the current work we will focus on what we term the _time-only_ POD (TPOD) and its relation to PPOD. The TPOD is also formulated in the continuous case (Aubry, Guyonnet, and Lima (1991); Aubry (1991)) and as an analogy to its spatial counterpart it produces a set of temporally orthogonal modes, with spatially dependent projection coefficients. An ensemble member is in this case given by
\[\mathbf{u}^{(i)}=\left[\mathbf{u}_{f}(\mathbf{x}_{i},t_{0})^{*}\ \cdots\ \mathbf{u}_{f}(\mathbf{x}_{i},t_{N_{t}-1})^{*}\right]^{*},\quad i\in[1:N_{e}], \tag{7}\]
i.e. the fluid velocity at a grid point \(i\) measured at sample times \(t_{n}\). Note that \(N_{e}\leq N_{g}\) when the ensemble members are taken from the same fluid realization, and that the fluid flow in that case should be homogeneous (Aubry (1991)), signifying that the temporal evolution is statistically equivalent in all grid points. The ensemble average operator then reduces to a spatial average and the operator \(\mathcal{R}\) is still given as in equation (6), although here \(N=DN_{t}\).
The modes extracted with TPOD represent the temporal evolution of the fluid velocity through a Eulerian mesh point, and \(\mathbf{\lambda}\) is connected to the energy
\[E(t)=\frac{1}{2}\left\langle\left\{\mathbf{u}_{f}^{*}(\mathbf{x}_{i},t)\mathbf{u}_{f}(\mathbf{ x}_{i},t)\right\}_{i=1}^{N_{e}}\right\rangle, \tag{8}\]
by
\[\sum_{n=0}^{N_{t}-1}E(t_{n})=\frac{1}{2}\sum_{\alpha=1}^{M}\lambda_{\alpha}\,. \tag{9}\]
### Particle POD
Schiødt _et al._ (2022) formulated PPOD as a method for decomposing the velocity of Lagrangian particles into a weighted sum of empirical modes. Like TPOD the method produces a set of temporal modes, however, the modes represent the dynamics of Lagrangian particles rather than the fluid dynamics at fixed Eulerian mesh points. The ensemble \(\mathbf{u}\) is in this formulation defined by the ensemble members
\[\mathbf{u}^{(i)}=\left[\mathbf{v}^{(i)}(t_{0})^{*}\ \cdots\ \mathbf{v}^{(i)}(t_{N_{t}-1})^{*}\right]^{*},\quad i\in[1:N_{e}]\,, \tag{10}\]
where
\[\mathbf{v}^{(i)}(t_{n})=\left[\mathbf{v}^{(i)}_{1}(t_{n})^{*}~{}\cdots~{}~{}\mathbf{v}^{(i )}_{N_{p}}(t_{n})^{*}\right]^{*}\,, \tag{11}\]
is the velocity of \(N_{p}\) Lagrangian particles measured at sample times \(t_{n}\in T\), \(n\in[0:N_{t}-1]\). Here \(\mathbf{u}^{(i)}\in\mathbb{R}^{N}\) with \(N=DN_{p}N_{t}\), since \(\mathbf{v}^{(i)}_{p}(t)\in\mathbb{R}^{D}\) is the velocity of a single particle. Choosing \(N_{p}=1\) for the remainder of the current work, we see that PPOD and TPOD ensemble members belong to the same Hilbert space \(\mathcal{H}=\mathbb{R}^{N}\), \(N=DN_{t}\). The mode-sets extracted with respectively TPOD and PPOD are therefore in this case directly comparable.
To generate a meaningful ensemble of Lagrangian particle velocities, the ensemble particles should belong to similar flows or be sampled from the same flow containing certain symmetries. We elaborate further on this point in section III.2.
In section IV both TPOD and PPOD analyses are applied to the Reynolds decomposed \(\mathbf{u}^{(i)}_{fluc}=\mathbf{u}^{(i)}-\langle\{\mathbf{u}^{(i)}\}_{i=1}^{N_{e}}\rangle\) rather than \(\mathbf{u}^{(i)}\). Thus, \(E(t)\) in equations (8)-(9) becomes a measure of TKE, and
\[\mathbf{u}^{(i)}=\left\langle\left\{\mathbf{u}^{(i)}\right\}_{i=1}^{N_{e}}\right\rangle +\sum_{\alpha=1}^{M}c^{(i)}_{\alpha}\mathbf{\varphi}_{\alpha},\quad i\in[1:N_{e}]\,. \tag{12}\]
However, \(\langle\{\mathbf{u}^{(i)}\}_{i=1}^{N_{e}}\rangle\approx\mathbf{0}\) for all TPOD and PPOD ensembles considered, and we will therefore interchangeably refer to \(\mathbf{\varphi}\) as the mode-set spanning both the signal \(\mathbf{u}^{(i)}\) and \(\mathbf{u}^{(i)}_{fluc}\).
## III Simulation
In the current work we consider the simulation of one single-phase flow and four different simulations of two-way coupled particle-laden turbulence. All simulations are performed within a periodic cube with edge length \(\ell\), discretized into \(N_{g}\) computational cells.
### Dynamical equations
We apply the Euler-Lagrange point-particle approach (Elghobashi and Truesdell (1992)) where the fluid velocity \(\mathbf{u}_{f}\) is computed at each time step by numerical integration of the incompressible Navier-Stokes equations on a Eulerian mesh, and particle velocities are obtained by integrating the governing particle equations of motion forward in time. For the Navier-Stokes equations a constant dynamic viscosity \(\mu_{f}\) and mass density \(\rho_{f}\) are used, and with \(p\) denoting pressure the equations are given by

\[\nabla\cdot\mathbf{u}_{f} =0\,, \tag{13a}\] \[\frac{\partial\mathbf{u}_{f}}{\partial t}+\nabla\cdot(\mathbf{u}_{f}\otimes\mathbf{u}_{f}) =-\frac{1}{\rho_{f}}\nabla p+\frac{\mu_{f}}{\rho_{f}}\nabla^{2}\mathbf{u}_{f}+\mathbf{F}_{p}+\mathbf{F}\,. \tag{13b}\]
Here \(\mathbf{F}_{p}\) is the force that the dispersed particles exert on the carrier fluid, and \(\mathbf{F}\) is an artificial source term applied in an initial forcing period. In section III.3 the details of \(\mathbf{F}_{p}\) and \(\mathbf{F}\) are outlined.
The particles considered are monodisperse solid spheres with diameter \(d_{p}\), volume \(V_{p}\) and density \(\rho_{p}\). Assuming particles are only accelerated according to drag force, the dynamic equations for particle motion are given by
\[\frac{\mathrm{d}\mathbf{x}_{p}}{\mathrm{d}t} =\mathbf{v}_{p}\,, \tag{14a}\] \[V_{p}\rho_{p}\frac{\mathrm{d}\mathbf{v}_{p}}{\mathrm{d}t} =\mathbf{F}_{D}=\frac{\pi}{8}d_{p}^{2}\rho_{f}C_{D}|\mathbf{u}_{f\oplus p }-\mathbf{v}_{p}|(\mathbf{u}_{f\oplus p}-\mathbf{v}_{p})\,, \tag{14b}\]
where \(\mathbf{x}_{p}\) and \(\mathbf{u}_{f\oplus p}=\mathbf{u}_{f}(\mathbf{x}_{p},t)\) are the particle position and fluid velocity at particle position, respectively. \(\mathbf{F}_{D}\) denotes the drag force and \(C_{D}\) is the drag coefficient given by (Schiller and Naumann (1933))
\[C_{D}=\frac{24}{Re_{p}}\left(1+0.15Re_{p}^{0.687}\right)\,, \tag{15}\]
and
\[Re_{p}=\frac{d_{p}\rho_{f}|\mathbf{u}_{f\oplus p}-\mathbf{v}_{p}|}{\mu_{f}}\,, \tag{16}\]
is the particle Reynolds number. Equation (15) holds for \(0<Re_{p}\leq 1000\), which is the only range considered in the current work.
A second-order finite-volume solver (Denner, Evrard, and van Wachem (2020)) is used to integrate (13) forward in time, and the Verlet scheme is used for the forward integration of (14).
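For illustration, the sketch below advances a single particle through one time step of equations (14)-(16). It uses a plain explicit Euler update and a prescribed fluid velocity at the particle position instead of the second-order finite-volume solver and Verlet scheme used in the simulations, and the particle density chosen here is only a placeholder.

```python
import numpy as np

def drag_coefficient(re_p):
    """Schiller-Naumann correlation, equation (15), valid for 0 < Re_p <= 1000."""
    return 24.0 / re_p * (1.0 + 0.15 * re_p**0.687)

def advance_particle(x_p, v_p, u_f_at_p, dt, d_p, rho_p, rho_f, mu_f):
    """One explicit step of equations (14a)-(14b) for a single spherical particle."""
    slip = u_f_at_p - v_p
    slip_mag = np.linalg.norm(slip)
    if slip_mag > 0.0:
        re_p = d_p * rho_f * slip_mag / mu_f                      # equation (16)
        f_drag = np.pi / 8.0 * d_p**2 * rho_f * drag_coefficient(re_p) * slip_mag * slip
    else:
        f_drag = np.zeros_like(v_p)
    m_p = rho_p * np.pi / 6.0 * d_p**3                            # V_p * rho_p
    v_new = v_p + dt * f_drag / m_p
    x_new = x_p + dt * v_new
    return x_new, v_new

# one step with the paper's fluid properties; rho_p = 1000 kg/m^3 is a placeholder value
x, v = advance_particle(np.zeros(3), np.zeros(3), np.array([0.1, 0.0, 0.0]),
                        dt=1e-4, d_p=1e-4, rho_p=1000.0, rho_f=1.17, mu_f=1.72e-5)
```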
### Decaying homogeneous isotropic turbulence
To study two-way coupling effects in an idealized test case, we analyze a particle-laden fluid with decaying HIT. This case is chosen over stationary HIT because the effects of the forcing term \(\mathbf{F}\) would overlap with the particle-fluid interaction
energy in the latter (Abdelsamie and Lee (2012)). In addition, the properties of decaying HIT signify that the fluid velocity in all Eulerian mesh points evolves in a statistically equivalent manner. The inertial particles are thermalized to the fluid (see section III.3) and thus have a statistically equivalent evolution throughout the temporal domain. Therefore, a meaningful ensemble of realizations can be generated for both TPOD and PPOD from a single simulation of particle-laden turbulence. For TPOD, the ensemble members are formed by sampling the fluid velocity at \(N_{e}\) equidistantly spaced mesh points at sample times \(t_{n}\in T\), \(n\in[0:N_{t}-1]\), and for PPOD the ensemble members are formed by randomly choosing \(N_{e}\) particle records to track over the same sample times. The inertial particles are initially spaced randomly throughout the cubic domain in order to avoid introducing bias.
### Forces
Each simulation can be split into two periods - a forcing period, and a decaying period. The forcing period is the initial part of the simulation, in which HIT is obtained by applying the source term \(\mathbf{F}\) in equation (13). This period is necessary to initiate decay from a fully developed turbulent velocity field. The forcing procedure follows the forcing scheme developed by Mallouppas, George, and van Wachem (2013) and is the same as the one briefly outlined in Schiødt _et al._ (2022).
During the forcing period particles are present within the fluid, but two-way coupling is deactivated, i.e. \(\mathbf{F}_{p}=\mathbf{0}\) in equation (13). This allows for the thermalization of particles under one-way coupling conditions, which minimizes the transitional regime when two-way coupling is activated (Ferrante and Elghobashi (2003)).
We define the end of the forcing period as time \(t_{0}=0\)s, which also denotes the start of the decaying period. Here \(\mathbf{F}=\mathbf{0}\), and two-way coupling is activated for the multiphase simulations, but remains zero for the single-phase simulation.
The two-way coupling term, \(\mathbf{F}_{p}\), in equation (13) is modelled as suggested by Crowe, Sharma, and Stock (1977) where
\[\mathbf{F}_{p}=-\frac{1}{\rho_{f}V_{g}}\sum_{p^{\prime}=1}^{N_{p,g}}\mathbf{F}_{D,p^{ \prime}}\,. \tag{17}\]
Here \(V_{g}\) is the volume of cell \(g\) in the discretized domain, and \(N_{p,g}\) is the number of particles present in that cell. \(\mathbf{F}_{D,p^{\prime}}\) is the drag force exerted by the fluid on particle \(p^{\prime}\).
### Setup
#### iv.4.1 Fluid
We use the setup of Mallouppas, George, and van Wachem (2017) for the fluid simulation. Here the cube edge length is given by \(\ell=0.128\)m, and the domain is discretized into \(N_{g}=128^{3}\) computational cells. Fluid viscosity is given by \(\mu_{f}=1.72\times 10^{-5}\)Pa s, and fluid density by \(\rho_{f}=1.17\)kg m\({}^{-3}\). For all of the subsequent cases studied the Taylor Reynolds number at \(t_{0}\) is given by \(Re_{\lambda}=58.0\), where the integral-, Taylor-, and Kolmogorov length scales are respectively \(I=1.129\times 10^{-2}\)m, \(\lambda=6.134\times 10^{-3}\)m and \(\eta=4.0\times 10^{-4}\)m. The Kolmogorov time scale at \(t_{0}\) is \(\tau_{\eta}=10^{-2}\)s. The reader is referred to Schiødt _et al._ (2022) for a more thorough outline of the temporal evolution of the fluid characteristics in the single-phase simulation.
#### iv.4.2 Particles
The different multiphase simulations considered are characterized by the Stokes number \(St(t)=\tau_{p}(t)/\tau_{\eta}(t)\) of the inertial particles at \(t=t_{0}\). Here \(\tau_{p}\) (equation (23)) is the particle response time. The particle diameter is set to \(d_{p}=1.0\times 10^{-4}\)m, and the particle mass fraction \(\phi_{m}\approx 1\). Since the particle density \(\rho_{p}\) is tweaked in each case to obtain different Stokes numbers, this signifies that the number of particles present in the fluid varies between each case. Letting \(St_{0}=St(t_{0})\), the Stokes numbers considered are \(St_{0}=0.25\), \(St_{0}=0.75\), \(St_{0}=1.5\), and \(St_{0}=3.0\).
## IV Results & Discussion
All subsequent results are based on fluid- and inertial particle velocities during the decaying period, which lasts for \(0.4\)s of physical time. The velocities are sampled every \(\delta=10^{-3}\) seconds, amounting to \(N_{t}=400\) temporal samples. The temporal domain is normalized with respect to the reference time scale \(t_{ref}=\tau_{\eta}(t_{0})=10^{-2}\)s which is shared between all simulations.
### Fluid statistics
Figure 1 shows the temporal evolution of the carrier phase TKE, \(E(t)\), in the single- and multiphase simulations. The TKE is normalized by \(E(0)\), and the figure illustrates that turbulence is increasingly attenuated for increasing Stokes numbers. However, at \(St_{0}=0.25\) there is a slight augmentation of turbulence for \(t/t_{ref}>38\). Similar observations have been reported in previous studies (Sundaram and Collins (1999); Ferrante and Elghobashi (2003); Letournel _et al._ (2020)).

Figure 1: Evolution of normalized turbulent kinetic energy.
The Fourier turbulence energy spectrum \(E(\kappa)\) of the carrier phase at time \(t/t_{ref}=40\) is seen in Figure 2. As observed in previous work (Druzhinin and Elghobashi (1999); Ferrante and Elghobashi (2003); Letournel _et al._ (2020)) the presence of inertial particles modulates the spectrum, shifting energy from low to high wavenumbers. The degree with which this energy transfer occurs is dependent on the Stokes number, where more energy is observed to be transferred at lower Stokes numbers.
Increased energy at high wavenumbers implies more energetic small scale turbulence structures. The fluid velocity measured over time at a fixed spatial point will therefore, on average, contain more fluctuations for the multiphase flows compared to the single-phase flow. This behaviour is indeed observed when considering the TPOD eigenspectra of Figure 3. Here, the extracted modes \(\mathbf{\varphi}\) and corresponding eigenvalues \(\mathbf{\lambda}\) are based on the 3-D fluid velocity measured in \(N_{e}=16^{3}=4096\) equidistantly spaced Eulerian mesh points, where these ensemble members are assumed to represent the dynamics of all \(128^{3}\) mesh points (see section III.2).

Figure 3 shows, for all simulations, the energy \(\lambda_{\alpha}\) of each TPOD mode for \(\alpha\in[1:400]\). A brief glance at the eigenspectra depicted reveals a distinct difference of shape between the single- and multiphase simulations. The figure also illustrates that modal energy is slightly higher in the single-phase case when the mode number is low, whereas for higher mode numbers the modal energy is higher in the multiphase cases. As will be shown later (Figure 6) the higher numbered modes contain more fluctuations, and this observation therefore aligns well with the intuition of how TPOD modal energy should be distributed in accordance with the spatial structures. It is notable that the modal energy is larger for some mode numbers in the multiphase cases compared to the single-phase case even though the total modal energy in the latter is larger (see Figure 1). This further underlines the observation that a larger fraction of energy is distributed to more rapidly fluctuating TPOD modes when the fluid is laden with inertial particles and two-way coupling is activated.
### PPOD convergence
PPOD is applied to the velocity of \(N_{e}=4096\) randomly selected inertial particles, initially distributed throughout the spatial domain. This is performed for all multiphase simulations under the assumption that these subsets of particles represent the dynamics of all particles within each respective simulation. Let \(E_{a,modal}(m)\) denote the fraction of accumulated POD modal energy up until mode number \(m\):
\[E_{a,modal}(m)=\sum_{\alpha=1}^{m}\lambda_{\alpha}\Bigg{/}\sum_{\beta=1}^{M} \lambda_{\beta},\quad m\in[1:M]. \tag{18}\]
Although \(m\in[1:M]\), \(M=\min(N,N_{e})=1200\), the statistic is only shown for \(m\leq 40\) in Figure 4 for the sake of readability. The figure clearly shows that almost all of the PPOD modal energy is contained within the first \(\sim 4\%\) of modes. Moreover, it is observed that the rate of convergence towards unity increases as the Stokes number increases.

Figure 2: Fourier turbulence energy spectrum \(E(\kappa)\) of each simulation at final time step \(t/t_{ref}=40\).

Figure 3: TPOD eigenspectrum showing the distribution of modal energy of the carrier phase in each simulation case.

Figure 4: Convergence of PPOD accumulated modal energy.
There are several contributing factors to the observed behaviour of convergence. Firstly, the particles characterized by higher Stokes numbers are heavier, thus requiring more energy to be accelerated. Due to inertial filtering, the velocities of these particles fluctuate less around the mean (ensemble) velocity compared to lower Stokes number particles (Ayyalasomayajula, Warhaft, and Collins (2008); Salazar and Collins (2012)). Secondly, as seen in Figure 1 the increasing attenuation of TKE for increasing Stokes numbers implies a less energetic fluid surrounding the higher Stokes number particles, and the higher Stokes number particles are thus accelerated by smaller energies than the lower Stokes number particles. Thirdly, for lower Stokes numbers the small scale turbulent structures of the surrounding fluid are more energetic (Figure 2). The particles are in these cases accelerated by a wider range of turbulent structures resulting in more fluctuating particle velocities. Ultimately, these factors imply an increase in fluctuating particle velocities for low Stokes numbers compared to higher Stokes numbers, and hence a wider range of PPOD modes are required to account for these particle dynamics. The modal energy is thus more widely distributed for the lower Stokes number case, decreasing the convergence rate of \(E_{a,modal}\).
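The convergence statistic of equation (18) amounts to a normalized cumulative sum of the eigenvalues; a one-function numpy sketch (with made-up eigenvalues) is given below.

```python
import numpy as np

def accumulated_modal_energy(lam):
    """Equation (18): fraction of total modal energy captured by the first m modes."""
    lam = np.sort(np.asarray(lam, dtype=float))[::-1]
    return np.cumsum(lam) / lam.sum()

print(accumulated_modal_energy([4.0, 3.0, 2.0, 1.0]))  # [0.4, 0.7, 0.9, 1.0]
```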
### Component decomposition
In stationary flows it is commonly accepted that fluid- and particle velocities may appropriately be decomposed with Fourier modes spanning the temporal domain (Tchen (1947); Csanady (1963); Hinze (1975); Glauser and George (1992); Delville _et al._ (1999); Citriniti and George (2000); Johansson, George, and Woodward (2002); Iqbal and Thomas (2007); Muralidhar _et al._ (2019)). The Fourier decomposition is applied such that each velocity component is decomposed separately. In analogy to this we now apply PPOD componentwise, i.e. with dimension \(D=1\) we extract \(M=DN_{t}=400\) modes and eigenvalues separately for the particle velocities in coordinate directions \(x_{1}\), \(x_{2}\) and \(x_{3}\). Figure 5a shows up until \(m=40\) the convergence rate of \(E_{a,modal}\) for component PPOD applied to the case \(St_{0}=0.25\). Since the particles are suspended in decaying HIT, there is not a preferential direction, and the convergence rates are equivalent for all velocity components.
In Figure 5b the parallelity of the extracted modes is assessed by evaluating
\[P_{\alpha,\beta}^{i,j}=|(\varphi_{\alpha}^{i},\varphi_{\beta}^{j})|,\quad i,j\in[1:3],\,\alpha,\beta\in[1:M], \tag{19}\]
where \(\varphi_{\alpha}^{i}\) is the \(\alpha\)'th mode extracted for coordinate direction \(x_{i}\). When \(P_{\alpha,\beta}^{i,j}=1\) the modes are completely parallel, whereas \(P_{\alpha,\beta}^{i,j}=0\) indicates orthogonality. The figure shows that along the diagonal (\(\alpha=\beta\)) there is almost complete parallelity for low mode numbers (\(\alpha\leq 20\)), signifying that the mode-sets extracted are basically the same. For higher mode numbers this is not the case, however as seen in Figure 5a these modes carry little energy, and they account for ensemble-specific variance rather than dominating particle dynamics. The importance of these modes is thus negligible, and it may be concluded that PPOD analysis of velocities in coordinate direction \(x_{i}\) in decaying HIT yields the same qualitative results regardless of the value of \(i\). Although only shown here for \(St_{0}=0.25\), upon closer inspection of the data it is found that this conclusion may be drawn for every Stokes number considered, and similarly for fluid velocity modes extracted with component TPOD. For the remainder of this work, we will hence consider component PPOD and TPOD applied to velocities in coordinate direction \(x_{1}\), and consider the results representative of all coordinate directions.
### Modes
Figure 5: (a) Convergence rates of \(E_{a,modal}\) are equivalent between each velocity component and (b) the extracted modes are almost completely parallel for \(\alpha\leq 20\).

A sample of the modes extracted with component TPOD (solid) for both the single- and multiphase simulations is
shown in Figure 6 alongside a corresponding sample of the modes extracted with PPOD (dotted) in the multiphase cases. The modes are shown as functions of \(t\), where \(\varphi_{\alpha}(t)\) denotes the element of \(\varphi_{\alpha}\) connected to sample time \(t\). All mode-sets resemble slightly damped harmonic oscillators, where the local wavelength of each mode increases over time. The damping of amplitude may be attributed to the temporal decay of TKE (Figure 1). The increase of wavelength shows that the energy-optimal modes for the decomposition of fluid- and particle velocities in decaying HIT are not Fourier modes, keeping in mind that in the stationary HIT case the component-wise TPOD and PPOD modes would be well approximated by Fourier modes (see e.g. Aubry (1991) for TPOD modes).
A high correlation between all POD mode-sets is observed, indicating that the energetically dominating fluid- and particle dynamics do not vary considerably across Stokes numbers. This point is further investigated in Figure 7 where the parallelity between \(\varphi\) and \(\psi\) is evaluated. Here, \(\varphi\) is chosen as a reference mode-set given by the TPOD modes extracted from the \(St_{0}=0.25\) simulation, and \(\psi\) represents the TPOD mode-set of the single-phase case (Figure 7a), the TPOD mode-set of the \(St_{0}=3.0\) case (Figure 7b), and the PPOD mode-set of the \(St_{0}=0.25\) case (Figure 7c). Though figures 7b and 7c do not exhibit complete parallelity between \(\varphi\) and \(\psi\), they still illustrate a strong parallelity at lower mode numbers, and linear dependency of similarly numbered modes at higher mode numbers. This underlines that fluid- and particle dynamics are fairly similar across phase and Stokes number. Figure 7a also exhibits strong parallelity at lower mode numbers, whereas \(\varphi\) is linearly dependent on many \(\psi\)-modes for higher mode numbers. This shows that the dominating dynamics between the single- and multiphase flow are similar, and suggests that the two-way coupling information which is not captured in the single-phase modes is embedded in the higher numbered POD modes of the multiphase mode-sets.
The high parallelity observed raises an important question: do the extracted bases span the same vector space? As briefly noted in section II the extracted POD-bases are not necessarily complete in \(\mathcal{H}\), and it is therefore not guaranteed that the velocities of one ensemble may be fully decomposed by the modes extracted from another ensemble. However, if one mode-set \(\varphi\) can be completely decomposed by another mode-set \(\mathbf{\psi}\), then the ensemble members generating \(\mathbf{\varphi}\) can also be fully decomposed by \(\mathbf{\psi}\). In total nine mode-sets are extracted in the current work (five TPOD and four PPOD) and it turns out that each of these can fully reconstruct (down to machine precision) the other eight mode-sets, i.e.

\[\left|\left|\mathbf{\varphi}_{\beta}-\sum_{\alpha=1}^{M}(\mathbf{\varphi}_{\beta},\mathbf{\psi}_{\alpha})\mathbf{\psi}_{\alpha}\right|\right|=0,\quad\beta\in[1:M]. \tag{20}\]

Figure 6: Modes (\(\varphi_{\alpha}\), \(\alpha\in[1:12]\)) extracted with TPOD (solid) and PPOD (dotted) for each simulation case. The nuance indicates Stokes number where the Stokes number increases from darker to lighter grey. The black solid lines are the TPOD modes for the single-phase case.
All mode-sets therefore span the same vector space, and both particle- and fluid velocities may be expanded in the same basis. Though not shown here, \(\mathbf{u}_{f\oplus p}\) (equation (14)) may also be fully decomposed with respect to the POD bases extracted in the current study. This enables a single empirically determined basis to be used for the expansion of all data sets in question, illustrating the versatility of the PPOD method when applied to multiphase flows.
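The reconstruction check of equation (20) can be expressed compactly; the sketch below assumes the same column-wise storage as above, and the tolerance is an illustrative stand-in for machine precision.

```
import numpy as np

def fully_spanned(phi, psi, tol=1e-10):
    """True if every mode of phi is reconstructed exactly by the mode-set psi."""
    coeffs = psi.T @ phi            # projection coefficients (phi_b, psi_a)
    residual = phi - psi @ coeffs   # phi_b - sum_a (phi_b, psi_a) psi_a
    return bool(np.all(np.linalg.norm(residual, axis=0) < tol))
```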
### Response - velocity
Following the procedure of Csanady (1963) it may be shown for statistically stationary flows that the Fourier Lagrangian spectrum of the particle velocity, \(E_{p}\), is connected to the equivalent spectrum of the fluid velocity at particle position, \(E_{f\oplus p}\), by a response function \(H^{2}\). The relation is given by
\[E_{p}(\alpha)=H^{2}(\alpha)E_{f\oplus p}(\alpha),\quad\alpha\in[1:M], \tag{21}\]
where \(E_{p}(\alpha)\) and \(E_{f\oplus p}(\alpha)\) are the ensemble-averaged energies connected to the \(\alpha\)'th Fourier mode of the dispersed and carrier phase, respectively. An analytic expression for \(H^{2}\) may be found by replacing \(\mathbf{v}_{p}\) and \(\mathbf{u}_{f\oplus p}\) in equation (14) with their Fourier expansions. This was also done by Berk and Coletti (2021), who showed that
\[H^{2}(\alpha)=\frac{1}{1+(\omega(\alpha)\tau_{p})^{2}}\,,\quad\alpha\in[1:M]\,. \tag{22}\]
Here \(\omega(\alpha)\) is the angular frequency of the \(\alpha\)'th Fourier mode and
\[\tau_{p}=\frac{\rho_{p}d_{p}^{2}}{18\mu_{f}}\left(1+0.15Re_{p}^{0.687}\right)^ {-1}. \tag{23}\]
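For illustration, the model (22)-(23) can be evaluated as in the sketch below; the particle and fluid parameters are made-up placeholders, not values from the simulations above.

```
import numpy as np

def particle_response(omega, rho_p, d_p, mu_f, Re_p):
    """Response time tau_p (eq. 23) and analytical response H^2 (eq. 22)."""
    tau_p = rho_p * d_p**2 / (18.0 * mu_f) / (1.0 + 0.15 * Re_p**0.687)
    H2 = 1.0 / (1.0 + (omega * tau_p)**2)
    return tau_p, H2

# Illustrative values only: heavy particles in air, low particle Reynolds number.
omega = 2.0 * np.pi * np.arange(1, 65)
tau_p, H2 = particle_response(omega, rho_p=1000.0, d_p=50e-6, mu_f=1.8e-5, Re_p=0.1)
```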
To the authors' knowledge, no analytic expressions exist for the response function \(H^{2}\) in non-stationary flows at the time of writing. However, we may study the fraction \(E_{p}(\alpha)/E_{f\oplus p}(\alpha)\) and let this serve as an empirical response function.
Considering Fourier modes as a special case of POD modes, i.e. those derived empirically for a statistically stationary flow (Glauser and George (1992)), we conjecture that some of the properties of the Fourier basis may also apply to the POD basis in general. Indeed, for select test functions representing stationary dynamics, Hodzic, Olesen, and Velte (2022) observed a high correlation between the POD eigenspectrum and the analytical Fourier spectrum. Interestingly, the correlation exceeded that between the analytical Fourier spectrum and the spectrum of the discrete Fourier transform (DFT), indicating a close spectral symmetry between the analytical Fourier basis and the POD basis in _locally_ statistically stationary flows.
Based on these considerations, and recalling that PPOD modes may, in our simulations, completely expand both \(\mathbf{v}_{p}\) and \(\mathbf{u}_{f\oplus p}\), we hypothesize that the empirical response function based on PPOD modes follows a trend similar to \(H^{2}\) (equation (22)). The hypothesis is validated in Figure 8 where \(H^{2}\), \(H^{2}_{four}\) and \(H^{2}_{pod}\) are shown. Here \(H^{2}_{four}\) (Figure 8b) is shown as a reference case, representing the empirical response function where \(E_{p}\) and \(E_{f\oplus p}\) are computed based on Fourier modes. \(H^{2}_{pod}\) (Figure 8c) represents the empirical response function computed based on the PPOD modes of each simulation. For \(H^{2}\), \(\tau_{p}=\tau_{p}(t_{0})\) is used in each case, although the quantity is dependent on time since the studied flow is non-stationary. However, for the Stokes numbers and temporal domain considered it is observed that
\[\frac{|\tau_{p}(t_{0})-\tau_{p}(t_{N-1})|}{\tau_{p}(t_{0})}\leq 8\%\,. \tag{24}\]
It is therefore assumed that the choice of \(\tau_{p}\) is reasonably representative of the dynamics over the entire temporal domain in each simulation case.
Figure 7: Parallelity between \(\mathbf{\varphi}\), denoting the TPOD modes extracted for \(St_{0}=0.25\), and \(\mathbf{\psi}\), denoting the (a) TPOD modes of the single-phase simulation, (b) TPOD modes extracted for \(St_{0}=3.0\), and (c) PPOD modes extracted for \(St_{0}=0.25\).
Though equation (22) is derived based on the Fourier transform, Figures 8a and 8b show little correlation between \(H^{2}\) and \(H^{2}_{four}\). The inherent periodicity of Fourier modes explains this result, since expansion of non-periodic signals leads to spectral leakage. We are studying decaying HIT, and as a consequence, the fluid- and particle velocity signals are not periodic and the Fourier modes do not form an appropriate basis for the expansion of these signals (Lumley (2007)).
Conversely, Figures 8a and 8c show a high correlation between \(H^{2}\) and \(H^{2}_{pod}\), as hypothesized. Plausibly, the result reflects some deeper spectral symmetry related to energy optimality, from which equation (22) follows, rather than the equation strictly following from the properties of Fourier modes.
In Figure 9 the correlation is illustrated more clearly by the depiction of \(H^{2}_{pod}\) (markers) and \(H^{2}_{fit}\) (solid). Here \(H^{2}_{fit}\) is a least-squares fit of \(H^{2}\) to \(H^{2}_{pod}\) given by
\[H_{fit}^{2}(\alpha)=\frac{1}{1+((\alpha-1)\omega^{*}\tau_{p})^{2}}\,,\quad \alpha\in[1:M]\,. \tag{25}\]
The product \((\alpha-1)\omega^{*}\) represents a "POD-frequency", and the fitting parameter \(\omega^{*}=9.0647\) is found through minimization of the objective
\[\min_{\omega^{*}}\sum_{\tau_{p}}\sum_{\alpha}\left|\left|\frac{H_{fit}^{2}(\alpha)-H_{pod}^{2}(\alpha)}{H_{fit}^{2}(\alpha)}\right|\right|_{2}\,. \tag{26}\]
Summation over \(\tau_{p}\) represents the fitting of \(\omega^{*}\) to the data of all Stokes numbers simultaneously. The figure shows a clear connection between the PPOD empirical response function and a modified version of the analytical model (22). These results show the potential to approximate the modal energy of the carrier phase sampled at the particle position in decaying HIT directly from the Stokes number and particle velocities.
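A hedged sketch of the fit in equations (25)-(26) using SciPy's bounded scalar minimizer is given below; `H2_pod` (one row per Stokes number) and `tau_p` are placeholders for the empirical response functions and response times discussed above.

```
import numpy as np
from scipy.optimize import minimize_scalar

def fit_omega_star(H2_pod, tau_p):
    """Fit the single POD-frequency scale omega* across all Stokes numbers.

    H2_pod : (n_cases, M) empirical response functions
    tau_p  : (n_cases,) particle response times
    """
    alpha = np.arange(1, H2_pod.shape[1] + 1)

    def objective(omega_star):
        H2_fit = 1.0 / (1.0 + ((alpha - 1) * omega_star * tau_p[:, None])**2)
        return np.abs((H2_fit - H2_pod) / H2_fit).sum()   # summed relative error

    return minimize_scalar(objective, bounds=(1e-3, 1e3), method="bounded").x
```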
### Response - relative velocity
Csanady (1963) derived a relation between the mean square relative velocity and \(E_{f\oplus p}\), from which a response function \(H_{rel}^{2}\) for the relative velocity follows (equations (27) and (28)). The empirical ratio \(E_{rel}/E_{f\oplus p}\), computed here from the PPOD modal energies, follows the fitted model closely at the
energetically dominant modes for \(St_{0}\leq 1.5\). For \(St_{0}=3.0\) the fit at the energetically dominant modes is noticeably poorer. This suggests a range of Stokes numbers at which the model is appropriate. However, in all cases, the trend of \(E_{rel}/E_{f\oplus p}\) is similar to that of \(H_{rel}^{2}\), highlighting the similarities between the models derived through Fourier analysis of stationary flows and POD analysis of the current non-stationary flow.
Equations (21), (25), (27) and (28) provide a method for approximating \(E_{f\oplus p}\) and \(E_{rel}\) in particle-laden decaying HIT based on particle velocity measurements. The method requires knowledge of \(\tau_{p}\), and if this quantity changes significantly over the considered temporal domain, or if it has rapid fluctuations, the accuracy of the model may deteriorate, since \(\tau_{p}=\tau_{p}(t_{0})\) was assumed when generating the fit.
## V Conclusions
A study of the temporal dynamics of two-way coupled particle-laden decaying HIT for various Stokes numbers was conducted. Using time-localized formulations of POD - TPOD and PPOD for respectively the decomposition of fluid- and particle velocities - sets of energy-optimal modes were extracted representing the temporal dynamics of the two phases. For both phases it was observed that the extracted modes resembled damped harmonic oscillators, where the local wavelength of each mode increased over time. Moreover, the modes exhibited a high correlation in the dominating dynamics between the carrier- and dispersed phase.
The TPOD eigenspectrum of each simulation was inspected and compared to the eigenspectrum of a corresponding single-phase simulation. A distinct difference in shape between the single- and multiphase spectra was observed. In addition, the TPOD spectra were compared to the Fourier turbulence energy spectrum generated at the final time step of each simulation. Here an increase of energy at high wavenumbers of the turbulence spectrum was observed to correlate with a relative increase in the TPOD eigenspectrum at high mode numbers.
It was demonstrated that the POD mode-set extracted from the velocity of one phase could span the velocity of both phases. Therefore, the Lagrangian spectrum based on PPOD modes could be computed for both the carrier- and dispersed phase. A relation between these spectra was evaluated empirically giving rise to analytical expressions of response functions in a PPOD frame of reference. The response functions related the modal energy of the inertial particles to that of the surrounding fluid through simple expressions dependent on the Stokes number. Notably, the expressions, fitting the data of the current non-stationary flow, resembled those derived through Fourier analysis of stationary flows. This suggested a deeper symmetry between POD and Fourier spectra.
The current PPOD analysis was applied to an ideal test case of a non-stationary flow. The results outlined, and the theoretical applicability of PPOD to any non-stationary flow, indicate that PPOD analysis may provide insightful information on the Lagrangian dynamics of other flows in future studies of particle-laden turbulence.
###### Acknowledgements.
AH and CMV acknowledge financial support from the European Research Council: This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 803419). MS acknowledges financial support from the Poul Due Jensen Foundation: Financial support from the Poul Due Jensen Foundation (Grundfos Foundation) for this research is gratefully acknowledged.
## Declaration of interest
The authors report no conflicts of interest.
## Data availability statement
The data that forms the basis of this study is available from the corresponding author upon reasonable request.
|
2309.15472 | Voxel Graph Operators: Topological Voxelization, Graph Generation, and
Derivation of Discrete Differential Operators from Voxel Complexes | In this paper, we present a novel workflow consisting of algebraic algorithms
and data structures for fast and topologically accurate conversion of vector
data models such as Boundary Representations into voxels (topological
voxelization); spatially indexing them; constructing connectivity graphs from
voxels; and constructing a coherent set of multivariate differential and
integral operators from these graphs. Topological Voxelization is revisited and
presented in the paper as a reversible mapping of geometric models from
$\mathbb{R}^3$ to $\mathbb{Z}^3$ to $\mathbb{N}^3$ and eventually to an index
space created by Morton Codes in $\mathbb{N}$ while ensuring the topological
validity of the voxel models; namely their topological thinness and their
geometrical consistency. In addition, we present algorithms for constructing
graphs and hyper-graph connectivity models on voxel data for graph traversal
and field interpolations and utilize them algebraically in elegantly
discretizing differential and integral operators for geometric, graphical, or
spatial analyses and digital simulations. The multi-variate differential and
integral operators presented in this paper can be used particularly in the
formulation of Partial Differential Equations for physics simulations. | Pirouz Nourian, Shervin Azadi | 2023-09-27T08:11:55Z | http://arxiv.org/abs/2309.15472v1 | # Voxel Graph Operators
###### Abstract
In this paper, we present a novel workflow consisting of algebraic algorithms and data structures for fast and topologically accurate conversion of vector data models such as Boundary Representations into voxels (topological voxelization); spatially indexing them; constructing connectivity graphs from voxels; and constructing a coherent set of multivariate differential and integral operators from these graphs. Topological Voxelization is revisited and presented in the paper as a reversible mapping of geometric models from \(\mathbb{R}^{3}\) to \(\mathbb{Z}^{3}\) to \(\mathbb{N}^{3}\) and eventually to an index space created by Morton Codes in \(\mathbb{N}\) while ensuring the topological validity of the voxel models; namely their topological thinness and their geometrical consistency. In addition, we present algorithms for constructing graphs and hyper-graph connectivity models on voxel data for graph traversal and field interpolations and utilize them algebraically in elegantly discretizing differential and integral operators for geometric, graphical, or spatial analyses and digital simulations. The multi-variate differential and integral operators presented in this paper can be used particularly in the formulation of Partial Differential Equations for physics simulations.
**Keywords:** Topological Voxelization, Topological Data Analysis, Voxel Networks, Digital Differential Geometry
## 1 Introduction
Raster data models or images are discrete data models based on a regular grid of small elements: pixels in 2D images and voxels in 3D images. Raster data models can be advantageous in certain operations and queries in Geospatial Sciences, Spatial Analysis, Scientific Visualization, Medical Imaging, Computer Graphics, and Engineering Optimization.
Central to these applications is working with discrete representations of functions on a network or a manifold object possibly of high genera (many holes), making spatial analysis highly complex. In medical imaging, voxel data is often the natural source of data generated directly from MRI or CT scans, which is already a convenient representation for spatial analysis. In scientific simulations and engineering optimization, however, it is sometimes necessary to convert the so-called vector data models of manifold spatial objects in the form of points, curves, surfaces, or solids into voxels for various reasons, namely discrete simulations such as those solved by the Finite-Difference Method, removing noise from data, or in general for regularizing some geometry processing tasks. Keeping the topological invariants of the input manifold objects in such conversions intact is a challenging task.
For example, converting a point cloud to a raster model can simplify neighbour search and segmentation by reducing confusion and irregularities in point clouds. The significant advantage of raster representation of geometric objects can be described as unifying geometrical and topological data into one representation. This means that with a raster model
of an object, one can not only decipher geometric shape from the position of the voxels but also discern the topological properties from the neighbourhood of the voxels in question. In doing so, one can perform almost all geometry processing operations as vectorized (i.e. algebraic) operations with matrices and vectors. This is highly advantageous for modern computing practices as it increases the potential for parallelization, e.g. in treating geometric data as image data or graph data for Machine Learning or Deep Learning applications. However, rasterization can potentially corrupt some of the topological information contents that exist in the original vector data models, especially if done without an explicit idea of topological adequacy. This paper presents a systematic workflow for topological voxelization and voxel-graph construction that can provide confidence, efficiency, and elegance in dealing with geometry processing problems using raster representations. The outlook of utilizing this workflow is to convert some complicated or error-prone geometry processing problems in computational geometry to integer computing problems in computational topology that do not suffer from floating-point precision issues common to computational geometry problems. For example, problems such as Voronoi tessellations or Agent-Based Modelling can be easily converted into topological and combinatorial problems by approximating the input geometric domains as 3D images.
In particular, we consider some essential meta-level use-cases of voxel-representations and voxel graphs, such as solving Partial Differential Equations (PDE) and the typical interpolations in the numerical methods of scientific computing (q.v. LeVeque, 2007). More specifically, one of the paper's contributions is a comprehensive and algebraic characterization of interpolation, differential, and integral operators on voxel graphs and their utility for solving Partial Differential Equations (PDE) in scientific computing. These algebraic differential/integral operators are based on some proposed hyper-graph data models to be constructed based on raster data models. One fundamental use-case of these hyper-graphs (i.e. graphs whose edges can be higher-dimensional cells, a.k.a. topological cell complexes) is to simplify the vectorization or visualization of level-sets in ordinary graphics pipelines for creating iso-surfaces in regular 3D images and 3-manifolds or for creating iso-curves on flat pictures and manifold surfaces by facilitating the use of algorithms such as Marching Cubes and Marching Tetrahedra Osher and Fedkiw, 2001. Note that iso-curve and iso-surface creation has remained an active area of research, firstly due to the need for creating alternatives to the patented Marching Cubes algorithm and later for enhancing precision or dealing with irregularities, see, e.g. the Dual Contouring by Ju et al., 2002, Surface Nets by Gibson, 1998, Manifold Dual Contouring by Schaefer et al., 2007, 2.5D Dual Contouring for building model reconstruction from aerial point-clouds by Zhou and Neumann, 2010 and Neural Dual Contouring for the challenge of surface reconstruction from noisy data by Z. Chen et al., 2022. While these use-cases require hyper-graphs, a more straightforward yet less frequent use-case is conducting graph-traversals such as optimal path algorithms, neighbourhood search, sorting, random-walk/diffusion processes Nourian, 2016a.
In short, this paper presents a novel computational workflow consisting of compatible data structures and algorithms for efficiently converting piece-wise linear geometric data into topologically adequate voxel data and constructing hyper-graphs from voxels. The proposed pipeline is an algebraic, vectorized, efficient, and explicitly topological modelling process that can be easily generalized to higher-dimensional data analytics procedures for use cases in Topological Data Analysis (TDA) Zomorodian, 2012. The efficiency of the algorithms is claimed in terms of minimal space [memory] and [computing] time complexity. Additionally, the adequacy or efficacy of processes is claimed in terms of topological correctness and sufficiency, i.e. preserving such topological properties as continuity, closeness, and thinness.
In the following sections, we first highlight notable works in the same direction to illustrate this work's background and then elaborate on our framework by describing the relationship between the mathematical objects used in this pipeline. The central part of this paper is the methodology section, where we explain all the steps in the proposed pipeline: mesh sampling, point cloud rasterization, construction of proximity graphs, and construction of discrete differential/integral operators. Lastly, we demonstrate the advantages of this novel approach by showing the results of a \(L_{p}\) Voronoi tessellation of a 3D manifold based on a diffusion distance obtained by random-walk simulations on a graph obtained from synthetic voxel data. Additionally, we present the open-source computational implementation of a significant part of the proposed pipeline in the topoGenesis python package Azadi and Nourian, 2020. The notational conventions of the paper are explained in Table 9 in Appendix A.
## 2 Background
In this section, we cover the general context of rasterization and voxelization methods; we look into state of the art in voxelization algorithms that explicitly aim to maintain the topological properties of the original model; and finally, we elaborate on the two major application domains that this pipeline can contribute to: (1) ensuring geographical-topological consistency in geo-spatial analysis and geospatial simulations. (2) alleviating the notorious challenges of mesh generation and mesh dependency in numerical methods, e.g. solving Partial Differential Equations in Finite Element Method (FEM) or Spectral Methods.
### Voxelization
Since the advent of raster displays, rasterization became a fundamental process in modern computers for visualizing objects on digital screens (q.v. Amanatides and Woo, 1987; Uno and Matsuka, 1979). The regularity of raster representations makes them more suitable for mathematical modelling and analyses. Therefore, rasterization of so-called _vector geometry_ inputs is a common step in more complex computational procedures. Principally, voxelization is the volumetric generalization of the rasterization process that aims to convert vector representation of geometry data into volumetric pixels, an operation typically referred to as Scan Conversion.
Various voxelization procedures directly inspired by the Scan Conversion have been presented in the computer science literature, the oldest of which date back to the 1980s. Most early works in this area are based on the idea of making a Digital Differential Analyzer (DDA), i.e. an algorithm that converts the slope of lines from a float to an integer quotient describing the rhythmic number of steps in \(\mathrm{X},\,\mathrm{Y},\mathrm{Z}\) directions to voxelate them efficiently. The most prominent example of which is the Fast Voxel Traversal method by Amanatides and Woo, 1987 in that it was a visionary extension of the idea of the DDA to 3D voxelization of lines. Notably, another method for Scan Conversion-based DDA was proposed in the same year for voxelization of triangles by Kaufman and Shimony, 1987. This algorithm is particularly important as triangle meshes are the most common data models for representing digital surfaces and the boundary representation of solids or 3-manifolds. Adding the introduction of the Marching Cubes by Lorensen and Cline also in 1987 marks the year 1987 as the de facto birthdate of voxelization in Computer Graphics, i.e. 35 years ago.
The DDA approach to voxelization is followed by Cohen-Or and Kaufman, 1995; Kaufman, 1988 mainly focusing on direct voxelization of geometric primitives using their parametric definitions (more recently in Lai and Cheng, 2006). In image-based approaches to voxelization of solids (3-manifolds), the bounding box of the geometric model is first sliced into 2-dimensional images. As proposed by Yuen-Shan Leung and Wang, the images then are filled by shooting rays perpendicular to the slicing direction (Yuen-Shan Leung and Wang, 2013). To increase the efficiency of the costly intersection computations, Young and Krishnamurthy, 2018 generalized the image-based approach to perform a multi-level voxelization. Similarly, the Octree structure can be used to minimize the computational cost. Namely, in Crassin and Green, 2012 the authors project each triangle to the dominant side of it instead of shooting rays to form a sparse voxel model based on Octree. With the same aim, Kampe et al., 2013 proposed a new data structure for voxels based on a directed acyclic graph (DAG) instead of Sparse Voxel Octree. Despite their efficiency and generality, none of these methods can guarantee the preservation of the topological properties of the geometric model during the conversion. See a recent literature review on voxelization algorithms and data structures by Aleksandrov et al., 2021.
### Topological Properties
Explicit attention to topological accuracy can be traced back to Huang et al., 1998 followed by a seminal book on digital geometry with various chapters dedicated to topological connectivity in voxel datasets by Klette and Rosenfeld, 2004, a concise mathematical introduction to the matter by Laine, 2013, and an algorithmic definition by Nourian, Goncalves, et al., 2016. Laine, 2013 presents the mathematical ideas of topological correctness of voxelated shapes and presents the mathematical essence of topological voxelization but does not explicitly present algorithms for voxelization of input geometry. Nourian, Goncalves, et al., 2016 present algorithms for 0D, 1D, 2D, and 3D inputs, respectively, for voxelating point clouds, curves, surfaces, and solids. Although these algorithms are implemented and tested for processing geospatial data, they need to be improved in terms of time complexity and output data structures. The complexity of the algorithms presented in that paper is in the order of \(\mathrm{O}(\mathrm{UVW})\) where \(\mathrm{U},\mathrm{V},\mathrm{W}\) respectively represent the number of voxels in \(\mathrm{X},\,\mathrm{Y},\mathrm{Z}\) directions. For comparison, here we present an algorithm with the complexity of \(\mathrm{O}(\mathrm{UV}+\mathrm{VW}+\mathrm{WU})\), which consists of three procedures arguably comparable in efficiency to the Scan-Conversions by Cohen-Or and Kaufman, 1995 and GPU computing approaches (e.g. the one by Young and Krishnamurthy, 2018) for pixelating inputs while offering explicit control on connectivity levels.
### Geospatial Consistency
Many of the physical behaviours and correlated features of the modelled phenomena heavily depend on the notion of closeness that is naturally represented by the topological properties of data and models. Loss of global topological properties such as connectedness & closure during the geospatial data modelling process can be detrimental to the usability of the final results of the spatial analysis and simulation procedures. Preservation of the global topological properties ultimately requires consistency at the level of the local topology of the digital spatial objects that are defined based on the notion of adjacency between the cells of the background space. Additionally, maintaining geographic consistency in the modelling process requires the preservation of topological relations of objects as well as the existence of a reversible geospatial indexing schema to ensure that tiled or dissected models can be collated consistently together to make models of larger areas. When integrating different geographical data sources, it is essential to measure the
similarity of objects, e.g. a road represented by a polyline in one map and the same road represented by a polygonal region in another, by comparing their topological properties at a certain level of detail Belussi et al., 2005.
The issue of topological consistency is not unique to raster representation. In fact, it can be more challenging to preserve topological consistency in workflows requiring multi-scale vector data that needs to be generalized (lowering the geometric level of detail in the geospatial analysis is referred to as generalization), as explained by van der Poorten et al., 2002. To provide for a consistent query structure across vector and raster data models, Egenhofer and Sharma, 1993 investigated the similarity of topological properties in vector and raster data; proposing that the topological properties in \(\mathbb{R}^{2}\) are a subset of topological properties in \(\mathbb{Z}^{2}\). Correspondingly, there have been various approaches to the unification of the notion of topological properties between raster and vector data, such as the mathematical work by Winter and Frank, 2000 and the work on data models by Voudouris, 2010. While the term geospatial usually has the connotation of referring to outdoor large-scale 2D domains, a relatively new line of research on 3D indoor models of large and complex buildings shows the importance of geospatial consistency in small-scale 3D spatial analyses, see, e.g. this work on indoor navigation by Gorte et al., 2019.
## 3 Proposed Framework
The central point of the proposed framework is the reintroduction of graphs and [regular] cell complexes as locally-linear and globally non-linear discrete models of manifolds. This is not only for discrete geometry processing methods (see this seminal book by Klette and Rosenfeld, 2004) to benefit from image processing techniques (see this presentation L. Chen, 2004) but also for Topological Data Analysis (see a topology oriented introduction by Zomorodian, 2012, and an applied statistics-oriented introduction by Wasserman, 2016), Manifold Learning (see an introduction by Izenman, 2012), and algorithm design for algebraic Simulations (in particular for solving Partial Differential Equations (PDE), e.g. using graph theoretical formulations of the Finite Element Method similar to those proposed by Christensen, 1988.)
In this regard, we reflect on the general utility of graphs as _interpolation objects_ that provide locally linear interpolation spaces for manifold spaces that are globally non-linear and thus irreducible to linear spaces. The utility of graphs and simplicial complexes as discrete interpolation objects (or approximate manifolds) must be evident due to the guaranteed linearity of k-simplexes; however, 2D quadrilateral cells or 3D hexahedral cells might be non-planar in general and thus require more sophisticated interpolations. On the other hand, k-cubes (interpolation objects in voxel graphs) are rendered planar due to the regularity of their grids, thus admitting multilinear interpolations on par with barycentric interpolations in simplexes (see Table 1 for a summary).
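As a small illustration of the multilinear case, the sketch below performs trilinear interpolation inside a single voxel cube; the corner ordering by local (i, j, k) offsets is our own convention for the example, not a prescription of the paper.

```
import numpy as np

def trilinear(c, s, t, u):
    """Interpolate inside a unit cube from its 8 corner samples.

    c       : (2, 2, 2) array, c[i, j, k] is the sample at local corner (i, j, k).
    s, t, u : local coordinates in [0, 1] along the x, y, and z directions.
    """
    c00 = c[0, 0, 0] * (1 - s) + c[1, 0, 0] * s
    c10 = c[0, 1, 0] * (1 - s) + c[1, 1, 0] * s
    c01 = c[0, 0, 1] * (1 - s) + c[1, 0, 1] * s
    c11 = c[0, 1, 1] * (1 - s) + c[1, 1, 1] * s
    return ((c00 * (1 - t) + c10 * t) * (1 - u)
            + (c01 * (1 - t) + c11 * t) * u)
```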
To recapitulate, the local linearity of graphs and hyper-graphs or topological cell complexes in their edges or k-cells (k-dimensional finite elements such as surface elements and volume elements) is a major advantage for a wide range of algorithms in that it allows for breaking down a complex and non-linear space into bounded linear subspaces whose data
\begin{table}
\begin{tabular}{c c|l l l} \hline \hline \multirow{2}{*}{Representation Type} & \multirow{2}{*}{Field Object} & \multicolumn{3}{l}{combinatorial topological objects for constructing graphs and/or cell complexes} \\ \cline{2-4} & Field Samples & 1D-Manifold & 2D-Manifold & 3D-Manifold \\ & & 1D Interpolation & 2D Interpolation & 3D Interpolation \\ \hline K-Cube Objects & Voxel & Segment & Square & Cube \\ & Regular Complexes, & & (or 2 triangles) & (or 6 tetrahedrons) \\ Multi-Linear Interpolations & & Linear & Bilinear & Trilinear \\ Regular Networks & Voxel Cloud & Raster Curve & Raster Surface & Raster Volume \\ K-cube Level-Sets (Contours) & & Marching Lines: & Marching Squares: & Marching Cubes: \\ & & Contour Points & Contour Curves & Contour Surfaces \\ \hline K-Simplex Objects & Cell & Line-Edge & Tri-Face & Tet-Cell \\ Irregular Simplices, & \(\rightarrow\) & 1-Simplex & 2-Simplex & 3-Simplex \\ Barycentric Interpolations & & & \\ Irregular Networks & Point Cloud & Line Network & Surface Mesh & Volumetric Mesh \\ K-Simplex Level-Sets (Contours) & & Marching Lines: & Marching Triangles: & Marching Tetrahedra: \\ & & Contour Points & Contour Curves & Contour Surfaces \\ \hline Intersection Targets & Hyper-Plane & Plane & Line & Point \\ & 3D Intervals & 2D Intervals & 1D Intervals & 0D Intervals \\ \hline \hline \end{tabular}
\end{table}
Table 1: Piece-wise Linear Data Models for approximating non-manifold and manifold spatial domains in which fields can be sampled, interpolated, and analysed, note that non-manifold spatial domains can be generally represented by point-clouds or voxel-clouds as simplicial/multi-linear complexes
models can be represented simply as tuples of integers, literally. While we have chosen to emblematically reintroduce graphs and hyper-graphs (a.k.a. topological cell complexes) as to their utility for interpolations, in fact, their utmost utility is for elegantly defining sparse differential and integral operators, specifically discrete differential operators such as the gradient, divergence, and Laplace-Beltrami operator or Riemannian integral operators.
What all the applications listed above have in common is that one often needs to work with a finite number of samples of a scalar-valued or vector-valued function (also known as a scalar field or a vector field.) These samples are typically extracted from a non-trivially shaped spatial region, where there is an assumption that this region (known as the [input] domain of the function) is a d-dimensional manifold (typically a 2-manifold or 3-manifold). A d-manifold is a globally [non-linear] region in a Euclidean space that is locally homeomorphic or similar to a Euclidean space of dimension \(d\) or \(\mathbb{R}^{d}\), thus locally linear and smooth enough such that its tangent spaces at small-enough neighbourhoods can be approximated by linear spaces such as simplexes (1D line-segments, 2D triangles, or 3D tetrahedrons) or [straight] k-cubes (1D line-segments, 2D squares, 3D cubes). This means that an arbitrary point in the manifold cannot be expected to be constructible from an interpolation of a finite number of points from the manifold in general. However, the manifoldness means that in the small neighbourhoods of the points in the manifold region, such linear interpolations would cover the local neighbourhood space, i.e. every point in the local neighbourhood can be addressed and obtained with interpolation parameters in a fashion similar to coordinates in a Euclidean space.
The so-called _piece-wise_ linear geometric models are indispensable for many engineering and scientific computing applications concerned with PDE. This is common, inter alia, in Finite Element Analysis (FEA), where some quantities pertaining to a finite set of vertices are computed as solutions to a PDE, which can then be interpolated in the finite surface or volume elements between the vertices for obtaining a smooth picture of the field in question. This makes it possible to compute the results of the equations on a finite set of points while having the opportunity to interpolate and obtain smooth results as required per application. Note that the interpolation objects defined here on digital images (pixel grids or voxel grids) are dual to the k-cells of the discrete manifold objects in question, unlike simplicial complexes where the k-cells are the same as interpolation objects (i.e. triangles of the marching triangles will be the same as faces and so forth). To understand this, loosely speaking, consider the Poincare duality of the combinatorial cube objects in the Marching Cubes algorithm by Lorensen and Cline, 1987 to the voxels in the raster (digital 3D image) or the duality of the combinatorial square objects to the pixels in the Marching Squares algorithm (see a related introduction to the concept of duality by Lee and Kwan, 2005).
In TDA, interpolation (and extrapolation) can be considered the ultimate aim of the analytic procedure with the objective to predict the properties of data points between or around those sampled before. To interpolate scalar/vector fields between the spatial data points, there must exist some topological objects of higher dimensions. These higher-dimensional topological objects are best understood in the language of Algebraic Topology (a.k.a. Combinatorial Topology see the definition in Zomorodian, 2012 or Hatcher, 2009).
In the following subsections, we describe the importance of topological properties of manifolds, the discrete way to model or approximate them with combinatorial objects, what these objects are with regards to the bigger picture of the global topological properties of the manifold space, as well as the topological similarity of these manifolds to such things as disks, washers, balls, tori, and alike, as described in the Euler-Poincare characteristic of their discrete representations. Finally, the last subsections will present the basis of the proposed methods for graph construction and the derivation of the differential/integral operators on voxel graphs.
### Point Set Topology & Algebraic Topology
In the two broad categories of use cases introduced above as TDA (particularly spatial data analysis) and spatial Simulations (i.e. first principles formulated as PDEs), the topology of the object under study is key to the way it conducts flows; be it the flow of fluids, heat, forces, pedestrians, or electrical current. Such flows are core concepts in defining the system's state in simulations, or the geodesic distances through the manifold that shapes the basis of _spatial analysis_ and set it apart from other data analyses. Evidently, the way a manifold conducts or resists (admits or impedes) such flows depends on its topology or network structure as to which essential analogies between graphs and electrical circuits or hydraulic networks can be made to utilize the fundamental theorems of Kirchhoff for computing flows in circuits.
In practice, the prerequisite of flow analysis or flow simulation is constructing a topological picture of theoretically continuous manifold spaces using discrete data points for making networks (graphs) or discrete manifolds (cell complexes). A common challenge is the correspondence of the global topological properties of the discrete manifold model with the supposedly continuous domain it models. A key concept that connects the seemingly disparate worlds of continuous manifolds and discrete manifolds is the notion of a locus or a point-set as defined in terms of a crisp membership function or a predicate defining the explicit (i.e. based on analytical equations for the coordinates) or implicit (i.e. a level-set description of a function only dealing with a predicate pertaining to the scalar or vector attributes of points) characteristics of points belonging to the point-set. Two definitions of topology are particularly relevant to
the discussion here: one concerning the topological properties of the global picture of the objects in terms of such things as their genera, inner shells, and alike, as well as another one concerning the topological constructs building a discrete picture of the local neighbourhoods in the computational representation of the objects. The first sense is, in fact, General Topology or Point-Set Topology, which regards objects as point-sets or loci. Of course, the point-set topological properties of single objects affect the spatial relations between such spatial objects, e.g. see a paper by Egenhofer and Franzosa, 1991 on the subject of spatial relations between spatial regions in 2D, and a comprehensive introduction to the 3D cases by Zlatanova, 2015. These properties, however, are known to be scale-dependent, as evident in the idea of Persistent Homology (the core of TDA in Computational Topology, see an explanation by Edelsbrunner and Harer, 2022), which can be described as the study of the topological _big picture_ of a data set, in a manner of speaking. Mathematically, the topological big picture of a manifold spatial region is summarised and coded into its Euler-Poincare characteristic, which in the case of 2-manifolds without borders or cavities is defined as:
\[\chi(\mathcal{M})=V-E+F=2-2g. \tag{1}\]
Leonhard Euler originally proposed the left-hand side (LHS) of this equation for describing the topology of polyhedrons, referring for the first time to the combinatorial topological constructs (vertices, edges, and faces) constituting the digital model of the object (an algebraic topology picture). Henri Poincare added the right-hand side (RHS) to refer to the global topological properties of the object, such as the number of genera/tunnels (denoted as \(g\)), cavities or shells (not present in this simple formula), thus describing a picture of the entire point set topology as a locus. Naturally, the LHS is easier to check and obtain for a computer, while the RHS is supposedly easier to compute visually for a human. Ascertaining the equality of the two, i.e. the supposed value of the RHS with the actual value of the LHS, is a matter of topological validation of data. See an extended version of the equation and ways to compute the challenging RHS for validating the topology of voxelated manifolds as proposed by Sanchez-Cruz et al., 2013, a thorough introduction to the ideas of topological validity and sufficiency in K. Weiler, 1985, K. J. Weiler, 1986, and a comprehensive introduction to the generalization of the idea to non-manifold objects by Masuda, 1993.
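As a small illustration of checking the LHS against the expected RHS, the Euler characteristic of a closed, orientable triangle mesh (no borders or cavities) can be computed from its face list alone; the function name and the array convention are ours.

```
import numpy as np

def euler_characteristic(faces):
    """chi = V - E + F for a triangle mesh given as an (F, 3) array of vertex indices."""
    V = np.unique(faces).size
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    E = np.unique(np.sort(edges, axis=1), axis=0).shape[0]
    F = faces.shape[0]
    return V - E + F

# For a closed 2-manifold without borders or cavities, the implied genus is g = (2 - chi) / 2.
```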
The prerequisite of understanding such a topological big-picture in a digital setting is that one needs to have made a simplicial complex or a topological cell complex of the dataset that contains algebraic or combinatorial topological objects known as k-cells (see 1) defining the edges or hyper-edges as interpolation objects between data points. In this sense, simplicial complexes such as the Vietoris-Rips Complexes or Alpha Complexes generalize the idea of graphs to hyper-graphs.
In summary, it suffices to say that without an implicit notion of _natural topology_ amongst some data points, they together represent nothing more than a shapeless _powder_. It is precisely this notion of topology that allows one to conceive of the existence of points in between sampled points. However natural or trivial this notion of topology may seem, this intuition about the existence or meaningfulness of points in between points can lead to mistakes in dealing with non-trivially shaped spatial regions or manifolds, which are locally Euclidean thus leading our intuition in this direction but globally non-Euclidean. An archetypical example of such common mistakes is the assumption that two points whose coordinates are close in the Euclidean/Cartesian sense are actually close in space while they might be on the two sides of a river and only connected through a non-trivial manifold of roads of high genera (with many holes, so to speak) thus actually far apart in the sense of geodesic flows. In the following sections we shall see how the proposed workflow can simplify the explicit construction of models of topology for unambiguously describing the connections between voxels; explicit models defined as incidence and adjacency matrices representing bipartite or unipartite graphs explicitly encoding topological links amongst the k-dimensional cells of topological cell complexes made up of voxels (see Section 4.3 for algorithmic details, Table 2 for a summary, and Section 4.4).
### Topological Complexes As The Interpolation Space
The so-called k-cells are combinatorial constructs defined in algebraic topology (q.v. Hatcher, 2009), which are introduced under the umbrella of computational topology, see a short introduction by Zomorodian, 2009, and two books by Zomorodian, 2005 & Edelsbrunner and Harer, 2022. The cells introduced in this paper are voxel links as line segments, the squares of the marching squares, and the cubes of the marching cubes (see Table 1).
Combinatorial topological cell complexes can be made by _glueing together_ linear cells, hence the name piece-wise linear representations [of regions in space]. This framework suggests utilizing combinatorial graphs constructed from the commonalities (adjacency-relations between same-dimensional cells or incidence relations between different-dimensional cells) as a convenient replacement for the so-called _natural topology_ of the Euclidean space (open balls or open intervals). While the construction of a navigable topological object (a graph or hyper-graph) in the Euclidean space is generally hard or ambiguous, as elaborated in the next section, in the completely discretized world of voxels, such graphs or hyper-graphs can be easily and unambiguously constructed algebraically from incidence matrices as proposed in Table 2.
Graphs are the most common and versatile topological models of spatial regions consisting of only 0D-Vertices and 1D-Edges (geometric graphs are often denoted as an ordered pair of vertices and edges dubbed \(G=(V,E),\ E\subset V\times V\)). However, there do also exist higher-dimensional topological models of spatial regions with surface elements or volume elements such as simplicial complexes (triangular or tetrahedral meshes, generally denoted as ordered pairs of vertices, edges, faces, dubbed as \(M=(V,E,F),E\subset V\times V,F\subset V\times V\times V\)). The necessity of higher-dimensional k-cells in our proposed framework is, respectively, for interpolation or iso-curve construction on 2D slice pictures of 3D images and iso-surface construction out of 3D scalar fields. A thorough introduction to such combinatorial objects for meshes can be found in K. Weiler, 1985.
### Graph Construction
Constructing proximity graphs or topological cell complexes for approximating manifolds from point samples is generally a difficult task, especially on point clouds. The two famous methods of constructing the so-called epsilon-graphs and k-nearest neighbour graphs are only two examples of a larger family of neighbourhood definitions for constructing proximity graphs (for general-purpose TDA: epsilon-ball graphs, KNN graphs, Gabriel Graphs, Relative Neighbourhood Graphs, beta-skeletons; or, for specific applications, unit disk graphs for modelling ad-hoc telecommunication networks). The variety of the methods should already convey the difficulties of obtaining a persistent topological picture from such unstructured data. On the contrary, as shall be presented in this paper, constructing graphs on voxel data is elegant and straightforward compared to point clouds. In addition, the Cartesian regularity of voxel graphs makes it easy to decipher geometric information from topological information contents. This makes voxel graphs particularly appealing for constructing differential/integral operators.
Based on a given voxel cloud, various neighbourhoods can be defined. In this paper, we propose to use a computational object called a _stencil_ that represents a graph that can function as a kernel on the voxel cloud. Like kernels in image processing, the stencil describes a set of conditions based on relative indices, which can be checked by moving the kernel over the image, similar to a discrete convolution. However, in contrast to the discrete convolution, where the output is a pixel, the stencil also describes a set of edges or hyper-edges that will be constructed if the conditions hold. Naturally, the edges refer to the relative indices. Therefore, the graph representing the local topology of the model can be constructed. Stencils are easily generalizable to higher-dimensional cells as they are essentially graphs. Furthermore, stencils can be customized to represent different local topologies to fit the particular application case.
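A minimal sketch of applying a stencil to a voxel cloud to obtain an edge list is shown below; it uses a 6-neighbour (von Neumann) stencil and assumes the voxels are given as integer (i, j, k) index triplets. The names are illustrative and the data model is simplified with respect to the stencil object of the topoGenesis package.

```
import numpy as np

# Face-adjacency (von Neumann) stencil: relative index offsets defining candidate edges.
STENCIL = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

def voxel_graph_edges(voxels):
    """Edges (pairs of row indices into `voxels`) between face-adjacent voxels.

    voxels : (N, 3) integer array of voxel indices.
    """
    index = {tuple(v): i for i, v in enumerate(voxels)}
    edges = []
    for i, v in enumerate(voxels):
        for offset in STENCIL:
            j = index.get(tuple(v + offset))
            if j is not None:           # the stencil condition holds: neighbour exists
                edges.append((i, j))
    return np.array(edges)
```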
### Discrete Vector Calculus Operators for Voxel Graphs
Here we briefly explain the motivation and the basis for the derivation of the proposed differential/integral operators for voxel graphs. A particular operator of interest is the versatile Laplacian Operator (a.k.a. the Laplace-Beltrami Operator), known in discrete settings firstly as the Combinatorial Laplacian for graphs, which has been extensively discussed in the literature of Spectral Graph Theory Chung, 1997; Nourian, 2016a; Spielman, 2007. Another important and versatile variant of the Laplace-Beltrami operator is the Mesh Laplacian, defined on discrete simplicial (triangular) surfaces for Computer Graphics applications (mostly _mesh processing_, see Botsch et al., 2010) Sorkine, 2005.
Spectral Graph Theory (SGT) is mostly focused on the connection between topological or metric properties of graphs and Markov Chains with the spectrum (eigen-pair) of their Laplacian matrices, which have achieved remarkable results in application areas such as Random Walk Simulations, Web Indexing for large-scale search applications, Clustering, Signal Processing, and Machine Learning on graphs. Spectral Mesh Processing (SMP) focuses on relating common operations on meshes such as smoothing, simplification, segmentation, interpolation, diffusion and so on to the spectrum of the typically cotangent Laplacian defined for discrete triangular surfaces Levy and Zhang, 2010; H. Zhang et al., 2010. Nevertheless, it is rather uncommon in SGT or SMP literature to derive the Laplace operator
\begin{table}
\begin{tabular}{|c||c||c|c|c|} \hline Combinatorial Graphs & 0D-Vertexes & 1D-Edges & 2D-Faces & 3D-Cells \\ \hline
0D-Vertexes & \(\mathbf{A}_{VV}\) & \(\mathbf{\widetilde{M}}_{VE}\) & \(\mathbf{M}_{VF}\) & \(\mathbf{M}_{VC}\) \\ \hline
1D-Edges & \(\mathbf{\widetilde{M}}_{EV}\) & \(\mathbf{A}_{EE}\) & \(\mathbf{\widetilde{M}}_{EF}\) & \(\mathbf{\widetilde{M}}_{EC}\) \\ \hline
2D-Faces & \(\mathbf{M}_{FV}\) & \(\mathbf{\widetilde{M}}_{FE}\) & \(\mathbf{A}_{FF}\) & \(\mathbf{\widetilde{M}}_{FC}\) \\ \hline
3D-Cells & \(\mathbf{M}_{CV}\) & \(\mathbf{\widetilde{M}}_{CE}\) & \(\mathbf{\widetilde{M}}_{CF}\) & \(\mathbf{A}_{CC}\) \\ \hline \end{tabular}
\end{table}
Table 2: _Combinatorial Graphs between topological primitives up to dimension 3: the diagonal entries represent adjacency relations among complexes of the same dimension, whereas the off-diagonal entries represent incidence relations among complexes of different dimensions._
directly and explicitly from the combination of its constituents, i.e. the Gradient operator and the Divergence operator. Here, due to our attention to the use of all of these operators in discretizing PDE and solving them, we pay particular attention to the physical dimension and the physical interpretation of the operators constituting the Laplacian operator. Additionally, we also define a Riemannian integral operator for discrete functions sampled on voxelated domains.
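As a hedged illustration (not the exact operators derived later in the paper), the sketch below builds the signed edge-vertex incidence matrix of a voxel graph and notes how gradient, divergence, and the combinatorial Laplacian follow from it; the scaling by the grid spacing h carries the physical dimension of a spatial derivative.

```
import numpy as np
from scipy.sparse import csr_matrix

def incidence_matrix(edges, n_vertices):
    """Signed edge-vertex incidence matrix M with (M f)_e = f[head] - f[tail]."""
    n_edges = len(edges)
    rows = np.repeat(np.arange(n_edges), 2)
    cols = np.asarray(edges).ravel()
    vals = np.tile([-1.0, 1.0], n_edges)
    return csr_matrix((vals, (rows, cols)), shape=(n_edges, n_vertices))

# For a vertex function f and an edge function g on a grid of spacing h:
#   gradient:   (M @ f) / h
#   divergence: -(M.T @ g) / h
#   Laplacian:  L = M.T @ M   (divergence of the gradient, up to sign and 1/h^2 scaling)
```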
## 4 Methodology
As it has been outlined earlier, the proposed workflow in this paper consists of four main steps which will be described in the following subsections: Mesh Sampling, Topological Voxelization, Graph Construction, and Derivation of Discrete Differential/Integral Operator. The Mesh Sampling step is an extension of the topological voxelization idea proposed by Laine, 2013 and the algorithms of Nourian, Goncalves, et al., 2016 which aims to take topologically adequate samples based on the idea of Poincare Duality between \(k\)-dimensional objects and \((n-k)\)-dimensional objects embedded in an n-dimensional Euclidean space (\(\mathbb{R}^{n}\)) from input data models; i.e. extract 0D (point cloud) samples from 3D (volumetric mesh), 2D (surface mesh), and 1D (line networks) manifolds embedded in \(\mathbb{R}^{3}\) in such a way that the corresponding topological properties can be derived from the samples (see Figure 3). This step includes three separate algorithms for each data type: Algorithm 1 for line-networks, Algorithm 2 for mesh surfaces, and Algorithm 3 for volumetric mesh. The Topological Voxelization step voxelates, shifts, and encodes the sampled point clouds into voxel clouds that are represented by globally unique spatial Morton indices (see Figure 3 and Algorithm 4). The Graph Construction step derives the topological properties of the original mesh based on a given stencil and encapsulates them in a hyper-graph that can function as the interpolation space (see Figure 6 and Algorithm 5). The last step derives the discrete integration, differentiation, and interpolation operators on the hyper-graph domain (see Figure 6).
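A compact sketch of the Morton (Z-order) indexing used in the Topological Voxelization step is given below; it assumes non-negative voxel indices of at most 21 bits per axis and uses the standard bit-interleaving constants (the function names are ours, not those of Algorithm 4).

```
def _spread_bits(x: int) -> int:
    """Spread the lower 21 bits of x so that two zero bits separate consecutive bits."""
    x &= 0x1FFFFF
    x = (x | x << 32) & 0x1F00000000FFFF
    x = (x | x << 16) & 0x1F0000FF0000FF
    x = (x | x << 8) & 0x100F00F00F00F00F
    x = (x | x << 4) & 0x10C30C30C30C30C3
    x = (x | x << 2) & 0x1249249249249249
    return x

def morton3(i: int, j: int, k: int) -> int:
    """Interleave the bits of (i, j, k) into a single globally unique Morton index."""
    return _spread_bits(i) | (_spread_bits(j) << 1) | (_spread_bits(k) << 2)
```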
### Mesh Sampling
The Mesh Sampling algorithms aim to sample an irregular point cloud from 1D, 2D, or 3D manifolds represented by meshes embedded in \(\mathbb{R}^{3}\). Thus, in this section, we propose three algorithms to sample a point cloud from Line Networks (Algorithm 1), Mesh Surfaces (Algorithm 2), and Volumetric Meshes (Algorithm 3) in a manner that retrieving the topological properties of the original mesh is relatively easy in the graph construction step (see Section 4.3). These algorithms can be formalized as a mapping from a manifold domain in \(\mathbb{R}^{3}\) to a point cloud in \(\mathbb{R}^{3}\). In each of the algorithms, we construct a set of intersection objects with the \((D-d)\) dimensions, where \(d\) is the highest number of dimensions in mesh elements and \(D=3\) as the embedding space is \(\mathbb{R}^{3}\). Later in the process, we will voxelize the point cloud to achieve a voxel cloud (in \(\mathbb{Z}^{3}\)) and reconstruct the topological properties of the original mesh from that voxel cloud. To be able to reconstruct the topological structure of mesh, we need to arrange the intersection objects according to the most basic connectivity type in the stencil used in the graph construction method (see Appendix 8).
#### 4.1.1 Line Network Sampling
The first algorithm describes a Line Network Sampling using planes as intersection objects. We extend the Ray-Triangle intersection of Moller and Trumbore to a line-plane intersection algorithm that reduces the computational cost by exploiting the regularity of the voxel-grid Moller and Trumbore, 1997. As Nourian, Goncalves, et al. have shown, different intersection objects can capture different kinds of connectivity within the original mesh Nourian, Goncalves, et al., 2016. We need to construct the intersection objects in accordance with the simplest form of connectivity type (see Table 8) in the stencil that the Graph Construction (Algorithm 5) uses further down the pipeline. In the algorithm below, we utilize the _Conservative voxelization_. Thus the intersection planes are aligned with voxel boundaries; in an example grid of \(m\) by \(n\) by \(o\) voxels, respectively aligned with the \(X\), \(Y\), and \(Z\) axes. Consequently, we need \(m+1\), \(n+1\), and \(o+1\) such planes to sample the input line network within a bounding grid.
To extend the algorithm of Moller and Trumbore, we need to find the intersection of two loci: the line locus (\(\mathbf{p}^{\prime}=\mathbf{p}+r\mathbf{d},r\in[0,1],d:=\mathbf{v}_{1}- \mathbf{v}_{0}\)) and plane locus (\(\mathbf{p}^{\prime}=\mathbf{c}+s\mathbf{u}+t\mathbf{v},s,t\in[0,1]\)); thus \(\mathbf{p}-\mathbf{c}=-r\mathbf{d}+s\mathbf{u}+t\mathbf{v}\). Note that here we are looking at a bounded frame made up of two principal vectors of the exact length of the bounded frame in the \(u\) and \(v\) directions. Therefore, intersection parameters (\(r,s,t\)) out of these ranges (i.e. \([0,1]\)) will not be acceptable. We can rewrite our equality algebraically:
\[[-\mathbf{d},\mathbf{u},\mathbf{v}]_{3\times 3}\begin{bmatrix}r\\ s\\ t\end{bmatrix}=[\mathbf{p}-\mathbf{c}]_{3\times 1} \tag{2}\]
This is a system of linear equations in the form of \(\mathbf{Ax}=\mathbf{b}\), that we are going to solve analytically by forming the inverse matrix \(\mathbf{A}^{-1}\). With this approach, we can speed up the computation not only because of the closed-form nature
of the solver but also because the coefficients of the equation \(\mathbf{A}:=[-\mathbf{d}|\mathbf{u}|\mathbf{v}]_{3\times 3}\) (and thus \(\mathbf{A}^{-1}\)) will be the same for each batch of planes perpendicular to the axes X, Y, and Z of the voxelization domain.
\[\begin{bmatrix}r\\ s\\ t\end{bmatrix}=\mathbf{A}^{-1}[\mathbf{p}-\mathbf{c}]_{3\times 1}=\mathbf{A}^{-1} \mathbf{b}=\frac{1}{-|\mathbf{d},\mathbf{u},\mathbf{v}|}\begin{bmatrix}+| \mathbf{b},&\mathbf{u},&\mathbf{v}|\\ -|\mathbf{d},&\mathbf{b},&\mathbf{v}|\\ -|\mathbf{d},&\mathbf{u},&\mathbf{b}|\end{bmatrix} \tag{3}\]
To be able to exploit the similarity of the parallel planes we introduce \(\mathbf{w}:=\mathbf{u}\times\mathbf{v}\) and \(\mathbf{e}:=\mathbf{d}\times\mathbf{b}\) and rewrite the determinants in equation 3 as the following:
\[\begin{bmatrix}r\\ s\\ t\end{bmatrix}=\frac{1}{-\mathbf{d}^{T}\mathbf{w}}\begin{bmatrix}+\mathbf{b}^{T }\mathbf{w}\\ -\mathbf{e}^{T}\mathbf{v}\\ +\mathbf{e}^{T}\mathbf{u}\end{bmatrix} \tag{4}\]
At this point, we lay out Algorithm 1, which iterates over the axes, intersection planes, and lines to exploit their similarity. Accordingly, we compute and check \(\mathbf{w},\delta,\mathbf{b},r,\mathbf{e},s,t\) in this order to ensure that no computation is wasted when the intersection falls outside the boundaries of the line segment and the frame.
### Algorithm 1: Line Network Sampling
```
1:  LineNetworkSampling(\(\mathcal{L}\), \(\mathcal{P}\), \(\boldsymbol{\sigma}\))
2:  \(\mathbf{B}_{\mathcal{L}}=\lfloor BoundingBox(\mathcal{L})\oslash\boldsymbol{\sigma}\rceil\)  // divide the bounding box of the mesh in \(\mathbb{R}^{3}\), expressed in the \(\mathcal{P}\) frame, by \(\boldsymbol{\sigma}\) element-wise, then round to the nearest integer \(\in\mathbb{Z}^{3}\): \([[i_{min},j_{min},k_{min}],[i_{max},j_{max},k_{max}]]\)
3:  \([m,n,o]=\mathbf{B}_{\mathcal{L}}[1]-\mathbf{B}_{\mathcal{L}}[0]\)
4:  initialize \(\mathbf{X}=[]\)
5:  for each axis \(a_{n}\in\{0,1,2\}\) do
6:      \(a_{r}=(a_{n}+1)\%3\)  // identify the right axis
7:      \(a_{f}=(a_{n}+2)\%3\)  // identify the front axis
8:      \(\mathbf{u}=\text{diag}([m,n,o]^{T})\boldsymbol{\sigma}[a_{r},:]\)
9:      \(\mathbf{v}=\text{diag}([m,n,o]^{T})\boldsymbol{\sigma}[a_{f},:]\)
10:     \(\mathbf{w}:=\mathbf{u}\times\mathbf{v}\)  // compute \(\mathbf{w}\) once for all the planes along the current axis \(a_{n}\)
11:     for each plane \(\boldsymbol{p}=(\mathbf{c}_{k},\mathbf{u},\mathbf{v})\) perpendicular to \(a_{n}\), enumerated by \(k\in\{1,...,[m,n,o]^{T}[a_{n}]\}\) do
12:         \(\mathbf{c}_{k}[a_{n}]=\mathbf{B}_{\mathcal{L}}[a_{n},0]+(k-0.5)\boldsymbol{\sigma}[a_{n}]\)
13:         \(\mathbf{c}_{k}[a_{r}]=\mathbf{B}_{\mathcal{L}}[a_{r},0]\)
14:         \(\mathbf{c}_{k}[a_{f}]=\mathbf{B}_{\mathcal{L}}[a_{f},0]\)
15:         for each line \(\boldsymbol{l}=(\mathbf{p},\mathbf{d})\) in \(\mathcal{L}\) do
16:             \(\delta:=-\mathbf{d}^{T}\mathbf{w}\)  // compute the determinant
17:             if \(\delta\in[-\epsilon,+\epsilon]\) then continue with next line;
18:             \(\mathbf{b}:=\mathbf{p}-\mathbf{c}_{k}\)
19:             if \(r:=\frac{\mathbf{b}^{T}\mathbf{w}}{\delta}\notin[0,1]\) then continue with next line;
20:             \(\mathbf{e}:=\mathbf{d}\times\mathbf{b}\)
21:             if \(s:=-\frac{\mathbf{e}^{T}\mathbf{v}}{\delta}\notin[0,1]\) then continue with next line;
22:             if \(t:=\frac{\mathbf{e}^{T}\mathbf{u}}{\delta}\notin[0,1]\) then continue with next line;
23:             \(\mathbf{x}:=\mathbf{p}+r\mathbf{d}\)
24:             append \(\mathbf{x}\) to \(\mathbf{X}\)
```
**Algorithm 1** Line Network Sampling
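The closed-form solve of equation 4 maps directly to vectorized code. Below is a minimal NumPy sketch of the per-line intersection test (not the topoGenesis implementation; the function name and the frame convention are illustrative assumptions):

```
import numpy as np

def intersect_segment_frame(p, d, c, u, v, eps=1e-9):
    """Intersect the segment p + r*d (r in [0, 1]) with the bounded frame
    c + s*u + t*v (s, t in [0, 1]), following equation (4).
    Returns the sample point, or None if there is no valid intersection."""
    w = np.cross(u, v)            # shared by every plane parallel to (u, v)
    delta = -np.dot(d, w)         # determinant of A = [-d | u | v]
    if abs(delta) < eps:          # segment (nearly) parallel to the plane
        return None
    b = p - c
    r = np.dot(b, w) / delta
    if not 0.0 <= r <= 1.0:
        return None
    e = np.cross(d, b)
    s = -np.dot(e, v) / delta
    t = np.dot(e, u) / delta
    if not (0.0 <= s <= 1.0 and 0.0 <= t <= 1.0):
        return None
    return p + r * d              # sample point on the line network
```

Because \(\mathbf{w}\) depends only on the plane orientation, it can be hoisted out of the plane loop exactly as Algorithm 1 does, and \(\delta\) only needs to be recomputed when the line changes.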
#### 4.1.2 Mesh Surface Sampling
The second algorithm describes a Mesh Surface Sampling process using lines as the intersection object. We utilize the mathematical extension of the ray-triangle intersection from the previous section (see equation 4) to exploit the regularity of the voxel grid. We need to construct the intersection objects in accordance with the simplest form of connectivity type (see Table 8) in the stencil that the Graph Construction (Algorithm 5) uses further down the pipeline. In the algorithm below, we utilize the _Conservative voxelization_; thus the intersection lines are aligned with voxel boundaries. In an example grid of \(m\) by \(n\) by \(o\) voxels, respectively aligned with the \(X\), \(Y\), and \(Z\) axes, we need \((n+1)\times(o+1)\), \((o+1)\times(m+1)\), and \((m+1)\times(n+1)\) intersection lines or rays to sample the input surface mesh within such a bounding grid. Figure 1 visualizes the ray origins and Figure 2 shows the extracted sampling of the mesh surface as a point cloud.
Utilizing the algebraic formulation of the intersection in equation 4, we lay out Algorithm 2, which iterates over the dimensions, triangles, and intersection lines to exploit their similarity. Accordingly, we compute and check \(\mathbf{w},\delta,\mathbf{b},r,\mathbf{e},s,t\) in this order to ensure that no computation is wasted when the intersection falls outside the boundaries of the line and the triangle.
```
1:  MeshSurfaceSampling(\(\mathcal{M},\mathcal{P},\boldsymbol{\sigma}\))
2:  \(\mathbf{B}_{\mathcal{M}}=\lfloor BoundingBox(\mathcal{M})\oslash\boldsymbol{\sigma}\rceil\)  // divide the bounding box of the mesh in \(\mathbb{R}^{3}\), expressed in the \(\mathcal{P}\) frame, by \(\boldsymbol{\sigma}\) element-wise, then round to the nearest integer \(\in\mathbb{Z}^{3}\): \([[i_{min},j_{min},k_{min}],[i_{max},j_{max},k_{max}]]\)
3:  \([m,n,o]=\mathbf{B}_{\mathcal{M}}[1]-\mathbf{B}_{\mathcal{M}}[0]\)
4:  initiate \(\mathbf{Q}\)
5:  for each triangle \((\mathbf{x_{0}},\mathbf{x_{1}},\mathbf{x_{2}})\in\mathcal{M}\) do
6:      \(\mathbf{u}=\mathbf{x_{1}}-\mathbf{x_{0}}\); \(\mathbf{v}=\mathbf{x_{2}}-\mathbf{x_{0}}\); \(\mathbf{c}=\mathbf{x_{0}}\)
7:      \(\mathbf{w}:=\mathbf{u}\times\mathbf{v}\)
8:      for each axis \(a_{m}\in\{0,1,2\}\) do
9:          \(a_{r}=(a_{m}+1)\%3\); \(a_{f}=(a_{m}+2)\%3\)  // identify right and front axis
10:         \(\mathbf{d}=\text{diag}([m,n,o]^{T})[a_{m},:]\)
11:         \(\delta:=-\mathbf{d}^{T}\mathbf{w}\)  // compute the determinant
12:         if \(\delta\in[-\epsilon,+\epsilon]\) then continue with next axis;
13:         for each voxel row in \(a_{r}\), enumerated by \(k_{0}\in\{0,1,...,[m,n,o]^{T}[a_{r}]\}\) do
14:             for each voxel column in \(a_{f}\), enumerated by \(k_{1}\in\{0,1,...,[m,n,o]^{T}[a_{f}]\}\) do
15:                 \(\mathbf{p}[a_{m}]=\mathbf{B}_{\mathcal{M}}[a_{m},0]\); \(\mathbf{p}[a_{r}]=k_{0}-0.5\); \(\mathbf{p}[a_{f}]=k_{1}-0.5\)  // ray origin
16:                 \(\mathbf{b}:=\mathbf{p}-\mathbf{c}\)
17:                 if \(r:=\frac{\mathbf{b}^{T}\mathbf{w}}{\delta}\notin[r_{min},r_{max}]\) then continue with next line;
18:                 \(\mathbf{e}:=\mathbf{d}\times\mathbf{b}\)
19:                 if \(s:=-\frac{\mathbf{e}^{T}\mathbf{v}}{\delta}\notin[s_{min},s_{max}]\) then continue with next line;
20:                 if \(t:=\frac{\mathbf{e}^{T}\mathbf{u}}{\delta}\notin[t_{min},t_{max}]\) then continue with next line;
21:                 \(\mathbf{q}[a_{m}]=r\); \(\mathbf{q}[a_{r}]=\mathbf{p}[a_{r}]\); \(\mathbf{q}[a_{f}]=\mathbf{p}[a_{f}]\)
22:                 add \(\mathbf{q}\) to \(\mathbf{Q}\)
```
**Algorithm 2** Mesh Surface Sampling Algorithm
#### 4.1.3 Volumetric Mesh Sampling
Volumetric meshes (3D) embedded in \(\mathbb{R}^{3}\) are mainly represented by the surface mesh that forms their boundary, with the condition that the boundary mesh is closed. Therefore, we adjust Algorithm 2 to sample the boundary mesh and construct intervals on each intersection line. Next, we discretize these 1D intervals to create samples within the volume.

More specifically, we alter the order of the iterations in Algorithm 3 to axis, intersection line, and triangle, as we need to construct the intervals on a per-line basis. The core idea, based on the winding number, is that a closed boundary always intersects a ray an even number of times, given that the ray extends across the bounding box. Consequently, pairs of intersection points create intervals on the line corresponding to a one-dimensional sample of the volumetric mesh, and by discretizing these intervals we generate enough samples to reconstruct the volumetric mesh's interior in the graph construction later; a minimal sketch of this parity test follows.
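As a hedged illustration of this parity rule, the following sketch (with illustrative names, not the topoGenesis API) marks the voxels of a single ray that lie inside the closed boundary, given the crossing parameters in voxel units:

```
import numpy as np

def fill_ray(crossings, n_voxels):
    """Voxel centres k = 0..n_voxels-1 are inside the volume exactly when an
    odd number of boundary crossings lie below them (even-odd/winding rule)."""
    r = np.sort(np.asarray(crossings, dtype=float))
    if len(r) % 2 != 0:
        raise ValueError("the boundary is not closed along this ray")
    ks = np.arange(n_voxels)
    inside = (np.searchsorted(r, ks, side="left") % 2) == 1
    return ks[inside]

# Example: crossings at 1.2 and 3.7 fill voxels 2 and 3 of a 6-voxel ray:
# fill_ray([1.2, 3.7], 6) -> array([2, 3])
```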
### Point Cloud Voxelization
Irregular point clouds (e.g. LIDAR images) represent samples in \(\mathbb{R}^{3}\) without any inherent topological structure that specifies the relation of the sample points. However, if these point clouds are the output of the mesh sampling proposed earlier, the topological properties of the original mesh can be retrieved from them. Nevertheless, this retrieval depends on using the same voxel unit size \(\mathbf{\sigma}\) and plane \(\mathcal{P}\) for Algorithm 4. This algorithm comprises three smaller steps: Voxelation, Shifting (Ioxelation), and Encoding (see Figure 3).

Voxelization (\(f\)) on its own is a function that takes in the voxel unit size and maps the points from \(\mathbb{R}^{3}\) to \(\mathbb{Z}^{3}\). This step is the core of the discretization process. It is important to note that the inverse function of voxelization \(f^{-1}\) does not reproduce the original points, as the fractional remainders of the coordinates, smaller than the voxel size \(\mathbf{\sigma}\), are lost:

\[Voxelate\ Points:f:\mathbb{R}^{3}\mapsto\mathbb{Z}^{3} \mathbf{v}:=f(\mathbf{p})=\lfloor\mathbf{p}\oslash\mathbf{\sigma}\rceil \tag{5}\] \[Poxelate\ Voxels:f^{-1}:\mathbb{Z}^{3}\mapsto\mathbb{R}^{3} \mathbf{p}:=f^{-1}(\mathbf{v})=\mathbf{v}\odot\mathbf{\sigma} \tag{6}\]

The next step is to shift the voxels into the first octant so that their coordinates in \(\mathbb{Z}^{3}\) map to \(\mathbb{N}^{3}\) and can be regarded as three-dimensional indices: Ioxelate (\(g\)). The necessary input for this function is the minimum corner of the voxel cloud (\(\mathbf{c}\)).
\[Ioxelate\ Voxels:g:\mathbb{Z}^{3}\mapsto\mathbb{N}^{3} \mathbf{\rho}:=g(\mathbf{v})=\mathbf{v}-\mathbf{c} \tag{7}\] \[Voxelate\ Ioxels:g^{-1}:\mathbb{N}^{3}\mapsto\mathbb{Z}^{3} \mathbf{v}:=g^{-1}(\mathbf{\rho})=\mathbf{\rho}+\mathbf{c} \tag{8}\]
The last step is encoding the three-dimensional indices into Morton code, using the three-dimensional interleaving method. This would effectively map the \(\mathbb{N}^{3}\) to \(\mathbb{N}\) which allows for globally unique indices for each voxel.
\[Encode\ Ioxels:h:\mathbb{N}^{3}\mapsto\mathbb{N} \iota:=h(\mathbf{\rho})=Morton(\mathbf{\rho}) \tag{9}\] \[Decode\ Indices:h^{-1}:\mathbb{N}\mapsto\mathbb{N}^{3} \mathbf{\rho}:=h^{-1}(\iota)=Morton^{-1}(\iota) \tag{10}\]
Altogether, they start from a point cloud and generate a voxel cloud represented by globally unique spatial Morton indexing. Figures 4 and 5 overlay the point cloud that is the process's input onto the voxel centroids and voxel domains that are the output.
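A minimal NumPy sketch of the three steps chained together (illustrative only; the Morton encoder is passed in as a callable and is sketched in Section 4.3):

```
import numpy as np

def voxelize_point_cloud(points, sigma, morton_interleave_3d):
    """points: (N, 3) array in R^3; sigma: (3,) voxel-size vector.
    Returns the global Morton indices and the minimum corner c, which is
    needed to invert the shift (equation 8)."""
    voxels = np.rint(points / sigma).astype(np.int64)        # equation (5)
    corner = voxels.min(axis=0)                               # minimum corner c
    ioxels = voxels - corner                                  # equation (7)
    indices = {morton_interleave_3d(int(x), int(y), int(z))
               for x, y, z in ioxels}                         # equation (9)
    return sorted(indices), corner
```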
Figure 1: Ray Origins. Figure 2: Sampled Point Cloud.
| Input | Data Type | Notes |
| --- | --- | --- |
| \(\mathcal{M}(\mathrm{V},\mathrm{F})\) | Mesh | Faces and vertices representing the boundary of a volumetric mesh. |
| \(\mathcal{P}\) | Plane in \(\mathbb{R}^{3}\) | An oriented plane in \(\mathbb{R}^{3}\) consisting of 1+2 vectors, respectively indicating the origin, the X-axis, and the Y-axis of the plane, with the default value as the global XY plane. |
| \([\mathbf{\sigma}]_{3\times 1}\) | Vector of float | Unit Size Vector: a vector whose components represent the desired voxel size in the X, Y, and Z directions. |
| **Output** | **Data Type** | **Notes** |
| \(\mathbf{\iota}\) | Array of int | Array of global spatial Morton indices. |
**Problem**: Given an array of points \(\mathbf{Q}\) oriented in plane \(\mathcal{P}\) and a vector of sizes \(\mathbf{\sigma}\), the goal is to obtain a set of Morton indices \(\mathbf{\iota}\subset\mathbb{N}\) as a discrete approximation of the point cloud in question, such that the set of voxels \(\mathcal{V}\) corresponding to the indices compactly and correctly represents the input point cloud. Correctness must be verifiable in terms of the point-set topological properties of the input point cloud and the output voxel cloud being on a par with one another.
```
1:  VolumetricMeshSampling(\(\mathcal{M},\mathcal{P},\boldsymbol{\sigma}\))
2:  \(\mathbf{B}_{\mathcal{M}}=\lfloor BoundingBox(\mathcal{M})\oslash\boldsymbol{\sigma}\rceil\)  // divide the bounding box of the mesh in \(\mathbb{R}^{3}\), expressed in the \(\mathcal{P}\) frame, by \(\boldsymbol{\sigma}\) element-wise, then round to the nearest integer \(\in\mathbb{Z}^{3}\): \([[i_{min},j_{min},k_{min}],[i_{max},j_{max},k_{max}]]\)
3:  \([m,n,o]=\mathbf{B}_{\mathcal{M}}[1]-\mathbf{B}_{\mathcal{M}}[0]\)
4:  initiate \(\mathbf{Q}\)
5:  for each axis \(a_{m}\in\{0,1,2\}\) do
6:      \(a_{r}=(a_{m}+1)\%3\); \(a_{f}=(a_{m}+2)\%3\)  // identify right and front axis
7:      \(\mathbf{d}=\text{diag}([m,n,o]^{T})[a_{m},:]\)
8:      for each voxel row in \(a_{r}\), enumerated by \(k_{0}\in\{0,1,...,[m,n,o]^{T}[a_{r}]\}\) do
9:          for each voxel column in \(a_{f}\), enumerated by \(k_{1}\in\{0,1,...,[m,n,o]^{T}[a_{f}]\}\) do
10:             initiate \(\mathbf{R}\)  // intersection parameters of the current ray
11:             for each triangle \((\mathbf{x_{0}},\mathbf{x_{1}},\mathbf{x_{2}})\in\mathcal{M}\) do
12:                 \(\mathbf{u}=\mathbf{x_{1}}-\mathbf{x_{0}}\); \(\mathbf{v}=\mathbf{x_{2}}-\mathbf{x_{0}}\); \(\mathbf{c}=\mathbf{x_{0}}\)
13:                 \(\mathbf{w}:=\mathbf{u}\times\mathbf{v}\)
14:                 \(\delta:=-\mathbf{d}^{T}\mathbf{w}\)  // compute the determinant
15:                 if \(\delta\in[-\epsilon,+\epsilon]\) then continue with next triangle;
16:                 \(\mathbf{p}[a_{m}]=\mathbf{B}_{\mathcal{M}}[a_{m},0]\); \(\mathbf{p}[a_{r}]=k_{0}-0.5\); \(\mathbf{p}[a_{f}]=k_{1}-0.5\)  // ray origin
17:                 \(\mathbf{b}:=\mathbf{p}-\mathbf{c}\)
18:                 \(\mathbf{e}:=\mathbf{d}\times\mathbf{b}\)
19:                 if \(r:=\frac{\mathbf{b}^{T}\mathbf{w}}{\delta}\in[r_{min},r_{max}]\) and \(s:=-\frac{\mathbf{e}^{T}\mathbf{v}}{\delta}\in[s_{min},s_{max}]\) and \(t:=\frac{\mathbf{e}^{T}\mathbf{u}}{\delta}\in[t_{min},t_{max}]\) then
20:                     add \(r\) to \(\mathbf{R}\)
21:             if \(len(\mathbf{R})\%2\neq 0\) then return "the boundary is not closed";
22:             for \(k_{m}\in\{0,1,...,[m,n,o]^{T}[a_{m}]\}\) do
23:                 if \(Sum(k_{m}>\mathbf{R})\%2\neq 0\) then
24:                     \(\mathbf{q}[a_{m}]=k_{m}\); \(\mathbf{q}[a_{r}]=\mathbf{p}[a_{r}]\); \(\mathbf{q}[a_{f}]=\mathbf{p}[a_{f}]\)
25:                     add \(\mathbf{q}\) to \(\mathbf{Q}\)
```
**Algorithm 3** Volumetric Mesh Sampling Algorithm
```
1:  PointCloudVoxelization(\(\mathbf{Q},\mathcal{P},\boldsymbol{\sigma}\))
2:  initiate \(\boldsymbol{\iota}\)
3:  for each point \(\mathbf{x}\in\mathbf{Q}\) do
4:      \(\mathbf{o}:=\lfloor\mathbf{x}\oslash\boldsymbol{\sigma}\rceil\)  // voxelate
5:      \(\boldsymbol{\rho}:=\mathbf{o}-\mathbf{c}\)  // ioxelate: shift by the minimum corner \(\mathbf{c}\)
6:      \(\iota:=MortonInterleave3D(\boldsymbol{\rho})\)  // generate Morton index
7:      add \(\iota\) to \(\boldsymbol{\iota}\)
```
**Algorithm 4** Point Cloud Voxelization
### Graph Construction
In this step, given the voxel cloud represented by Morton indices \(\iota\), we construct a topological model of their connectivity. Since the domain space is discrete, we utilize the concept of the stencil to construct a topological model that represents a particular local neighbourhood for each voxel. Stencils were initially proposed by Emmons, 1944 for Finite Difference Method operations required for finding numerical solutions to PDEs in regular Cartesian grids (find a recent review in Engwer et al., 2017). Here we generalize the idea of stencils to extract a topological description of the local connectivity of voxels using bitwise operations on their Morton codes.
The stencils proposed in this paper can be described by a condition array of relative indices \(\mathcal{S}_{v}\) and a relative hyper-edge \(\mathcal{S}_{e}\). The stencil checks whether a voxel \(\mathbf{o}\) of the volume has a specific connectivity pattern described in \(\mathcal{S}_{v}\) and returns a boolean value. If the boolean value is \(TRUE\), we can construct the hyper-edge based on the relative vertex indices of the condition array \(\mathcal{S}_{v}\). Here is an example of a stencil describing square connectivity in the \(YZ\) plane:
\[\mathcal{S}_{v}:= \Big{[}[0,0,0],[0,1,0],[0,1,1],[0,0,1]\Big{]} \tag{11}\] \[\mathcal{S}_{e}:= \Big{\{}(0,1),(1,2),(2,3),(3,0)\Big{\}} \tag{12}\]
The selected voxel \(\mathbf{o}\) is considered as \((0,0,0)\), and \(\mathcal{S}_{v}\) describes which relative neighbours of \(\mathbf{o}\) should be filled for the edges to be constructed. \(\mathcal{S}_{e}\) describes which edges need to be made within the hyper-edge, by their index in \(\mathcal{S}_{v}\). In this sense, \(\mathcal{S}\) can be understood as a graph that functions similarly to a kernel in image processing; a small sketch follows.
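For concreteness, a sketch of this square stencil as plain Python data, together with the check it encodes (the function name and the occupancy set are illustrative assumptions, not the topoGenesis API):

```
import numpy as np

# Relative neighbours that must be occupied (equation 11) and the hyper-edge
# they induce, given as index pairs into S_v (equation 12).
S_v = np.array([[0, 0, 0], [0, 1, 0], [0, 1, 1], [0, 0, 1]])
S_e = [(0, 1), (1, 2), (2, 3), (3, 0)]

def apply_stencil(occupied, o):
    """If every relative neighbour of voxel o listed in S_v is present in the
    set `occupied` (of integer voxel triples), return the induced edges as
    pairs of absolute voxel coordinates; otherwise return an empty list."""
    cells = [tuple(int(c) for c in (np.asarray(o) + dv)) for dv in S_v]
    if not all(c in occupied for c in cells):
        return []
    return [(cells[i], cells[j]) for i, j in S_e]
```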
To utilize this concept with the Morton index of the voxels, we need to describe the relative neighbours \(\mathcal{S}_{v}\) of a voxel by their Morton indices. Accordingly, the edges between the neighbours \(\mathcal{S}_{e}\) can also be interpreted as points in a discrete 2D space of source voxels and destination voxels, the edge space \(\mathbb{N}^{2}\). Therefore, a second Morton indexing process allows us to have unique Morton indices for the edges:
Figure 4: _Voxelization_. Figure 5: _Voxels_.
Figure 3: _the first part of the proposed workflow: Digital Geometry Sampling and Topological Voxelization_
\[\mathcal{S}_{\iota} := Morton(\mathcal{S}_{v})=\left\{\texttt{0b000},\texttt{0b010},\texttt{0b011}, \texttt{0b001}\right\} \tag{13}\] \[\mathcal{S}_{\varepsilon} :=\left\{\texttt{0b001000},\texttt{0b001110},\texttt{0b000111}, \texttt{0b000001}\right\} \tag{14}\]
This indicates that, given a voxel index, we can perform a bitwise sum on the interleaved sequence to find the neighbours and check their values. More specifically, as the voxels are embedded in \(\mathbb{Z}^{3}\), we utilize \(MortonSum3D\) to find the indices of the voxel neighbours based on \(\mathcal{S}_{\iota}\). When the conditions are satisfied, we utilize \(MortonSum6D\) to construct the Morton indices of the corresponding edges based on \(\mathcal{S}_{\varepsilon}\).
**Algorithm 5** Graph Construction
```
1:  MortonSum3D(\(\iota_{0}\), \(\iota_{1}\))
2:  \(x^{\prime}\) = 0b001001; \(y^{\prime}\) = 0b010010; \(z^{\prime}\) = 0b100100;  // per-axis lane masks (two bits per axis shown)
3:  return \(\iota_{sum}\) = (((\(\iota_{0}\) | (\(y^{\prime}+z^{\prime}\))) + (\(\iota_{1}\) & \(x^{\prime}\))) & \(x^{\prime}\))
4:      | (((\(\iota_{0}\) | (\(x^{\prime}+z^{\prime}\))) + (\(\iota_{1}\) & \(y^{\prime}\))) & \(y^{\prime}\))
5:      | (((\(\iota_{0}\) | (\(x^{\prime}+y^{\prime}\))) + (\(\iota_{1}\) & \(z^{\prime}\))) & \(z^{\prime}\))
```
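As a sanity check of the bitwise scheme, here is a straightforward (unoptimized) Python sketch of 3D Morton interleaving and lane-wise Morton addition; the bit width and helper names are illustrative assumptions rather than the topoGenesis API. The lane assignment (lowest lane first) is chosen to match the \(\mathcal{S}_{\iota}\) codes of equation 13.

```
def morton_interleave_3d(x, y, z, bits=21):
    """Interleave the bits of the non-negative integers (x, y, z) so that bit i
    of each coordinate lands in its own lane of the resulting code."""
    code = 0
    for i in range(bits):
        code |= ((z >> i) & 1) << (3 * i)        # lane 0
        code |= ((y >> i) & 1) << (3 * i + 1)    # lane 1
        code |= ((x >> i) & 1) << (3 * i + 2)    # lane 2
    return code

def morton_sum_3d(a, b, bits=21):
    """Add two Morton codes lane-wise without decoding them (dilated addition):
    filling the foreign lanes with ones makes the carries propagate correctly."""
    lane0 = int("001" * bits, 2)
    lanes = (lane0, lane0 << 1, lane0 << 2)
    out = 0
    for m in lanes:
        out |= ((a | ~m) + (b & m)) & m
    return out
```

With two-bit lanes, for example, `morton_sum_3d(0b000101, 0b000011, bits=2)` returns `0b001110`, i.e. the component-wise sum \((1,0,1)+(0,1,1)=(1,1,2)\) re-interleaved.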
### Derivation of Differential Operators
In machine learning on graphs, in signal processing on graphs or meshes, and especially in scientific computing and computer-aided engineering for simulations involving PDEs (the so-called geo-simulations in geospatial sciences), it is common to represent functions of space and time not in analytical closed forms mapping input Cartesian coordinates to output values, but as _sampled_ fields represented by a finite set of samples, i.e. scalar or vector values attributed to sample points in a spatial domain, which may be meshed, connected through a network, or simply form a point cloud with attributes. For a discrete function defined as a vector of floats attributed generally to the vertices of a graph, or specifically to the voxels of a voxel graph \(\Gamma=(V,E),E\subset V\times V,n:=|V|,m:=|E|\), we can form a vector \(\mathbf{f}=[f_{i}]_{n\times 1}\). In such settings, one typically needs to compute spatial integrals (line/curve integrals, surface integrals, or volume integrals) and spatial partial derivatives (for quantities such as the gradient, divergence, curl, and Laplacian). From a computational mathematics point of view, it is then desirable to have differential and integral operators in the form of matrices that can be multiplied from the left with such vectors. With such discrete operators one obtains an inherently discrete version of the operations defined in vector calculus, such as the gradient, divergence, and Laplacian differential operators, and even a Riemann integral operator. Without diving into the details of Cartesian coordinates or other geometric details, here we propose a topologically general idea for representing partial derivatives and differential forms (integrands) within the higher-dimensional in-between spaces of k-cells connecting the vertices. Without loss of generality for higher-dimensional topological cell complexes or even irregular meshes, we briefly introduce a family of operators for graphs constructed out of voxels.
We present two new mathematical results: one is a new formulation of the discrete Laplacian, derived from two exactly dual discrete operators (gradient and divergence) that are generally applicable to graphs of any kind, regardless of whether they are obtained from voxels or not; the other is a discrete Riemann integral operator in the form of a covector (1-form or row vector). Remarkably, these results are derived thanks to the characterization of the edge space of the graph as the natural place for defining partial differentials and Riemann integrands.
The key idea here is to utilize oriented adjacency matrices, as introduced in Table 2, for defining differential forms, and their unoriented versions for defining spatial integrals. Here we show a chain of derivations, all of which are obtained from the oriented edge-vertex incidence matrix on voxel graphs.
Before moving on to the derivation of the differential/integral operators, it is necessary to clarify the claim implicitly shown in Table 2: the adjacency matrices indicating undirected or bi-directed relations between cells of the same dimension can be obtained by matrix multiplication of [sparse] unoriented incidence matrices, and the oriented incidence matrices can be multiplied with one another to obtain other incidence matrices (extending the definitions given in Nourian, Rezvani, et al., 2016, Batty, 2004, and K. Weiler, 1985). For example, the Vertex to Vertex adjacency matrix can be obtained from the unoriented Edge to Vertex incidence matrix and its transpose:
\[\mathbf{A}_{|V|,|V|}=\overline{\mathbf{M}}_{|V|,|E|}\overline{\mathbf{M}}_{|E|,|V|} \tag{15}\]
or the Face to Face adjacency matrix can be obtained from Face to Vertex incidence matrices which are by definition unoriented:
\[\mathbf{A}_{|F|,|F|}=\overline{\mathbf{M}}_{|F|,|V|}\overline{\mathbf{M}}_{|V|,|F|} \tag{16}\]
, which can in turn be derived from unoriented Face to Edge and Edge to Vertex incidence matrices:
\[\overline{\mathbf{M}}_{|F|,|V|}=\overline{\mathbf{M}}_{|F|,|E|}\overline{\mathbf{M}}_{|E|,|V|} \tag{17}\]
Of course, these matrices will almost always be sparse and so they should practically all be represented as sparse matrices. One particularly interesting example is the Oriented Edge to Vertex incidence matrix, from which we obtain our new results (a new Laplacian operator and spatial integral operators):
\[\overrightarrow{\mathbf{M}}_{|E|,|V|}\in\{-1,0,1\}^{m\times n} \tag{18}\]
, whose vertices and edges in a voxel graph are indexed with 3D and 6D Morton Indices (respectively denoted as \(\iota\) and \(\varepsilon\)) as proposed above (with a simplified notation for the sake of brevity):
\[\mathbf{M}:=\overrightarrow{\mathbf{M}}_{|E|,|V|}=[M_{\varepsilon,\iota}]_{m \times n} \tag{19}\]
, in which the oriented Edge to Vertex incidence entries are defined as:
\[M_{\varepsilon,\iota}=\begin{cases}-1&if\ \varepsilon=(\iota,t),\forall t\in V \\ +1&if\ \varepsilon=(s,\iota),\forall s\in V\\ 0&otherwise\end{cases}. \tag{20}\]
From this matrix we derive the differential operators, and from its unoriented version, defined below, we derive barycentric averages with which we can define Riemann integral operators for discrete line/curve integrals, surface integrals, or volume integrals.
\[\overline{\mathbf{M}}=\text{abs}\left(\overrightarrow{\mathbf{M}}_{|E|,|V|}\right) \tag{21}\]
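A hedged scipy.sparse sketch of assembling the oriented edge-to-vertex incidence matrix from an edge list, and of deriving its unoriented version (equation 21) and the vertex adjacency of equation 15; function and variable names are illustrative, not the topoGenesis API:

```
import numpy as np
from scipy import sparse

def incidence_matrices(edges, n_vertices):
    """edges: iterable of (source, target) vertex indices, one per edge.
    Returns the oriented incidence M (equation 20), its unoriented version
    (equation 21), and the vertex-to-vertex product of equation 15."""
    edges = np.asarray(edges)
    m = len(edges)
    rows = np.repeat(np.arange(m), 2)
    cols = edges.ravel()
    vals = np.tile([-1.0, 1.0], m)        # -1 at the source, +1 at the target
    M = sparse.csr_matrix((vals, (rows, cols)), shape=(m, n_vertices))
    M_abs = abs(M)
    A = M_abs.T @ M_abs                   # equation 15; the diagonal carries the vertex degrees
    return M, M_abs, A
```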
For defining the differential operators that are by definition oriented and as proposed aligned with the edges, we shall need edge vectors, which can remarkably be obtained from the same Oriented Edge to Vertex incidence matrix. To do so, we firstly need to define a matrix containing vertex coordinates as below:
\[\mathbf{V}:=[\overrightarrow{\mathbf{o}}_{\iota_{i},:}]_{n\times 3} \tag{22}\]
, and so, the edge vectors can be obtained in one go, i.e. algebraically, as below:
\[\mathbf{E}=[\overrightarrow{\mathbf{e}}_{\epsilon_{i},:}]_{m\times 3}=\mathbf{M} \mathbf{V} \tag{23}\]
; using which the squared edge lengths can be found as the diagonal of \(\mathbf{E}\mathbf{E}^{T}\), i.e. the squared 2-norms of the edge vectors:
\[\boldsymbol{\xi}^{2}=[\xi_{e}^{2}]_{m\times 1}=\text{diagonal}(\mathbf{E} \mathbf{E}^{T}). \tag{24}\]
\[\boldsymbol{\xi}=[\xi_{e}]_{m\times 1}=\text{sqrt}\left(\text{diagonal}(\mathbf{E} \mathbf{E}^{T})\right) \tag{25}\]
Now, we can define a diagonal matrix containing the edge lengths, the reciprocals of which can be used to make its inverse that will be used in the definition of the differential operators.
\[\boldsymbol{\Xi}=[\xi_{e,e}]_{m\times m}=\text{diag}(\boldsymbol{\xi}) \tag{26}\]
The first fundamental operator of interest is the discrete gradient operator defined over the edge space of the graph. We propose the following discrete gradient operator as a mapping from the vertex space of the graph to its edge space:
\[\overline{\nabla}=\mathbf{G}:=\boldsymbol{\Xi}^{-1}\mathbf{M}\in\mathbb{R}^{m \times n},\overline{\nabla}:\mathbb{R}^{n}\mapsto\mathbb{R}^{m} \tag{27}\]
The claim is that this operator accurately and unambiguously approximates the continuous gradient operator:
\[\text{grad}(f(\mathbf{x}))=\nabla f(\mathbf{x})=\left[\frac{\partial f( \mathbf{x})}{\partial x_{i}}\right]_{d},\mathbf{x}\in\mathbb{R}^{d}. \tag{28}\]
Nevertheless, the definition above is focused on functions that are analytically defined, and thus it needs to be computed and evaluated at every point of the space where the analytic definition of the function is at hand. In situations where the function is only sampled in space, this is not convenient. In fact, the Finite Difference Method for discretizing gradients based on this definition also brings about other challenges, such as the need to differentiate between forward, backward, and central differences because of the missing half-spaces at the boundary pixels or voxels of images. On the contrary, our proposed gradient operator conveniently lives in the m-dimensional space of the edges of the graph, and so the directions of the edges to which the partial differentials are attributed yield the gradient vectors at every locality without the need to attribute them to unoriented vertices, since the edges are already oriented. Thus we claim the following (omitting similar claims about the divergence and Laplacian operators for brevity):
\[\text{grad}(f(\mathbf{x}))=\nabla f(\mathbf{x})=\left[\frac{\partial f( \mathbf{x})}{\partial x_{i}}\right]_{d}\bigg{|}_{\mathbf{x}},\forall\mathbf{x }\in\Omega\approx\mathbf{G}\mathbf{f}. \tag{29}\]
Similarly, by virtue of the duality of the gradient and divergence operators and also according to the Divergence Theorem, we propose the discrete divergence operator as a mapping from the edge space to the vertex space of the graph:
\[\overline{\nabla}^{T}=\mathbf{D}:=\mathbf{M}^{T}\boldsymbol{\Xi}^{-1}\in \mathbb{R}^{n\times m},\overline{\nabla}^{T}:\mathbb{R}^{m}\mapsto\mathbb{R}^{n} \tag{30}\]
Now we can define the discrete Laplace-Beltrami Operator conveniently as a mapping from the vertex space to the vertex space of the graph by consecutive application of the gradient and divergence operators:
\[\underline{\Delta}=\overline{\nabla}^{T}\overline{\nabla}=\mathbf{L}:=\mathbf{ D}\mathbf{G}=\mathbf{M}^{T}\boldsymbol{\Xi}^{-2}\mathbf{M}\in\mathbb{R}^{n \times n},\underline{\Delta}=\overline{\nabla}^{T}\overline{\nabla}:\mathbb{ R}^{n}\mapsto\mathbb{R}^{n} \tag{31}\]
The inclusion of edge length reciprocals in this formulation makes it uniquely accurate in contrast to the commonly used Combinatorial Laplacian that is devoid of the metric dimension of the space modelled by the graph in question.
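A compact sketch of equations 23, 25, 27, 30 and 31 on top of the incidence matrix and the vertex coordinates (illustrative code, not the topoGenesis API):

```
import numpy as np
from scipy import sparse

def differential_operators(M, V):
    """M: oriented edge-to-vertex incidence (m x n, sparse);
    V: vertex coordinates (n x 3). Returns the gradient G (equation 27),
    divergence D (equation 30) and Laplacian L (equation 31)."""
    E = M @ V                                      # edge vectors, equation 23
    xi = np.sqrt(np.einsum("ij,ij->i", E, E))      # edge lengths, equation 25
    Xi_inv = sparse.diags(1.0 / xi)
    G = Xi_inv @ M                                 # vertex space -> edge space
    D = M.T @ Xi_inv                               # edge space -> vertex space
    L = M.T @ sparse.diags(1.0 / xi**2) @ M        # metric-aware Laplacian
    return G, D, L
```

Applying G to a vertex-sampled scalar field f gives the directional derivatives along the edges, and D @ (G @ f) coincides with L @ f.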
The last differential operator to be introduced here is the curl operator, which measures rotations in a vector field. Here we assume, according to our proposed method, that vector fields are represented in the edge space of graphs, and naturally expect the curl vector field to be attributed to the face space of the graph. Considering this assumption, it is easy to see that the curl operator requires an integral area element; and so we shall introduce the curl operator after our integral operators.

By using the edge space or the hyper-edge space of voxel graphs or voxel complexes we can propose Discrete Spatial Integral Operators. The first one is the Discrete Line/Curve-Integral Operator, a functional mapping from the vertex space of the graph to the space of real numbers, which again uses the intermediate edge space of the graph for making the Riemann integrands:
\[\underline{\int}=\boldsymbol{\mathfrak{s}}^{(1)}:=\frac{1}{2}\boldsymbol{ \xi}^{T}\overline{\mathbf{M}}_{|E|,|V|}\in\mathbb{R}^{1\times n},\underline{ \int}=\boldsymbol{\mathfrak{s}}^{(1)}:\mathbb{R}^{n}\mapsto\mathbb{R} \tag{32}\]
The second one is the Discrete Surface-Integral Operator defined as a mapping from the Vertex space of the graph to the space of real numbers through the intermediate Face space of the topological cell complex:
\[\underline{\iint}=\boldsymbol{\mathfrak{s}}^{(2)}:=\frac{1}{4}\boldsymbol{ \alpha}^{T}\overline{\mathbf{M}}_{|F|,|V|}\in\mathbb{R}^{1\times n}, \underline{\iint}=\boldsymbol{\mathfrak{s}}^{(2)}:\mathbb{R}^{n}\mapsto \mathbb{R} \tag{33}\]
, in which \(\boldsymbol{\alpha}=[\alpha_{\varphi}]_{|F|\times 1}\) is a vector containing the areas of the faces of the cell complex. Note that for simplicial complexes the coefficient would be \(\frac{1}{3}\) in the above surface-integral operator. The third one is the Discrete Volume-Integral Operator defined as a mapping from the vertex space of the graph to the space of real numbers through the intermediate cell space of the topological cell complex:
\[\underline{\iiint}=\boldsymbol{\mathfrak{s}}^{(3)}:=\frac{1}{8}\boldsymbol{ \beta}^{T}\overline{\mathbf{M}}_{|C|,|V|}\in\mathbb{R}^{1\times n},\underline{ \iiint}=\boldsymbol{\mathfrak{s}}^{(3)}:\mathbb{R}^{n}\mapsto\mathbb{R} \tag{34}\]
, in which \(\boldsymbol{\beta}=[\beta_{\gamma}]_{|C|\times 1}\) is a vector containing the volumes of the cells of the cell complex.
Similarly, the claim here is that these integral operators adequately approximate the continuous spatial integral operators, e.g.:
\[\iiint_{\boldsymbol{\mathbf{x}}\in\Omega}f(\boldsymbol{\mathbf{x}})dV\approx \boldsymbol{\mathfrak{s}}^{(3)}\boldsymbol{\mathbf{f}} \tag{35}\]
, in which the \(dV\) is the volume element of the continuous integral operator.
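The line/curve-integral covector of equation 32 is a single sparse row built from the same ingredients; a minimal sketch under the same assumptions as the previous snippets:

```
def line_integral_operator(M_abs, xi):
    """M_abs: unoriented edge-to-vertex incidence (m x n, sparse);
    xi: (m,) vector of edge lengths. Returns the entries of the covector
    of equation 32; the curve integral of a vertex field f is s1 @ f."""
    return 0.5 * (M_abs.T @ xi)
```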
The curl operator requires an oriented mapping from the oriented edge space of the graph to its oriented face space. It is straight-forward to see and check for small graphs that the oriented Face-Edge Incidence Matrix of a hyper-graph (mesh) maps the Edge-Vertex Incidence vectors into a null space (zero vectors). Denoting the Face-Edge Incidence matrix of a hyper graph as \(\boldsymbol{\Omega}:=[\omega_{\varphi,\varepsilon}]_{|F|\times|E|}\), this means the following:
\[\boldsymbol{\Omega}\mathbf{M}=\boldsymbol{0}_{|F|\times|V|} \tag{36}\]
In other words, \(\boldsymbol{\Omega}^{T}\) is the null space (kernel) of the transposed Edge-Vertex Incidence Matrix of the same hyper-graph (after Imperatore and Pepe, 2016 and Grady and Polimeni, 2010):
\[\mathbf{M}^{T}\boldsymbol{\Omega}^{T}=\boldsymbol{0}_{|V|\times|F|}. \tag{37}\]
This practically means that the so-called cycle basis of the hypergraph (the elementary and irreducible faces of the mesh), represented by \(\boldsymbol{\Omega}\), can be obtained as the solution to the following system of linear equations:
\[\boldsymbol{\Omega}=\left(\mathbf{M}^{T}\backslash\boldsymbol{0}_{|V|\times|F| }\right)^{T} \tag{38}\]
, where \(\mathbf{A}\backslash\mathbf{b}\) denotes the solution to the linear equation \(\mathbf{A}\mathbf{x}=\mathbf{b}\). Now, it is straightforward to approximate the continuous curl operator on a hypergraph as a mapping from the edge space of the hypergraph to its face space, obtained by applying \(\boldsymbol{\Omega}\) as a linear map, i.e.:
\[\boldsymbol{\nabla}\times\mathbf{F}\approx\boldsymbol{\mathcal{A}}^{-1} \boldsymbol{\Omega}\mathbf{F} \tag{39}\]
, where \(\boldsymbol{\mathcal{A}}^{-1}\) is a diagonal matrix whose entries are the reciprocals of the face areas of the mesh, with the same indexing as the Face-Edge Incidence Matrix.
Whilst the relations between such differential operators and the incidence matrices have been hinted at previously in some sources, such as those referenced above, an exact algebraic treatment with consideration of length, area, and volume normalizations seems to be missing in the literature. It is easy to see that with these differential and integral operators, partial differential equations can be written elegantly in an inherently discrete manner, to be solved by linear algebraic solvers on complex (even non-manifold) spatial domains.
In addition to the operators introduced thus far, the Jacobian operator (or the multivariate derivative) can be obtained by applying the gradient operator to a vector field attributed to the vertices of a hypergraph, i.e.:
\[\boldsymbol{\hat{\jmath}}\mathbf{F}:=\left[\mathbf{J}_{\iota}(\mathbf{f}_{ \iota})\bigg{|}_{\mathbf{x}}\right]_{3n\times 3}:=\left[\left[\frac{\partial f_{ \iota}}{\partial x_{i}}\right]_{3\times 3}\bigg{|}_{\mathbf{x}}\right]_{n\times 1} \approx\left(\mathbf{1}_{n\times 1}\otimes\mathbf{E}^{T}\right)\boldsymbol{ \overline{\nabla}}\mathbf{F} \tag{40}\]
, where \(\mathbf{F}:=\left[\left(\mathbf{f}_{\iota,\varepsilon}\right)^{T}\right]_{m \times 3}\).
Note that the resultant Jacobian matrices will be listed as a block matrix or an array of \(3\times 3\) matrices of size \(3n\times 3\). This definition of the Jacobian matrices is useful when aiming to produce the best linear approximation of a vector function near the vertices of a mesh or hypergraph.
The Hessian (the Jacobian of the gradient) of a scalar field can be obtained by first applying the gradient operator, producing the gradient values along the edges; then applying the transposed edge-averaging operator (the unoriented incidence matrix) to project the gradients back from the edges to the vertices; and then applying the gradient operator again, as follows:
\[\boldsymbol{\mathcal{HG}}:=\left[\mathbf{H}_{\iota}(f_{\iota})\bigg{|}_{ \mathbf{x}}\right]_{3n\times 3}:=\left[\left[\frac{\partial^{2}f_{\iota}}{ \partial x_{i}\partial x_{j}}\right]_{3\times 3}\bigg{|}_{\mathbf{x}}\right]_{n \times 1}\approx\boldsymbol{\mathcal{JB}}^{-1}\overline{\mathbf{M}}^{T} \boldsymbol{\mathcal{G}}\mathbf{E} \tag{41}\]
, where \(\boldsymbol{\mathcal{G}}:=\text{diag}(\boldsymbol{\overline{\nabla}}\mathbf{f})\) denotes a diagonal matrix made up of the gradients attributed to the edges and \(\boldsymbol{\mathcal{B}}:=\text{diag}\left(\text{diagonal}\left(\mathbf{A}_{|V|\times|V|}\right)\right)\) denotes a diagonal matrix made up of the node degrees of the graph (written \(\boldsymbol{\mathcal{B}}\) to avoid a clash with the divergence operator \(\mathbf{D}\)). The Hessian operator as such produces an array of \(3\times 3\) matrices (a block matrix), each of whose elements is a Hessian matrix for the function value attributed to the corresponding vertex. Hessian matrices can be utilized in obtaining Taylor approximations of scalar functions in Newton-type optimization procedures, which are better than the Jacobian approximations because they take the second derivative into account.
Remarkably, our differential and integral operators do not even require the integration region to be a manifold. In fact, the line-integral operator proposed here can even operate on networks of any genus. Without loss of generality for higher-dimensional k-cells, in our derivation we particularly focus on the "edge space" or the "in-between space of vertices", which we have so far introduced as the interpolation space, as the space to which we can conveniently attribute partial differentials and "differential forms" (integrands). In the low-dimensional setting of the edge space introduced here, we derived these operators for the physical dimensions of the edges, faces, and cells between voxels, namely the Length, Area, and Volume of dimensions \(L\), \(L^{2}\), and \(L^{3}\), respectively, in terms of the elemental or base quantities in Physics (out of 7 elemental quantities, namely Mass, Length, Time, Electric Current, Absolute Temperature, Amount of Substance, and Luminous Intensity; see a brief introduction to Dimensional Analysis by Nourian, 2016b).
Figure 6: the proposed workflow for Graph Construction & derivation of Discrete Differential/Integral operators for voxel graphs
## 5 Implementation: topoGenesis
The workflow that has been put forth in this paper is implemented, for the large part, as a library of vectorized functions in the Python programming language, released as the open-source package topoGenesis for computational geometry and topology operations in spatial computing and generative design (source available) Azadi and Nourian (2020). The package topoGenesis is built on top of ubiquitous numerical and scientific Python libraries such as NumPy Harris et al. (2020) and SciPy Virtanen et al. (2020) to ensure that the computational procedures are as accessible and efficient as possible. Furthermore, Python notebooks implementing the presented workflow with test cases are publicly accessible in the topoGenesis example workflows.
## 6 Conclusion: Application Outlook
The shape of a spatial region affects the geodesic flows and the resultant geodesic distances within the manifold and thus it effectively influences almost any dynamic phenomenon or emergent pattern in space that is of practical interest in scientific and engineering applications. The configuration of a non-trivially shaped spatial region needs to be modelled as a discrete manifold for digital computing. In scientific and engineering applications (particularly in Computational Science and Engineering) the shape of the spatial region is almost always non-trivial, and so, one needs to go beyond using constructs defined in the context of continuous mathematics for spatial interpolation, differentiation, and integration.
In summary, we can outline the two major application areas of the proposed methods as geo-spatial topological data analysis and geo-spatial simulations based on the so-called first principles encapsulated in PDE, Agent-Based Models or Cellular Automata. For the first category of applications, the notion of manifold distance (geodesic distance from within the manifold) or network distance is key to defining metrics of similarity. For the second category of applications, the notion of geodesic flow (of forces, electricity, fluids, pedestrians, and so on) is the central concept and often the goal of simulations to predict.
The proposed methods for construction of explicit topological models of spatial regions as graphs (technically as adjacency matrices or incidence matrices) pave the way for computing geodesics and geodesic distances on non-trivially shaped spatial regions.
Contrary to the common confusions and uncertainties typical of topological analysis of data in Euclidean space, due to the ambiguous choices about the notions of neighbourhood for constructing cell complexes, the proposed framework for utilizing cell complexes via their sparse graph representations not only effectively removes all such ambiguities by putting forward a single explicit scale vector for defining the resolution of spatial analysis, but also keeps all operations as efficient as operations on irregular simplicial complexes. Reflecting on this issue should ideally resolve the false dichotomy between image representations (typically assumed to be necessarily dense) and the sparse representations associated with simplicial complexes. In other words, the voxel-graph representations bring the best of both worlds together coherently. Furthermore, the comprehensive spatial indexing scheme proposed here based on Morton codes indexes not only the vertex space of the voxel graphs but also their edge spaces, and even their higher-dimensional k-cell spaces, elegantly, thus providing for an unambiguous application of sparse matrices even in view of large-scale geo-spatial computations.
Note that the generalized Morton codes defined here are globally unique identifiers of all subspaces of voxel complexes. In this way, partial maps or models of space can be easily concatenated together without the need for special Euler operations for editing the graphs or meshes representing them. Algebraic computation of exact/deterministic metric geodesics and geodesic distances can be easily achieved by applying common graph traversal algorithms on the proposed graphs. Alternatively, the stochastic or spectral counterparts of geodesics and geodesic distances, known as random walks and diffusion distances, can be computed very efficiently at scale on massive manifolds to enable simulations and spatial analyses (see the example of Google PageRank by Page et al., 1999 and the family of Random Walk models for such purposes in Nourian, 2016). The proposed discrete differential and integral operators are unique in their algebraic simplicity and versatile efficacy and efficiency for spatial computing (computational spatial analysis and spatial simulation) in conjunction with the topological data models presented in Table 2.
We can conclude by reflecting on a quote from the mathematician Doron Zeilberger: "Conventional wisdom, fooled by our misleading "physical intuition", is that the real world is _continuous_, and that discrete models are necessary evils for approximating the _real_ world, due to the innate discreteness of the digital computer."
|
2309.05661 | Boundary exceptional sets for radial limits of superharmonic functions
on non-positively curved Harmonic manifolds of purely exponential volume
growth | By classical Fatou type theorems in various setups, it is well-known that
positive harmonic functions have non-tangential limit at almost every point on
the boundary. In this paper, in the setting of non-positively curved Harmonic
manifolds of purely exponential volume growth, we are interested in the size of
the exceptional sets of points on the boundary at infinity, where a suitable
function blows up faster than a prescribed growth rate, along radial geodesic
rays. For Poisson integrals of complex measures, we obtain a sharp bound on the
Hausdorff dimension of the exceptional sets, in terms of the mean curvature of
horospheres and the parameter of the growth rate. In the case of the Green
potentials, we obtain similar upper bounds and also construct Green potentials
that blow up faster than a prescribed rate on lower Hausdorff dimensional
realizable sets. So we get a gap in the corresponding Hausdorff dimensions due
to the assumption of variable pinched non-positive sectional curvature. We also
obtain a Riesz decomposition theorem for subharmonic functions. Combining the
above results we get our main result concerning Hausdorff dimensions of the
exceptional sets of positive superharmonic functions. | Utsav Dewan | 2023-09-11T17:57:25Z | http://arxiv.org/abs/2309.05661v1 | Boundary exceptional sets for radial limits of superharmonic functions on non-positively curved harmonic manifolds of purely exponential volume growth
###### Abstract.
By classical Fatou type theorems in various setups, it is well-known that positive harmonic functions have non-tangential limit at almost every point on the boundary. In this paper, in the setting of non-positively curved Harmonic manifolds of purely exponential volume growth, we are interested in the size of the exceptional sets of points on the boundary at infinity, where a suitable function blows up faster than a prescribed growth rate, along radial geodesic rays. For Poisson integrals of complex measures, we obtain a sharp bound on the Hausdorff dimension of the exceptional sets, in terms of the mean curvature of horospheres and the parameter of the growth rate. In the case of the Green potentials, we obtain similar upper bounds and also construct Green potentials that blow up faster than a prescribed rate on lower Hausdorff dimensional realizable sets. So we get a gap in the corresponding Hausdorff dimensions due to the assumption of variable pinched non-positive sectional curvature. We also obtain a Riesz decomposition theorem for subharmonic functions. Combining the above results we get our main result concerning Hausdorff dimensions of the exceptional sets of positive superharmonic functions.
Key words and phrases:Superharmonic functions, Boundary behavior, Hausdorff dimensions, Harmonic manifolds 2020 Mathematics Subject Classification: Primary 31C05; Secondary 31C15, 53C20
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Gromov Hyperbolic Spaces
* 2.2 Harmonic Manifolds
* 2.3 Hausdorff Outer Measure and Hausdorff Dimension
* 3 Boundary behavior of Poisson integrals
* 3.1 Estimates of Maximal Function
* 3.2 Upper bound on the Hausdorff dimension
* 3.3 The sharpness result
* 4 Riesz decomposition for subharmonic functions
* 4.1 Riesz measure
* 4.2 Harmonic majorants
* 4.3 Riesz decomposition
* 5 Boundary behavior of Green potentials
* 5.1 Upper bound on the Hausdorff dimension
* 5.2 Construction of Green potentials on realizable sets
* 6 Proof of Theorem 1.7
## 1. Introduction
The boundary behavior of harmonic functions is one of the most well-studied topics in classical potential theory. The celebrated theorem of Fatou tells us that any positive harmonic function on the unit disk admits a non-tangential limit at almost all points on the boundary. This fact was generalized to rank one Riemannian symmetric spaces of non-compact type (for admissible limits) by Koranyi [14] and then to Hadamard manifolds of pinched negative curvature by Anderson and Schoen [1].
Then a natural question to ask is: how does a positive harmonic function behave along radial geodesic rays, on the complement of this full measure subset of the boundary. More precisely, how quickly can a positive harmonic function grow or how large can the exceptional set (in the boundary) be where the positive harmonic function blows up faster than a prescribed rate. These are the questions which we will address in this note, in the setting of non-positively curved Harmonic manifolds of purely exponential volume growth.
Throughout this article, all Riemannian manifolds are assumed to be complete, simply connected and of dimension \(n\geq 3\). A Harmonic manifold is a Riemannian manifold \(X\) such that for any point \(x\in X\), there exists a non-constant harmonic function on a punctured neighbourhood of \(x\) which is radial around \(x\), that is, only depends on the geodesic distance from \(x\). By purely exponential volume growth, we mean that there exist constants \(C>1\), \(h>0\) such that the volume of metric balls \(B(x,R)\) with center \(x\in X\) and radius \(R>1\), satisfies the asymptotics:
\[\frac{1}{C}e^{hR}\leq vol(B(x,R))\leq Ce^{hR}\:.\]
It turns out that in our case, the constant \(h>0\) agrees with the mean curvature of the horospheres. It is well-known that the sectional curvature of Harmonic manifolds are bounded below (see [1, 2]), that is, there exists \(b>0\) such that \(K_{X}\geq-b^{2}\,\).
The class of non-positively curved Harmonic manifolds of purely exponential volume growth includes all the known examples of non-compact non-flat Harmonic manifolds: the rank one Riemannian symmetric spaces of non-compact type and the Damek-Ricci spaces.
Let \(X\) be a non-positively curved Harmonic manifold of purely exponential volume growth with mean curvature of horospheres \(h>0\). We fix an origin \(o\) in \(X\) and let \(\partial X\) be the boundary at infinity. Now for a \(\xi\in\partial X\), let \(\gamma_{\xi}\) denote the unit-speed geodesic
ray such that \(\gamma_{\xi}(0)=o,\;\gamma_{\xi}(+\infty)=\xi\) and \(d(o,\gamma_{\xi}(t))=t\) for all \(t\in(0,+\infty)\). Then the Poisson kernel of \(X\) is given by,
\[P(x,\xi)=e^{-hB_{\xi}(x)}\;,\;\text{for all}\;x\in X,\;\xi\in\partial X, \tag{1.1}\]
where \(B_{\xi}(x)\) is the Busemann function, defined by
\[B_{\xi}(x)=\lim_{t\to\infty}\left(d\left(x,\gamma_{\xi}(t)\right)-d\left(o, \gamma_{\xi}(t)\right)\right)\;. \tag{1.2}\]
The Martin representation formula [10, Corollary 5.13] asserts that the positive harmonic functions on \(X\) are given by Poisson integrals of finite, positive Borel measures on \(\partial X\). More generally, for a complex measure \(\mu\) on \(\partial X\), let \(u=P[\mu]\) be the Poisson integral of \(\mu\). Then for any \(\xi\in\partial X\) and any \(t\in(0,+\infty)\), by (1.1), (1.2) and the triangle inequality, we have
\[|u(\gamma_{\xi}(t))|=\left|\int_{\partial X}P(\gamma_{\xi}(t),\eta)\:d\mu( \eta)\right|\leq e^{ht}\;|\mu|(\partial X)\;, \tag{1.3}\]
where \(|\mu|(\partial X)\) is the total variation of \(\mu\).
Then (1.3) motivates us to consider for \(\beta\in[0,h]\) and a complex-valued function \(u\) on \(X\), the following sets
\[E_{\beta}(u):=\left\{\xi\in\partial X:\limsup_{t\to+\infty}e^{-\beta t}\left|u \left(\gamma_{\xi}(t)\right)\right|>0\right\} \tag{1.4}\]
and
\[E_{\beta}^{\infty}(u):=\left\{\xi\in\partial X:\limsup_{t\to+\infty}e^{-\beta t }\left|u\left(\gamma_{\xi}(t)\right)\right|=+\infty\right\}\;. \tag{1.5}\]
When \(K_{X}\leq-1\), there is a natural metric called the visual metric, denoted by \(\rho\) on \(\partial X\). But in the generality of our situation, \(\rho\) only defines a quasi-metric. However for \(s\in(0,s_{0})\), where \(-s_{0}^{2}\) is the asymptotic upper curvature bound of \(X\), one has a metric on \(\partial X\), say \(\rho_{s}\), bi-Lipschitz to \(\rho^{s}\). In all our results, the Hausdorff dimensions or Hausdorff outer measures are with respect to \(\rho_{s}\).
The reader is referred to section 2 for any unexplained notations and terminologies.
Our first result gives an upper bound on the Hausdorff dimensions of the sets defined in (1.4) and (1.5) for Poisson integrals of complex measures:
**Theorem 1.1**.: _Let \(X\) be a non-positively curved Harmonic manifold of purely exponential volume growth with mean curvature of horospheres \(h>0\). Assume \(\beta\in[0,h]\) and \(\mu\) to be a complex measure on \(\partial X\). Then_
\[dim_{\mathcal{H}}E_{\beta}(P[\mu])\leq(h-\beta)/s\,,\text{ and }\mathcal{H}^{(h- \beta)/s}\left(E_{\beta}^{\infty}(P[\mu])\right)=0\;.\]
In fact, the bounds in Theorem 1.1 are sharp. This is illustrated by the following result:
**Theorem 1.2**.: _Let \(X\) be as in the statement of Theorem 1.1. Assume \(\beta\in[0,h)\) and \(E\subset\partial X\) with \(\mathcal{H}^{(h-\beta)/s}(E)=0\). Then there exists a non-negative integrable function \(f\) on \(\partial X\) (with respect to the visibility measure \(\lambda_{o}\)) such that \(E\subset E_{\beta}^{\infty}\left(P[f]\right)\,.\)_
In the classical Euclidean setting, analogues of Theorem 1.1 were obtained by Armitage [11, Theorem 4 with Corollary of Theorem 2] for the half-space and by Bayart-Heurteaux [1, Theorem 1 or 3] for the unit ball. In the case of \(\mathbb{H}^{n}(-1)\), the \(n\)-dimensional real Hyperbolic ball with constant sectional curvature equal to \(-1\), analogues of Theorems 1.1 and 1.2 were recently obtained by Hirata [14, Theorems 3 and 5].
Now in \(\mathbb{H}^{n}(-1)\), let \(\mu\) be a non-negative Borel measure such that its Green potential \(G[\mu]\) is well-defined. Then \(G[\mu]\) has radial limit \(0\) at almost all points on the boundary, whereas its boundary behavior along other non-tangential directions need not be nice [18, Theorem 9.4.1]. Similar results for the unit ball in \(\mathbb{C}^{n}\) can be found in the works of Ullrich [19]. These results regarding well-behaved radial limits of Green potentials on a full measure subset of the boundary intrigue us to consider the same problem of exceptional sets for Green potentials. Then one notes that Green potentials are just special examples of positive superharmonic functions. Finally motivated by [14], we endeavour to obtain results similar to Theorems 1.1 and 1.2 for the class of positive superharmonic functions.
As a first step of analyzing subharmonic (or superharmonic) functions, we obtain their Riesz decomposition, which may be a result of independent interest and seems to be new even for the case of Damek-Ricci spaces. For the relevant definitions in the following statement, the reader is referred to section 4.
**Theorem 1.3**.: _Let \(X\) be as in the statement of Theorem 1.1. Let \(f\) be a subharmonic function on \(X\) such that it has a harmonic majorant. Then_
\[f(x)=F_{f}(x)-\int_{X}G(x,y)d\mu_{f}(y)\:,\text{ for all }x\in X\:,\]
_where \(F_{f}\) and \(\mu_{f}\) are the least harmonic majorant and the Riesz measure of \(f\) respectively._
Theorem 1.3 then brings us back to the problem of determining the size of exceptional sets for Green potentials. As the Riesz measure of a subharmonic (or superharmonic) function is a Radon measure, we are naturally led to the following analogue of Theorem 1.1 for Green potentials of Radon measures on \(X\):
**Theorem 1.4**.: _Let \(X\) be a non-positively curved Harmonic manifold of purely exponential volume growth with mean curvature of horospheres \(h>n-2\) and sectional curvature \(K_{X}\geq-b^{2}\), for some \(b>0\). Let \(\beta\in[0,h-n+2)\) and \(\mu\) be a Radon measure on \(X\) whose Green potential \(G[\mu]\) is well-defined. Then for \(b^{\prime}:=\max\{2b,1\}\), we have_
\[dim_{\mathcal{H}}E_{\beta}\left(G[\mu]\right)\leq b^{\prime}\left(h-\beta \right)/s\:,\text{ and }\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{\infty}\left(G[\mu] \right)\right)=0\:.\]
We then have the following analogue of Theorem 1.2 for Green potentials.
**Theorem 1.5**.: _Let \(X\) be a non-positively curved Harmonic manifold of purely exponential volume growth with mean curvature of horospheres \(h>n-2\). Let \(\beta\in[0,h-n+2)\) and \(E\subset\partial X\) with \(\mathcal{H}^{(h-\beta)/s}(E)=0\). Then there exists a Green potential \(u\) on \(X\) such that \(E\subset E_{\beta}^{\infty}(u)\)._
**Remark 1.6**.:
1. _The condition_ \(h>n-2\) _is naturally posed due to the behavior of the Green function near its pole. Moreover, for any_ \(\varepsilon>0\)_, all non-compact Harmonic manifolds with sectional curvature_ \(K_{X}\leq-(1+\varepsilon)^{2}\big{(}\frac{n-2}{n-1}\big{)}^{2}\)_, satisfy this property. The last statement follows from the fact that the mean curvature of horospheres is obtained as the Laplacian of the Busemann functions and an application of the Hessian comparison theorem._
2. _Comparing with Theorems 4 and 6 of [10], it follows that the Hausdorff dimension appearing in Theorem 1.5 is optimal. We also note the gap between the corresponding Hausdorff dimensions in Theorem 1.4 and Theorem 1.5 (when \(b>1/2\)), whereas in the case of Theorem 1.1 the upper bound was shown to be sharp by Theorem 1.2. As will be apparent from our arguments, the reason is two-fold. Firstly, unlike the Poisson kernel, the Green function has its singularity in the interior of the space. Hence, while trying to compute the Hausdorff dimension of the exceptional set on the boundary, we have to project the analysis done in the interior to the boundary. This is where the geometric ingredient of variable curvature comes into play. Secondly, for \(b>1/2\), due to the pinching condition \(-b^{2}\leq K_{X}\leq 0\), we get a gap in the corresponding Hausdorff dimensions._
Finally as a consequence of the above results we obtain our main result:
**Theorem 1.7**.: _Let \(X\) be as in the statement of Theorem 1.4. Let \(u\) be a positive superharmonic function on \(X\) and \(\beta\in\left[0,h-n+2\right)\). Then for \(b^{\prime}:=\max\{2b,1\}\), we have_
\[dim_{\mathcal{H}}E_{\beta}(u)\leq b^{\prime}\left(h-\beta\right)/s\,,\text{ and }\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{\infty}(u)\right)=0\,.\]
_Conversely, let \(X\) be as in the statement of Theorem 1.5. Then for \(\beta\in\left[0,h-n+2\right)\) and \(E\subset\partial X\) with \(\mathcal{H}^{(h-\beta)/s}(E)=0\), there exists a positive superharmonic function \(u\) on \(X\) such that \(E\subset E_{\beta}^{\infty}(u)\)._
In the proofs of Theorems 1.1, 1.2, 1.4 and 1.5, we follow the general outline of the arguments in [10], but unlike in the case of \(\mathbb{H}^{n}(-1)\), the boundary of a non-positively curved Harmonic manifold of purely exponential volume growth is not sufficiently regular, and hence our arguments take a substantial detour by estimating global geometric quantities. Moreover, non-constant curvature makes it hard to get sharp estimates on Riemannian angles and on the diameters of 'shadows' of balls. In order to get workable estimates of the above, one has to rely upon comparison principles afforded by the pinching condition on the sectional curvature for 'small' balls, and upon the shadow lemma of Gromov hyperbolic spaces for 'large' balls. All of this ultimately results in the gap between the corresponding Hausdorff dimensions in Theorems 1.4 and 1.5, which can be viewed as a distinct feature of variable pinched non-positive curvature.
The arguments in the proof of Theorem 1.3 follow the classical steps presented in [11] and [12], but unlike in their case, our space need not be a Riemannian symmetric space, and hence their approach via Möbius-group-invariant potential theory breaks down. Instead, we look at a geometric manifestation of convolution and work our way through to obtain results similar to those of the homogeneous setup.
This paper is organized as follows. In section 2, we recall the required preliminaries and fix our notations. In section 3, we present our results on the Poisson integrals: Theorems 1.1 and 1.2. In section 4 we prove the Riesz decomposition theorem for subharmonic functions: Theorem 1.3. In section 5, the results for Green potentials: Theorems 1.4 and 1.5, are proved. Section 6 consists of the proof of Theorem 1.7.
## 2. Preliminaries
Throughout this article, \(C(.)\) will be used to denote positive constants whose value may vary at each occurrence, with dependence on parameters or geometric quantities made explicit inside the round bracket. When required, enumerated constants \(C_{1},C_{2},\dots\) will be used to specify fixed constants.
Let \(f_{1}\) and \(f_{2}\) be two positive functions. Then the notation \(f_{1}\asymp f_{2}\) will imply that there exists \(C>1\) such that \((1/C)f_{1}\leq f_{2}\leq Cf_{1}\). Also \(f_{1}\gtrsim f_{2}\) (respectively, \(f_{1}\lesssim f_{2}\)) will imply that there exists \(C>0\) such that \(f_{1}\geq Cf_{2}\) (respectively, \(f_{1}\leq Cf_{2}\)). The indicator function of a set \(A\) will be denoted by \(\chi_{A}\).
### Gromov Hyperbolic Spaces
In this subsection we recall briefly some basic facts and definitions related to Gromov hyperbolic spaces. For more details, we refer to [1].
A _geodesic_ in a metric space \(X\) is an isometric embedding \(\gamma:I\subset\mathbb{R}\to X\) of an interval into \(X\). A metric space \(X\) is said to be a _geodesic metric space_ if any two points in \(X\) can be joined by a geodesic. A geodesic metric space \(X\) is called _Gromov hyperbolic_ if there exists a \(\delta\geq 0\) such that every geodesic triangle in \(X\) is \(\delta\)-thin, that is, each side is contained in the \(\delta\)-neighbourhood of the union of the other two sides. This \(\delta\) is called the Gromov hyperbolicity constant.
For a Gromov hyperbolic space \(X\), its _boundary at infinity_\(\partial X\) is defined to be the set of equivalence classes of geodesic rays in \(X\). Here a geodesic ray is an isometric embedding \(\gamma:[0,\infty)\to X\) of a closed half-line into \(X\), and two geodesic rays \(\gamma,\tilde{\gamma}\) are said to be equivalent if the set \(\{d(\gamma(t),\tilde{\gamma}(t))\mid t\geq 0\}\) is bounded. The equivalence class of a geodesic ray \(\gamma\) is denoted by \(\gamma(\infty)\in\partial X\).
A metric space is said to be _proper_ if closed and bounded balls in the space are compact. Let \(X\) be a proper, geodesic, Gromov hyperbolic space. There is a natural topology on \(\overline{X}:=X\cup\partial X\), called the _cone topology_ such that \(\overline{X}\) is a compact metrizable space which is a compactification of \(X\). In this case, for every geodesic ray \(\gamma\), \(\gamma(t)\to\gamma(\infty)\in\partial X\) as \(t\to\infty\), and for any \(x\in X\), \(\xi\in\partial X\) there exists a geodesic ray \(\gamma\) such that \(\gamma(0)=x,\gamma(\infty)=\xi\).
For \(x,y,z\in X\), the Gromov product of \(y,z\) with respect to \(x\) is defined by,
\[(y|z)_{x}:=\frac{1}{2}\left(d(x,y)+d(x,z)-d(y,z)\right)\,. \tag{2.1}\]
If the space \(X\) is in addition \(CAT(0)\), then for any \(x\in X\) the Gromov product \(\left(\cdot|\cdot\right)_{x}\) extends continuously to \(\partial X\times\partial X\) (see [1]), and hence we define:
\[(\xi|\eta)_{x}:=\lim_{\begin{subarray}{c}y\to\xi\\ z\to\eta\end{subarray}}(y|z)_{x}\,.\]
We note that \((\xi|\eta)_{x}=+\infty\) if and only if \(\xi=\eta\in\partial X\). Moreover the above boundary continuity of the Gromov product results in the boundary continuity of the Busemann function defined in (1.2).
### Harmonic Manifolds
In this subsection we discuss the required preliminaries on Harmonic manifolds. The materials covered here can be found in [1].
Let \(X\) be a non-compact harmonic manifold of purely exponential volume growth, with origin \(o\in X\). By purely exponential volume growth, it is meant that there exists \(h>0\) such that for all \(R>1\), the volume of metric ball \(B(x,R)\) of center \(x\in X\) and radius \(R\) satisfies
\[vol(B(x,R))\asymp e^{hR}\,.\]
In our case, it turns out that the constant \(h>0\) agrees with the mean curvature of the horospheres.
On Harmonic manifolds, the harmonic functions satisfy the usual mean value property on balls and spheres.
For any \(v\in T^{1}_{x}X\) and \(r>0\), let \(A(v,r)\) denote the Jacobian of the map \(v\mapsto\exp_{x}(rv)\). The definition of a harmonic manifold which has been given in the Introduction is equivalent ([21, p. 224]) to the fact that this Jacobian is solely a function of the radius, that is, there is a function \(A\) on \((0,\infty)\), such that \(A(v,r)=A(r)\) for all \(v\in T^{1}X\). This function \(A\) is called the _density function_ of \(X\). \(A\) satisfies the following asymptotics:
\[A(r)\asymp\begin{cases}r^{n-1}&\text{ if }0<r\leq 1\\ e^{hr}&\text{ if }r>1\:.\end{cases} \tag{2.2}\]
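For orientation, one may keep in mind the model case \(X=\mathbb{H}^{n}(-1)\), which plays no role in the arguments below: there, up to a dimensional constant depending on the chosen normalization of \(\theta_{x}\), the density function is explicit,

\[A(r)=\sinh^{n-1}(r)\:,\qquad vol\left(B(x,R)\right)\asymp e^{(n-1)R}\ \text{ for }R>1\:,\]

so that the asymptotics (2.2) are immediate and \(h=n-1\).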
In [14], it was shown that for a simply connected, non-compact harmonic manifold \(X\) with a fixed basepoint \(o\in X\), the condition of purely exponential volume growth is equivalent to each of the following conditions:
1. \(X\) is Gromov hyperbolic.
2. \(X\) has rank one.
3. The geodesic flow of \(X\) is Anosov with respect to the Sasaki metric.
Moreover, the Gromov boundary coincides with the visibility boundary \(\partial X\) introduced in [1]. This last fact follows from the work in [12].
One has a family of measures on \(\partial X\) called the visibility measures \(\{\lambda_{x}\}_{x\in X}\). For \(x\in X\), let \(\theta_{x}\) denote the normalized canonical measure on \(T^{1}_{x}X\) (the unit tangent space at \(x\)) induced by the Riemannian metric; the visibility measure \(\lambda_{x}\) is then obtained as the push-forward of \(\theta_{x}\) to the boundary \(\partial X\) under the radial projection. The visibility measures \(\lambda_{x}\) are pairwise absolutely continuous. For \((x,\xi)\in X\times\partial X\), the Poisson kernel is obtained as the following Radon-Nikodym derivative:
\[P(x,\xi)=e^{-hB_{\xi}(x)}=\frac{d\lambda_{x}}{d\lambda_{o}}(\xi)\,.\]
As a consequence of the above identity one has that \(P[\lambda_{o}]\equiv 1\).
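Indeed, for every \(x\in X\),

\[P[\lambda_{o}](x)=\int_{\partial X}P(x,\xi)\:d\lambda_{o}(\xi)=\int_{\partial X}\frac{d\lambda_{x}}{d\lambda_{o}}(\xi)\:d\lambda_{o}(\xi)=\lambda_{x}(\partial X)=1\:,\]

since each \(\lambda_{x}\), being the push-forward of the normalized measure \(\theta_{x}\), is a probability measure.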
The following is the Martin representation formula for positive harmonic functions on \(X\), which is a consequence of [13, Corollary 5.13]:
**Lemma 2.1**.: _Let \(u\) be a positive harmonic function on \(X\). Then there is a unique, finite, positive Borel measure \(\mu\) on \(\partial X\) such that \(u=P[\mu]\)._
Next we introduce the notion of radial functions. For \(x\in X\), let \(d_{x}\) denote the distance function with respect to the point \(x\), that is, \(d_{x}(y):=d(x,y)\). A function \(f\) on X is called _radial_ around a point \(x\in X\) if \(f\) is constant on geodesic spheres centered at \(x\). Then note that for a function \(f\) radial around a point \(x\in X\), we can associate a function \(u\) on \(\mathbb{R}\) such that \(f=u\circ d_{x}\).
If we just say that a function \(f\) is radial, then it will be understood that \(f\) is radial around \(o\), that is, there exists a function \(u\) on \(\mathbb{R}\) such that \(f=u\circ d_{o}\). For \(x\in X\), one has the definition of an \(x-\)_translate_ of a radial function \(f\) as:
\[\tau_{x}f:=u\circ d_{x}\,. \tag{2.3}\]
Let \(\Delta\) denote the Laplace-Beltrami operator associated to the Riemannian metric on \(X\). Then one has the following result for Harmonic manifolds:
**Lemma 2.2**.: _Let \(f\in C^{2}(X)\) be radial. Then we have for all \(x\in X\),_
\[\tau_{x}(\Delta f)=\Delta(\tau_{x}f)\,.\]
Proof.: Let \(L_{R}\) denote the radial part of \(\Delta\), that is, the differential operator on \((0,\infty)\) defined by,
\[L_{R}:=\frac{d^{2}}{dr^{2}}+\frac{A^{\prime}(r)}{A(r)}\frac{d}{dr}\,.\]
Let \(f=u\circ d_{o}\), where \(u\) is the corresponding function on \(\mathbb{R}\). Then by repeated application of Proposition 3.2 of [1], we get
\[\tau_{x}(\Delta f)=\tau_{x}\left((L_{R}u)\circ d_{o}\right)=(L_{R}u)\circ d_{x }=\Delta(u\circ d_{x})=\Delta(\tau_{x}f)\,.\]
For a measurable function \(f\) on \(X\) and a measurable function which is radial, say \(g\) on \(X\), their convolution is defined as
\[f*g(x):=\int_{X}f(y)(\tau_{x}g)(y)dvol(y)\,, \tag{2.4}\]
whenever the above integral is well-defined.
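For instance, taking \(g\) to be the normalized indicator of a ball (the function \(\Omega_{r}\) appearing in Lemma 4.4 below), the convolution reduces to the usual ball average: since \(\tau_{x}\left(\frac{1}{vol(B(o,r))}\chi_{B(o,r)}\right)=\frac{1}{vol(B(o,r))}\chi_{B(x,r)}\) and \(vol(B(o,r))=vol(B(x,r))\) (the volume density being radial), one has

\[\left(f*\frac{\chi_{B(o,r)}}{vol(B(o,r))}\right)(x)=\frac{1}{vol\left(B(x,r)\right)}\int_{B(x,r)}f(y)\:dvol(y)\:.\]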
The following Lemma summarizes a few important properties of convolution. Proofs are straightforward consequences of the definition and can also be found in [1, 2].
**Lemma 2.3**.: _(1) If \(f\) and \(g\) are both measurable radial functions on \(X\) then if their convolution is defined at \(x\in X\), one has_
\[(f*g)(x)=(g*f)(x)\:.\]
_(2) If \(f\) is a measurable function on \(X\), \(g\) and \(h\) are measurable radial functions on \(X\) such that the convolutions are defined at \(x\in X\), then_
\[(f*(g*h))(x)=((f*g)*h)(x)\:.\]
_(3) If \(f\) and \(g\) are two radial functions on \(X\) such that their convolution \(f*g\) is defined at all points in \(X\) then \(f*g\) is also a radial function._
For \(\xi\in\partial X\), the level sets of the Busemann function \(B_{\xi}\) are called horospheres based at \(\xi\). For all \(\xi\in\partial X\), the horospheres based at \(\xi\) have the same positive, constant mean curvature \(h>0\), which can be obtained as
\[\Delta B_{\xi}\equiv h\:. \tag{2.5}\]
The following is a version of the Harnack inequality due to Yau in [10]:
**Lemma 2.4** (Harnack-Yau).: _Let \(X\) be a Hadamard manifold with \(-b^{2}\leq K_{X}\leq 0\). Then there exists a constant \(C(b,n)>0\) such that for any open set \(\Omega\subset X\) and every positive harmonic function \(u:\Omega\to(0,+\infty)\), one has_
\[\|\nabla\log u(x)\|\leq C(b,n)\:,\text{ for all }x\in\Omega\text{ with }d(x,\partial\Omega)\geq 1\:.\]
We next state without proof an easy consequence of Harnack-Yau:
**Lemma 2.5**.: _Let \(X\) be as in Lemma 2.4 and \(\{f_{n}\}\) be a non-decreasing sequence of harmonic functions on an open connected set \(\Omega\subset X\). Then either \(f_{n}(x)\to+\infty\) for all \(x\in\Omega\) or that \(\{f_{n}\}\) converges to a harmonic function uniformly on compact subsets of \(\Omega\)._
While working in polar coordinates, we will frequently use the following notation: for \(x\in X\) and \(v\in T_{x}^{1}X\), \(\gamma_{x,v}\) is the geodesic such that \(\gamma_{x,v}(0)=x\) and \(\gamma_{x,v}^{\prime}(0)=v\).
For \(f\in C^{2}(X)\), one has by the Taylor expansion for \(x\in X\), \(t>0\) sufficiently small:
\[\Delta f(x)\frac{t^{2}}{2n}+C(n)E(t)=\int_{T_{x}^{1}X}\left\{f\left(\gamma_{x,v}(t)\right)-f(x)\right\}\:d\theta_{x}(v)\:, \tag{2.6}\]
for some constant \(C(n)>0\) and a term \(E(t)\) which is of order \(t^{3}\:\).
We recall that for an open subset \(\Omega\subset X\), an upper semi-continuous function \(f:\Omega\to[-\infty,+\infty)\), with \(f\not\equiv-\infty\) is subharmonic on \(\Omega\) if
\[f(x)\leq\int_{T_{x}^{1}X}f\left(\gamma_{x,v}(r)\right)d\theta_{x}(v)\:, \tag{2.7}\]
for all \(x\in\Omega\) and \(r>0\) sufficiently small. It is known that if \(f\) is subharmonic on \(X\) then (2.7) is true for all \(r>0\). Moreover, \(f\) is locally integrable and bounded above on compact sets. For \(f\in C^{2}(X)\), the above notion of subharmonicity is equivalent to the condition that \(\Delta f\geq 0\). A function \(f\) is superharmonic if \(-f\) is subharmonic.
Now, since in our case,
\[\int_{1}^{+\infty}\frac{1}{A(r)}\:dr<+\infty\:,\]
we have a positive Green function, which is a radial function defined by
\[G(r)=\frac{1}{C(n)}\int_{r}^{+\infty}\frac{1}{A(s)}\:ds\:, \tag{2.8}\]
for some constant \(C(n)>0\). Then (2.2) yields the following estimates of the Green function:
\[G(r)\asymp\begin{cases}\frac{1}{r^{n-2}}&\text{ if }0<r\leq 1\\ e^{-hr}&\text{ if }r>1\:,\end{cases} \tag{2.9}\]
up to a positive constant depending only on \(n\) and \(h\), denoted by \(C_{1}(h,n)\). Then for \(x\in X\) the Green function with pole at \(x\) is defined by,
\[G_{x}(y):=(G\circ d_{x})(y)=G(d(x,y))\:,\text{ for }y\in X\:,\]
and is denoted by \(G(x,y)\). Note that it is symmetric in its arguments. The distributional Laplacian of \(G_{x}\) is,
\[\Delta G_{x}=-\delta_{x}\:.\]
\(G_{x}\) is harmonic on \(X\setminus\{x\}\) and superharmonic on \(X\). For a non-negative Borel measure \(\mu\) on \(X\), we say that it has a well-defined Green potential if there exists \(x_{0}\in X\) such that
\[G[\mu](x_{0})=\int_{X}G(x_{0},y)\:d\mu(y)<+\infty\:.\]
A well-defined Green potential is again a positive superharmonic function.
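We also record, for the reader's convenience, how the large-\(r\) asymptotic in (2.9) follows from (2.2) and (2.8): for \(r>1\),

\[G(r)=\frac{1}{C(n)}\int_{r}^{+\infty}\frac{ds}{A(s)}\asymp\int_{r}^{+\infty}e^{-hs}\:ds=\frac{e^{-hr}}{h}\:,\]

while for \(0<r\leq 1\) (and \(n\geq 3\)) the dominant contribution is \(\int_{r}^{1}s^{-(n-1)}\:ds\asymp r^{-(n-2)}\).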
If the sectional curvature, \(K_{X}\leq-1\), then \(\partial X\) is equipped with the visual metric,
\[\rho(\xi,\eta):=e^{-(\xi|\eta)_{o}}\:,\text{ for all }\xi,\eta\in\partial X\:. \tag{2.10}\]
For \(r\in(0,1]\), we have the visual balls with radius \(r\) and center \(\xi\in\partial X\),
\[\mathscr{B}(\xi,r)=\{\eta\in\partial X:\rho(\xi,\eta)<r\}\:. \tag{2.11}\]
In the general case of a Harmonic manifold of purely exponential volume growth, although \(\rho\) only defines a quasi-metric, the visibility measure \(\lambda_{o}\) satisfies the following estimate for all \(\xi\in\partial X\) and for all \(r\in(0,1]\) :
\[\lambda_{o}\left(\mathscr{B}(\xi,r)\right)\leq e^{6\delta h}r^{h}\:, \tag{2.12}\]
where \(\delta\) is the Gromov hyperbolicity constant. However, one can get a metric by raising \(\rho\) to suitable powers. For such spaces, one has the notion of 'asymptotic upper curvature bound of X', denoted by \(-s_{0}^{2}\) (see [1, 2]). It is a critical exponent \(s_{0}\in(0,+\infty]\) such that for all \(s\in(0,s_{0})\), \(\rho^{s}\) is Lipschitz metrizable, that is, there exists \(C_{2}=C_{2}(s)>1\) and a metric \(\rho_{s}\) such that
\[\frac{1}{C_{2}}\rho_{s}\leq\rho^{s}\leq C_{2}\rho_{s}\:. \tag{2.13}\]
For such a fixed \(s\in(0,s_{0})\), we work with the metric \(\rho_{s}\). By \(\mathscr{B}_{s}(\xi,r)\) we denote a visual ball in the metric \(\rho_{s}\), with center \(\xi\) and radius \(r\). Then one has the following containment relations:
\[\mathscr{B}_{s}\left(\xi,\frac{r^{s}}{C_{2}}\right)\subset\mathscr{B}(\xi,r) \subset\mathscr{B}_{s}\left(\xi,C_{2}r^{s}\right) \tag{2.14}\]
and for \(C_{3}=C_{2}^{1/s}>1\),
\[\mathscr{B}\left(\xi,\frac{r^{1/s}}{C_{3}}\right)\subset\mathscr{B}_{s}(\xi,r )\subset\mathscr{B}\left(\xi,C_{3}r^{1/s}\right)\,. \tag{2.15}\]
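Both containments in (2.14) are immediate from (2.13); for instance, for the first inclusion,

\[\rho_{s}(\xi,\eta)<\frac{r^{s}}{C_{2}}\ \Longrightarrow\ \rho(\xi,\eta)^{s}\leq C_{2}\:\rho_{s}(\xi,\eta)<r^{s}\ \Longrightarrow\ \rho(\xi,\eta)<r\:,\]

and the second inclusion follows in the same way from \(\rho_{s}\leq C_{2}\:\rho^{s}\); the relations (2.15) are obtained by the same manipulations.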
For \(\xi\in\partial X\), following the definition of \(\gamma_{\xi}\) mentioned in the introduction, we define the shadow of a ball \(B=B(x,r)\subset X\) (viewed from \(o\)) at \(\partial X\) to be the set
\[\mathcal{O}_{o}(B):=\left\{\xi\in\partial X:\gamma_{\xi}(t)\in B\,,\text{ for some }t>0\right\}.\]
Using the fact that the underlying \(X\) is Gromov \(\delta\)-hyperbolic, one has the following standard 'shadow lemma' for balls with sufficiently large radius:
**Lemma 2.6**.: _There exists \(C_{4}=C_{4}(\delta,s)>0\) such that for \(r\in\left(0,\min\left\{\frac{1}{C_{4}^{s}},\frac{1}{C_{2}}\right\}\right)\) and for all \(\xi\in\partial X\), we have_
\[\mathscr{B}_{s}(\xi,r)\subset\mathcal{O}_{o}\left(B\left(\gamma_{\xi}\left( \log\left(\frac{1}{C_{4}\,r^{1/s}}\right)\right),1+\delta\right)\right)\,.\]
As mentioned in the Introduction, the sectional curvature of \(X\) satisfies \(K_{X}\geq-b^{2}\) for some \(b>0\). If three points in \(X\) lie on the same geodesic, then they are called _collinear_. For three points \(x,y,z\) which are not collinear, we form the geodesic triangle \(\triangle\) in \(X\) by the geodesic segments \([x,y]\), \([y,z]\), \([z,x]\). A comparison triangle is a geodesic triangle \(\overline{\triangle}\) in \(\mathbb{H}^{2}(-b^{2})\) formed by geodesic segments \([\overline{x},\overline{y}]\), \([\overline{y},\overline{z}]\), \([\overline{z},\overline{x}]\) of the same lengths as those of \(\triangle\) (such a triangle exists and is unique up to isometry). Let \(\theta(y,z)\) denote the Riemannian angle between the points \(y\) and \(z\), subtended at \(x\). The corresponding angle between \(\overline{y}\) and \(\overline{z}\) subtended at \(\overline{x}\) is called the _comparison angle_ of \(\theta(y,z)\) in \(\mathbb{H}^{2}(-b^{2})\) and denoted by \(\theta_{b}(y,z)\). Then by Alexandrov's angle comparison theorem,
\[\theta_{b}(y,z)\leq\theta(y,z)\,. \tag{2.16}\]
Consider the geodesic that joins \(x\) to \(y\) and the one that joins \(x\) to \(z\), and extend these geodesics. The extended infinite geodesic rays will hit \(\partial X\) at two points, say \(\xi\) and \(\eta\) respectively. Now as points \(y^{\prime}\) and \(z^{\prime}\) on these geodesics converge to \(\xi\) and \(\eta\), the comparison angles \(\theta_{b}(y^{\prime},z^{\prime})\) increase monotonically, and hence their limit exists. We define the comparison angle \(\theta_{b}(\xi,\eta)\) to be this limit and in fact we have,
\[e^{-b(\xi|\eta)_{x}}=\sin\left(\frac{\theta_{b}(\xi,\eta)}{2}\right)\leq\sin \left(\frac{\theta(\xi,\eta)}{2}\right)\,, \tag{2.17}\]
where \(\theta(\xi,\eta)\) is the Riemannian angle between \(\xi\) and \(\eta\) subtended at \(x\).
### Hausdorff Outer Measure and Hausdorff Dimension
In the setting of a general metric space, we now briefly recall the definitions of Hausdorff dimension and Hausdorff outer measure, together with some of their important properties. These can be found in [10].
Let \((M,d)\) be a metric space. Then for \(\varepsilon>0\), an \(\varepsilon\)-cover of a set \(E\subset M\) is a countable (or finite) collection of sets \(\{U_{i}\}\) with
\[0<diameter\:(U_{i})\leq\varepsilon\ \text{ for all }i\:,\quad\text{and}\quad E\subset\bigcup_{i}U_{i}\:.\]
For \(t\geq 0\), we recall that
\[\mathcal{H}_{\varepsilon}^{t}(E):=\inf\left\{\sum_{i}\left(diameter\:(U_{i}) \right)^{t}:\{U_{i}\}\text{ is an $\varepsilon$-cover of }E\right\}\:.\]
Then the \(t\)-dimensional Hausdorff outer measure of \(E\) is defined by,
\[\mathcal{H}^{t}(E):=\lim_{\varepsilon\to 0}\mathcal{H}_{\varepsilon}^{t}(E)\:.\]
The above value remains unaltered if one only considers covers consisting of balls.
The Hausdorff dimension of \(E\) is defined by
\[dim_{\mathcal{H}}E:=\inf\left\{t\geq 0:\mathcal{H}^{t}(E)<+\infty\right\}\:.\]
The following properties of Hausdorff dimension and Hausdorff outer measure will be crucial:
* _Countable stability:_ if \(\{E_{i}\}_{i=1}^{\infty}\) is a countable sequence of sets in \((M,d)\), then \[dim_{\mathcal{H}}\left(\bigcup_{i=1}^{\infty}E_{i}\right)=\sup_{i\in\mathbb{N }}\left\{dim_{\mathcal{H}}E_{i}\right\}\:.\]
* _Non-increasing in dimension:_ if \(0<t_{1}\leq t_{2}\) then for any \(E\), \(\mathcal{H}^{t_{2}}(E)\leq\mathcal{H}^{t_{1}}(E)\:.\)
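As a standard illustration of countable stability (included only for orientation), any countable set \(E=\{p_{1},p_{2},\dots\}\subset M\) has Hausdorff dimension \(0\): indeed \(\mathcal{H}^{t}(\{p_{i}\})=0\) for every \(t>0\), so \(dim_{\mathcal{H}}\{p_{i}\}=0\) for each \(i\), and hence

\[dim_{\mathcal{H}}E=\sup_{i\in\mathbb{N}}\left\{dim_{\mathcal{H}}\{p_{i}\}\right\}=0\:.\]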
## 3. Boundary behavior of Poisson integrals
For any complex measure \(\mu\) on \(\partial X\), its Poisson integral \(P[\mu]\) is a complex-valued harmonic function on \(X\). In this section we will determine the size of the exceptional sets of such Poisson integrals along radial geodesic rays. The key to this analysis is an estimate in terms of a maximal function.
### Estimates of Maximal Function
Let \(0<\alpha_{1}<\alpha_{2}\leq 1\) and \(\xi\in\partial X\). Then for a complex measure \(\mu\) on \(\partial X\), we consider the following maximal function:
\[M_{\alpha_{1},\alpha_{2}}[\mu](\xi):=\sup_{\alpha_{1}\leq r\leq\alpha_{2}}\frac {|\mu|(\mathscr{B}(\xi,r))}{r^{h}}\:. \tag{3.1}\]
When \(d\mu=fd\lambda_{o}\) for some suitable function \(f\) on \(\partial X\), we will denote the corresponding maximal function by \(M_{\alpha_{1},\alpha_{2}}[f]\).
Next we see an estimate relating the Poisson integral of a complex measure with the maximal function corresponding to the measure.
**Lemma 3.1**.: _Let \(\tau\geq 1\), \(0<\varepsilon\leq 1\) and \(\mu\) be a complex measure on \(\partial X\). Then there exists a constant \(C(h)>0\) such that for all \(t>\log(\tau/\varepsilon)\), one has for all \(\xi\in\partial X\),_
\[|P[\mu]\left(\gamma_{\xi}(t)\right)|\leq C(h)\left\{e^{ht}|\mu|\left(\mathscr{B }\left(\xi,\tau e^{-t}\right)\right)+\frac{M_{\tau e^{-t},\varepsilon}[\mu]( \xi)}{\tau^{h}}+\frac{e^{-ht}}{\varepsilon^{2h}}|\mu|(\partial X)\right\}\,. \tag{3.2}\]
Proof.: Fix \(\xi\in\partial X\) and \(t>\log(\tau/\varepsilon)\). Then note that \(\tau e^{-t}<\varepsilon\). Hence there exists a largest non-negative integer \(m\) such that
\[2^{m}\tau e^{-t}\leq\varepsilon\,.\]
Let
\[\mathscr{B}^{(0)} =\mathscr{B}\left(\xi,\tau e^{-t}\right)\,,\] \[\mathscr{B}^{(j)} =\mathscr{B}\left(\xi,2^{j}\tau e^{-t}\right)\setminus\mathscr{B }\left(\xi,2^{j-1}\tau e^{-t}\right),\,\text{for }1\leq j\leq m\,,\] \[\mathscr{B}^{(m+1)} =\partial X\setminus\mathscr{B}\left(\xi,2^{m}\tau e^{-t}\right)\,.\]
Now
\[|P[\mu]\left(\gamma_{\xi}(t)\right)|\leq\sum_{j=0}^{m+1}I_{j}\,,\]
where
\[I_{j}=\int_{\mathscr{B}^{(j)}}e^{-hB_{\eta}\left(\gamma_{\xi}(t)\right)}\,d| \mu|(\eta)\,,\,\text{for }0\leq j\leq m+1\,.\]
We note that, by the triangle inequality, for all \(\eta\in\partial X\),
\[B_{\eta}\left(\gamma_{\xi}(t)\right)=\lim_{t^{\prime}\to\infty}\left(d\left( \gamma_{\xi}(t),\gamma_{\eta}(t^{\prime})\right)-d\left(o,\gamma_{\eta}(t^{ \prime})\right)\right)\geq-d\left(o,\gamma_{\xi}(t)\right)=-t\,.\]
Hence,
\[I_{0}\leq\int_{\mathscr{B}^{(0)}}e^{ht}\,d|\mu|(\eta)=e^{ht}\,|\mu|\left( \mathscr{B}\left(\xi,\tau e^{-t}\right)\right)\,.\]
Next we note that Gromov products are monotonically non-decreasing along geodesics, which is a simple consequence of the triangle inequality. Hence in particular, for all \(\eta\in\partial X\) such that \(\eta\neq\xi\), one has
\[\lim_{t^{\prime}\to\infty}\left(\gamma_{\xi}(t)|\gamma_{\eta}(t^{\prime}) \right)_{o}\leq\left(\xi|\eta\right)_{o}\,.\]
Combining the above with the facts that
* \(B_{\eta}\left(\gamma_{\xi}(t)\right)=t-2\lim_{t^{\prime}\to\infty}\left(\gamma _{\xi}(t)|\gamma_{\eta}(t^{\prime})\right)_{o}\,,\)
* \(e^{-(\xi|\eta)_{o}}\geq 2^{j-1}\tau e^{-t}\) when \(\eta\in\mathscr{B}^{(j)}\,,\,\,\text{for }1\leq j\leq m\,,\)
it follows that
\[I_{j} \leq \int_{\mathscr{B}^{(j)}}e^{-ht}\,e^{2h(\xi|\eta)_{o}}\,d|\mu|(\eta)\] \[\leq \int_{\mathscr{B}^{(j)}}\frac{e^{-ht}}{\left(2^{j-1}\tau e^{-t} \right)^{2h}}\,d|\mu|(\eta)\] \[\leq \frac{|\mu|\left(\mathscr{B}\left(\xi,2^{j}\tau e^{-t}\right) \right)}{\left(2^{j-2}\tau\right)^{h}\left(2^{j}\tau e^{-t}\right)^{h}}\] \[\leq \frac{1}{\left(2^{j-2}\tau\right)^{h}}\,M_{\tau e^{-t},\varepsilon }[\mu](\xi)\;.\]
Therefore, there exists \(C(h)>0\) such that,
\[\sum_{j=1}^{m}I_{j}\leq\left(\sum_{j=1}^{m}\frac{1}{\left(2^{j-2}\right)^{h}} \right)\frac{M_{\tau e^{-t},\varepsilon}[\mu](\xi)}{\tau^{h}}\leq C(h)\,\frac{ M_{\tau e^{-t},\varepsilon}[\mu](\xi)}{\tau^{h}}\;.\]
Repeating the same argument as above, we get
\[I_{m+1}\leq\int_{\mathscr{B}^{(m+1)}}\frac{e^{-ht}}{\left(2^{m}\tau e^{-t} \right)^{2h}}\,d|\mu|(\eta)\;. \tag{3.3}\]
Now by the choice of \(m\),
\[2^{m}\tau e^{-t}>\frac{\varepsilon}{2}\;.\]
Plugging the above in (3.3), it follows that
\[I_{m+1}\leq\int_{\mathscr{B}^{(m+1)}}\frac{2^{2h}\,e^{-ht}}{\varepsilon^{2h}} \,d|\mu|(\eta)\leq\frac{2^{2h}\,e^{-ht}}{\varepsilon^{2h}}|\mu|\left(\partial X \right)\;.\]
Then summing up the above estimates, we get (3.2).
Lemma 3.1 has the following consequences.
**Corollary 3.2**.: _Let \(0<\varepsilon\leq 1\) and \(\mu\) be a complex measure on \(\partial X\). Then there exists a constant \(C(h)>0\) such that for all \(t>\log(1/\varepsilon)\), one has for all \(\xi\in\partial X\),_
\[|P[\mu]\left(\gamma_{\xi}(t)\right)|\leq C(h)\left\{2M_{e^{-t},\varepsilon}[ \mu](\xi)+\frac{e^{-ht}}{\varepsilon^{2h}}|\mu|(\partial X)\right\}\,.\]
Proof.: The Corollary follows by taking \(\tau=1\) in Lemma 3.1.
**Corollary 3.3**.: _Let \(\xi\in\partial X,\,\tau>1\) and \(t>\log(\tau)\). If \(f\) is a non-negative measurable function on \(\partial X\) such that \(f\equiv 1\) on \(\mathscr{B}\left(\xi,\tau e^{-t}\right)\) and \(f\leq 1\) on \(\partial X\), then there exists \(C_{5}=C_{5}(h,\delta)>0\) (where \(\delta\) is the Gromov hyperbolicity constant) such that_
\[P[f]\left(\gamma_{\xi}(t)\right)\geq 1-\frac{C_{5}}{\tau^{h}}\,.\]
Proof.: Let \(t>\log(\tau)\). We consider
\[g:=1-f\,.\]
Then \(g\) is a measurable, non-negative function on \(\partial X\) such that
\[g\equiv 0\mbox{ on }\mathscr{B}\left(\xi,\tau e^{-t}\right)\mbox{ and }g\leq 1 \mbox{ on }\partial X\:.\]
Then applying Lemma 3.1 with \(d\mu=g\,d\lambda_{o}\) and \(\varepsilon=1\), noting that the first term in (3.2) vanishes (as \(g\equiv 0\) on \(\mathscr{B}\left(\xi,\tau e^{-t}\right)\)), that \(|\mu|(\partial X)\leq\lambda_{o}(\partial X)=1\) and that \(e^{-ht}<\tau^{-h}\) (since \(t>\log(\tau)\)), we get that there exists \(C(h)>0\) such that,
\[P[g]\left(\gamma_{\xi}(t)\right) \leq C(h)\left\{\bigg{(}\frac{1}{\tau}\bigg{)}^{h}M_{\tau e^{-t},1}[\mu](\xi)+e^{-ht}\,|\mu|(\partial X)\right\}\] \[\leq C(h)\bigg{(}\frac{1}{\tau}\bigg{)}^{h}\left\{M_{\tau e^{-t},1}[\lambda_{o}](\xi)+1\right\} \tag{3.4}\]
Now using (2.12) and (3.1), it follows that
\[M_{\tau e^{-t},1}[\lambda_{o}](\xi)=\sup_{\tau e^{-t}\leq r\leq 1}\frac{ \lambda_{o}\left(\mathscr{B}(\xi,r)\right)}{r^{h}}\leq e^{6\delta h}\:.\]
Then plugging the above in (3.4), one has for some \(C(h,\delta)>0\),
\[P[g]\left(\gamma_{\xi}(t)\right)\leq\frac{C(h,\delta)}{\tau^{h}}\:.\]
Thus,
\[P[f]\left(\gamma_{\xi}(t)\right)=1-P[g]\left(\gamma_{\xi}(t)\right)\geq 1- \frac{C(h,\delta)}{\tau^{h}}\:.\]
### Upper bound on the Hausdorff dimension
Proof of Theorem 1.1.: For \(L>0\), we set
\[E_{\beta}^{L}(P[\mu]):=\left\{\xi\in\partial X:\limsup_{t\to+\infty}e^{-\beta t }\left|P[\mu]\left(\gamma_{\xi}(t)\right)\right|>L\right\}\:. \tag{3.5}\]
Our strategy will be to get some useful estimates on the \((h-\beta)/s\)-dimensional Hausdorff outer measure of the set defined in (3.5). First we choose and fix \(\varepsilon\in(0,1)\) and \(\xi\in E_{\beta}^{L}(P[\mu])\). Then by Corollary 3.2 there exists \(C(h)>0\) such that
\[C(h)L<\limsup_{t\to+\infty}e^{-\beta t}\:M_{e^{-t},\varepsilon}[\mu](\xi).\]
Hence, there exists \(t_{\xi}\in(0,+\infty)\) satisfying \(e^{-t_{\xi}}\leq\varepsilon\) such that
\[C(h)L<e^{-\beta t_{\xi}}\:\frac{\left|\mu\right|\left(\mathscr{B}\left(\xi,e^ {-t_{\xi}}\right)\right)}{e^{-ht_{\xi}}}\leq e^{-\beta t_{\xi}}\:\frac{\left| \mu\right|\left(\mathscr{B}_{s}\left(\xi,C_{2}\:e^{-st_{\xi}}\right)\right)}{ e^{-ht_{\xi}}}\:. \tag{3.6}\]
Now by the Vitali 5-covering Lemma, there exist countably many visual balls \(\{\mathscr{B}_{s}\left(\xi_{j},r_{j}\right)\}_{j=1}^{\infty}\), whose centers \(\xi_{j}\in E_{\beta}^{L}(P[\mu])\) satisfy (3.6), such that
* \(r_{j}:=C_{2}\:e^{-st_{\xi_{j}}}\leq C_{2}\:\varepsilon^{s}\:,\) for all \(j\in\mathbb{N}\),
* \(\mathscr{B}_{s}\left(\xi_{j},r_{j}\right)\cap\mathscr{B}_{s}\left(\xi_{k},r_{ k}\right)=\emptyset\) for all \(j\neq k\),
* \(E_{\beta}^{L}(P[\mu])\subset\bigcup_{j=1}^{\infty}\mathscr{B}_{s} \left(\xi_{j},5r_{j}\right)\:\:.\)
Then by (3.6), there exists \(C(h,\beta,s)>0\) such that
\[\sum_{j=1}^{\infty}\left(diameter\left(\mathscr{B}_{s}\left(\xi_{j},5r_{j}\right)\right)\right)^{(h-\beta)/s} \leq \left(\frac{C(h,\beta,s)}{L}\right)\sum_{j=1}^{\infty}|\mu|\left( \mathscr{B}_{s}\left(\xi_{j},r_{j}\right)\right)\] \[= \left(\frac{C(h,\beta,s)}{L}\right)|\mu|\left(\bigcup_{j=1}^{ \infty}\mathscr{B}_{s}\left(\xi_{j},r_{j}\right)\right)\] \[\leq \left(\frac{C(h,\beta,s)}{L}\right)|\mu|(\partial X)\,.\]
We note that the constant appearing in the right hand side of the last inequality is independent of the choice of \(\varepsilon\) and hence letting \(\varepsilon\to 0\), we get that
\[\mathcal{H}^{(h-\beta)/s}\left(E_{\beta}^{L}(P[\mu])\right)\leq\frac{C(h, \beta,s)}{L}|\mu|\left(\partial X\right)<+\infty\;. \tag{3.7}\]
As \(E_{\beta}^{\infty}(P[\mu])\subset E_{\beta}^{L}(P[\mu])\) for all \(L>0\), it follows that
\[\mathcal{H}^{(h-\beta)/s}\left(E_{\beta}^{\infty}(P[\mu])\right)=0\;.\]
Finally combining countable stability of the Hausdorff dimension and (3.7) we obtain,
\[dim_{\mathcal{H}}E_{\beta}(P[\mu])=\sup_{m\in\mathbb{N}}\left\{dim_{\mathcal{ H}}E_{\beta}^{\frac{1}{m}}(P[\mu])\right\}\leq(h-\beta)/s\;.\]
### The sharpness result
Proof of Theorem 1.2.: Since \(\mathcal{H}^{(h-\beta)/s}(E)=0\), for any \(m\in\mathbb{N}\), there exists a covering of \(E\) by visual balls \(\{\mathscr{B}_{s}^{(m,j)}\}_{j=1}^{\infty}\) such that
\[\sum_{j=1}^{\infty}\left(diameter\left(\mathscr{B}_{s}^{(m,j)}\right)\right)^{(h- \beta)/s}<2^{-m}\,. \tag{3.8}\]
If \(\mathscr{B}_{s}\) is a visual ball with center \(\eta\in\partial X\) and radius \(r\), then for notational convenience, \(2\mathscr{B}_{s}\) will denote the visual ball with the same center \(\eta\) and twice the radius, that is, \(2r\).
Now we define,
\[f:=\sum_{j,m}m\left(diameter\left(\mathscr{B}_{s}^{(m,j)}\right)\right)^{-( \beta/s)}\chi_{2\mathscr{B}_{s}^{(m,j)}}\;. \tag{3.9}\]
Then by (2.12) and (3.8), it follows that for some \(C(h,\delta,s)>0\),
\[\int_{\partial X}fd\lambda_{o} \leq \sum_{j,m}m\left(diameter\left(\mathscr{B}_{s}^{(m,j)}\right) \right)^{-(\beta/s)}\lambda_{o}\left(2\mathscr{B}_{s}^{(m,j)}\right)\] \[\leq C(h,\delta,s)\sum_{j,m}m\left(diameter\left(\mathscr{B}_{s}^{(m,j )}\right)\right)^{(h-\beta)/s}\] \[< C(h,\delta,s)\,\sum_{m=1}^{\infty}\frac{m}{2^{m}}\] \[< +\infty\:.\]
Thus \(fd\lambda_{o}\) defines a finite, positive Borel measure.
Now let \(\xi\in E\) and fix \(m\in\mathbb{N}\). Then there exists \(j_{m}\in\mathbb{N}\) such that \(\xi\in\mathscr{B}_{s}^{(m,j_{m})}\). If \(r_{m}\) is the radius of \(\mathscr{B}_{s}^{(m,j_{m})}\), then \(\mathscr{B}_{s}(\xi,r_{m})\subset 2\mathscr{B}_{s}^{(m,j_{m})}\). Then by Corollary 3.3, one has
\[P\left[\chi_{2\mathscr{B}_{s}^{(m,j_{m})}}\right](\gamma_{\xi}(t))\geq P\left[ \chi_{\mathscr{B}_{s}(\xi,r_{m})}\right](\gamma_{\xi}(t))\geq P\left[\chi_{ \mathscr{B}\left(\xi,\frac{r_{m}^{1/s}}{C_{3}}\right)}\right](\gamma_{\xi}(t) )\geq\frac{1}{2}\:, \tag{3.10}\]
whenever (following the statement of Corollary 3.3)
* \(\tau^{h}>\max\left\{1,\:2C_{5}\right\}\),
* \(t>\log(\tau)\),
* \(\tau e^{-t}\leq\frac{r_{m}^{1/s}}{C_{3}}\:.\)
Hence choosing \(\tau(=\tau(h,\delta))>0\) sufficiently large and setting
\[t_{m}:=\log(C_{3}\tau)+\frac{1}{s}\log\left(\frac{1}{r_{m}}\right)\:,\]
we have by (3.10),
\[P[f](\gamma_{\xi}(t_{m})) \geq m\big{(}diameter\left(\mathscr{B}_{s}^{(m,j_{m})}\right)\big{)} ^{-(\beta/s)}P\left[\chi_{2\mathscr{B}_{s}^{(m,j_{m})}}\right](\gamma_{\xi}( t_{m}))\] \[\geq m\left(2^{-\left(\frac{\beta}{s}+1\right)}C_{3}^{-\beta}\tau^{- \beta}\right)e^{\beta t_{m}}\:.\]
Hence, there exists \(C(h,\delta,\beta,s)>0\) such that for all \(m\in\mathbb{N}\),
\[e^{-\beta t_{m}}P[f](\gamma_{\xi}(t_{m}))\geq C(h,\delta,\beta,s)\:m\:. \tag{3.11}\]
Now by (3.8),
\[t_{m}>\log\left(2^{1/s}C_{3}\tau\right)+\frac{m}{h-\beta}\log(2)\to+\infty\text { as }m\to+\infty\:.\]
Hence (3.11) gives the result.
## 4. Riesz decomposition for subharmonic functions
### Riesz measure
In this subsection our aim would be to prove the existence of a unique Radon measure on \(X\) corresponding to a subharmonic function:
**Proposition 4.1**.: _If \(f\) is subharmonic on \(X\), then there exists a unique Radon measure \(\mu_{f}\) on \(X\) such that_
\[\int_{X}\psi\,d\mu_{f}=\int_{X}f\Delta\psi\,dvol\:,\text{ for all }\psi\in C^{2}_{c}(X)\:.\]
**Definition 4.2**.: _For a subharmonic function \(f\) on \(X\), the unique Radon measure \(\mu_{f}\) on \(X\) obtained in the conclusion of Proposition 4.1 is called the Riesz measure of \(f\)._
The following lemmas will be important for the proof of Proposition 4.1.
**Lemma 4.3**.: _Let \(f\) be a \(C^{2}\) subharmonic function on \(X\). Then for all \(x\in X\) and for all \(0<r_{1}\leq r_{2}\), one has_
\[\int_{T^{1}_{x}X}f\left(\gamma_{x,v}(r_{1})\right)d\theta_{x}(v)\leq\int_{T^{1 }_{x}X}f\left(\gamma_{x,v}(r_{2})\right)d\theta_{x}(v)\:.\]
Proof.: Let \(u_{1}\) and \(u_{2}\) be the harmonic extensions of \(f\) on the balls \(B(x,r_{1})\) and \(B(x,r_{2})\) respectively. Then by subharmonicity of \(f\), it follows that \(f\leq u_{2}\) on \(B(x,r_{2})\) and hence in particular on the sphere \(S(x,r_{1})\). Hence by the maximum principle, \(u_{1}(x)\leq u_{2}(x)\). Then using the mean value identity of harmonic functions, the result follows.
**Lemma 4.4**.: _For \(r>0\), we define \(\Omega_{r}:=\frac{1}{vol(B(o,r))}\chi_{B(o,r)}\). Then for \(f\in C^{2}(X)\) one has for some constant \(C(h,n)>0\),_
\[\Delta f(x)=\lim_{r\to 0}\frac{C(h,n)}{r^{2}}\left\{\left(f*\Omega_{r} \right)(x)-f(x)\right\}\:.\]
Proof.: Integrating the identity (2.6) and then using the estimates of the density function (2.2) for small \(r\) yields the identity,
\[\Delta f(x)=\lim_{r\to 0}\frac{C(h,n)}{r^{2}\:vol(B(o,r))}\int_{B(x,r)} \left\{f(y)-f(x)\right\}dvol(y)\:.\]
Now the result follows from the definitions of \(\Omega_{r}\) and the convolution.
Now we introduce the notion of an approximate identity.
**Definition 4.5**.: _A sequence of non-negative continuous functions \(\{h_{j}\}_{j=1}^{\infty}\) is an approximate identity in \(L^{1}(X,dvol)\) if_
* \(\int_{X}h_{j}\:dvol=1\:\)_, for all_ \(j\in\mathbb{N}\) _and_
* \(\lim_{j\to\infty}\int_{X\setminus B(o,\varepsilon)}h_{j}\:dvol=0\:\)_, for all_ \(\varepsilon>0\)_._
**Remark 4.6**.: _Let \(\{r_{j}\}_{j=1}^{\infty}\subset(0,+\infty)\) be a decreasing sequence with \(r_{j}\to 0\) as \(j\to+\infty\). For each \(j\), let \(h_{j}\) be a non-negative \(C^{\infty}\) radial function on \(X\) with support contained in \(\{x\in X:r_{j+1}<d(o,x)<r_{j}\}\) satisfying \(\int_{X}h_{j}\:dvol=1\). Then the sequence \(\{h_{j}\}_{j=1}^{\infty}\) forms a \(C^{\infty}\)-approximate identity._
**Lemma 4.7**.: _Let \(\{h_{j}\}_{j=1}^{\infty}\) be a \(C^{\infty}\)-approximate identity as defined in remark 4.6. If \(f\) is subharmonic on \(X\), then \(\{f*h_{j}\}_{j=1}^{\infty}\) is a non-increasing sequence of \(C^{\infty}\) subharmonic functions on \(X\) satisfying_
\[\left(f*h_{j}\right)(x)\geq f(x)\text{ and }\lim_{j\to\infty}\left(f*h_{j} \right)(x)=f(x)\:, \tag{4.1}\]
_for all \(x\in X\)._
Proof.: The statement \(f*h_{j}\in C^{\infty}(X)\) is a simple consequence of the facts that \(h_{j}\in C_{c}^{\infty}(X)\), for all \(j\in\mathbb{N}\) and \(f\) is locally integrable.
Let \(h_{j}=u_{j}\circ d_{o}\), where \(u_{j}\) is the corresponding function on \(\mathbb{R}\). The inequality in (4.1) follows by integration in polar coordinates,
\[\left(f*h_{j}\right)(x) = \int_{0}^{\infty}u_{j}(r)\:A(r)\left(\int_{T_{x}^{1}X}f\left( \gamma_{x,v}(r)\right)d\theta_{x}(v)\right)dr \tag{4.2}\] \[\geq f(x)\int_{0}^{\infty}u_{j}(r)\:A(r)\:dr\] \[= f(x)\:.\]
Next we fix \(x\in X\) and let \(\alpha>f(x)\). By upper semi-continuity of \(f\) there exists \(r>0\) such that
\[f(y)<\alpha\:,\text{ for all }y\in B(x,r)\:.\]
We note that for \(r_{j}<r\),
\[Supp\left(\tau_{x}h_{j}\right)\subset B(x,r)\:.\]
Then
\[\left(f*h_{j}\right)(x)=\int_{B(x,r)}f(y)\left(\tau_{x}h_{j}\right)(y)\:dvol(y )\leq\alpha\int_{X}h_{j}\:dvol=\alpha\:.\]
Hence,
\[\limsup_{j\to+\infty}\left(f*h_{j}\right)(x)\leq f(x)\:,\]
which combined with the inequality (4.2) yields (4.1).
To show that \(f*h_{j}\) is subharmonic for all \(j\in\mathbb{N}\), we consider the convolution \((f*h_{j})*\Omega_{r}\), where \(\Omega_{r}\) is as defined in Lemma 4.4. By repeated applications of Lemma 2.3 and by computations similar to that in (4.2), we get
\[\left(f*h_{j}\right)*\Omega_{r}=\left(f*\Omega_{r}\right)*h_{j}\geq f*h_{j}\:.\]
Then in view of Lemma 4.4, it follows that \(f*h_{j}\) is subharmonic for all \(j\in\mathbb{N}\).
Finally we show that the sequence is non-increasing. But first we will need an analogue of Lemma 4.3 for \(f\). Consider \(t_{2}\geq t_{1}>0\). As \(f*h_{j}\) are \(C^{\infty}\)-subharmonic functions with \(f*h_{j}(x)\geq f(x)\) for all \(x\in X\), we have by applying Lemma 4.3 to \(f*h_{j}\),
\[\int_{T_{x}^{1}X}f\left(\gamma_{x,v}(t_{1})\right)d\theta_{x}(v) \leq \int_{T_{x}^{1}X}\left(f*h_{j}\right)\left(\gamma_{x,v}(t_{1}) \right)d\theta_{x}(v)\] \[\leq \int_{T_{x}^{1}X}\left(f*h_{j}\right)\left(\gamma_{x,v}(t_{2}) \right)d\theta_{x}(v)\:.\]
Next using the fact that subharmonic functions are bounded above on compact sets we see that the reverse Fatou lemma is applicable on \(\{f*h_{j}\}_{j=1}^{\infty}\), which yields
\[\limsup_{j\to+\infty}\int_{T_{x}^{1}X}\left(f*h_{j}\right)\left(\gamma_{x,v}(t_{ 2})\right)d\theta_{x}(v)\leq\int_{T_{x}^{1}X}f\left(\gamma_{x,v}(t_{2})\right)d \theta_{x}(v)\:.\]
Combining the last two inequalities one gets the desired analogue of Lemma 4.3 for \(f\).
Now let \(m>l\). Since \(Supp(h_{l})\) is contained in \(\{r_{l+1}<d(o,x)<r_{l}\}\) and \(r_{l+1}\geq r_{m}\), we have
\[\left(f*h_{l}\right)(x) = \int_{r_{l+1}}^{r_{l}}u_{l}(r)A(r)\left(\int_{T_{x}^{1}X}f\left( \gamma_{x,v}(r)\right)d\theta_{x}(v)\right)dr\] \[\geq \int_{r_{l+1}}^{r_{l}}u_{l}(r)A(r)\left(\int_{T_{x}^{1}X}f\left( \gamma_{x,v}(r_{m})\right)d\theta_{x}(v)\right)dr\] \[= \int_{T_{x}^{1}X}f\left(\gamma_{x,v}(r_{m})\right)d\theta_{x}(v) \int_{X}h_{l}\:dvol\] \[= \int_{T_{x}^{1}X}f\left(\gamma_{x,v}(r_{m})\right)d\theta_{x}(v) \int_{X}h_{m}\:dvol\] \[\geq \int_{r_{m+1}}^{r_{m}}u_{m}(r)A(r)\left(\int_{T_{x}^{1}X}f\left( \gamma_{x,v}(r)\right)d\theta_{x}(v)\right)dr\] \[= \left(f*h_{m}\right)(x)\:.\]
**Remark 4.8**.:
1. _The proof of Lemma_ 4.7 _shows that Lemma_ 4.3 _is true for general subharmonic functions._
2. _For a subharmonic function_ \(f\)_, its restrictions to spheres are integrable, that is,_ (4.3) \[\int_{T_{x}^{1}X}\left|f(\gamma_{x,v}(r))\right|d\theta_{x}(v)<+\infty\:,\text { for all }x\in X\text{ and all }r>0\:,\] _This is seen as follows. Choose and fix_ \(x\in X\)_. Then combining the fact that_ \(f\) _is locally integrable with the polar decomposition of the volume measure, we get that the function_ \[\left(0,+\infty\right)\to\left(0,+\infty\right],\text{ defined by }\] \[r\mapsto\int_{T_{x}^{1}X}\left|f(\gamma_{x,v}(r))\right|d\theta_{x}(v)\] _is locally integrable with respect to the measure_ \(A(r)dr\)_. Then as_ \(A(r)dr\) _is a regular Borel measure on_ \(\left(0,+\infty\right)\)_, it follows that_ (4.4) \[\int_{T_{x}^{1}X}\left|f(\gamma_{x,v}(r))\right|d\theta_{x}(v)<+\infty\:,\] _for almost every_ \(r\in\left(0,+\infty\right)\:\) _, with respect to the measure_ \(A(r)dr\)_. Then as the above measure takes positive values on every non-empty open set in_ \(\left(0,+\infty\right)\)_, it
follows that (4.4) is true for \(r\) belonging to a dense subset of \((0,+\infty)\). Now as \(f\) is bounded above on compacts, we get that_
\[\int_{T^{1}_{x}X}f(\gamma_{x,v}(r))\,d\theta_{x}(v)>-\infty\:, \tag{4.5}\]
_is true for \(r\) belonging to a dense subset of \((0,+\infty)\). Then part \((1)\) of this remark yields that (4.5) is true for all \(r\in(0,+\infty)\). Combining this with the fact that \(f\) is bounded above on compacts, (4.3) follows._
Now we are in a position to prove Proposition 4.1.
Proof of Proposition 4.1.: We first show that
\[\int_{X}f\Delta\psi\:dvol\geq 0\:,\text{ for all }\psi\in C^{2}_{c}(X)\text{ with }\psi\geq 0\:. \tag{4.6}\]
Let \(\{h_{j}\}_{j=1}^{\infty}\) be a \(C^{\infty}\) approximate identity as defined in remark 4.6. Set \(f_{j}:=f*h_{j}\:.\) Then by Lemma 4.7, \(\{f_{j}\}_{j=1}^{\infty}\) is a non-increasing sequence of \(C^{\infty}\)-subharmonic functions on \(X\) that converges to \(f\) everywhere on \(X\). Now combining the facts that
* \(\psi\in C^{2}_{c}(X)\:,\)
* \(|f_{j}(x)|\leq|f(x)|+|f_{1}(x)|\:,\text{ for all }j\in\mathbb{N}\text{ and all }x\in X\:,\)
* both \(f,f_{1}\) being subharmonic are locally integrable,
we note that the Dominated Convergence Theorem is applicable for the sequence of functions \(\{f_{j}\Delta\psi\}_{j=1}^{\infty}\). Therefore by the Green's identity and the Dominated Convergence Theorem,
\[\int_{X}f\Delta\psi\:dvol=\lim_{j\to\infty}\int_{X}f_{j}\Delta\psi\:dvol=\lim _{j\to\infty}\int_{X}\psi\Delta f_{j}\:dvol\geq 0\:.\]
Hence,
\[L(\psi):=\int_{X}f\Delta\psi\:dvol\]
defines a positive linear functional on \(C^{\infty}_{c}(X)\). Then by usual density arguments (following the proof of Theorem 4.6.3 of [10] for the real hyperbolic ball verbatim) \(L\) extends to \(C_{c}(X)\) as a positive linear functional. Now the result follows by the Riesz Representation theorem for positive linear functionals on \(C_{c}(X)\).
### Harmonic majorants
**Definition 4.9**.:
1. _A subharmonic function_ \(f\) _on_ \(X\) _is said to have a harmonic majorant if there exists a harmonic function_ \(h\) _on_ \(X\) _such that_ \[f(x)\leq h(x)\:,\text{ for all }x\in X\:.\]
2. _A harmonic function_ \(h\) _on_ \(X\) _is said to be the least harmonic majorant of a subharmonic function_ \(f\) _on_ \(X\) _if_ 1. \(h\) _is a harmonic majorant of_ \(f\) _and_ 2. \(h(x)\leq H(x)\) _for all_ \(x\in X\)_, whenever_ \(H\) _is a harmonic majorant of_ \(f\:.\)
In this subsection we will prove the following characterization of the existence of a least harmonic majorant in terms of the boundedness of integrals over spheres.
**Proposition 4.10**.: _Let \(f\) be subharmonic on \(X\). Then the following are equivalent:_
1. \(f\) _has a least harmonic majorant on_ \(X\)_._
2. \(f\) _has a harmonic majorant on_ \(X\)_._
3. _for all_ \(x\in X\)_,_ \[\lim_{r\to+\infty}\int_{T^{1}_{x}X}f\left(\gamma_{x,v}(r)\right)\,d\theta_{x}( v)<+\infty\:.\]
For the proof of Proposition 4.10, we will need the following:
**Lemma 4.11**.: _Let \(f\) be a subharmonic function on \(X\). Choose and fix \(x_{0}\in X\). Then for each \(r>0\), there exists a harmonic function \(f^{(r)}\) on \(B(x_{0},r)\) such that_
1. \(f(x)\leq f^{(r)}(x)\) _for all_ \(x\in B(x_{0},r)\) _and_
2. \[\int_{T^{1}_{x_{0}}X}f\left(\gamma_{x_{0},v}(r)\right)\,d\theta_{x_{0}}(v)=f^ {(r)}(x_{0})\:.\]
_Furthermore, if \(F\) is harmonic on an open subset \(\Omega\) of \(X\) with \(\overline{B(x_{0},r)}\subset\Omega\) and \(F(x)\geq f(x)\) for all \(x\in\Omega\), then_
1. \(f^{(r)}(x)\leq F(x)\) _for all_ \(x\in B(x_{0},r)\:.\)__
2. _If_ \(0<r_{1}<r_{2}\)_, then_ \[f^{(r_{1})}(x)\leq f^{(r_{2})}(x)\:,\text{ for all }x\in B(x_{0},r_{1})\:.\]
Proof.: Fix \(r>0\). Then by upper semi-continuity of \(f\), there exists a decreasing sequence \(\{f_{n}\}_{n=1}^{\infty}\) of continuous functions on \(S(x_{0},r)=\partial B(x_{0},r)\) such that
\[\lim_{n\to\infty}f_{n}\left(\gamma_{x_{0},v}(r)\right)=f\left(\gamma_{x_{0},v }(r)\right)\:,\text{ for all }v\in T^{1}_{x_{0}}X\:.\]
Now we consider the harmonic extensions of \(f_{n}\) to \(B(x_{0},r)\), say \(F_{n}\). Then by Harnack-Yau and the mean value property, it follows that the sequence \(\{F_{n}\}_{n=1}^{\infty}\) is uniformly Cauchy on all compact subsets of \(B(x_{0},r)\). So the sequence converges (uniformly on compacts) to a harmonic function on \(B(x_{0},r)\), say \(f^{(r)}\). We note that by the maximum principle, \(f(x)\leq f^{(r)}(x)\) for all \(x\in B(x_{0},r)\). Moreover, since \(\{f_{n}\}_{n=1}^{\infty}\) is a sequence of continuous functions decreasing to \(f\), part (2) of Remark 4.8 shows that the Dominated Convergence Theorem is applicable to \(\{f_{n}\}_{n=1}^{\infty}\). Then by the mean value property and the Dominated Convergence Theorem, it follows that
\[f^{(r)}(x_{0})=\lim_{n\to\infty}F_{n}(x_{0}) = \lim_{n\to\infty}\int_{T^{1}_{x_{0}}X}f_{n}\left(\gamma_{x_{0},v }(r)\right)d\theta_{x_{0}}(v)\] \[= \int_{T^{1}_{x_{0}}X}f\left(\gamma_{x_{0},v}(r)\right)d\theta_{x _{0}}(v)\:. \tag{4.7}\]
Now for \(F\) as in the second part of the statement, we consider
\[h_{n}\left(\gamma_{x_{0},v}(r)\right):=\min\{f_{n}\left(\gamma_{x_{0},v}(r) \right),F\left(\gamma_{x_{0},v}(r)\right)\}\:,\text{ for all }v\in T^{1}_{x_{0}}X\:.\]
and let \(H_{n}\) be their corresponding harmonic extensions to \(B(x_{0},r)\). By the maximum principle, \(H_{n}\leq F_{n}\) on \(B(x_{0},r)\). But \(\{h_{n}\}_{n=1}^{\infty}\) is a non-increasing sequence of continuous functions on \(S(x_{0},r)\) with
\[\lim_{n\to\infty}h_{n}\left(\gamma_{x_{0},v}(r)\right)=f\left(\gamma_{x_{0},v} (r)\right)\:,\text{ for all }v\in T^{1}_{x_{0}}X\:.\]
Just as above, an application of Harnack-Yau and the mean value property would yield that the sequence \(\{H_{n}\}_{n=1}^{\infty}\) converges (uniformly on compacts) to a harmonic function on \(B(x_{0},r)\). Moreover, by the mean value property, the Dominated Convergence Theorem and (4.7),
\[\lim_{n\to\infty}H_{n}(x_{0})=\int_{T^{1}_{x_{0}}X}f\left(\gamma_{x_{0},v}(r) \right)d\theta_{x_{0}}(v)=\lim_{n\to\infty}F_{n}(x_{0})\:.\]
Hence by the maximum principle,
\[f^{(r)}(x)=\lim_{n\to\infty}F_{n}(x)=\lim_{n\to\infty}H_{n}(x)\:,\text{ for all }x\in B(x_{0},r)\:.\]
But again by the maximum principle,
\[H_{n}(x)\leq F(x)\:,\text{ for all }n\in\mathbb{N},\:x\in B(x_{0},r)\:,\]
and thus it follows that
\[f^{(r)}(x)\leq F(x)\:,\text{ for all }x\in B(x_{0},r)\:.\]
This proves part \((iii)\) of the Lemma. Part \((iv)\) follows immediately from \((iii)\).
Now we are in a position to prove Proposition 4.10.
Proof of Proposition 4.10.: Clearly \((i)\) implies \((ii)\) and \((ii)\) implies \((iii)\). We now suppose that \((iii)\) holds. Choose and fix \(x_{0}\in X\). Let \(\{r_{n}\}_{n=1}^{\infty}\) be an increasing sequence of positive real numbers such that \(r_{n}\to+\infty\) as \(n\to+\infty\). For each \(n\in\mathbb{N}\), let \(f^{(n)}\) be the harmonic function on \(B(x_{0},r_{n})\) satisfying the conclusion of Lemma 4.11. By part \((iv)\) of Lemma 4.11,
\[f^{(n)}(x)\leq f^{(n+1)}(x)\:,\text{ for all }x\in B(x_{0},r_{n})\:.\]
Moreover by part \((ii)\) of Lemma 4.11,
\[\lim_{n\to\infty}f^{(n)}(x_{0})=\lim_{n\to\infty}\int_{T^{1}_{x_{0}}X}f\left( \gamma_{x_{0},v}(r_{n})\right)\:d\theta_{x_{0}}(v)<+\infty\:.\]
Therefore \(\{f^{(n)}\}_{n=1}^{\infty}\) is a non-decreasing sequence of harmonic functions with a finite limit at \(x_{0}\). Then by Lemma 2.5 it follows that \(\{f^{(n)}\}_{n=1}^{\infty}\) converges uniformly (on compacts) to a harmonic function, say \(F_{f}\). Then by part \((i)\) of Lemma 4.11, \(F_{f}\) is a harmonic majorant of \(f\). In fact, \(F_{f}\) is the least harmonic majorant of \(f\), which is seen as follows. Let \(F\) be any harmonic majorant of \(f\). Then by part \((iii)\) of Lemma 4.11,
\[f^{(n)}(x)\leq F(x)\:,\text{ for all }x\in B(x_{0},r_{n})\:,\text{ for all }n\in\mathbb{N}\:.\]
Hence,
\[F_{f}(x)\leq F(x)\:,\text{ for all }x\in X\:.\]
Henceforth the least harmonic majorant of a subharmonic function \(f\) (when it exists) will be denoted by \(F_{f}\). An immediate consequence of Proposition 4.10 is the following:
**Corollary 4.12**.: _Let \(f\leq 0\) be subharmonic on \(X\). Then \(F_{f}\equiv 0\) if and only if_
\[\lim_{r\rightarrow+\infty}\int_{T_{x}^{1}X}f\left(\gamma_{x,v}(r)\right)\,d \theta_{x}(v)=0\:,\]
_for all \(x\in X\)._
Proof.: By construction of \(F_{f}\) in the proof of Proposition 4.10, we have for all \(x\in X\),
\[F_{f}(x)=\lim_{r\rightarrow+\infty}\int_{T_{x}^{1}X}f\left(\gamma_{x,v}(r) \right)\,d\theta_{x}(v)\:.\]
Hence the result follows.
### Riesz decomposition
Having established the notions of the Riesz measure and the least harmonic majorant of a subharmonic function, we now aim to prove the Riesz decomposition theorem.
First we see a couple of lemmas.
**Lemma 4.13**.: _Let \(h\) be a radial \(C_{c}^{\infty}\) function on \(X\) with \(\int_{X}h\,dvol=0\), and set \(v:=-G\ast h\). Then \(v\) is a radial \(C_{c}^{\infty}\) function on \(X\) with \(\Delta v=h\:\)._
Proof.: Let \(h=u\circ d_{o}\), where \(u\) is the corresponding function on \(\mathbb{R}\). For fixed \(x\in X\), the function \(y\mapsto G(x,y)\) is harmonic in \(B(o,d(o,x))\) and hence for all \(r\in(0,d(o,x))\), by the mean value property,
\[\int_{T_{o}^{1}X}G\left(x,\gamma_{o,v}(r)\right)d\theta_{o}(v)=G(x,o)=G(d(o,x) )\:. \tag{4.8}\]
We now choose \(r>0\) such that \(Supp(h)\subset\overline{B(o,r)}\). Then for all \(x\) with \(d(o,x)>r\), by (4.8) and Lemma 2.3 we have
\[G\ast h(x) = \int_{X}h(y)G(x,y)\,dvol(y)\] \[= \int_{0}^{+\infty}u(r)A(r)\left(\int_{T_{o}^{1}X}G\left(x,\gamma_ {o,v}(r)\right)d\theta_{o}(v)\right)\,dr\] \[= G(d(o,x))\int_{X}h\:dvol\] \[= 0\:.\]
Thus \(Supp(v)\subset\overline{B(o,r)}\:\). Hence \(v\in C_{c}^{\infty}(X)\) (the regularity follows from the fact that the Green function is locally integrable) and is radial by Lemma 2.3.
Let \(\psi\in C_{c}^{2}(X)\). Then by Green's identity, symmetry of the Green function and Fubini's theorem, it follows that
\[\int_{X}\psi(y)\Delta v(y)\,dvol(y) = \int_{X}v(y)\Delta\psi(y)\,dvol(y)\] \[= -\int_{X}\left(\int_{X}G(x,y)h(x)\,dvol(x)\right)\Delta\psi(y)\,dvol (y)\] \[= -\int_{X}h(x)\left(\int_{X}G(x,y)\Delta\psi(y)\,dvol(y)\right)\,dvol (x)\] \[= -\int_{X}h(x)\left(\int_{X}\Delta G(x,y)\psi(y)\,dvol(y)\right)\, dvol(x)\] \[= \int_{X}h(x)\psi(x)\,dvol(x)\:.\]
Since the above is true for any \(\psi\in C_{c}^{2}(X)\), we get the result.
**Lemma 4.14**.: _Let \(f\leq 0\) be a subharmonic function on \(X\) satisfying for all \(x\in X\),_
\[\lim_{r\to+\infty}\int_{T_{x}^{1}X}f\left(\gamma_{x,v}(r)\right)\,d\theta_{x}( v)=0\:. \tag{4.9}\]
_Then for all \(x\in X\),_
\[f(x)=-\int_{X}G(x,y)\,d\mu_{f}(y)\:,\]
_where \(\mu_{f}\) is the Riesz measure of \(f\)._
Proof.: Let \(\{r_{j}\}_{j=1}^{\infty}\subset(0,1)\) be a sequence that decreases monotonically to \(0\). For each \(j\in\mathbb{N}\), let
\[A_{j}^{1} :=\{y\in X:r_{j+1}<d(o,y)<r_{j}\}\] \[A_{j}^{2} :=\{y\in X:e^{1/r_{j}}<d(o,y)<e^{1/r_{j+1}}\}\:.\]
For each \(j\in\mathbb{N}\) and \(k=1,2\), let \(h_{j}^{k}\) be a non-negative \(C^{\infty}\) radial function with
\[Supp\,h_{j}^{k}\subset A_{j}^{k}\:,\text{ and }\int_{X}h_{j}^{k}\:dvol=1\:.\]
We write \(h_{j}^{k}=u_{j}^{k}\circ d_{o}\), where \(u_{j}^{k}\) is the corresponding function on \(\mathbb{R}\). We choose and fix \(x\in X\). Now, since \(f\) is subharmonic, by Lemma 4.7,
\[f(x)=\lim_{j\to\infty}\left(f*h_{j}^{1}\right)(x)=\lim_{j\to\infty}\int_{X}f( y)\left(\tau_{x}h_{j}^{1}\right)(y)\,dvol(y)\:. \tag{4.10}\]
On the other hand by part (1) of remark 4.8,
\[\left(f*h_{j}^{2}\right)(x) = \int_{e^{1/r_{j}}}^{e^{1/r_{j+1}}}u_{j}^{2}(r)A(r)\left(\int_{T_{x}^ {1}X}f\left(\gamma_{x,v}(r)\right)d\theta_{x}(v)\right)\,dr\] \[\leq \int_{e^{1/r_{j}}}^{e^{1/r_{j+1}}}u_{j}^{2}(r)A(r)\left(\int_{T_{x }^{1}X}f\left(\gamma_{x,v}\left(e^{1/r_{j+1}}\right)\right)d\theta_{x}(v) \right)\,dr\] \[= \int_{T_{x}^{1}X}f\left(\gamma_{x,v}\left(e^{1/r_{j+1}}\right) \right)d\theta_{x}(v)\:.\]
Similarly one gets the lower bound to obtain
\[\int_{T_{x}^{1}X}f\left(\gamma_{x,v}\left(e^{1/r_{j}}\right)\right)d\theta_{x }(v)\leq\left(f*h_{j}^{2}\right)(x)\leq\int_{T_{x}^{1}X}f\left(\gamma_{x,v} \left(e^{1/r_{j+1}}\right)\right)d\theta_{x}(v)\:.\]
Then in view of (4.9), we get
\[\lim_{j\to\infty}\left(f*h_{j}^{2}\right)(x)=0\:. \tag{4.11}\]
Now for \(h_{j}:=h_{j}^{1}-h_{j}^{2}\), by (4.10) and (4.11), one has
\[f(x)=\lim_{j\to\infty}\left(f*h_{j}\right)(x)\:.\]
We now note that since the Green function is superharmonic on \(X\), Lemma 4.7 implies that \(G*h_{j}^{1}\) increases to \(G\). Arguments similar to Lemma 4.7 also yield that \(G*h_{j}^{2}\) decreases to \(0\). Hence, \(G*h_{j}\) increases to \(G\). Now for each \(j\in\mathbb{N}\), let \(v_{j}:=-\left(G*h_{j}\right)\), then by Lemma 4.13 we have \(v_{j}\in C_{c}^{\infty}(X)\) is radial and \(\Delta v_{j}=h_{j}\:.\)
Then by Lemma 2.2 and Monotone Convergence Theorem, it follows that
\[f(x) = \lim_{j\to\infty}\int_{X}f(y)\left(\tau_{x}h_{j}\right)(y)\,dvol (y)\] \[= \lim_{j\to\infty}\int_{X}f(y)\:\Delta(\tau_{x}v_{j})(y)\:dvol(y)\] \[= \lim_{j\to\infty}\int_{X}(\tau_{x}v_{j})(y)\:d\mu_{f}(y)\] \[= -\lim_{j\to\infty}\int_{X}\left(\tau_{x}\left(G*h_{j}\right) \right)(y)\:d\mu_{f}(y)\] \[= -\int_{X}\left(\tau_{x}\:G\right)(y)\:d\mu_{f}(y)\] \[= -\int_{X}G(x,y)\:d\mu_{f}(y)\:.\]
Proof of Theorem 1.3.: Let \(F_{f}\) be the least harmonic majorant of \(f\). Consider \(h:=f-F_{f}\:\). Then \(h\leq 0\) is a subharmonic function with the constant zero function as its
least harmonic majorant. Then by Corollary 4.12, for all \(x\in X\),
\[\lim_{r\to+\infty}\int_{T^{1}_{x}X}h\left(\gamma_{x,v}(r)\right)\,d\theta_{x}(v)= 0\:.\]
Thus by Lemma 4.14,
\[f(x)=F_{f}(x)-\int_{X}G(x,y)\,d\mu_{h}(y)\:.\]
So it is enough to show that \(\mu_{h}=\mu_{f}\). This is seen as follows. For all \(\psi\in C^{2}_{c}(X)\), by Green's identity
\[\int_{X}\psi\:d\mu_{h}=\int_{X}h\Delta\psi\:dvol = \int_{X}f\Delta\psi\:dvol-\int_{X}F_{f}\:\Delta\psi\:dvol\] \[= \int_{X}f\Delta\psi\:dvol-\int_{X}\psi\:\Delta F_{f}\:dvol\] \[= \int_{X}f\Delta\psi\:dvol\] \[= \int_{X}\psi\:d\mu_{f}\:.\]
## 5. Boundary behavior of Green potentials
### Upper bound on the Hausdorff dimension
In this subsection we will work under the hypothesis of Theorem 1.4. We fix an origin \(o\in X\). First, we state the following result regarding a condition imposed on the measure by the well-definedness of its Green potential. It is a simple consequence of the estimates of the Green function (2.9) and hence the proof is omitted.
**Lemma 5.1**.: _Let \(\mu\) be a Radon measure on \(X\). If \(G[\mu]\) is well-defined then_
\[\int_{X}e^{-hd(o,y)}\:d\mu(y)<+\infty\:.\]
Now for \(\mu\) as in the statement of Theorem 1.4 and for any \(x\in X\), we write,
\[G[\mu](x)=\int_{X}G(x,y)\:d\mu(y)=\int_{B(x,1)}G(x,y)\:d\mu(y)+\int_{X\setminus B (x,1)}G(x,y)\:d\mu(y)\:.\]
We define,
\[u_{1}(x):=\int_{B(x,1)}G(x,y)\:d\mu(y)\:,\text{ and }u_{2}(x):=\int_{X\setminus B (x,1)}G(x,y)\:d\mu(y)\:. \tag{5.1}\]
We first study the boundary behavior of the better-behaved term \(u_{2}\).
**Lemma 5.2**.: _Let \(\beta\in[0,h]\) and let \(u_{2}\) be as in (5.1). Then_
\[dim_{\mathcal{H}}E_{\beta}(u_{2})\leq(h-\beta)/s\:,\text{ and }\mathcal{H}^{(h- \beta)/s}\left(E_{\beta}^{\infty}(u_{2})\right)=0\:.\]
Proof.: Let \(x\in X\). By (2.9),
\[u_{2}(x)\leq C_{1}\left\{e^{-hd(o,x)}\mu\left(\{o\}\right)+\tilde{u_{2}}(x) \right\}\;,\]
where
\[\tilde{u_{2}}(x):=\int_{X\setminus(B(x,1)\cup\{o\})}e^{-hd(x,y)}\;d\mu(y)\;.\]
Then
\[\tilde{u_{2}}(x)=\int_{X\setminus(B(x,1)\cup\{o\})}e^{-hB_{y}(x)}e^{-hd(o,y)} \;d\mu(y)\;,\]
where
\[B_{y}(x)=d(x,y)-d(o,y)\;.\]
Now rewriting the above in terms of the Gromov product, one has
\[B_{y}(x)=d(o,x)-2(x|y)_{o}\;.\]
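For completeness, this identity follows directly from the definition of the Gromov product:
\[2(x|y)_{o}=d(o,x)+d(o,y)-d(x,y)\quad\Longrightarrow\quad B_{y}(x)=d(x,y)-d(o,y)=d(o,x)-2(x|y)_{o}\:.\]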
Combining this with the fact that the Gromov product is monotonically non-decreasing along geodesics, it follows that the function \(y\mapsto B_{y}(x)\) is monotonically non-increasing along geodesics. Hence, denoting by \(\eta_{y}\in\partial X\) the end-point of the extended infinite geodesic ray that joins \(o\) to \(y\), for \(y\in X\), we get
\[\tilde{u_{2}}(x) \leq \int_{X\setminus(B(x,1)\cup\{o\})}e^{-hB_{\eta_{y}}(x)}e^{-hd(o,y )}\;d\mu(y)\] \[\leq \int_{X\setminus\{o\}}e^{-hB_{\eta_{y}}(x)}e^{-hd(o,y)}\;d\mu(y)\;. \tag{5.2}\]
Now by Lemma 5.1 and the Riesz Representation Theorem for positive linear functionals on the space of continuous functions on \(\partial X\), there exists a Radon measure \(\tilde{\mu}\) on \(\partial X\) such that for all \(\varphi\) continuous on \(\partial X\), we have
\[\int_{X\setminus\{o\}}\varphi(\eta_{y})\;e^{-hd(o,y)}d\mu(y)=\int_{\partial X} \varphi(\eta)\;d\tilde{\mu}(\eta)\;.\]
Then by the boundary continuity of the Busemann function, it follows that
\[\int_{X\setminus\{o\}}e^{-hB_{\eta_{y}}(x)}e^{-hd(o,y)}d\mu(y)=\int_{\partial X }e^{-hB_{\eta}(x)}d\tilde{\mu}(\eta)=P[\tilde{\mu}](x)\;. \tag{5.3}\]
Thus combining (5.2) and (5.3), we obtain
\[u_{2}(x)\leq C_{1}\left\{e^{-hd(o,x)}\mu\left(\{o\}\right)+P[\tilde{\mu}](x) \right\}\;.\]
Hence, \(E_{\beta}(u_{2})\subset E_{\beta}\left(P[\tilde{\mu}]\right)\) and \(E_{\beta}^{\infty}(u_{2})\subset E_{\beta}^{\infty}\left(P[\tilde{\mu}]\right)\). The result now follows from Theorem 1.1.
Now we shift our focus to \(u_{1}\). But first we prove the following geometric estimate.
**Lemma 5.3**.: _Let \(B\) be a ball in \(X\) with center \(z\) and radius \(r\in(0,5]\). Then the diameter \(d\) of \(\mathcal{O}_{o}(B)\), satisfies the following upper bound, for some constant \(C_{6}=C_{6}(b,s)>0\) :_
\[d\leq C_{6}\,r^{s/2b}\;e^{-sd(o,z)}\;,\;\text{for }d(o,z)>6\;.\]
Proof.: Let \(x\in B\) such that \(o,x\) and \(z\) are not collinear. We first note that by triangle inequality,
\[\left(x|z\right)_{o}=\frac{1}{2}\left(d(o,x)+d(o,z)-d(x,z)\right)\geq d(o,z)-d(x,z)\geq d(o,z)-r\:. \tag{5.4}\]
Consider the geodesic that joins \(o\) to \(x\) and the one that joins \(o\) to \(z\), and extend them. These extended geodesic rays hit \(\partial X\) at some points, say \(\eta\) and \(\xi\) respectively. Then
\[\left(\xi|\eta\right)_{o}-\left(x|z\right)_{o}=\left(\eta|z\right)_{x}+\left( \xi|\eta\right)_{z}\:. \tag{5.5}\]
Then (5.4) and (5.5) yield,
\[\left(\xi|\eta\right)_{o}\geq d(o,z)-r+\left(\xi|\eta\right)_{z}\:,\]
which in turn gives,
\[\rho_{s}(\xi,\eta)\leq C_{2}\:e^{-s(\xi|\eta)_{o}}\leq C_{2}\:e^{-sd(o,z)}\:e ^{sr}\:e^{-s(\xi|\eta)_{z}}\:. \tag{5.6}\]
Now as \(r\in(0,5]\), it is enough to obtain an upper bound on \(e^{-(\xi|\eta)_{z}}\). Let \(\alpha\) (respectively \(\theta\)) denote the Riemannian angle between \(\eta\) and \(\xi\) (respectively \(\eta\) and \(o\)) subtended at \(z\). Then \(\alpha+\theta=\pi\). Now we claim that
\[\frac{e^{-2b(o|\eta)_{z}}-e^{-2bd(o,z)}}{1-e^{-2bd(o,z)}}\leq\sin^{2}\left( \frac{\theta}{2}\right)\:. \tag{5.7}\]
The above claim is proved as follows. Let \(\gamma\) be the geodesic ray starting from \(z\) and hitting \(\partial X\) at \(\eta\). Now for any \(t\in(0,+\infty)\), we consider the geodesic triangle \(\triangle(z,o,\gamma(t))\). Then let \(\theta_{b}(t)\) denote the angle corresponding to \(\theta\) in the comparison triangle \(\overline{\triangle}(z,o,\gamma(t))\) in \(\mathbb{H}^{2}(-b^{2})\). By the angle comparison theorem,
\[\sin\left(\frac{\theta_{b}(t)}{2}\right)\leq\sin\left(\frac{\theta}{2}\right) \:,\mbox{ for all }t\in(0,+\infty)\:.\]
Now by the hyperbolic law of cosines,
\[\sin^{2}\left(\frac{\theta_{b}(t)}{2}\right)=\frac{\cosh bd(o,\gamma(t))- \cosh b(d(o,z)-d(\gamma(t),z))}{2\sinh bd(o,z)\sinh bd(\gamma(t),z)}\:.\]
Then
\[\lim_{t\to+\infty}\frac{\cosh bd(o,\gamma(t))}{2\sinh bd(o,z) \sinh bd(\gamma(t),z)} = \lim_{t\to+\infty}\frac{e^{bd(o,\gamma(t))}+e^{-bd(o,\gamma(t))}} {\left(e^{bd(o,z)}-e^{-bd(o,z)}\right)\left(e^{bd(\gamma(t),z)}-e^{-bd( \gamma(t),z)}\right)}\] \[= \frac{e^{-2b(o|\eta)_{z}}}{1-e^{-2bd(o,z)}}\:,\]
and
\[\lim_{t\to+\infty}\frac{\cosh b(d(o,z)-d(\gamma(t),z))}{2\sinh bd (o,z)\sinh bd(\gamma(t),z)} = \lim_{t\to+\infty}\frac{e^{b(d(o,z)-d(\gamma(t),z))}+e^{-b(d(o,z )-d(\gamma(t),z))}}{\left(e^{bd(o,z)}-e^{-bd(o,z)}\right)\left(e^{bd(\gamma( t),z)}-e^{-bd(\gamma(t),z)}\right)}\] \[= \frac{e^{-2bd(o,z)}}{1-e^{-2bd(o,z)}}\:.\]
Hence, the claim is established. Now as by the triangle inequality,
\[\left(o|\eta\right)_{z}\leq d(x,z)\leq r\:,\]
plugging the above inequality in (5.7), we get that
\[\frac{e^{-2br}-e^{-2bd(o,z)}}{1-e^{-2bd(o,z)}}\leq\sin^{2}\left(\frac{\theta}{2 }\right)\:. \tag{5.8}\]
Then by (2.17) and (5.8), there exists \(C(b)>0\) such that,
\[e^{-2b(\xi|\eta)_{z}}\leq\sin^{2}\left(\frac{\alpha}{2}\right)=1-\sin^{2} \left(\frac{\theta}{2}\right)\leq 1-\frac{e^{-2br}-e^{-2bd(o,z)}}{1-e^{-2bd(o,z )}}=\frac{1-e^{-2br}}{1-e^{-2bd(o,z)}}\leq C(b)r\:.\]
Hence, there exists \(C(b,s)>0\) such that
\[e^{-s(\xi|\eta)_{z}}\leq C(b,s)\:r^{s/2b}\:. \tag{5.9}\]
Then plugging (5.9) in (5.6) we get that,
\[\rho_{s}(\xi,\eta)\leq C(b,s)\:r^{s/2b}\:e^{-sd(o,z)}\:.\]
Now since \(x\) was arbitrary, it follows that
\[d=\sup_{\eta_{1},\eta_{2}\in\mathcal{O}_{o}(B)}\rho_{s}(\eta_{1},\eta_{2}) \leq\sup_{\eta_{1},\eta_{2}\in\mathcal{O}_{o}(B)}(\rho_{s}(\eta_{1},\xi)+\rho _{s}(\xi,\eta_{2})\:)\leq 2\:C(b,s)\:r^{s/2b}\:e^{-sd(o,z)}\:.\]
Now we get back to estimating \(u_{1}\). We introduce some new notation. Choose and fix \(R>0\) and set for \(L>0\),
\[A_{\beta,R}(L):=\{x\in X\setminus B(o,R):e^{-\beta d(o,x)}u_{1}(x)>L\}\:.\]
**Lemma 5.4**.: _Let \(L>0\), \(\beta\in[0,h-n+2)\) and \(u_{1}\) defined as in (5.1). Then there exists a countable collection of balls \(\{B(x_{j},r_{j})\}_{j=1}^{\infty}\) with \(d(o,x_{j})\geq R\) and \(r_{j}\in(0,5]\) for all \(j\in\mathbb{N}\), such that it covers \(A_{\beta,R}(L)\) and moreover, one has_
\[\sum_{j=1}^{\infty}\left\{r_{j}\:e^{-d(o,x_{j})}\right\}^{h-\beta}\leq\frac{C (h,n,\beta)}{L}\int_{X}e^{-hd(o,x)}\:d\mu(x)\:,\]
_for some positive constant \(C(h,n,\beta)>0\)._
Proof.: Let
\[C_{7}:=\frac{1}{C_{1}}\left(1+\frac{2-n}{h-\beta}\right)\:,\]
where \(C_{1}\) is the constant implicit in (2.9). Note that our hypothesis on \(\beta\) implies that \(C_{7}>0\). Choose and fix \(x\in X\setminus B(o,R)\). Next we claim that if
\[\mu\left(B(x,r)\right)\leq C_{7}\:L\:e^{\beta d(o,x)}\:r^{h-\beta}\:,\text{ holds for all }0<r\leq 1\:,\text{ then }u_{1}(x)\leq L\:e^{\beta d(o,x)}\:.\]
The above claim is established as follows. By (2.9), we have
\[u_{1}(x)\leq C_{1}\int_{B(x,1)}d(x,y)^{2-n}d\mu(y)\:.\]
Now applying Fubini-Tonelli to the function \((y,r)\mapsto r^{1-n}\chi_{\{(y,r)\,:\,d(x,y)<r\}}\), it follows that
\[\int_{B(x,1)}d(x,y)^{2-n}d\mu(y) = \mu\left(B(x,1)\right)+(n-2)\int_{0}^{1}r^{1-n}\mu\left(B(x,r) \right)dr\] \[\leq C_{7}L\,e^{\,\beta d(o,x)}\left\{1+(n-2)\int_{0}^{1}r^{h+1-n- \beta}dr\right\}\] \[= \frac{L\,e^{\,\beta d(o,x)}}{C_{1}}\,.\]
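For the reader's convenience, the constant works out exactly: since \(\int_{0}^{1}r^{h+1-n-\beta}\,dr=\frac{1}{h+2-n-\beta}\), the definition of \(C_{7}\) gives
\[C_{1}C_{7}\left(1+\frac{n-2}{h-\beta-(n-2)}\right)=\left(1-\frac{n-2}{h-\beta}\right)\frac{h-\beta}{h-\beta-(n-2)}=1\:.\]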
Hence the claim follows. Thus for \(x\in A_{\beta,R}(L)\), the above claim ensures the existence of \(r_{x}\in(0,1]\) such that
\[e^{-\beta d(o,x)}\mu\left(B(x,r_{x})\right)>C_{7}L\,r_{x}^{h-\beta}\,. \tag{5.10}\]
Then an application of Vitali 5-covering lemma yields a countable, disjoint collection of balls \(B(x_{j},r_{x_{j}})\) such that each ball satisfies the inequality (5.10) and for \(r_{j}:=5r_{x_{j}}\), the balls \(B(x_{j},r_{j})\) cover \(A_{\beta,R}(L)\). Then by (5.10), we get
\[\sum_{j=1}^{\infty}\left(r_{j}\,e^{-d(o,x_{j})}\right)^{h-\beta} \leq \left(\frac{5^{h-\beta}}{C_{7}L}\right)\sum_{j=1}^{\infty}e^{-hd (o,x_{j})}\,\mu\left(B\left(x_{j},r_{x_{j}}\right)\right)\] \[\leq \left(\frac{5^{h-\beta}\,e^{h}}{C_{7}L}\right)\int_{X}e^{-hd(o,x) }\,d\mu(x)\,.\]
Now we do the estimates on the boundary. For \(\xi\in\partial X\), define
\[M_{\beta,R}[u_{1}](\xi):=\sup_{t>R}\,e^{-\beta t}u_{1}\left(\gamma_{\xi}(t) \right)\,.\]
**Lemma 5.5**.: _Let \(L>0,\,\beta\in\left[0,h-n+2\right),b^{\prime}:=\max\{2b,1\}\) and \(u_{1}\) be as defined in (5.1). For_
\[0<\varepsilon<C_{6}\bigg{(}\frac{5}{e^{6}}\bigg{)}^{s/b^{\prime}}\text{ and }R\geq\log(5)+\bigg{(}\frac{b^{\prime}}{s}\bigg{)}\log(C_{6}/\varepsilon)\,,\]
_(where \(C_{6}>0\) is as in the conclusion of Lemma 5.3), we have,_
\[\mathcal{H}_{\varepsilon}^{b^{\prime}(h-\beta)/s}\left(\{\xi\in\partial X:M_{ \beta,R}[u_{1}](\xi)>L\}\right)\leq\frac{C(h,n,\beta,b,s)}{L}\int_{X}e^{-hd(o, x)}d\mu(x)\,,\]
_for some constant \(C(h,n,\beta,b,s)>0\)._
Proof.: Let \(\xi\in\partial X\) such that \(M_{\beta,R}[u_{1}](\xi)>L\). Then there exists \(t_{\xi}>R\) such that
\[e^{-\beta t_{\xi}}\,u_{1}\left(\gamma_{\xi}(t_{\xi})\right)>L\,.\]
Then \(\gamma_{\xi}(t_{\xi})\in A_{\beta,R}(L)\). Let \(\{B\left(x_{j},r_{j}\right)\}_{j=1}^{\infty}\) be the balls obtained in the conclusion of Lemma 5.4. Then
\[\{\xi\in\partial X:M_{\beta,R}[u_{1}](\xi)>L\}\subset\bigcup_{j=1}^{\infty} \mathcal{O}_{o}\left(B\left(x_{j},r_{j}\right)\right)\,.\]
In fact, the diameters of \(\mathcal{O}_{o}\left(B\left(x_{j},r_{j}\right)\right)\) are uniformly bounded by \(\varepsilon\). This is seen as follows. We note that the hypotheses on \(\varepsilon\) and \(R\) imply that \(R>6\) and thus \(d(o,x_{j})\geq R>6\). Moreover as \(r_{j}\in(0,5]\), Lemma 5.3 is applicable and it yields
\[diameter\left(\mathcal{O}_{o}\left(B\left(x_{j},r_{j}\right)\right)\right)\leq C _{6}\,r_{j}^{s/b^{\prime}}\,e^{-(s/b^{\prime})d(o,x_{j})}\leq C_{6}\,5^{s/b^{ \prime}}\,e^{-(s/b^{\prime})R}\leq\varepsilon \tag{5.11}\]
(the last inequality follows from an elementary computation involving the hypothesis on \(R\)). Hence by (5.11) and Lemma 5.4, we get for some constant \(C(h,n,\beta,b,s)>0\),
\[\sum_{j=1}^{\infty}\left(diameter\left(\mathcal{O}_{o}\left(B \left(x_{j},r_{j}\right)\right)\right)\right)^{b^{\prime}(h-\beta)/s} \leq C(h,n,\beta,b,s)\sum_{j=1}^{\infty}\left(r_{j}\,e^{-d(o,x_{j}) }\right)^{h-\beta}\] \[\leq \frac{C(h,n,\beta,b,s)}{L}\int_{X}e^{-hd(o,x)}\,d\mu(x)\,.\]
Hence the result follows.
We now complete the proof of Theorem 1.4.
Proof of Theorem 1.4.: For \(L>0\) and \(\beta\in[0,h-n+2)\), we define \(E_{\beta}^{L}(u_{1})\) as in the proof of Theorem 1.1. Then from Lemma 5.5 and Lemma 5.1, one has
\[\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{L}(u_{1})\right)\leq\frac {C(h,n,\beta,b,s)}{L}\int_{X}e^{-hd(o,x)}d\mu(x)<+\infty\,.\]
Hence it follows that
\[\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{\infty}(u_{1})\right)=0 \text{ and }dim_{\mathcal{H}}E_{\beta}^{\frac{1}{m}}(u_{1})\leq b^{\prime}(h-\beta)/s\,, \text{ for all }m\in\mathbb{N}\,.\]
Then by the countable stability of the Hausdorff dimension, we get
\[dim_{\mathcal{H}}E_{\beta}(u_{1})\leq b^{\prime}(h-\beta)/s\,.\]
Now as \(E_{\beta}\left(G[\mu]\right)\subset E_{\beta}(u_{1})\cup E_{\beta}(u_{2})\), one has from above and Lemma 5.2,
\[dim_{\mathcal{H}}E_{\beta}\left(G[\mu]\right)\leq\max\{dim_{\mathcal{H}}E_{ \beta}(u_{1}),dim_{\mathcal{H}}E_{\beta}(u_{2})\}\leq b^{\prime}(h-\beta)/s\,.\]
Next we note that as the Hausdorff outer measure is non-increasing in the dimension, one has from Lemma 5.2 that
\[\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{\infty}(u_{2})\right)\leq \mathcal{H}^{(h-\beta)/s}\left(E_{\beta}^{\infty}(u_{2})\right)=0\,.\]
Now as \(E_{\beta}^{\infty}\left(G[\mu]\right)\subset E_{\beta}^{\infty}(u_{1})\cup E _{\beta}^{\infty}(u_{2})\), it follows that
\[\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{\infty}\left(G[\mu] \right)\right)\leq\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{\infty} (u_{1})\right)+\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{\infty}(u_ {2})\right)=0\,.\]
**Remark 5.6**.: _For the class of negatively curved Harmonic manifolds with sectional curvature \(-b^{2}\leq K_{X}\leq-1\), for some \(b>1\), one has sharper upper bounds on the Hausdorff dimension of the exceptional sets for the Green potentials of Radon measures:_
\[\frac{b(h-\beta)}{s}\ \ \text{instead of}\ \ \frac{2b(h-\beta)}{s}\,.\]
_First we note that in this case \(s=1\) and one can work with the natural metric \(\rho\). Then with respect to \(\rho\), one has the following estimate on the diameters of shadows of balls:_
_Let \(B\) be a ball in \(X\) with center \(z\) and radius \(r\in(0,5]\). Then the diameter \(d\) of \(\mathcal{O}_{o}(B)\), satisfies the following upper bound, for some constant \(C(b)>0\) :_
\[d\leq C(b)\:r^{1/b}\:e^{-(1/b)\:d(o,z)}\:,\text{ for }d(o,z)>6\:.\]
_The above claim is proved as follows. Let \(x,y\in B\) be such that the three points \(o\), \(x\), and \(y\) are not collinear. Let \(\theta(x,y)\) be the Riemannian angle between \(x\) and \(y\), subtended at the origin \(o\). Let \(\theta_{1}(x,y)\) denote the corresponding comparison angle in \(\mathbb{H}^{2}(-1)\). Then by the hyperbolic law of cosines, one has_
\[\sin^{2}\left(\frac{\theta_{1}(x,y)}{2}\right)=\frac{\cosh d(x,y)-\cosh\left( \left|d(o,x)-d(o,y)\right|\right)}{2\sinh d(o,x)\sinh d(o,y)}\:.\]
_Then as \(d(o,x)>1\:,\:d(o,x)\geq d(o,z)-r\) and similarly for \(y\), we have_
\[\sinh d(o,x)\sinh d(o,y)\gtrsim e^{2d(o,z)}\:.\]
_Also since, \(d(x,y)\leq 2r\leq 10\), it follows that_
\[\cosh d(x,y)-\cosh\left(\left|d(o,x)-d(o,y)\right|\right)\leq\cosh 2r-1\lesssim r ^{2}\:.\]
_Then using the angle comparison theorem, we get for some constant \(C>0\), the following estimate:_
\[\sin\left(\frac{\theta(x,y)}{2}\right)\leq\sin\left(\frac{\theta_{1}(x,y)}{2} \right)\leq Cre^{-\,d(o,z)}\:. \tag{5.12}\]
_Now extend the geodesic ray joining \(o\) to \(x\) and the one joining \(o\) to \(y\). These extended geodesic rays hit \(\partial X\) at two points, say \(\eta_{x}\) and \(\eta_{y}\) respectively. Then by (2.17), it follows that_
\[e^{-b(\eta_{x}|\eta_{y})_{o}}\leq\sin\left(\frac{\theta(\eta_{x},\eta_{y})}{2 }\right)=\sin\left(\frac{\theta(x,y)}{2}\right)\:. \tag{5.13}\]
_Thus by (5.12) and (5.13), it follows that there exists \(C(b)>0\) such that_
\[\rho(\eta_{x},\eta_{y})=e^{-(\eta_{x}|\eta_{y})_{o}}=\left(e^{-b(\eta_{x}|\eta _{y})_{o}}\right)^{1/b}\leq\left(\sin\left(\frac{\theta(x,y)}{2}\right) \right)^{1/b}\leq C(b)\:r^{1/b}\:e^{-(1/b)\:d(o,z)}\:.\]
_The claim now follows by taking supremum over all such \(x,y\in B\)._
_Then using the above estimate and following the arguments as in Lemmas 5.4 and 5.5, we get the corresponding Hausdorff dimensions (with respect to \(\rho\)) in Theorem 1.4 to be \(b(h-\beta)\:.\)_
### Construction of Green potentials on realizable sets
In this subsection we work under the hypothesis of Theorem 1.5. Before proceeding with the proof of Theorem 1.5, we first see the following sufficient conditions for a non-negative Borel measure to have a well-defined Green potential, which again is an immediate consequence of (2.9).
**Lemma 5.7**.: _If \(\mu\) is a non-negative Borel measure on \(X\) satisfying_
\[Supp(\mu)\cap B(o,1)=\emptyset\text{ and }\int_{X}e^{-hd(o,y)}\:d\mu(y)<+ \infty\:,\]
_then \(G[\mu]\) is well-defined._
Proof of Theorem 1.5.: Since \(\mathcal{H}^{(h-\beta)/s}(E)=0\), we have for any \(m\in\mathbb{N}\), a covering of \(E\) by visual balls \(\{\mathscr{B}_{s}^{(m,j)}\}_{j=1}^{\infty}\) such that
\[\sum_{j=1}^{\infty}\left(diameter\left(\mathscr{B}_{s}^{(m,j)}\right)\right)^ {(h-\beta)/s}<2^{-k_{m}}\:, \tag{5.14}\]
where \(\{k_{m}\}_{m=1}^{\infty}\) is a strictly monotonically increasing sequence of positive integers such that
\[2^{-\frac{k_{1}s}{h-\beta}}<2\min\left\{\left(\frac{e^{-3(1+\delta)}}{C_{4}} \right)^{s},\frac{1}{C_{2}}\right\}\:, \tag{5.15}\]
where \(C_{4}>0\) is the positive constant obtained in the conclusion of Lemma 2.6.
Let \(\mathscr{B}_{s}^{(m,j)}=\mathscr{B}_{s}\left(\xi_{j}^{(m)},r_{j}^{(m)}\right)\). Then an elementary computation using (5.14) and (5.15) yields that for all \(m,j\in\mathbb{N}\),
\[r_{j}^{(m)}<\min\left\{\left(\frac{e^{-3(1+\delta)}}{C_{4}}\right)^{s},\frac{ 1}{C_{2}}\right\}\:,\:\text{ and hence }\:\log\left(\frac{1}{C_{4}\Big{(}r_{j}^{(m)}\Big{)}^{1/s}}\right)>3(1+ \delta)\:.\]
Then setting \(t_{j}^{(m)}=\log\left(\frac{1}{C_{4}\big{(}r_{j}^{(m)}\big{)}^{1/s}}\right)\), we get balls \(B\left(\gamma_{\xi_{j}^{(m)}}\left(t_{j}^{(m)}\right),1+\delta\right)\) whose centers satisfy
\[d\left(\gamma_{\xi_{j}^{(m)}}\left(t_{j}^{(m)}\right),o\right)=t_{j}^{(m)}>3(1 +\delta)\:. \tag{5.16}\]
Now applying Lemma 2.6, we get that for all \(m\), \(j\in\mathbb{N}\),
\[\mathscr{B}_{s}^{(m,j)}\subset\mathcal{O}_{o}\left(B\left(\gamma_{\xi_{j}^{(m )}}\left(t_{j}^{(m)}\right),1+\delta\right)\right) \tag{5.17}\]
We now set
\[f:=\sum_{j,m}m\;e^{\beta t_{j}^{(m)}}\,\chi_{B\left(\gamma_{\xi_{j}^{(m)}}\left(t_{j}^{(m)}\right),\,2(1+\delta)\right)}\;.\]
Then there
exists a positive constant \(C(h,n,\beta,\delta,s)>0\) such that by the triangle inequality, the definition of \(t_{j}^{(m)}\) and (5.14) we have,
\[\int_{X}e^{-hd(o,x)}f(x)dvol(x) \leq e^{2h(1+\delta)}\,\sum_{j,m}m\,\,\,e^{-(h-\beta)t_{j}^{(m)}}\, vol\left(B\left(\gamma_{\xi_{j}^{(m)}}\left(t_{j}^{(m)}\right),2(1+\delta) \right)\right)\] \[\leq C(h,n,\beta,\delta,s)\sum_{j,m}m\,\left(r_{j}^{(m)}\right)^{(h- \beta)/s}\] \[\leq C(h,n,\beta,\delta,s)\sum_{m=1}^{\infty}m\,2^{-k_{m}}\] \[< +\infty\;.\]
Then by Lemma 5.7, \(G\left[f\;dvol\right]\) is well-defined.
Now we show that \(E\subset E_{\beta}^{\infty}(u)\) where \(u=G\left[f\;dvol\right]\). Let \(\xi\in E\) and choose and fix an \(m\in\mathbb{N}\). Now \(\xi\in\mathscr{B}_{s}^{(m,j_{m})}\) for some \(j_{m}\in\mathbb{N}\). Then by (5.17), there exists \(x_{m}\in B\left(\gamma_{\xi_{j_{m}}^{(m)}}\left(t_{j_{m}}^{(m)}\right),1+ \delta\right)\) such that \(x_{m}\) lies on the geodesic ray \(\gamma_{\xi}\). Since
\[B(x_{m},1+\delta)\subset B\left(\gamma_{\xi_{j_{m}}^{(m)}}\left(t_{j_{m}}^{(m) }\right),2(1+\delta)\right)\;,\]
one has using radiality of the Green function,
\[u(x_{m}) \geq m\;e^{\beta t_{j_{m}}^{(m)}}\int_{B\left(\gamma_{\xi_{j_{m}}^{(m )}}\left(t_{j_{m}}^{(m)}\right),\,2(1+\delta)\right)}G(x_{m},y)\,dvol(y)\] \[\geq m\;\left(\,\frac{e^{\beta d(o,x_{m})}}{e^{\beta(1+\delta)}} \right)\int_{B(x_{m},1+\delta)}G(x_{m},y)\,dvol(y)\] \[\geq m\;e^{\beta d(o,x_{m})}\left(\frac{1}{e^{\beta(1+\delta)}} \int_{B(o,1+\delta)}G(o,y)\,dvol(y)\right)\;.\]
Thus as \(m\to+\infty\) we get that \(e^{-\beta d(o,x_{m})}u(x_{m})\to+\infty\). By construction, all the points \(x_{m}\) lie on the geodesic \(\gamma_{\xi}\) and are contained in the balls \(B\left(\gamma_{\xi_{j_{m}}^{(m)}}\left(t_{j_{m}}^{(m)}\right),1+\delta\right)\). Then as by (5.14),
\[t_{j_{m}}^{(m)}\geq\log\left(\frac{2^{1/s}}{C_{4}}\right)+\left(\frac{k_{m}}{ h-\beta}\right)\log 2\to+\infty\;\text{as}\;m\to+\infty\;,\]
it follows that \(\xi\in E_{\beta}^{\infty}(u)\).
## 6. Proof of Theorem 1.7
As \(u\) is a positive superharmonic function, \(-u\) is a negative subharmonic function with its least harmonic majorant, \(F_{-u}\leq 0\). Then by Theorem 1.3 and Lemma 2.1,
\[u=P[\mu_{1}]+G[\mu_{2}]\;,\]
where \(\mu_{1}\) is a finite, positive Borel measure on \(\partial X\) and \(\mu_{2}\) is a Radon measure on \(X\). Then as
\[E_{\beta}(u)\subset E_{\beta}\left(P[\mu_{1}]\right)\cup E_{\beta}\left(G[\mu_{2} ]\right)\,,\]
we have from Theorem 1.1, and Theorem 1.4 that
\[dim_{\mathcal{H}}E_{\beta}(u)\leq\max\left\{dim_{\mathcal{H}}E_{\beta}\left(P[ \mu_{1}]\right),\,dim_{\mathcal{H}}E_{\beta}\left(G[\mu_{2}]\right)\right\} \leq b^{\prime}(h-\beta)/s\,.\]
Similarly one concludes that
\[\mathcal{H}^{b^{\prime}(h-\beta)/s}\left(E_{\beta}^{\infty}(u)\right)=0\,.\]
The converse part follows from Theorem 1.5.
This completes the proof of Theorem 1.7.
## Acknowledgement
The author would like to thank Prof. Kingshook Biswas for many illuminating discussions and suggestions. The author is supported by a research fellowship from Indian Statistical Institute.
|
2309.15466 | Observing gravitational redshift with X-Ray emission in galaxy clusters
with Athena X-IFU | Context. The Doppler shift predicted by general relativity for light escaping
a gravitational potential has been observed on Earth as well as in the
direction of various stars and galaxy clusters at optical wavelengths. Aims.
Observing the gravitational redshift in the X-ray band within galaxy clusters
could provide information on their properties and, in particular, their
gravitational potential. We present a feasibility study of such a measurement,
using the capabilities of the next-generation European X-ray observatory
Athena. Methods. We used a simple generalized Navarro-Frenk-White potential
model along with a beta-model for the density of baryonic matter, which sets
the emission to provide an estimation of the observed redshift in the simplest
of cases. We generated mock observations with the Athena X-ray Integral Field
Unit (X-IFU) for a nearby massive cluster, while seeking to recover the
gravitational redshift along with other properties of the toy model cluster.
Results. We investigated the observability of the gravitational redshift in an
idealized test case of a nearby massive cluster with the Athena X-IFU
instrument, as well as its use in probing the properties of the potential well.
We were also able to constrain the mass to a 20 % level of precision and the
cosmological redshift to less than 1%, within a simplified and idealized
observational framework. More refined simulations accounting for further
effects such as the internal gas motions and the actual shape of the potential
well are required to fully investigate the feasibility of measuring the
gravitational redshift for a single target or statistically over a sample of
galaxy clusters. | Alexeï Molin, Nicolas Clerc, Étienne Pointecouteau, François Pajot, Edoardo Cuchetti | 2023-09-27T08:04:10Z | http://arxiv.org/abs/2309.15466v1 | # Observing gravitational redshift with X-Ray emission in galaxy clusters with Athena X-IFU
###### Abstract
Context:The Doppler shift predicted by general relativity for light escaping a gravitational potential has been observed on Earth as well as in the direction of various stars and galaxy clusters at optical wavelengths.
Aims:Observing the gravitational redshift in the X-ray band within galaxy clusters could provide information on their properties and, in particular, their gravitational potential. We present a feasibility study of such a measurement, using the capabilities of the next-generation European X-ray observatory Athena.
Methods:We used a simple generalized Navarro-Frenk-White potential model along with a \(\beta\)-model for the density of baryonic matter, which sets the emission to provide an estimation of the observed redshift in the simplest of cases. We generated mock observations with the Athena X-ray Integral Field Unit (X-IFU) for a nearby massive cluster, while seeking to recover the gravitational redshift along with other properties of the toy model cluster.
Results:We investigated the observability of the gravitational redshift in an idealized test case of a nearby massive cluster with the Athena X-IFU instrument, as well as its use in probing the properties of the potential well. We were also able to constrain the mass to a \(\sim\)20 % level of precision and the cosmological redshift to less than \(\sim\)1%, within a simplified and idealized observational framework. More refined simulations accounting for further effects such as the internal gas motions and the actual shape of the potential well are required to fully investigate the feasibility of measuring the gravitational redshift for a single target or statistically over a sample of galaxy clusters.
## 1 Introduction
Gravitational redshift is caused by the loss of energy of a photon emitted within a gravitational potential and traveling through it. This effect is predicted by general relativity (Einstein, 1916) as well as by most alternative gravity theories (Cataneo and Rapetti, 2018). The effective associated redshift is given by \(\Delta\Psi/c^{2}\), where \(\Delta\Psi\) is the difference in the gravitational potential between the point of emission and the observer, which is mainly the potential due to the mass of the considered astrophysical object along the line of sight. Hence, the measurement of this redshift can be used to probe either the potential or, equivalently, the mass distribution from which it derives.
Clusters of galaxies, as the most massive gravitationally bound objects in the Universe, are reasonable candidates for the observation of this effect. Some of the earliest predictions for such observations in clusters of galaxies appear in Cappi (1995) and Broadhurst and Scannapieco (2000). Measurements through optical spectra soon followed, as in Wojtak et al. (2011) or, more recently, Mpetha et al. (2021) and Rosselli et al. (2023). A comprehensive overview is provided in Sect. 4 of Cataneo and Rapetti (2018), which focuses on tests of gravity with galaxy clusters. In that same section, the authors discuss the observability of the gravitational redshift from X-ray spectra of clusters of galaxies, suggesting that future instruments might be able to achieve such measurements.
The X-Ray emission from the intracluster medium (ICM) in galaxy clusters arises mainly from the radiative cooling of the hot gas infalling within the halo potential well (Sarazin, 1988). The ICM is routinely observed in X-rays from the center of clusters to their outskirts (Ettori et al., 2019; Walker and Lau, 2022). This hot gas is highly ionized and shows strong emission lines from the various elements within it. These emission lines offer access to high precision measurements of the redshift through high resolution spectroscopy (Hitomi Collaboration et al., 2016). It is thus suited for the observation of the gravitational redshift as (at first order), the hot gas distribution follows that of the dark matter, which is the main source of the halo gravitational well. Mapping the weak signal expected from gravitational redshift requires (i) high resolution X-ray spectroscopy in order to retain a high precision over the redshift determination and (ii) a spatial resolution mapping capability to trace the gravitational redshift induced gradient from the center to the cluster's outer parts.
Current X-ray missions such as XMM-Newton or Chandra only provide one of these products at a time, with either low-spectral-resolution imagers such as EPIC (Turner et al., 2001) and ACIS (Garmire et al., 2003) or high-spectral-resolution dispersive spectrometers such as RGS (den Herder et al., 2001) and LETG/HETG (Brinkman et al., 2000; Canizares et al., 2005). The upcoming generation of X-ray observatories will carry integral field unit spectrometers, offering the capability to achieve spatially resolved, high-spectral-resolution observations in X-rays. The Resolve instrument (Ishisaki et al., 2022) on board the XRISM mission (Tashiro et al., 2020) will soon fly, although observations of the outer parts of clusters will likely be very limited, as the modest size of the XRISM mirrors impedes the measurement of
small redshift gradients. The X-ray Integral Field Unit (X-IFU, hereafter) on board the Athena observatory implements the science theme of the "hot and energetic Universe" (Nandra et al., 2013) and it should provide adequate performance. The X-IFU is required to have a 5 arcmin field of view (FoV) with a full width half maximum (FWHM) resolution of 5 arcseconds and a spectral resolution of 2.5 eV over the 0.2-7 keV energy range (Barret et al., 2018, 2023). With this work, we investigate the feasibility of measuring the gravitational redshift in massive clusters of galaxies with the X-IFU instrument.
The work and results presented in this paper were obtained with the current baseline configuration for the Athena mission. Given the current programmatic context, the European Space Agency is revisiting the formulation of the Athena mission science case and specifications. Our results may thus be affected by the to-be-defined instrumental configuration of the Athena mission. Throughout this study, we assume a \(\Lambda\)CDM cosmology with \(h=H_{0}/100\,\mathrm{km/s/Mpc}=0.7\), \(\Omega_{\Lambda}=0.7\) and \(\Omega_{m}=0.3\). In this framework, at a redshift of \(z=0.1\), 10 kilo-parsecs (kpc) correspond to an angular extent of 5.4 arcsec.
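As a quick sanity check of this conversion (not part of the simulation pipeline itself), the adopted cosmology can be evaluated with a few lines of Python; the snippet below is a minimal sketch assuming the astropy package is available.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted throughout this work: h = 0.7, Omega_m = 0.3, flat
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

# Proper transverse scale at z = 0.1
scale = cosmo.kpc_proper_per_arcmin(0.1).to(u.kpc / u.arcsec)
print(scale)               # ~1.85 kpc / arcsec
print(10 * u.kpc / scale)  # ~5.4 arcsec for 10 kpc
```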
## 2 X-IFU/Athena mock observations
In order to investigate the observability of the gravitational redshift from the X-Ray emission of galaxy clusters, we used a cluster toy model, based on the simulations presented in Cucchetti et al. (2018), to produce simulated observations with Athena X-IFU using the SIXTE instrument end-to-end simulator (Wilms et al., 2014; Dauser et al., 2019). The emission models and spectral fitting rely on the xspec software (Arnaud, 1996).
### The X-IFU instrument
As a next-generation European X-ray observatory, Athena (Barcons et al., 2017) will board an integral field unit spectrometer with unprecedented capabilities, the X-IFU. It will allow for the spatial mapping of emission lines over extended sources such as galaxy clusters, enabling spatially resolved spectroscopy with a resolving power of R\(\sim\)1000 (Barret, 2022). The X-IFU will be equipped with a high-precision detection chain including an array of more than a thousand Transition Edge Sensors (TES) cooled to 55 mK and high-precision readout electronics. It will provide the required high spectral resolution of 2.5 eV FWHM over the 0.2-7 keV energy band. Combined with the large collecting area of the Athena mirrors, it will benefit from an effective area of \(\sim\)1 m\({}^{2}\) at 1 keV. The requirement for the spatial resolution of the Athena mirrors is 5 arcsec half energy width (HEW). Taken together, these performances will fully open the era of spatially resolved, high-spectral-resolution observations at X-ray wavelengths, in the wake of the first glimpses provided by the SXS instrument onboard the Hitomi satellite and of the upcoming observations of the Resolve instrument (Sato et al., 2023) on board the XRISM mission (XRISM Science Team, 2022).
### Cluster toy model
For the purpose of our study, we chose to model a nearby massive cluster, with \(z=0.1\) and \(M_{200}=10^{15}\) M\({}_{\odot}\). Given the faintness of the gravitational redshift effect, local and very massive clusters are ideal targets for its detection. Less massive and/or more distant clusters would render such a detection almost impossible and, as such, they are not further considered in this study. A more detailed discussion on the cluster choice is provided in Sect. 3.1. The parameters of the cluster according to the model described below are summarized in Table 1. The angular size of the cluster at this distance, denoted \(\theta_{200}\), is provided as well.
The cluster toy model consists of a gas density model and a dark matter density model. The cluster is discretized as a grid of emitting particles, to which the parameters of the emission model are assigned based on their position in the cluster. The size of the grid is chosen such that it contains one X-IFU FoV and is deep enough to contain \(R_{200}\) of the cluster along the line of sight. At the chosen redshift, this corresponds to a grid of 7500 kpc in depth (i.e., along the line of sight), and 938x938 kpc in width.
#### 2.2.1 Redshift
The redshift of photons emitted in the cluster is the composition of multiple sources, which are detailed with the following equations from Cataneo & Rapetti (2018), for the emission point \(\mathbf{x}\) and an observer lying at the origin of the reference frame:
\[1+z_{\rm tot}=(1+z_{\rm cosmo})\left[1+\frac{1}{c^{2}}(\Psi(0)-\Psi(\mathbf{x}))+ \frac{\mathbf{n}\cdot\mathbf{v}}{c}+\frac{v^{2}}{2c^{2}}\right] \tag{1}\]
where \(z_{\rm cosmo}\) is the cosmological redshift, \(\Psi\) is the gravitational potential, \(\mathbf{n}\) is the unitary vector parallel to the line of sight, and \(\mathbf{v}\) is the velocity vector of the emitting point relative to the observer. The two last terms correspond respectively to the Doppler shift along the line of sight and the relativistic transverse Doppler shift. In the ICM, these are mainly due to the bulk and turbulent motions of the gas. We deliberately chose not to address these intrinsic motions of the gas in our study (we further discuss this choice in Sect. 5). The resulting approximation is then:
\[z_{\rm grav}=\frac{\Delta\Psi}{c^{2}}. \tag{2}\]
#### 2.2.2 Dark matter density model
We assumed that the dark matter (DM) density follows a generalized Navarro-Frenk-White radial profile (hereafter, gNFW). The gNFW profile is a generalization of the NFW profile (Navarro et al., 1997; Nagai et al., 2007). The gNFW profile has three slope indices, \(\alpha\), \(\beta\), and \(\gamma\), where \(\beta\) is the inner slope and \(\gamma\) is the outer one. We used a version presented in Zhu et al. (2019), which sets \(\alpha\) and \(\beta\) to 1. The profile is otherwise characterized by \(r_{s}\), a scale radius, the overdensity, \(\delta_{c}\), and \(\gamma\), the asymptotic slope when \(r\to 0\). The scale radius, \(r_{s}\), is related to the mass, \(M_{\delta}\), enclosed at the density contrast \(\delta\) (different from the overdensity) times the critical density of the Universe at redshift \(z\), \(\rho_{\rm crit}(z)\), as follows:
\[r_{s}=\left(\frac{M_{\delta}}{\frac{4}{3}\pi\delta\rho_{\rm crit}(z)}\right)^ {\frac{1}{3}}\frac{1}{c_{\delta}}, \tag{3}\]
with \(c_{\delta}\) being the concentration parameter. The overdensity, \(\delta_{c}\), can be expressed as a function of \(M_{\delta}\) as follows :
\[\delta_{c}=\frac{M_{\delta}}{\int_{0}^{R_{\delta}}\frac{4\pi r^{2}\rho_{\rm crit}(z)}{(r/r_{s})^{3}(1+r/r_{s})^{3}}\,dr}, \tag{4}\]
with \(R_{\delta}=c_{\delta}\cdot r_{s}\). This expression can be developed in the case of a gNFW density profile, as provided in the Appendix A. This entirely describes the DM density from which the gravitational
potential can be derived analytically (Zhu et al., 2019). In doing this, we neglect the contribution of the gas and stars to the gravitational potential. We use this model in the following sections. We also chose to add a constant to the potential \(\Psi\) to set \(\Psi(r\rightarrow+\infty)=0\). This allows for a straightforward conversion between the potential and the redshift of light emitted from a point \(\mathbf{r}\) in the cluster such that \(z=\Psi(\mathbf{r})/c^{2}\) or, when expressed as an equivalent velocity shift, \(v_{z}=\Psi(\mathbf{r})/c\).
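To give a feel for the amplitudes involved, the short sketch below evaluates the equivalent velocity shift \(v_{z}=\Psi(r)/c\) in the plain NFW case (\(\gamma=1\)) using the toy-model values of Table 1; it is an order-of-magnitude illustration only, not the gNFW expression actually used in the simulations, and the sign follows the \(\Psi(r\rightarrow+\infty)=0\) convention above.

```python
import numpy as np

G, c_light = 6.674e-11, 2.998e8             # SI units
Msun, Mpc = 1.989e30, 3.086e22

M200, R200, c200 = 1e15 * Msun, 2.0 * Mpc, 4.5
r_s = R200 / c200                           # NFW scale radius
m_c = np.log(1.0 + c200) - c200 / (1.0 + c200)

def psi_nfw(r):
    """NFW potential normalised to zero at infinity (gamma = 1 limit of the gNFW)."""
    x = r / r_s
    return -(G * M200 / (m_c * r_s)) * np.log(1.0 + x) / x

for r in (0.05, 0.5, 2.0):                  # radii in Mpc
    v_z = psi_nfw(r * Mpc) / c_light        # equivalent velocity shift v_z = Psi/c
    print(f"r = {r:4.2f} Mpc : v_z ~ {v_z / 1e3:+6.1f} km/s")  # a few tens of km/s
```

The differences of a few to \(\sim\)20 km/s across the cluster extent set the precision needed on the measured line centroids.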
#### 2.2.3 Gas density model
We modeled the emission of our toy model cluster ICM with a broadened APEC model (bapec) under xspec (Smith et al., 2001). This model represents the emission of a collisional, optically thin, diffuse plasma, mainly through Bremsstrahlung radiation for the continuum, as well as the atomic lines due to the different processes at play in the plasma (e.g., dielectronic recombination, ionization, and radiative transitions). The broadening of the lines is only thermal in our simulations, excluding other possible sources of broadening such as bulk motions or turbulence.
For this study, we restricted ourselves to a simple isothermal cluster with a homogeneous abundance throughout the cluster. We set the temperature such that \(k_{B}T=7\) keV. The solar abundances follow those of Anders & Grevesse (1989) and we set the intra-cluster gas global abundance such that \(Z/Z_{\rm solar}=0.7\). This leaves only the redshift and the normalisation as varying parameters for the bapec model.
The norm of each emitting volume element, V, of the cluster is defined as:
\[N=\frac{10^{-14}}{4\pi(D_{A}(1+z))^{2}}\int n_{e}n_{p}dV, \tag{5}\]
with \(D_{A}\) as the angular diameter distance of the cluster in cm, \(z\) as the cosmological redshift, \(n_{e}\) and \(n_{p}\) as the electron and proton particle densities in cm\({}^{-3}\), respectively. The resulting norm is thus expressed in cm\({}^{-5}\). For a fully ionized plasma, we can consider \(n_{e}=1.2n_{p}\). The emission model is multiplied by a photo-absorption model, phabs under XSPEC, using cross-sections from Verner et al. (1996), to account for the Galactic absorption. We fixed the hydrogen column density, denoted \(n_{H}\), to \(0.03\times 10^{22}\) cm\({}^{-2}\).
For analytical convenience, we adopted a simple \(\beta\)-model (Cavaliere & Fusco-Femiano, 1976) as our gas density model, although it is not the best suited to represent the actual distribution of the intra-cluster gas. It is parameterized by the core electron density, \(n_{0}\), the core radius, \(r_{c}\), and the slope, \(\beta\).
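As an illustration of how each emitting volume element of the grid is assigned a normalisation, the sketch below evaluates Eq. (5) for a single cubic cell under this \(\beta\)-model; the 10 kpc cell size and the \(\sim\)380 Mpc angular diameter distance are indicative values, not the exact discretisation used in the simulations.

```python
import numpy as np

kpc_cm = 3.086e21                          # 1 kpc in cm
z_cosmo = 0.1
D_A = 380.0e3 * kpc_cm                     # angular diameter distance at z = 0.1, in cm

n0, r_c, beta = 3.0e-3, 400.0, 2.0 / 3.0   # beta-model parameters (cm^-3, kpc)

def n_e(r_kpc):
    """Beta-model electron density."""
    return n0 * (1.0 + (r_kpc / r_c) ** 2) ** (-1.5 * beta)

def cell_norm(r_kpc, cell_kpc=10.0):
    """xspec (b)apec norm of a cubic cell of side cell_kpc at radius r_kpc (Eq. 5)."""
    ne = n_e(r_kpc)
    n_p = ne / 1.2                          # fully ionised plasma
    V = (cell_kpc * kpc_cm) ** 3            # cm^3
    return 1e-14 * ne * n_p * V / (4.0 * np.pi * (D_A * (1.0 + z_cosmo)) ** 2)

print(cell_norm(0.0), cell_norm(400.0))     # central cell vs cell at the core radius
```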
### Foreground and background emissions
We accounted for the astrophysical foreground and background emissions in our simulations following the model proposed by McCammon et al. ([2002]). It includes a non absorbed thermal model representing the local bubble (apec), a second absorbed one for the Galactic halo (phabs*apec), and an absorbed power law for the cosmic X-ray background (CXB, phabs*powerlaw). We adopted the parametrisation provided by Lotti et al. ([2014]). The hydrogen column density is kept at the same value as for the cluster model.
The instrumental background is also accounted for in our simulations. It is managed entirely by the SIXTE tool according to the X-IFU requirements of \(5\times 10^{-3}[\mathrm{counts}/\mathrm{s}/\mathrm{cm}^{2}/\mathrm{keV}]\). This instrumental background mainly results from the high-energy cosmic rays hitting the neighborhood of the detector.
### Observational strategies
We investigated various observational configurations in order to assess the feasibility of measuring the gravitational redshift with the X-IFU instrument on board Athena. We varied the number of X-IFU pointings from one to three and individual exposures from 125 ksec (kiloseconds) to 1 Msec. The six investigated configurations are illustrated in Fig 1. The various multiple pointings configurations allow us to sample measurements of the ICM emission as far as the characteristic radius of \(\sim 0.6R_{200}\) (\(\sim 0.9R_{500}\)).
## 3 Mock data analysis
The main output of the SIXTE simulator is a mock event list of the X-IFU observation. For each recorded event, namely, each detected X-ray photon, the measured energy, detector and sky coordinates, time of arrival, and so on, are provided. From the SIXTE mock event lists, we generated count images. The spectra were computed within concentric annuli of
\begin{table}
\begin{tabular}{c c} \hline Parameter & Value \\ \hline \(M_{200}\) & \(1\cdot 10^{15}M_{\odot}\) \\ \(R_{200}(\theta_{200})\) & 2 Mpc (18.5’) \\ \(c_{200}\) & 4.5 \\ \(\gamma\) & 1.2 \\ \(z_{\mathrm{cosmo}}\) & 0.1 \\ \(r_{c}\) & 400 kpc \\ \(\beta\) & 2/3 \\ \(n_{0}\) & \(3\cdot 10^{-3}\) [cm\({}^{-3}\)] \\ \(k_{B}T\) & 7 [keV] \\ Abundance (\(Z/Z_{\odot}\)) & 0.7 \\ nH & 0.03 [\(10^{22}\)cm\({}^{-2}\)] \\ \hline \end{tabular}
\end{table}
Table 1: Parameters of our toy model clusters for the gravitational potential, DM and gas densities, and the gas emission.
Figure 1: Observing strategies considered in our simulations. The layout of the X-IFU pointings is shown in the right column together with the exposure time for each pointing in ksec. The count map for the configuration “uniform exposure 1” is plotted in the first line and shows the cluster center.
constant width or over full X-IFU FoV. Each spectrum is fitted using xspec with the phabs*bapec model and the aforementioned model for the background, with the redshift, the cluster emission normalisation, temperature, abundance, and the background emission normalisations as free parameters. The velocity broadening of the bapec model, which is set to 0 when simulating the cluster emission, is also set to 0 during the fit. The justification for this choice is discussed in Sect. 3.2. From the best-fit value in each bin, we reconstructed the redshift profile used to assess the gravitational redshift. In Figure 2, we show an example of a count map for an observation of three contiguous pointings (i.e., the configuration named "uniform exposure 1," as defined in Figure 1).
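For concreteness, the fit of a single annulus could be scripted with PyXspec roughly as follows; the file name is a placeholder, the background components of Sect. 2.3 are omitted for brevity, and this is only a schematic of the set-up described above rather than the actual analysis code.

```python
from xspec import AllData, Model, Fit, Xset

Xset.abund = "angr"                 # Anders & Grevesse (1989) solar abundances

AllData("annulus_0.pha")            # spectrum (RMF/ARF referenced in its header)
AllData.ignore("**-0.2 12.0-**")    # restrict to the 0.2-12 keV band

m = Model("phabs*bapec")
m.phabs.nH = 0.03                   # Galactic column, 10^22 cm^-2
m.phabs.nH.frozen = True
m.bapec.kT = 7.0                    # keV, starting value
m.bapec.Abundanc = 0.7
m.bapec.Redshift = 0.1              # starting value
m.bapec.Velocity = 0.0
m.bapec.Velocity.frozen = True      # no extra (turbulent) broadening

Fit.statMethod = "cstat"
Fit.perform()
print(m.bapec.Redshift.values[0])   # best-fit redshift of the annulus
```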
### Modeling the observed gravitational redshift
From the DM potential well and gas emission models described in Sects. 2.2.2 and 2.2.3, we modeled the observed redshift as an emission-weighted redshift along the line of sight. Then, it is expressed as:
\[z_{obs,los}(\mathbf{\theta})=\frac{\int z(\mathbf{r})\,\epsilon(\mathbf{r})\,dl}{\int\epsilon(\mathbf{r})\,dl}, \tag{6}\]
with \(\epsilon(\mathbf{r})\) as the emissivity at \(\mathbf{r}\), \(l\) as the line of sight, and \(\mathbf{\theta}\) as the angular distance to the cluster center. The finite dimension of the grid for the cluster model restricts the integrals to a finite length along the line of sight. We verified that the loss in flux due to this cutoff is less than 1% by computing the integrals of the emissivity for different cutoff values. We approximate the integrals with a double-exponential quadrature rule (Takahasi & Mori, 1974), which provides a very good approximation (within numerical errors) at a significantly reduced computational cost. The redshift in a single bin is obtained with:
\[z_{bin}=\frac{\int_{S_{bin}}z_{obs,los}(\mathbf{\theta})\,\epsilon_{obs,los}(\mathbf{\theta})\,d\mathbf{\theta}}{\int_{S_{bin}}\epsilon_{obs,los}(\mathbf{\theta})\,d\mathbf{\theta}}, \tag{7}\]
with \(S_{bin}\) the area of the bin and \(\epsilon_{obs,\ los}=\int_{l}\epsilon(\mathbf{r})dl\). This formula remains true for any bin shape.
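A direct numerical transcription of Eq. (6) on a uniform grid along one line of sight could look as follows; the potential and emissivity are the toy-model ones introduced above (plain NFW and \(\beta\)-model), and a plain discrete sum replaces the double-exponential quadrature used in practice (on a uniform grid the \(dl\) factor cancels in the ratio).

```python
import numpy as np

c2 = (2.998e5) ** 2                    # c^2 in (km/s)^2

# Toy-model ingredients, radii in kpc
r_s, psi0 = 444.0, -1.09e7             # NFW scale radius and central potential [(km/s)^2]
r_c, beta = 400.0, 2.0 / 3.0

def psi(r):                            # potential, zero at infinity
    x = r / r_s
    return psi0 * np.log(1.0 + x) / x

def emissivity(r):                     # ~ n_e^2 for the beta-model
    return (1.0 + (r / r_c) ** 2) ** (-3.0 * beta)

def z_obs_los(theta_kpc, l_max=3750.0, n=4001):
    """Discrete version of Eq. (6) at projected radius theta."""
    l = np.linspace(-l_max, l_max, n)
    r = np.hypot(theta_kpc, l)
    w = emissivity(r)
    return np.sum((psi(r) / c2) * w) / np.sum(w)

for theta in (50.0, 250.0, 450.0):     # projected radii in kpc
    print(f"theta = {theta:5.0f} kpc : z_grav ~ {z_obs_los(theta):+.2e}")
```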
Models of the observed radial profile of the gravitational redshift, \(z_{obs}\), are shown in Fig. 3 as a function of the cluster mass. As expected, more massive clusters show a deeper and steeper potential, making them obvious targets for measurements of the gravitational redshift. The drawback is the angular size and extent of the cluster compared with the X-IFU FoV, limiting the emission sampling in the outer parts of clusters in a single X-IFU pointing. Even though the emission of more distant (hence, less extended) clusters at a given mass would be better sampled spatially, it quickly suffers from the dimming of the X-ray flux with redshift. The need for a balance between apparent luminosity and angular size led us to choose a local massive cluster as a test case for our study.
### Line shift measurements and fitting procedure
Figure 3 shows how the measurement of the gravitational redshift in a nearby massive cluster requires measuring redshifts
Figure 3: Emission weighted radial profiles of the gravitational redshift for clusters of different masses.
Figure 2: Count map in each pixel (\(\sim\)5x5 arcsec) of three adjacent 1Ms pointings of X-IFU (corresponding to uniform exposure 1 in Table 1) of a \(10^{15}M_{\odot}\) and \(z=0.1\) cluster. The color scale is in units of \(\log_{10}\) of the counts. The center of the cluster is at (RA, DEC) = (0, 0)
with a precision of a few km/s. This is almost an order of magnitude lower than the line shift expected from bulk motions and turbulence in the ICM (Kunz et al., 2022; Simionescu et al., 2019). The requirements on the line shift and line broadening precisions for the X-IFU in its current configuration are 10 and 20 km/s, respectively (1\(\sigma\) level), for a typical observation time of \(\sim\)100 ksec. This imposes the need for an energy scale precision better than 0.4 eV at \(\sim\)6 keV (1\(\sigma\) level), set over the 0.2-7 keV energy range (e.g., 0.4 eV/6 keV \(\cdot c\simeq\) 20 km/s at \(\sim\)6 keV; Cucchetti et al., 2018). This means that no incoming photon can have its energy determined with a precision better than 0.4 eV. It should not, however, be interpreted as a strict limitation on line energy and, thus, on line velocity measurements. Over a whole observation, the factors leading to the variation of the energy scale will be corrected every few ksec (currently 4 ksec considered for the X-IFU), and are expected to vary evenly around the 0 point. This means that over a typical observation time (i.e., 10-1000 ksec), the energy scale variations should mainly result in a broadening of the lines. Assuming that other instrumental systematics are under control, the uncertainty on the line shift will be the only one remaining. It should remain below 10 km/s for a 100 ksec exposure time observation. It may thus be neglected in our 1 Msec simulations. We also note that the current version of SIXTE does not implement the effect of this in-flight energy scale correction.
For a given spectral resolution, given enough time, any precision on a Gaussian line centroid can be achieved. The associated error on the line centroid, \(\sigma_{v}\), goes as \(\sigma_{v}\sim\sigma_{res}/(\mathrm{S/N})\) (with S/N as the signal-to-noise ratio, typically the square root of the number of counts for photon noise). Hence, the only restriction for measuring line shifts, for a given resolution, is observing time or, conversely, for a given observation time, the instrumental resolution. The approximation for the centroid error given previously holds for a Gaussian line (see Cucchetti, 2019, for extended details).
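As a rough numerical illustration of this scaling, at \(\sim\)6 keV the 2.5 eV FWHM resolution corresponds to
\[\sigma_{v}\sim\frac{2.5\ \mathrm{eV}}{2\sqrt{2\ln 2}}\times\frac{c}{6\ \mathrm{keV}}\times\frac{1}{\mathrm{S/N}}\approx\frac{53\ \mathrm{km/s}}{\mathrm{S/N}}\:,\]
so that a line (or line complex) detected with a few thousand counts (\(\mathrm{S/N}\sim 30\)-\(50\)) already corresponds to a centroid uncertainty of \(\sim\)1-2 km/s.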
Besides the physical motivation of introducing thermal broadening, the reason for choosing the \(\mathsf{bapec}\) model over a simple apec model is practical. The main contributing element for the redshift measurement in the fitting procedure is the lines. In the case of a simple APEC model, the lines are considered as infinitely thin. This means that fitting the line position is automatically limited by the energy bin width of the instrument's response. More precisely, the likelihood becomes discretized and usual fitting procedures, such as gradient descent, do not guarantee proper convergence and/or proper parameter error estimation. We expand on this issue and illustrate it with a plot in Appendix C. However, with broadened lines (as is the case for a \(\mathsf{bapec}\) model), this problem does not arise, and we can be confident in the fitting procedure and its outcome for the redshift. Hence, we used a \(\mathsf{bapec}\) model for modeling and fitting the emission of the cluster. The velocity broadening was set to 0 in both cases, leaving only the thermal broadening accounted for.
Our spectra are binned according to the Kaastra & Bleeker (2016) method. They are then fitted over the whole X-IFU energy band, namely, 0.2-12 keV. We chose to make use of all the information carried by the many emission lines of the ICM. We considered that this maximizes the use of the line signal to extract the redshift. In the case of a real cluster (or a more evolved model), this would also allow enough precision on the redshift to be reached even with varying density, temperature, and abundances depending on the region of the cluster investigated (i.e., central parts or outskirts). A focus on a single specific strong line (e.g., Fe K-\(\alpha\)) could also deliver a sufficiently constrained estimation of the gravitational redshift. However, such an investigation is beyond the scope of this paper.
## 4 Results
In this section, we first evaluate the precision in the construction of the redshift profile that is achievable for a simple, single X-IFU 1Ms pointed observation of our toy model cluster. This allows for an evaluation of the reproducibility of such a measurement. In the second part, we investigate other observational strategies, using multiple X-IFU pointings and exposures to evaluate whether the observed redshift profile can be used as a probe to constrain parameters of the gravitational potential. We show an example of the spectrum obtained with a 1Ms observation of the center of the cluster in Figure 4.
### Recovery of the gravitational redshift
From the various observing configurations of single or multiple X-IFU pointings as defined in Sect. 2.4, we are able to retrieve the radial profile for the redshift. The Poisson noise in our simulations is the only stochastic process. To check the reproducibility of our reconstructed redshift profile with respect to this source of noise, we ran 100 simulations for our "single-field" observational configuration. Figure 5 shows the mean profile and its dispersion over the 100 reconstructed profiles, together with a single profile with its error bars derived from the \(\mathsf{xspec}\) fit. The profiles are reconstructed over ten circular concentric annuli from the cluster center and covering the whole FoV. The conversion to velocities shown on the y-axis assumes a prior knowledge
Figure 4: Simulated spectrum from a 1Ms observation of the center of our cluster toy model (black line) and its best fit (red line). This spectrum is extracted from the central bin of the circular binning shown in the summary table in Fig. 1. An inset figure shows the details of the lines observed around 1 keV. The lower panel shows the error in units of \(\chi\) for each bin. The quantized shape observed at the highest energies comes from the low number counts observed at these high energies.
of the cosmological redshift (\(z_{\rm cosmo}\) in Eq. 1). This reproducibility exercise illustrates the dispersion of the profiles, which is not fully encapsulated within the error bar of each single measurement. This exercise was carried out only for this specific configuration because of its heavy computational demand (about 2 hours for a single simulation on 32 CPU cores, hence a total of about 200 hours for the reproducibility study on a single toy model cluster). However, changing the binning and/or exposure time should not affect the recovery of the profile on average.
### Constraining the cluster parameters
The gravitational redshift directly links to the halo potential well and, thereby, to the underlying total mass of the cluster. The measurement of the gravitational redshift profile can thus be used as a probe to determine the cluster mass. The assumption of a prior and perfect knowledge of the cosmological redshift allows for one of the parameters of the model to be fit, such as the cluster mass. We can fit the expected redshift profile from the model detailed in Sect. 3.1 to the redshift profile obtained from our mock observations. As a toy model, we used a simple least squares minimization for the fitting procedure. Figure 6 shows the distribution of the 100 best-fit curves, fitting only the mass of the cluster. The distribution of the best-fit profiles is centered on the expected input profile, showing little to no bias in the profile recovery. This idealistic situation leads to an exceedingly optimistic estimation of the halo mass. We obtained a mean best-fit mass of \(0.998\pm 0.018\cdot 10^{15}M_{\odot}\) (for an expected mass of \(10^{15}M_{\odot}\)).
In reality, the situation would be less optimistic as none of the cluster parameters (e.g., the density, temperature, shape of the DM distribution, etc.) would be known perfectly. As such, they will have to be determined from the X-ray observations or constrained from ancillary data. The cosmological redshift could for instance be constrained from optical observations. The precision of this redshift would then condition our ability to estimate the gravitational redshift. All these uncertainties and unknowns would have to be formulated as priors in our analysis. As a first step towards this more complex situation, we considered the cosmological redshift to be completely undetermined. Because the mass and cosmological redshift similarly impact the observed redshift profile, we need to use the entire shape of the profile to disentangle their correlated effect (see, e.g., Fig. 3). To address this issue, we used multiple pointings mock observations (see Fig. 1). To maximize the signal-to-noise ratio (S/N) in the determination of the redshift, we considered each pointing of our three pointings configurations as a single bin and derived the associated spectra over the whole X-IFU FoV. In the case of a real potential well, small-scale variations in the gravitational field (and, thus, in the gravitational redshift distribution) could be expected, although this is not the case for our smooth gNFW toy model.
In the left panel of Figure 7, we compare the posterior distributions for mass and redshift obtained with the Markov chain Monte Carlo (MCMC) method (using the emcee package) on three different observing configurations, with a single, two and three X-IFU pointings, respectively. The single-pointing configuration is binned with ten circular annuli, whereas the two multiple-pointing configurations are binned into a single region for each pointing. For the best-case scenario, that is "uniform exposure 1," the constraint obtained on the mass is \(M=0.80^{+0.17}_{-0.14}\cdot 10^{15}M_{\odot}\), whereas the single field alone provides \(M=0.63^{+0.58}_{-0.30}\cdot 10^{15}M_{\odot}\). The errors are provided at the 68 % confidence level. While "uniform exposure 2" seems to be more centered on the true value, the first one brings more constraints and, thus, lower errors on the reconstruction of the mass and redshift due to the added third pointing (see Fig. 1). The shift with respect to the true values of the parameters is due to the sample variance, which affects both configurations similarly, each being a single statistical realisation of the cluster emission (see the reproducibility study in Sect. 4.1). We evaluated the Pearson correlation coefficient over the mass and redshift samples in the distributions for each of these strategies. In all of them, the coefficient remains above 0.9 in absolute value. The strong degeneracy between the mass and redshift is only reduced to a smaller range for strategies mapping the outer parts of the cluster. In the presented case, the astrophysical and instrument backgrounds have little impact, as we have set relatively important
Figure 5: Reproducibility of the reconstruction of the radial profile of the gravitational redshift over a single X-IFU pointing at the center of our toy model cluster. The blue points and associated errors show the mean profile and its dispersion over 100 simulations of a 1Ms observation (see Sect. 4.1). The red points show an example of a single profile and the associated errors provided by xspec.
Figure 6: Distribution of the best fit profiles to 100 simulated reconstructed gravitational redshift profiles from X-IFU mock observations. The expected profile, that was used as an input in all the simulations, is plotted with the green triangles.
exposure times and since we are targeting a nearby massive cluster (see Appendix B for further details on the simulations without background).
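The parameter-estimation step itself can be sketched as follows; here `model_profile` is only a crude stand-in for the emission-weighted redshift model of Sect. 3.1 (plain NFW term, no line-of-sight weighting), and the bins, error bars, and flat priors are illustrative values rather than those of the actual analysis.

```python
import numpy as np
import emcee

G = 4.301e-9                         # Mpc (km/s)^2 / Msun
c2 = (2.998e5) ** 2                  # c^2 in (km/s)^2
r_s, m_c = 0.444, 0.8866             # NFW scale radius [Mpc] and mass function for c200 = 4.5

def model_profile(r_kpc, M200, z_cosmo):
    """Crude stand-in for the emission-weighted redshift model of Sect. 3.1."""
    x = np.asarray(r_kpc, dtype=float) / 1e3 / r_s
    psi = -(G * M200 / (m_c * r_s)) * np.log(1.0 + x) / x
    return z_cosmo + (1.0 + z_cosmo) * psi / c2        # cf. Eq. (1), Doppler terms dropped

# Synthetic "observed" profile (illustrative bins and error bars)
rng = np.random.default_rng(0)
r_bins = np.array([100.0, 300.0, 500.0, 800.0, 1100.0])          # kpc
z_err = np.full(r_bins.size, 2.0e-5)                             # ~6 km/s per bin
z_obs = model_profile(r_bins, 1e15, 0.100) + z_err * rng.normal(size=r_bins.size)

def log_prob(theta):
    m15, zc = theta                                              # mass in 1e15 Msun
    if not (0.1 < m15 < 5.0 and 0.05 < zc < 0.15):               # flat priors
        return -np.inf
    resid = (z_obs - model_profile(r_bins, m15 * 1e15, zc)) / z_err
    return -0.5 * np.sum(resid ** 2)

ndim, nwalkers = 2, 32
p0 = np.array([1.0, 0.10]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=True)
samples = sampler.get_chain(discard=500, flat=True)              # posterior on (M200, z_cosmo)
```

In such a toy set-up, one would expect the strong mass-redshift degeneracy discussed above to appear directly as an elongated posterior.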
We also performed simulations with more realistic exposure times. In order to optimize the S/N across the radial range probed by our multiple pointings observations, we doubled the exposure time from one pointing to the next adjacent one (see Fig. 1): "mixed exposure 1" accounts for a total exposure of 1.75 Ms, whereas "mixed exposure 2" accounts for a total of 875 ks. The results for these observing strategies are presented in the right panel of Fig. 7. "Mixed exposure 1" provides \(M=1.48^{+0.28}_{-0.23}\cdot 10^{15}M_{\odot}\) and "mixed exposure 2" provides \(M=1.84^{+0.56}_{-0.43}\cdot 10^{15}M_{\odot}\).
The posterior distributions retrieved from mixed exposures 1 and 2 in Fig. 7 are centered more than 1\(\sigma\) away from the input values. We believe that the line of sight mixing causes an underestimation of the errors that cannot be accounted for by the bapec model under xspec. Because the likelihood used for the MCMC relies on these very same errors from xspec, the retrieved parameter distribution shows optimistic error levels. In addition, the mixed exposure 1 strategy gives a heavier weight to the pointings far from the center. This compensates for the lower signal in these regions. However, the fit in these regions can be biased not only by the line of sight mixing but also by the stronger contribution of the background; hence both the redshift measurement and the posterior distribution can be biased.
The previous tests assume that the shape of the potential was known, namely, that the parameters \(\gamma\) and \(c_{200}\) were fixed at their known input values. A final test was run with all the parameters of the gravitational potential left free, including the mass, \(\gamma\), and \(c\). The result is shown in Figure 8. The distributions show that all the parameters span a large range of values, typical of unconstrained models. The concentration parameter \(c\) is especially poorly constrained; it spans the entire uniform prior range, from 0 to 10. The upper left distribution shows the strong correlation between the mass and the cosmological redshift when the shape of the expected redshift profile is not fixed by the other shape parameters. This shows that the gravitational redshift alone cannot constrain all the parameters of the potential. We recall that it is highly unlikely that such measurements would be carried out from the X-ray point of view only, without any other ancillary data sets or inputs (e.g., gravitational lensing).
## 5 Conclusions and discussion
In this work, we evaluated the possibility of observing the gravitational redshift in galaxy clusters in X-rays with future integral field spectrometers such as the Athena X-IFU. To that end, we created mock observations of an idealized massive and nearby galaxy cluster (the targets most likely to yield a detection) with X-IFU, by using the SIXTE software. We analyzed the data with the xspec spectral analysis software. We reconstructed the gravitational redshift profile that we modeled through the shape of the cluster potential well and the X-ray emission of its gas content. We showed that: (1) X-IFU could recover the gravitational redshift for massive (\(M_{200}\sim 10^{15}\) M\({}_{\odot}\)) and nearby (\(z\sim 0.1\)) clusters within a quite large, but still achievable exposure time; and (2) the measurement of the gravitational redshift profile can be used to derive properties of the halo gravitational potential, such as its total mass.
These conclusions have to take into account the limitations of our model. Firstly, we stress that the gas mass fraction in our simulated cluster is relatively high (\(\sim\)20 % for \(M_{\rm gas}/M_{500}\) at \(R_{500}\)). This is due to our choice of a \(\beta\)-model for the gas distribution, which can overestimate the gas fraction at large radii. Moreover,
Figure 7: Corner plots in the mass and redshift plane and associated posterior distributions for both parameters. The different colors correspond to the different observing strategies listed in the legend and presented in Fig. 1. Left: Uniform exposures 1 and 2, compared with a single pointing observation. Right: Mixed exposures 1 and 2, compared with a single pointing observation. The input reference values are plotted as dotted lines.
our choice of total mass, that is \(M_{200}=10^{15}M_{\odot}\), is rather conservative with respect to some local massive clusters (Ettori et al., 2019). Assuming \(M_{200}\simeq 1.5\cdot 10^{15}M_{\odot}\) would yield a gas fraction of about 15 %. With such a cluster, we would observe a higher amplitude for the redshift profile. The mass does not affect the S/N of the observed X-ray spectra. Hence, this does not change the essence of our results, as the uncertainties come from the photon counts, and these are driven by the emission, exposure, and distance.
Secondly, we neglected the motions of the gas in the cluster. These motions result in a Doppler shift in the emission, which is of the order of \(\sim\)100-1000 km/s (Kunz et al., 2022; Simionescu et al., 2019). This is an order of magnitude above the observed redshift. It means that in a real cluster, the observation of the gravitational redshift would be added to that of bulk motion. The gravitational redshift would then be a difficult quantity to estimate. However, we can consider things the other way round: any precise measurement of bulk or turbulent velocities using line shifts will have to account for the gravitational redshift as a systematic bias. An a priori knowledge of the total mass profile, and thus of the gravitational potential, would provide the proper estimate of such a bias on bulk and turbulent motions of the ICM hot gas. In addition to turbulence, the internal structure of the physical properties of galaxy clusters (density, temperature, pressure, abundances, etc.) departs at various scales from the idealized hypotheses of sphericity and homogeneity we adopted (e.g., Kravtsov and Borgani, 2012; Lovisari and Maughan, 2022).
The line of sight mixing is another issue when challenging the limits of spectral precision. Because the ICM is optically thin, we observe the emission of all the points along the line of sight. This increases the signal, but causes the different emitted spectra to be mixed. Because many of the observed quantities are not additive and we are modeling the observed spectrum with a single model, we assumed that the observed profile, which is an emission-weighted average over different redshifts, is the profile of the emission-weighted average redshift. This assumption works because the center of the cluster is the most emitting part and thus dominates the signal. However, this would not be systematically the case for a non-spherically symmetric potential and/or ICM emission. This obviously would also concern other physical parameters such as the temperature and the chemical abundance. The line of sight mixing remains a weak effect, as we illustrate in Fig. 4, where the observed spectrum is perfectly overlapping with the model. The evaluation of the fit of all our mock observations yields a \(\chi^{2}\)/d.o.f. in the 1 to 1.15 range.
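The emission-weighted averaging assumed above can be written down compactly. Below is a minimal numerical sketch of the emission-weighted observed redshift along a single line of sight; the \(n_e^2\) emissivity proxy and the function names (`n_e`, `z_grav`) are simplifying assumptions, not the spectral model actually fitted in the analysis.

```python
import numpy as np

def emission_weighted_redshift(l_grid, r_proj, n_e, z_grav, z_cosmo):
    """Emission-weighted observed redshift along one line of sight.

    l_grid  : line-of-sight coordinates (same length unit as r_proj)
    r_proj  : projected distance of the pointing from the cluster centre
    n_e     : callable r -> electron density
    z_grav  : callable r -> gravitational redshift from the potential
    z_cosmo : cosmological redshift of the cluster
    """
    r = np.sqrt(r_proj**2 + l_grid**2)   # 3D radius along the sightline
    w = n_e(r)**2                        # simple emissivity proxy ~ n_e^2
    z_los = z_cosmo + z_grav(r)          # observed redshift at each point
    return np.trapz(w * z_los, l_grid) / np.trapz(w, l_grid)
```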
One way around these issues could be to stack several observations of different clusters. In doing that, fluctuations from cluster to cluster, such as the shape and the turbulence, could average out, and the gravitational redshift would remain. This would then require scaling the clusters with respect to each other, as well as other practical considerations, such as the determination of the cluster center. Such investigations have already been done with optical data, and have been used to test alternative theories of gravity (Wojtak et al., 2011; Mpetha et al., 2021). Similar work could be undertaken on the basis of observations of cluster samples with future X-ray Integral Field Units, such as X-IFU.
By the time Athena X-IFU is launched, exploratory work could be carried out by the upcoming XRISM (XRISM Science Team, 2022) mission and its Resolve instrument. The first high X-ray resolution spectra provided by its short-lived predecessor, the SXS instrument onboard Hitomi (Sato et al., 2023), in the direction of the Perseus cluster held very promising perspectives on our ability to better understand the evolution and formation of galaxy clusters. There is hope that future data analysis methods will also be able to make full use of such spectra, allowing for combinations of spectral and spatial information in the cluster, and perhaps allowing for the inclusion of the gravitational redshift as another useful probe in our understanding of these large structures.
At the time of publishing this paper, the European Space Agency has sponsored a full reformulation of the Athena mission science case and specifications. We thus stress that the results of our study may have to be reconsidered according to the future new instrumental requirements of the Athena mission.
###### Acknowledgements.
We are grateful to the anonymous referee for fruitful comments that helped improve this paper. AM, EF and NC acknowledge the support of CNRS/INSU and CNES. The following python packages have been used throughout this work: astropy (Astropy Collaboration et al., 2013, 2018, 2022), chainconsumer (Hinton, 2016), emcee (Foreman-Mackey et al., 2013), matplotlib (Hunter, 2007) and cmasher (van der Velden, 2020).
|
2309.05675 | SHAPE: A Sample-adaptive Hierarchical Prediction Network for Medication
Recommendation | Effectively medication recommendation with complex multimorbidity conditions
is a critical task in healthcare. Most existing works predicted medications
based on longitudinal records, which assumed the information transmitted
patterns of learning longitudinal sequence data are stable and intra-visit
medical events are serialized. However, the following conditions may have been
ignored: 1) A more compact encoder for intra-relationship in the intra-visit
medical event is urgent; 2) Strategies for learning accurate representations of
the variable longitudinal sequences of patients are different. In this paper,
we proposed a novel Sample-adaptive Hierarchical medicAtion Prediction nEtwork,
termed SHAPE, to tackle the above challenges in the medication recommendation
task. Specifically, we design a compact intra-visit set encoder to encode the
relationship in the medical event for obtaining visit-level representation and
then develop an inter-visit longitudinal encoder to learn the patient-level
longitudinal representation efficiently. To endow the model with the capability
of modeling the variable visit length, we introduce a soft curriculum learning
method to assign the difficulty of each sample automatically by the visit
length. Extensive experiments on a benchmark dataset verify the superiority of
our model compared with several state-of-the-art baselines. | Sicen Liu, Xiaolong Wang, JIngcheng Du, Yongshuai Hou, Xianbing Zhao, Hui Xu, Hui Wang, Yang Xiang, Buzhou Tang | 2023-09-09T08:28:04Z | http://arxiv.org/abs/2309.05675v1 | # SHAPE: A Sample-adaptive Hierarchical Prediction Network for Medication Recommendation
###### Abstract
Effective medication recommendation under complex multimorbidity conditions is a critical task in healthcare. Most existing works predict medications based on longitudinal records, assuming that the information-transmission patterns learned from longitudinal sequence data are stable and that intra-visit medical events are serialized. However, the following conditions may have been ignored: 1) a more compact encoder for the intra-relationships among intra-visit medical events is needed; 2) strategies for learning accurate representations of patients' variable-length longitudinal sequences differ. In this paper, we propose a novel Sample-adaptive Hierarchical medicAtion Prediction nEtwork, termed SHAPE, to tackle the above challenges in the medication recommendation task. Specifically, we design a compact intra-visit set encoder to encode the relationships among the medical events of a visit and obtain a visit-level representation, and then develop an inter-visit longitudinal encoder to learn the patient-level longitudinal representation efficiently. To endow the model with the capability of modeling variable visit lengths, we introduce a soft curriculum learning method that automatically assigns the difficulty of each sample according to its visit length. Extensive experiments on a benchmark dataset verify the superiority of our model compared with several state-of-the-art baselines.
Medication recommendation, Curriculum learning, Set encoder, EHR data-mining
## I Introduction
Recently, massive health data have offered the opportunity to assist clinical decision-making through deep learning [1, 2, 3, 4, 5, 6]. Effective and safe medication combination recommendation for patients who suffer from multiple diseases is an essential task in healthcare [7, 8, 9]. There is a great deal of research interest in the medication recommendation task [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. The intuitive goal of medication recommendation is to predict medication sequences for a particular patient based on complex health conditions. Existing strategies for medication recommendation can be categorized into two types: 1) _Instance-based methods_, which recommend medication sequences based only on the current hospital visit (e.g., diagnosis, procedure) [20, 21, 22, 23]. The instance-based setting ignores the temporal dependencies in the patient's health records. To overcome this issue, 2) _Longitudinal-based methods_ were proposed to leverage the longitudinal patient records to predict personalized medication. Most longitudinal methods pursue enhanced representations of patient health status based on the historical health records (e.g., diagnosis, procedure) and use this patient representation to conduct medication recommendations [24, 25, 26, 27, 28, 29, 30, 31].
Despite the significance and value of the longitudinal methods, they still suffer from two critical limitations: _1)_ One problem with existing longitudinal works is that they neglect the compact intra-relationships between medical events within each visit. In other words, they ignore the relationships among medical codes of the same type during a visit. _2)_ Existing longitudinal models are static. Namely, all samples go through the same fixed computation flow. This can be ineffective for shorter records, which lack historical information.
On the one hand, existing longitudinal methods use the historical code sequences (e.g., medication, diagnosis) within each visit to represent the patient's complex health condition, where medical events are treated independently and encoded with sparse representations, so that all events contribute equally to the current record. Most of them use multi-hot embedding methods to encode the structured data sequences. However, the impact of medical events varies for each patient, especially for patients with multimorbidity. For instance, during a visit, the health condition differs a lot between a
patient diagnosed with both _Chronic systolic heart failure_ and _Septic shock_ and a patient diagnosed with both _Septic shock_ and _Acute respiratory failure_. Previous methods ignore the compact intra-relationship of these medical events and the variable importance of each code for the patient.
On the other hand, such longitudinal patterns rely on historical health information and perform poorly on short visit sequences that lack historical records. As shown in Figure 1, we conduct statistics on the MIMIC-III [32] dataset. We can see that most visit lengths are shorter than three. For each visit, we calculate the Jaccard between current medications and past medications. We can see that a large portion of prescribed medicines are similar to those recommended before, which means the results of medication recommendations rely on historical medication records. Additionally, we conduct fine-grained statistics of the MIMIC-III dataset, as shown in Figure 2. We calculate the proportion of medications that have appeared in history and the Jaccard with various visit windows. We can see that for longer visit sequences, a large portion of the drug sequences have been recommended before. However, short visit records, which are prevalent in real-world clinical scenarios, often lack crucial historical medication information that could be referenced for treatment decisions. This phenomenon illustrates that a more robust strategy for accurately representing variable-length longitudinal sequences is needed.
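The statistics in Figures 1 and 2 can be reproduced with a few lines of code. The sketch below is only illustrative (the data layout, a list of per-visit medication-code sets per patient, is an assumption and not the exact preprocessing used later): it computes, for every visit after the first, the Jaccard similarity with the medication history and the fraction of current drugs already seen within a given window.

```python
def history_overlap_stats(patients, window=None):
    """patients: list of patients, each a list of visits,
    each visit a set of medication codes."""
    jaccards, overlap_rates = [], []
    for visits in patients:
        for t in range(1, len(visits)):
            start = max(0, t - window) if window is not None else 0
            hist = set().union(*visits[start:t])
            cur = visits[t]
            if not cur:
                continue
            jaccards.append(len(cur & hist) / len(cur | hist))
            overlap_rates.append(len(cur & hist) / len(cur))
    return jaccards, overlap_rates
```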
To overcome these challenges, we proposed a novel **S**ample-adaptive **H**ierarchical medic**A**tion **P**rediction n**E**twork, named **SHAPE**, to learn a more accurate representation of patients. In SHAPE, we develop a hierarchical patient representation framework. Concretely, we first tailor an intra-visit set encoder to learn the visit-level representation and then design an inter-visit longitudinal encoder for learning the patient-level longitudinal representation. By performing the intra-visit set encoder and inter-visit longitudinal encoder, collaborative information latent in longitudinal historical interactions is explicitly hierarchically encoded. To enhance the ability to represent various lengths of visit records, we adopt a soft curriculum learning method to help our SHAPE model learn these data patterns by assigning a difficulty weight to each sample. The experiments on a public dataset demonstrate the effectiveness of our proposed model.
The main contributions of this work are three-fold:
* We present a hierarchical encoder mechanism for medication recommendation, which can extract a more accurate representation from the patient's varied records. In particular, we first design an intra-visit set encoder to encode the medical events and obtain a visit-level representation, and then develop an inter-visit longitudinal encoder for learning the patient-level longitudinal information.
* We design an adaptive curriculum learning module for variable patient visit records, especially for the short ones, which aims at an adaptive learning strategy over time and the length of patient records to improve the effectiveness of medication recommendations.
* Extensive experimental results on the public benchmark dataset validate the effectiveness and superiority of our proposed method.
## 2 Related work
### Medication recommendation
Existing medication recommendation algorithms can be categorized into instance-based methods and longitudinal approaches. Instance-based algorithms extract patient information only from current visits. For example, LEAP [22] extracts the patient representation from the current visit record and decomposes the medication recommendation into a sequential decision-making process. Longitudinal-based methods are designed to leverage temporal dependencies within the patient's historical information. For example, RETAIN [24] uses two-level attention, which models the longitudinal information based on recurrent neural networks (RNN). GAMENet [26] uses augmented memory neural networks to fuse the drug-drug interactions and store the historical drug record to model the patient representation. MICRON [27] pays attention to the changes in patient health records and uses residual-based network inference to update the sequential representation. COGNet [29] conditionally generates the medication combinations, either copying from the historical drug records or directly generating new drugs. These existing efforts, however, still suffer from the following limitations. Existing work ignores that intra-visit medical events may have variable effects in differentiating the health status of the patient. Most of them use multi-hot embedding to encode the medical events in the current visit and ignore the differences between medical events within intra-visit records. In this paper, we propose a hierarchical architecture to learn a comprehensive patient representation. We use an intra-visit set encoder to learn a more accurate representation of intra-visit medical events and develop an inter-visit longitudinal encoder to learn longitudinal information about the patient.
Figure 1: The histogram of visit counts of MIMIC-III dataset (left) and the histogram of Jaccard between current medications and historical medications (right).
Figure 2: The statistics of (a) medication overlap rate and (b) Jaccard coefficients in various visits with different window sizes.
### _Curriculum learning_
The conventional curriculum learning methods formalized the organized learning process of humans and animals, which illustrates gradually more complex ones [33]. Alex et al. derived two distinct indicators (i.e., rate of increase in prediction accuracy and rate of increase in network complexity) of the learning process as the reward signal to maximize learning efficiency automatically [34]. Guy et al. introduce sorted samples with different scoring functions to assign the learning difficulty of each instance [35]. Recently, curriculum learning has been applied to different medical tasks. Basu et al. propose a curriculum inspired by human visual acuity, which reduces the texture biases for gallbladder cancer detection [36]. Guo et al. demonstrate the application of curriculum learning for drug molecular design [37]. Gu et al. utilized curriculum learning to improve the training efficiency of molecular graph learning [38]. According to Figure 1 and Figure 2, we found that the short and new visits samples account for most of the entire dataset. The conventional longitudinal methods are hard to fit this pattern because lacking a flexible ability to model the scenarios where the patients do not have enough historical medication records and diagnosis information about their health condition. In this paper, we propose a sample-adapting curriculum learning algorithm to assign the difficulty of each instance automatically.
## 3 Problem Formulation
### _Electrical Health Records (EHR)_
Patient EHR data contains comprehensive medical information about the patient. Formally, EHR for patient \(j\) can be represented as a sequence \(X_{j}=(x_{j}^{1},x_{j}^{2},\cdots,x_{j}^{T})\), where \(T\) is the corresponding totally visits number for patient \(j\). For the single visit \(x_{j}^{t}\) of patient \(j\) at \(t-\)th visit, where \(t\in\{1,2,\cdots,T\}\), we ignore the index \(j\) of patient to simplify notation. Then, the visit record is represented as \(x^{t}=(D^{t},P^{t},M^{t})\), where \(D^{t}\subseteq\{d_{1},d_{2},\cdots,d_{|\mathcal{D}|}\}\) denotes the set of diagnoses appeared in \(t\)-th visit, \(P^{t}\subseteq\{p_{1},p_{2},\cdots,p_{|\mathcal{P}|}\}\) denotes the set of procedures and \(M^{t}\subseteq\{m_{1},m_{2},\cdots,m_{|\mathcal{M}|}\}\) denotes the set of medications appeared in \(t\)-th visit. \(|\mathcal{D}|,|\mathcal{P}|\) and \(|\mathcal{M}|\) indicate the cardinality of corresponding element sets.
### _DDI Graph_
Medications may interact with one another when prescribed, and the adverse drug-drug interaction (DDI) graph records these adverse drug events. The DDI graph can be denoted as \(\mathcal{G}_{d}=\{\mathcal{V},\mathcal{E}_{d}\}\), where the node set \(\mathcal{V}=\{m_{1},m_{2},\cdots,m_{|\mathcal{M}|}\}\) represents the set of medications and \(\mathcal{E}_{d}\) is the edge set of known DDIs between pairs of drugs. An adjacency matrix \(A_{d}\in\mathbb{R}^{|\mathcal{M}|\times|\mathcal{M}|}\) is defined to encode the graph, where \(A_{d}[i,j]=1\) means that the \(i\)-th medication and the \(j\)-th one can interact with each other.
### _Medication Recommendation Problem_
We are given a patient EHR sequence \([x^{1},x^{2},\cdots,x^{t}]\) and the DDI graph \(\mathcal{G}_{d}\). For a patient with multiple visit records, the input includes the current diagnosis and procedure codes \([D^{t},P^{t}]\) and the historical records \([x^{1},x^{2},\cdots,x^{t-1}]\). Note that for newly admitted patients only the current diagnosis and procedure codes \([D^{1},P^{1}]\) are available. The goal is to train a model to effectively recommend multiple medications by generating the multi-label output \(\hat{y}_{t}\subseteq\{m_{1},m_{2},\cdots,m_{|\mathcal{M}|}\}\) for this patient.
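As a concrete (and purely illustrative) view of this formulation, the sketch below shows how a visit and the DDI graph could be materialized as multi-hot vectors and an adjacency matrix; the helper names and the exact data layout are assumptions made only for this example.

```python
import numpy as np

def multi_hot(codes, vocab):
    """Multi-hot vector for one set of medical codes (diagnoses, procedures, or drugs)."""
    v = np.zeros(len(vocab), dtype=np.float32)
    for c in codes:
        v[vocab[c]] = 1.0
    return v

def ddi_adjacency(ddi_pairs, med_vocab):
    """Symmetric 0/1 matrix A_d with A_d[i, j] = 1 for known adverse interactions."""
    A = np.zeros((len(med_vocab), len(med_vocab)), dtype=np.float32)
    for a, b in ddi_pairs:
        i, j = med_vocab[a], med_vocab[b]
        A[i, j] = A[j, i] = 1.0
    return A
```

A visit \(x^{t}\) is then the triple of such code sets, and a patient is the ordered list of visits.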
## 4 The SHAPE Framework
In this section, we present the technical details of the proposed **SHAPE** framework. As illustrated in Figure 3, our model includes three components: (1) an **intra-visit set encoder** that learns the visit-level representation of the patient from the EHR data; (2) an **inter-visit longitudinal encoder** that takes the visit-level representation as input to learn the longitudinal information of the patient; and (3) an **adaptive curriculum learning module** that cooperates with the prediction phase in the training stage to dynamically assign the difficulty weight of each instance by the patient visit length to improve the effectiveness of medication recommendations. Finally, the drug output is obtained from the sigmoid output representation.
### _Patient Representation_
The patient representation aims to learn a dense vector that comprehensively represents the patient's status. Physicians recommend medications based on the current diagnosis and procedure information during a clinical visit. Furthermore, the clinician also references the history of diagnosis, procedure, and medication records when the patient has historical visit records. Since SHAPE is proposed for the generic patient, we use the three code types as the model input in the following, and the medication codes always lag one visit behind the other two medical events. Note that for a patient with only one visit (containing diagnosis and procedure records), we apply a padding embedding as the medication input.
#### 4.1.1 code-level embedding
To predict the medications of a multi-visit record, we use \([D^{t},P^{t},M^{t-1}]\) as the current input, where \(M^{t-1}\) is the previous medication record. We design three corresponding embedding tables \(E_{d}\in\mathbb{R}^{|\mathcal{D}|\times dim},E_{p}\in\mathbb{R}^{|\mathcal{P}| \times dim}\) and \(E_{m}\in\mathbb{R}^{|\mathcal{M}|\times dim}\), where \(dim\) is the dimension of the embedding space. For the \(t\)-th visit, the sets of medical events \(d^{(t)}\in D^{t},p^{(t)}\in P^{t}\), and \(m^{(t-1)}\in M^{t-1}\) are mapped to the embedding space.
\[d_{e}^{(t)}=d^{(t)}E_{d} \tag{1}\]
\[p_{e}^{(t)}=p^{(t)}E_{p} \tag{2}\]
\[m_{e}^{(t-1)}=m^{(t-1)}E_{m} \tag{3}\]
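A minimal PyTorch sketch of these embedding tables is given below; the class name, the default dimension, and the index-based interface are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class CodeEmbeddings(nn.Module):
    """Code-level embeddings E_d, E_p, E_m of Eqs. (1)-(3) (illustrative sketch)."""
    def __init__(self, n_diag, n_proc, n_med, dim=128):
        super().__init__()
        self.E_d = nn.Embedding(n_diag, dim)
        self.E_p = nn.Embedding(n_proc, dim)
        self.E_m = nn.Embedding(n_med, dim)

    def forward(self, diag_idx, proc_idx, med_idx):
        # each *_idx is a LongTensor of code indices for the current visit
        return self.E_d(diag_idx), self.E_p(proc_idx), self.E_m(med_idx)
```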
#### 4.1.2 Intra-visit Set Encoder
Unlike previous works [27, 28], which use the code embeddings of the medical events directly as the patient representation, we employ the code-level embeddings as the input of a set encoder to learn code-level relationships and then migrate the code-level information into the visit-level representation. Inspired by the Set-Transformer [39], we utilize inducing point methods to compress medical code representations into a more compact space for modeling the impact of medical events. The set encoder contains two _Induced Set Attention Blocks_ (ISAB).
In an ISAB, along with the set \(X\in\mathbb{R}^{m\times d}\), we define a new set of trainable inducing points \(I\in\mathbb{R}^{n\times d}\). The ISAB has two major sub-layers: _Multi-Head Attention_ (MHA) and a _row-wise FeedForward layer_ (rFF), defined as:
\[MHA(Q,K,V)=[head_{1},head_{2},\cdots,head_{h}] \tag{4}\]
\[head_{i}=Att(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}) \tag{5}\]
\[Att(Q,K,V)=Softmax(\frac{QK^{\top}}{\sqrt{s}})V \tag{6}\]
\[rFF(X)=Relu(XW_{rFF}+b_{rFF}) \tag{7}\]
where \(Q\in\mathbb{R}^{n_{u}\times d}\), \(K\in\mathbb{R}^{n_{u}\times d},V\in\mathbb{R}^{n_{u}\times d}\) are the inputs of attention \(Att(\cdot)\), \(W_{i}^{Q}\in\mathbb{R}^{d\times d_{u}},W_{i}^{K}\in\mathbb{R}^{d\times d_{u}}\), \(W_{i}^{V}\in\mathbb{R}^{d\times d_{u}}\), and \(d_{q}=d_{k}=d_{v}=d/h\). \(W_{rFF}\in\mathbb{R}^{d\times d}\) and \(b_{rFF}\in\mathbb{R}^{d}\) are learnable parameters. The \([\cdot]\) means the concatenate operation. The ISAB is defined as:
\[ISAB(X)=LN(H+rFF(H)) \tag{8}\]
\[H=LN(X+MHA(X,Y,Y)) \tag{9}\]
\[Y=LN(Z+rFF(Z)) \tag{10}\]
\[Z=LN(I+MHA(I,X,X)) \tag{11}\]
where \(LN\) is layer normalization operation. The set-encoder is defined as:
\[SE_{*}(X)=ISAB(ISAB(X)) \tag{12}\]
where \(*\in\{d,p,m\}\).
Given the code-level embedding representation, the output of the diagnosis set encoder is formulated as follows:
\[S_{d}^{(t)}=SE_{d}(d_{e}^{(t)}) \tag{13}\]
Similarly, the outputs of the procedure and medication set encoders are \(S_{p}^{(t)}=SE_{p}(p_{e}^{(t)})\) and \(S_{m}^{(t-1)}=SE_{m}(m_{e}^{(t-1)})\). After obtaining the code-level set representations of the three medical events, we combine them into the visit-level representation \(V^{(t)}\), which describes the health status of the patient in the current visit. The visit-level representation is defined as:
\[V^{(t)}=[V_{d}^{(t)},V_{p}^{(t)},V_{m}^{(t-1)}] \tag{14}\]
where \(V_{d}^{(t)},V_{p}^{(t)},V_{m}^{(t-1)}\) are the summations of the corresponding code-level representations, and \([\cdot]\) is the concatenation operation.
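To make the intra-visit set encoder concrete, the following is a compact PyTorch sketch of an ISAB and the two-block set encoder (Eqs. (8)-(12)), together with the sum pooling used in Eq. (14). The number of inducing points, the head count, and the use of `nn.MultiheadAttention` are illustrative assumptions; this is not the released code.

```python
import torch
import torch.nn as nn

class ISAB(nn.Module):
    """Induced Set Attention Block (Eqs. (8)-(11)); a sketch, not the reference code."""
    def __init__(self, dim, n_inducing=16, n_heads=4):
        super().__init__()
        self.I = nn.Parameter(torch.randn(n_inducing, dim))
        self.mha1 = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mha2 = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.rff1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.rff2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.ln = nn.ModuleList([nn.LayerNorm(dim) for _ in range(4)])

    def forward(self, X):                                  # X: (batch, set_size, dim)
        I = self.I.unsqueeze(0).expand(X.size(0), -1, -1)
        Z = self.ln[0](I + self.mha1(I, X, X)[0])          # Eq. (11)
        Y = self.ln[1](Z + self.rff1(Z))                   # Eq. (10)
        H = self.ln[2](X + self.mha2(X, Y, Y)[0])          # Eq. (9)
        return self.ln[3](H + self.rff2(H))                # Eq. (8)

class SetEncoder(nn.Module):
    """Two stacked ISABs (Eq. (12)); the visit-level vector is the sum over codes."""
    def __init__(self, dim):
        super().__init__()
        self.blocks = nn.Sequential(ISAB(dim), ISAB(dim))

    def forward(self, X):
        return self.blocks(X).sum(dim=1)                   # summation used in Eq. (14)
```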
#### 4.1.3 Inter-visit Longitudinal Encoder
Previous works usually employ Recurrent Neural Networks (RNN) to model the dynamic patient history for learning longitudinal representations of patients. Given the success of the attention mechanism in sequence tasks [40, 41, 42], it is helpful to combine the attention mechanism with the RNN pattern. We take inspiration from the Block-Recurrent Transformer (BRT) [43], which applies a transformer layer in a recurrent fashion along the sequence input. Differing from the basic BRT, we follow GPT [42] and add masking to prevent information leaks while modeling patients' longitudinal visit records; we name the resulting block the Recurrent Attention Block (RAB). The RAB mainly consists of the update streams between the hidden state vector and the visit-level representation. The hidden state vector carries the patient's temporal information, and the visit-level representation updates its information based on the historical state representation. For the state vector, the update function is formulated as follows:
\[C_{t+1}=g_{2}(MLP(g_{1}(C_{t}^{{}^{\prime}},C_{t})),g_{1}(C_{t}^{{}^{\prime}},C_{t})) \tag{15}\]
\[g_{*}(X,Y)=X\odot f+z\odot i \tag{16}\]
\[f=\sigma(W_{f}Y+b_{f}+1) \tag{17}\]
\[i=\sigma(W_{i}Y+b_{i}-1) \tag{18}\]
\[z=tanh(W_{z}Y+b_{z}) \tag{19}\]
Figure 3: The framework of our proposed SHAPE. There are three components: (1) The Intra-visit Set Encoder captures the intra-relationships of the code-level medical events and summarizes them into the current visit-level representation. (2) An Inter-visit Longitudinal Encoder models the longitudinal information of the patient. (3) An Adaptive Curriculum Learning Module automatically assigns each sample's difficulty according to the patient's visit length.
where \(MLP\) is multi-layer perceptron, \(\odot\) is the Hadamard product, \(W_{f}\in\mathbb{R}^{n_{f}\times d_{f}},W_{i}\in\mathbb{R}^{n_{i}\times d_{i}},W_{z }\in\mathbb{R}^{n_{z}\times d_{z}}\) are trainable weight matrices, and \(b_{f}\in\mathbb{R}^{d_{f}},b_{i}\in\mathbb{R}^{d_{i}},b_{z}\in\mathbb{R}^{d_{z}}\) are trainable bias vectors. The \(g_{*}\in\{g_{1},g_{2}\}\) is the gate mechanism. \(C^{{}^{\prime}}_{t}\) is the combination of masked self-attention on the current hidden state \(C_{t}\) and the masked cross-attention with the visit-level representation \(V^{(t)}\),
\[C^{{}^{\prime}}_{t}=W^{{}^{\prime}}_{c}([Att(C_{t},C_{t},C_{t}),Att(C_{t},V^{(t )},V^{(t)})])+b^{{}^{\prime}}_{c} \tag{20}\]
where \(W^{{}^{\prime}}_{c}\in\mathbb{R}^{n_{e}^{{}^{\prime}}\times d^{{}^{\prime}}_{ c}}\) and \(b^{{}^{\prime}}_{c}\in\mathbb{R}^{d^{{}^{\prime}}_{c}}\) are learnable parameters.
The update stream of visit-level representation selects the longitudinal information from the hidden state and visit-level information from the current visit and is defined as:
\[\hat{V}^{(t)}=MLP(V^{(t)^{{}^{\prime}}}+V^{(t)})+(V^{(t)^{{}^{\prime}}}+V^{(t)}) \tag{21}\]
where \(MLP\) is a multi-layer perceptron. \(V^{(t)^{{}^{\prime}}}\) is the concatenate of visit-level representation masked self-attention and masked cross-attention with the current hidden state, where a central feature is to delegate a considerable portion of the information update responsibility to the process for generating attention weights. The formulation is:
\[V^{(t)^{{}^{\prime}}}=W^{{}^{\prime}}_{v}([Att(V^{(t)},V^{(t)},V^{(t)}),Att(V^{ (t)},C_{t},C_{t})])+b^{{}^{\prime}}_{v} \tag{22}\]
where \(W^{{}^{\prime}}_{v}\in\mathbb{R}^{n_{v}^{{}^{\prime}}\times d^{{}^{\prime}}_{ v}}\) and \(b^{{}^{\prime}}_{v}\in\mathbb{R}^{d^{{}^{\prime}}_{v}}\) are trainable parameters.
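The gating in Eqs. (16)-(19) is the part of the RAB that blends the previous state with new information. Below is a minimal PyTorch sketch of that gate; the attention terms of Eqs. (15) and (20)-(22) (masked self- and cross-attention) are omitted, and the layer names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedUpdate(nn.Module):
    """Gate mechanism g of Eqs. (16)-(19); a sketch of one building block of the RAB."""
    def __init__(self, dim):
        super().__init__()
        self.W_f = nn.Linear(dim, dim)
        self.W_i = nn.Linear(dim, dim)
        self.W_z = nn.Linear(dim, dim)

    def forward(self, X, Y):
        f = torch.sigmoid(self.W_f(Y) + 1.0)   # forget-style gate, Eq. (17)
        i = torch.sigmoid(self.W_i(Y) - 1.0)   # input-style gate, Eq. (18)
        z = torch.tanh(self.W_z(Y))            # candidate update, Eq. (19)
        return X * f + z * i                   # Eq. (16)

# Eq. (15) then composes two such gates around an MLP:
#   C_{t+1} = g2(MLP(g1(C', C)), g1(C', C)),
# where C' mixes masked self-attention on C with cross-attention to V^(t) (Eq. (20)).
```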
#### 4.1.4 Adaptive Curriculum Learning Module
This module includes the prediction layer and the adaptive curriculum manager. After obtaining the updated patient-level representation \(\hat{V}^{(t)}\), the final medication representation is generated through an output layer, which is defined as:
\[\hat{y}^{(t)}=\sigma(W_{o}\hat{V}^{(t)}+b_{o}) \tag{23}\]
where \(\sigma\) is sigmoid function, and \(W_{o}\in\mathbb{R}^{n_{u}^{{}^{\prime}}\times|\mathcal{M}|}\), \(b_{o}\in\mathbb{R}^{|\mathcal{M}|}\) are learnable parameters.
* **Supervised Multi-label Classification Loss**. The recommendation of medication combinations can be treated as a multi-label prediction task. We use the binary cross entropy loss \(l_{bce}\) as the multi-label task loss function, and \(l_{bce}\) is defined as: \[\mathcal{L}_{bce}=-\sum_{t}^{v_{j}}\sum_{i}^{|\mathcal{M}|}m^{(t)}_{i}log(\hat {y}^{(t)}_{i})+(1-m^{(t)}_{i})log(1-\hat{y}^{(t)}_{i})\] (24) where \(m^{(t)}_{i}\) and \(\hat{y}^{(t)}_{i}\) means the medical code at \(i-\)th coordinate at \(t-\)th visit.
* **Drug-Drug Interaction Loss**. The DDI loss is designed to control the DDI rate of generated medication combinations. Following the previous work [28], it is formulated as: \[\mathcal{L}_{ddi}=-\sum_{t}^{v_{j}}\sum_{i,j}^{|\mathcal{M}|}(A_{d}\odot(\hat{ y}^{(t)}{}^{\top}\hat{y}^{(t)}))\] (25) where \(\odot\) is the Hadamard product.
* **Combined Loss Functions**. During the training, we noticed that the accuracy and the DDI rate often increase together, mainly due to the drug-drug interaction in real-world clinical scenarios. It is important to balance the multi-label classification loss and the DDI loss. Finally, we use a penalty weight \(\alpha\) over the DDI loss for training. The final loss function is defined as: \[\mathcal{L}=\mathcal{L}_{bce}+\alpha\mathcal{L}_{ddi}\] (26) where \(\alpha\) is a pre-defined hyperparameter. By presetting different \(\alpha\), our SHAPE model could meet a different level of DDI requirements (the details of selecting the \(\alpha\) are shown in the DISCUSSION section).
* **Adaptive Curriculum Manager**. As shown in Figure 2 (a), although the medication combinations of most long visit records have been recommended before and are easy to predict, short records lacking historical medication information are the most frequent situation in real-life clinical scenarios and may be hard to predict accurately. To address this issue, we propose an adaptive curriculum manager to adaptively assign the difficulty coefficient of each patient and adopt the curriculum learning framework to train our SHAPE model. Specifically, we incorporate the visit length of the patient into the training schema, where we calculate \(\frac{I+l_{t}}{I_{max}}\) (i.e., Eq. (28)) to adjust the learning rate of the Adam [44] optimizer. Intuitively, when assigning a lower learning rate to shorter patient visit lengths, the model is guided to learn more complex parameter patterns for those shorter visit records. The adaptive curriculum manager is defined as: \[\theta_{t}=\theta_{t-1}-\frac{\hat{\gamma}\mu_{t}}{\sqrt{\eta_{t}}+\epsilon}\] (27) \[\hat{\gamma}=\gamma(1-\frac{I+l_{t}}{I_{max}})\] (28) \[\mu_{t}=\frac{\beta_{1}\mu_{t-1}+(1-\beta_{1})g_{t}}{1-\beta_{1}}\] (29) \[\eta_{t}=\frac{\beta_{2}\eta_{t-1}+(1-\beta_{2})g_{t}^{2}}{1-\beta_{2}}\] (30) \[g_{t}=\nabla_{\theta}f_{t}(\theta_{t-1})\] (31) where \(\epsilon\) is a constant added to the denominator to improve numerical stability, \(\gamma\) is the learning rate, \(I\) is the current training iteration number, \(l_{t}\) is the current visit length, \(I_{max}\) is the pre-defined maximum iteration number, \(\mu_{t},\eta_{t}\) are the first- and second-moment estimates of Adam, \(\beta_{1},\beta_{2}\) are the moment coefficients, \(f(\theta)\) is the objective function, \(\theta\) are the parameters to be updated, and \(\nabla(\cdot)\) is the derivative operation. The adaptive curriculum manager is coupled with the parameter update. Eq. (28) is the critical step of the optimizer of the objective. We use the current iteration and the current patient visit length to select the learning difficulty automatically (a schematic sketch of the combined objective and this learning-rate adjustment is given after this list).
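For concreteness, the sketch below shows how the combined objective of Eq. (26) and the visit-length-dependent learning-rate rescaling of Eq. (28) could be wired together in PyTorch. The sign convention of the DDI term, the default \(\alpha\), and the way the learning rate is written into the optimizer's parameter groups are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def combined_loss(y_prob, y_true, A_d, alpha=0.05):
    """Multi-label BCE (Eq. (24)) plus an alpha-weighted DDI penalty (cf. Eqs. (25)-(26))."""
    bce = F.binary_cross_entropy(y_prob, y_true)
    pairwise = y_prob.unsqueeze(-1) * y_prob.unsqueeze(-2)   # co-prescription scores
    ddi = (A_d * pairwise).sum()
    return bce + alpha * ddi

def curriculum_lr(base_lr, iteration, visit_len, max_iter):
    """Eq. (28): rescale the Adam learning rate by the iteration and the visit length."""
    return base_lr * (1.0 - (iteration + visit_len) / max_iter)

# Illustrative use inside a training step (optimizer being a torch.optim.Adam instance):
# for group in optimizer.param_groups:
#     group["lr"] = curriculum_lr(1e-3, it, visit_len, max_iter)
```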
### Inference
The SHAPE is trained end-to-end, and in the inference phase, the safe drug combination recommendation is generated from the sigmoid output \(\hat{y}^{(t)}\), where we fix the threshold
value as 0.5 to predict the label set. Then, the final predicted medication combinations correspond to the following:
\[\hat{Y}^{(t)}=\{\hat{y}_{i}^{(t)}|\hat{y}_{i}^{(t)}>0.5,1\leq i\leq|\mathcal{M}|\} \tag{32}\]
## 5 Experiments
In this section, we introduce the experiment details and conduct evaluation experiments to demonstrate the effectiveness of our SHAPE model1.
Footnote 1: [https://github.com/sherry6247/SHAPE.git](https://github.com/sherry6247/SHAPE.git)
### Dataset
We use the EHR data from the Medical Information Mart for Intensive Care (MIMIC-III)2. It contains 46,520 patients and 58,976 hospital admissions from 2001 to 2012. We conduct experiments on a benchmark released by COGNet [29], which is based on the MIMIC-III dataset for a fair comparison. Following the COGNet, we selected Top-40 severity DDI types from TWOSIDES [45], and we converted the drug code into ATC Third Level codes3 to align with the DDI graph nodes. Finally, we followed the setting of COGNet and divided the dataset into training, validation, and test sets by the ratio of \(4:1:1\). The statistics of the post-processed data are reported in Table 1.
Footnote 2: [https://mimic.physionet.org](https://mimic.physionet.org)
Footnote 3: [https://www.whocc.no/atc/structure_and_principles/](https://www.whocc.no/atc/structure_and_principles/)
### Metrics
We use three efficacy metrics, Jaccard, F1, and Precision-Recall Area Under Curve (PRAUC), to evaluate the recommendation efficacy. Additionally, we also report the DDI rate and the number of predicted medications, following the previous works [28, 29].
The Jaccard for the patient is calculated as below:
\[Jaccard=\frac{1}{T}\sum_{t=1}^{T}\frac{|M^{t}\cap\hat{Y}^{(t)}|}{|M^{t}\cup \hat{Y}^{(t)}|} \tag{33}\]
where the \(M^{(t)}\) is the ground-truth medication set sequence at \(t-\)th visit and the \(\hat{Y}^{(t)}\) is the predicted medication combinations.
The F1 of the patient is calculated as follows:
\[F1=\frac{1}{T}\sum_{t=1}^{T}2\times\frac{P_{t}*R_{t}}{P_{t}+R_{t}} \tag{34}\]
\[P_{i}=\frac{|M^{i}\cap\hat{Y}^{(i)}|}{|\hat{Y}^{(i)}|} \tag{35}\]
\[R_{i}=\frac{|M^{i}\cap\hat{Y}^{(i)}|}{|M^{i}|} \tag{36}\]
The PRAUC is calculated with the ground truth code's predicted probability of each medication code.
\[PRAUC=\frac{1}{T}\sum_{t=1}^{T}\sum_{k=1}^{|\mathcal{M}|}P(k)_{t}(R(k)_{t}-R(k -1)_{t}) \tag{37}\]
where \(P(k)_{t},R(k)_{t}\) are the precision and recall at the cut-off \(k-\)th threshold in the ordered retrieval list.
DDI rate aims to measure the interaction between the recommended medication combinations, which is calculated as follows:
\[DDI=\frac{1}{T}\sum_{t=1}^{T}\frac{\sum_{i=1}^{|\hat{Y}^{(t)}|}\sum_{j=i+1}^{| \hat{Y}^{(t)}|}\mathbf{1}\{A_{d}[\hat{Y}_{i}^{(t)},\hat{Y}_{j}^{(t)}]=1\}}{ \sum_{i=1}^{|\hat{Y}^{(t)}|}\sum_{j=i+1}^{|\hat{Y}^{(t)}|}1} \tag{38}\]
where \(A_{d}\) is the known DDI adjacency matrix, \(\hat{Y}_{i}^{(t)}\) denotes the \(i\)-th recommended medication, and \(\mathbf{1}\{\cdot\}\) returns 1 when the condition \(\{\cdot\}\) is true and 0 otherwise.
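The per-visit versions of these metrics can be computed directly from the predicted and ground-truth medication sets. The sketch below is illustrative only (PRAUC is omitted, since it can be obtained from standard library routines), and the data layout is an assumption.

```python
def visit_jaccard_f1(true_set, pred_set):
    """Per-visit Jaccard (Eq. (33)) and F1 (Eqs. (34)-(36))."""
    inter = len(true_set & pred_set)
    union = len(true_set | pred_set)
    jac = inter / union if union else 0.0
    p = inter / len(pred_set) if pred_set else 0.0
    r = inter / len(true_set) if true_set else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return jac, f1

def ddi_rate(pred_sets, A_d, med_index):
    """Average fraction of predicted drug pairs that are known DDIs (cf. Eq. (38))."""
    rates = []
    for meds in pred_sets:
        idx = [med_index[m] for m in meds]
        pairs = [(a, b) for a in range(len(idx)) for b in range(a + 1, len(idx))]
        if not pairs:
            continue
        hits = sum(A_d[idx[a], idx[b]] == 1 for a, b in pairs)
        rates.append(hits / len(pairs))
    return sum(rates) / len(rates) if rates else 0.0
```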
### Baseline
We compare the SHAPE model with the following methods from different perspectives: a _conventional machine learning method_, such as Logistic Regression (LR); _Instance-based methods_: LEAP [22], 4SDrug [23]; _Longitudinal-based methods_: RETAIN [24], DMNC [25], GAMENet [26], MICRON [27], SafeDrug [28], COGNet [29]. Specifically, LEAP [22] uses an attention mechanism to encode the diagnosis sequence step by step. 4SDrug [23] designs an attention-based method to augment the symptom representation and leverages the DDI graph to generate the current drug sequence. RETAIN [24] employs the attention gate mechanism to model the patient longitudinal information. DMNC [25] proposes a memory network to capture more interaction in the patient EHR record. GAMENet [26] combines the RNN and graph neural network to recommend medication combinations. MICRON [27] leverages a residual-based network to update the patient representation according to the new feature change. SafeDrug [28] utilizes drugs' molecule structures in the medication recommendation. COGNet [29] proposes a conditional generation model to copy or predict drugs according to the patient representation.
### Parameter Setting
Here, we list the implementation details of SHAPE. We set the hidden dimension as 128 and use the Adam optimizer [44] with an initial learning rate of \(1\times 10^{-3}\) for 50 epochs. We fixed the random seed as 2023 to ensure the reproducibility of the model. Our model is implemented in Pytorch 1.7.1 based on Python 3.8.13 and trained on two GeForce RTX 3090 GPUs, and an early-stopping mechanism was utilized. For a fair comparison, in the testing stage, we follow the previous work COGNet [29], which randomly samples 80% of the
test data for a round of evaluation. We repeat this process \(10\) times and calculate the mean and standard deviation as the final result we reported.
### Result Analysis
As shown in Table 2, our proposed model SHAPE outperforms all baselines with higher Jaccard, F1, and PRAUC, improving by nearly 2% over the previous best model. The conventional LR and the instance-based methods perform poorly as they only consider the patient's health condition at the current visit. The performance of RETAIN and DMNC is comparable because both use the RNN architecture to capture the longitudinal information. GAMENet introduced an additional DDI graph and fused it with the EHR co-occurrence graph, resulting in further performance improvement. SafeDrug leverages the drugs' molecule structures to improve the performance of medication recommendations. Unlike most longitudinal algorithms, which focus on the historical record, MICRON proposes using a residual network to capture changes in medications. COGNet proposes a copy-or-predict mechanism to generate the medication sequence, since the statistics show that most medication codes have been recommended in historical EHR records. However, it fails to consider short visits, which may not provide enough historical reference, especially for newly and secondly admitted patients.
Compared with the baseline methods, our SHAPE model achieves state-of-the-art performance. On the one hand, it uses an intra-visit set encoder designed to automatically collect the most informative medical events of each patient. On the other hand, we develop an inter-visit longitudinal encoder to capture the longitudinal pattern, which inherits the merits of the RNN and the attention mechanism. Besides, our adaptive curriculum manager assigns the difficulty of each sample based on its visit length accordingly. Hence, our SHAPE performs better than the other methods.
We also notice in Table 2 that 4SDrug achieves the lowest and most appealing DDI rate among the predicted medication combinations. However, combined with the results shown in Figure 4, 4SDrug achieves the lowest DDI probably because it predicts fewer medication codes than the other methods, since we have observed that the DDI rate increases with the number of predicted medications. This lower DDI rate phenomenon also appears in the MICRON model, since it predicts few medications.
Furthermore, we noticed that the MIMIC-III dataset has an average DDI rate of 0.0875 itself, which means there is a large number of DDI phenomena in real-world practice. Based on this fact, our SHAPE also achieves a lower DDI rate and higher accuracy of medication recommendations, indicating the effectiveness of our proposed method.
To further validate that our SHAPE model can better handle the short visit and even the new visit problem and recommend medications effectively, we investigate the performance at various visit lengths with different models. As shown in the right picture of Figure 1, there are severe long-tail phenomena in the MIMIC-III dataset, and most patients have fewer than five admission records. We take patients' first five visit records in the test set for visualization. We compare SHAPE with COGNet and 4SDrug since (1) COGNet achieves the best performance of the existing methods, and (2) the 4SDrug method uses a set-oriented method to learn the code-level representation and uses the DDI loss to control the predicted output. As shown in Figure 4, our SHAPE model is superior to COGNet on the three metrics (i.e., Jaccard, F1, and PRAUC). In particular, our SHAPE achieves higher performance
Figure 4: The performance of different visit lengths with the various models.
for short visit lengths and shows an increasing trend. These results may directly show the power of SHAPE to solve the problem shown in Figure 1, in which the short visit records are the critical samples. The higher accuracy on these samples is helpful for most situations in real-world clinical practice. On the contrary, 4SDrug always lies below COGNet and SHAPE. The reason may be that 4SDrug is an instance-based method that ignores temporal longitudinal information.
## VI Discussion
Upon analyzing the results in Table 2, we can conclude that our proposed model SHAPE achieved the best performance compared to LR and the _instance-based_ and _longitudinal-based_ methods. The success of SHAPE is ascribed to the three modules we proposed (i.e., the Intra-visit Set Encoder (ISE), the Inter-visit Longitudinal Encoder (ILE), and the Adaptive Curriculum Learning Module (ACLM)), and it achieved a lower DDI rate with our proposed combined loss function. To verify the effectiveness of each module, we designed the following ablation experiments. SHAPE\({}_{w/oISE}\): removes the intra-visit set encoder and summarizes the code-level representations into the visit-level representation directly. SHAPE\({}_{w/oILE}\): uses a recurrent neural network to replace the inter-visit longitudinal encoder for learning the longitudinal information. SHAPE\({}_{w/oACLM}\): removes the step of Eq. (28) and uses the basic Adam optimizer to optimize SHAPE. SHAPE\({}_{w/oDDIloss}\): uses only the multi-label classification loss function as the objective to train the model. We also compare with self-attention (SA) to investigate the effectiveness of our proposed compact intra-visit set encoder; SHAPE\({}_{wSA}\): replaces the set encoder with self-attention.
Table 3 shows the results for the different variants of
Fig. 5: Loss comparison of SHAPE and SHAPE\({}_{w/oACLM}\) over different numbers of training epochs.
SHAPE. As expected, removing any of the three proposed modules leads to a significant deterioration in performance compared to the complete SHAPE model. The DDI rate of SHAPE\({}_{w/oDDIloss}\) illustrates the effectiveness of the combined loss function. Overall, SHAPE outperforms all variant models, which means each component is integral to SHAPE. Compared with SHAPE, SHAPE\({}_{wSA}\) loses performance on all metrics, demonstrating that a more compact encoder is more suitable for modeling the complex medical event code sequences.
Moreover, the performance drop of SHAPE\(w/oACLM\) can be observed in Table 3, indicating that it is important to use the visit length as guidance to assign the difficulty coefficient of each patient in the model. To explore the impact of the ACLM module, we conducted experiments to visualize the loss trajectories of SHAPE and SHAPE\(w/oACLM\). As shown in Figure 5, compared to SHAPE\(w/oACLM\), SHAPE shows a significantly lower loss and converges quickly. This demonstrates the vital importance of the ACLM module, as it can automatically assign difficulty coefficients to each sample and learn more suitable parameters for various visit records.
Furthermore, to achieve a satisfactory trade-off for the DDI rate in the medication combinations generated by SHAPE, we explore the hyperparameter \(\alpha\) in Eq. (26). The details are shown in the second half of Table 3. According to the results of Table 3, we can conclude that: (1) the DDI rate of the predicted medication combinations gradually increases as \(\alpha\) declines; (2) when \(\alpha>0.05\), the performance on the other metrics is suppressed, which indicates that the accuracy of the predicted medication combinations decreases almost linearly with the penalty weight, along with the DDI rate. However, when \(\alpha<0.05\), the performance of SHAPE fluctuates. Combined with the previously mentioned fact that the MIMIC-III dataset itself has a DDI rate of 0.0875, this means that the lowest DDI rate is not necessarily the optimal choice for clinical practice.
To intuitively demonstrate the advantages of SHAPE over the two baseline models, we analyze some examples of the predicted results. We choose short or new visit patients to demonstrate the model's effect on harder prediction cases. Due to space constraints, we use the International Classification of Diseases (ICD) codes to represent the diagnosis and procedure information and the ATC codes to represent the medications. As shown in Table 4, _Case 1_ is a newly admitted patient; the doctor prescribed the ground truth medications based on the diagnosis and procedure information of the patient's current visit. _Case 2_ is a secondary admission patient, and we list the second record in _Case 2_. In _Case 2_, the physician combines the current health condition and the patient's historical record to prescribe medications. Overall, SHAPE performed the best, with 14 and 19 correct medications in the two cases, and achieved the fewest missed or erroneous medications. Furthermore, we notice that in the new visit _Case 1_, the instance-based method 4SDrug also achieves performance comparable to COGNet, probably because the instance-based approach is well suited to the single-visit setting.
As shown in Figure 6, we visualize the DDI status in the two cases for each model, where the symmetric matrix shows the drug-drug relationships of the recommended medication combinations. The point \(GT_{normal}\) means there is no DDI in the ground truth medication combinations, and \(GT_{ddi}\) means there probably is a DDI in the ground truth medication combinations. The empty rows and columns mean these codes do not appear in the ground truth medications. We notice that in _Case 1_ our SHAPE only generates two pairs of medications which may
Figure 6: Visualization of DDIs in the case study. _Case 1_ is a newly admitted patient. _Case 2_ is a secondary admission patient. In the chessboard, the red square corresponds to a DDI in the ground truth; the green point indicates that no DDI appears in the ground truth; the blue circle corresponds to a DDI in the medications predicted by COGNet; the inverted yellow triangle corresponds to a DDI in the medications predicted by 4SDrug. The purple cross corresponds to a DDI in the medications predicted by SHAPE. Best viewed in color.
suffer from drug-drug interactions; on the contrary, 4SDrug and COGNet generate five pairs (i.e., [A01A, R03A], [A06A, R01A], [N02A, B01A], [B01A, N02A], [B01A, R01A]) and eight pairs (i.e., [A01A, R03A], [A06A, R01A], [C07A, A12B], [C07A, R01A], [A12B, C07A], [N02A, B01A], [B01A, N02A], [B01A, R01A]), respectively. In the DDI matrix of _Case 2_, we find that the DDI phenomenon in real-life scenarios exceeds ten pairs of medications. Our SHAPE hits most of the situations that also appear in the ground truth medications prescribed by doctors, which hints that SHAPE can provide a safer way to recommend medication combinations.
There are also several limitations of the current study. Firstly, we only used diagnosis and procedure information as side information to infer the medications and ignored other sources, such as vital signs and laboratory test records. Secondly, we only evaluate the SHAPE model on a single public dataset, which also limits the generalizability of the model.
## 7 Conclusion
In this paper, we proposed a sample-adaptive hierarchical medication prediction network, named SHAPE, to better learn an accurate representation of the patient. Concretely, we first present an intra-visit set encoder to capture medical event relationships from the code-level perspective, which is usually ignored in most current works. Then, we developed an inter-visit longitudinal encoder to learn the visit-level longitudinal representation, which inherits the merits of both attention and the RNN. Additionally, we designed an adaptive curriculum learning module that takes each patient's record characteristics into account to automatically assign a per-patient difficulty and improve the performance of medication recommendations. Experiment results on the public benchmark dataset demonstrate that SHAPE outperforms existing medication recommendation algorithms by a large margin. We also investigate the performance on short visit and new visit samples, which shows that SHAPE can effectively handle medication recommendation for patients with short admission histories. Further ablation study results also confirm the effectiveness of each module of our proposed SHAPE.
|
2309.13017 | Splittings for symbolic powers of edge ideals of complete graphs | In this paper we study the $s$-th symbolic powers of the edge ideals of
complete graphs. In particular, we provide a criterion for finding an
Eliahou-Kervaire splitting on these ideals, and use the splitting to provide a
description for the graded Betti numbers. We also discuss the symbolic powers
and graded Betti numbers of edge ideals of parallelizations of finite simple
graphs. | Susan M. Cooper, Sergio Da Silva, Max Gutkin, Tessa Reimer | 2023-09-22T17:29:29Z | http://arxiv.org/abs/2309.13017v1 | # Splittings for symbolic powers of edge ideals of complete graphs
###### Abstract.
In this paper we study the \(s\)-th symbolic powers of the edge ideals of complete graphs. In particular, we provide a criterion for finding an Eliahou-Kervaire splitting on these ideals, and use the splitting to provide a description for the graded Betti numbers. We also discuss the symbolic powers and graded Betti numbers of edge ideals of parallelizations of finite simple graphs.
Key words and phrases: Betti numbers, Splittings, Symbolic powers, Edge ideals. 2010 Mathematics Subject Classification: 13D02; 13F55. Cooper was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). Da Silva was supported in part by a PIMS postdoctoral fellowship at the University of Manitoba and an NSERC Postdoctoral Fellowship at McMaster University. Gutkin was supported by a University of Manitoba Undergraduate Research Award and Reimer was supported by a University of Manitoba Student Union (UMSU) Undergraduate Research Award.
The symbolic power of a monomial ideal \(I\) has a convenient description in terms of its primary decomposition, and is easier to describe compared to more general homogeneous ideals. Let \(I\subset R=k[x_{1},\dots,x_{m}]\) be a monomial ideal, where \(k\) is a field. Suppose that \(I=I_{1}\cap\dots\cap I_{r}\) is a primary decomposition for \(I\). Given a maximal associated prime ideal \(Q\) of \(I\) and a positive integer \(s\), we define
\[I_{\subseteq Q}=\bigcap_{\sqrt{I_{\ell}}\subseteq Q}I_{\ell}\]
so that the \(s\)-th **symbolic power** of \(I\) is
\[I^{(s)}=\bigcap_{Q\in\max\operatorname{Ass}(I)}I_{\subseteq Q}^{s},\]
where \(\sqrt{I_{\ell}}=\{r\in R\mid r^{n}\in I_{\ell}\,\text{ for some positive integer }n\}\) denotes the radical of \(I_{\ell}\) and \(\max\operatorname{Ass}(I)\) denotes the set of associated primes of \(I\) that are maximal with respect to inclusion.
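To illustrate the definition in the simplest case relevant to this paper (this standard example is recorded here only for orientation), consider the edge ideal of the triangle, \(I=I(K_{3})=\langle xy,xz,yz\rangle\subset k[x,y,z]\), with minimal primary decomposition \(I=\langle x,y\rangle\cap\langle x,z\rangle\cap\langle y,z\rangle\). Since these three primes are pairwise incomparable, each is maximal among the associated primes and \(I_{\subseteq Q}=Q\) for each of them, so
\[I^{(2)}=\langle x,y\rangle^{2}\cap\langle x,z\rangle^{2}\cap\langle y,z\rangle^{2}=\langle x^{2}y^{2},\,x^{2}z^{2},\,y^{2}z^{2},\,xyz\rangle.\]
In particular, the cubic \(xyz\) lies in \(I^{(2)}\setminus I^{2}\), since \(I^{2}\) is generated in degree \(4\); this is the smallest instance of the complete graph computations studied in Section 3.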
This definition doesn't depend on the primary decomposition since \(I_{\subseteq Q}=R\cap IR_{Q}\) (see [3]). In Sections 3 and 4 we provide a description of the minimal monomial generating set for \(I(G)^{(s)}\) when \(G\) is a complete graph or a parallelization of a finite simple graph. These descriptions are necessary to study Eliahou-Kervaire splittings for symbolic powers of edge ideals, which in turn allow us to compute graded Betti numbers recursively. Theorem 3.5 provides a criterion for defining an Eliahou-Kervaire splitting in this context. A similar technique is employed for a specific family of monomial ideals found in [16].
Having a method to determine the graded Betti numbers of symbolic powers of edge ideals allows one to readily obtain information of related invariants of interest. We demonstrate this for the minimum socle degree in Section 3.4. It is known that symbolic powers of edge ideals of complete graphs are Cohen-Macaulay and have dimension \(1\) (see Lemma 3.15 for the relevant citations). Projectively these are fat points, and this implies that Section 3 is studying zero-dimensional arithmetically Cohen-Macaulay schemes, which complements the content found in [15].
Finally, a discussion about the graded Betti numbers for graph parallelizations of finite simple graphs can be found in Section 4. The definition of a graph parallelization is given there, but the utility of this construction comes from the ability to define families of graphs with properties that are related to the original graph (which can be useful for finding examples or counter-examples).
The arguments in this article work over any field \(k\). While there exist some subtleties when working over fields of positive characteristic (for instance, [7, Example 4.2] illustrates a Betti splitting which fails to be a splitting except in characteristic \(2\)), our arguments rely on a particular splitting map which is characteristic-free.
## 2. Preliminaries
There is a rich theory involving edge ideals and their combinatorial properties. We refer the reader to [17] for a thorough overview of this theory. Throughout the paper, \(G\) will denote an undirected finite simple graph with vertex set \(V(G)=\{x_{1},\ldots,x_{m}\}\) and edge set \(E(G)\). Recall that \(G\) is **simple** if it does not have multiple edges between vertices and if it does not have any loops at vertices. Let \(k\) be a field and \(R=k[x_{1},\ldots,x_{m}]\). The **edge ideal** of \(G\) is defined as the square-free monomial ideal \(I(G)=\langle x_{i}x_{j}:\{x_{i},x_{j}\}\in E(G)\rangle\subset R\).
We mainly focus on complete graphs in this paper. Recall that the **complete graph** on \(m\) vertices, denoted \(K_{m}\), is the simple undirected graph in which every pair of distinct vertices is connected by exactly one edge.
In the sections that follow, we will also consider subgraphs of graphs. An **induced subgraph**\(H\) of \(G\) is a graph with vertex set \(V(H)\subseteq V(G)\) and edge set \(E(H)=E(G)\cap[V(H)]^{2}\). That is, if \(u,v\in V(H)\), then \(u\) and \(v\) are adjacent in \(H\) if and only if they are adjacent in \(G\). For example, the complete graph \(K_{3}\) is an induced subgraph of \(K_{4}\), whereas the subgraph with \(4\) vertices but no edges is not.
Determining invariants of symbolic powers of homogeneous ideals in the polynomial ring can be quite challenging. Recall from the introduction that we can define the \(s\)-th symbolic power of a monomial ideal \(I\) in terms of a primary decomposition of \(I\). When \(I\) is the edge ideal of a graph \(G\), this becomes tractable via vertex covers of \(G\). A subset \(W\subseteq V(G)\) is said to be a **vertex cover** if \(W\cap e\neq\emptyset\) for all \(e\in E(G)\). A vertex cover \(W\) is a **minimal vertex cover** if no proper subset of \(W\) is a vertex cover of \(G\). We have the following useful connection between vertex covers and \(I(G)\).
**Lemma 2.1** ([17, Theorem 1.34, Corollary 1.35]).: _Let \(W_{1},\ldots,W_{t}\) be the minimal vertex covers of a graph \(G\). If we set \(\langle W_{i}\rangle=\langle x_{j}\mid x_{j}\in W_{i}\rangle\), then \(I(G)=\langle W_{1}\rangle\cap\cdots\cap\langle W_{t}\rangle\) is the minimal primary decomposition of \(I(G)\)._
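To make Lemma 2.1 concrete, the following Python sketch (an illustrative brute-force computation; the function and variable names are ours, not from any library) lists the minimal vertex covers of \(K_{4}\) and hence the components of the minimal primary decomposition of \(I(K_{4})\).

```python
from itertools import combinations

def minimal_vertex_covers(vertices, edges):
    """Brute-force enumeration of the minimal vertex covers of a finite simple graph."""
    covers = []
    for size in range(len(vertices) + 1):
        for W in map(set, combinations(vertices, size)):
            if all(W & e for e in edges):
                # W is a cover; it is minimal iff it contains no smaller cover found so far
                if not any(C <= W for C in covers):
                    covers.append(W)
    return covers

# Example: the complete graph K_4 on x1, ..., x4
vertices = ["x1", "x2", "x3", "x4"]
edges = [set(e) for e in combinations(vertices, 2)]

for W in minimal_vertex_covers(vertices, edges):
    print("<" + ", ".join(sorted(W)) + ">")
# Each printed prime <W_i> is one component of I(K_4) = <W_1> ∩ ... ∩ <W_t>;
# for K_4 these are exactly the four 3-element subsets of the vertex set.
```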
One of the main goals of this paper is to investigate the use of a splitting technique due to Eliahou and Kervaire (in particular, a splitting for the symbolic powers of \(I(G)\)) to reduce the determination of graded Betti numbers to an induction on much simpler base cases. We denote the set of minimal monomial generators of a monomial ideal \(I\subset R\) (which are unique) by \(\mathcal{G}(I)\), and recall the following definition from [7].
**Definition 2.2**.: _Let \(I,J\) and \(K\) be monomial ideals of \(R=k[x_{1},\ldots,x_{m}]\) such that \(\mathcal{G}(I)\) is the disjoint union of \(\mathcal{G}(J)\), \(\mathcal{G}(K)\). We call \(I=J+K\) an **Eliahou-Kervaire splitting** (or \(\mathbf{E}\)**-\(\mathbf{K}\) splitting**) if there exists a splitting function \(\mathcal{G}(J\cap K)\longrightarrow\mathcal{G}(J)\times\mathcal{G}(K)\) sending \(w\mapsto(\phi(w),\varphi(w))\) such that:_
1. \(w=\operatorname{lcm}(\phi(w),\varphi(w))\)_; and_
2. _for every subset_ \(S\subset\mathcal{G}(J\cap K)\)_, both_ \(\operatorname{lcm}(\phi(S))\) _and_ \(\operatorname{lcm}(\varphi(S))\) _strictly divide_ \(\operatorname{lcm}(S)\)_._
One benefit of having an Eliahou-Kervaire splitting is the ability to compute the graded Betti numbers of \(I\) in terms of the graded Betti numbers for \(J\) and \(K\). Recall that the
\(i,j\)**-th graded Betti number** of \(I\) is by definition \(\beta_{i,j}(I)=\dim_{k}\operatorname{Tor}_{i}(k,I)_{j}\). That is, \(\beta_{i,j}(I)\) is the number of copies of \(R(-j)\) appearing in the \(i\)-th module of the graded minimal free resolution of \(I\):
\[0\to\bigoplus_{j}R(-j)^{\beta_{\ell,j}(I)}\to\cdots\to\bigoplus_{j}R(-j)^{ \beta_{1,j}(I)}\to\bigoplus_{j}R(-j)^{\beta_{0,j}(I)}\to I\to 0,\]
where \(R(-j)\) is the polynomial ring \(R\) shifted by degree \(j\).
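For example, for the ideal \(I=\langle x_{1},x_{2}\rangle\subset R=k[x_{1},x_{2}]\), the Koszul resolution

\[0\to R(-2)\to R(-1)^{2}\to I\to 0\]

shows that \(\beta_{0,1}(I)=2\) and \(\beta_{1,2}(I)=1\), while all other graded Betti numbers of \(I\) are zero.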
**Lemma 2.3** ([6, Proposition 3.2]).: _Let \(I,J\) and \(K\) be monomial ideals of \(k[x_{1},\dots,x_{m}]\) such that \(I=J+K\) is an Eliahou-Kervaire splitting. Then_
\[\beta_{i,j}(I)=\beta_{i,j}(J)+\beta_{i,j}(K)+\beta_{i-1,j}(J\cap K),\]
_for all \(i\in\mathbb{N}\) and multidegrees \(j\)._
**Remark 2.4**.: _The original version of this result was proved for total Betti numbers in [5, Proposition 3.1] over an arbitrary field. The proof of Lemma 2.3 is also valid over any field, even if the author of [6] assumes that the field is algebraically closed (which is needed later in [6])._
All E-K splittings are examples of Betti splittings, which are a choice of monomial ideals \(I=J+K\) where \(\mathcal{G}(I)=\mathcal{G}(J)\sqcup\mathcal{G}(K)\) which also satisfy the graded Betti number equality from Lemma 2.3. The distinction will not be important for us since all of the splittings in the sections that follow are E-K splittings. See [7] for more information on Betti splittings. We finish with a simple auxiliary lemma, which we leave as an exercise.
**Lemma 2.5**.: _If \(I\subset k[x_{1},\dots,x_{m}]\) is a monomial ideal, then \(\beta_{i,j}(x_{\ell}I)=\beta_{i,j-1}(I)\) for all \(i,j\geq 1\), \(1\leq\ell\leq m\)._
## 3. Symbolic Powers Of Edge Ideals Of Complete Graphs
We denote the complete graph on \(m\) vertices by \(K_{m}\) and label its vertices by \(x_{1},\dots,x_{m}\). Fix \(R=k[x_{1},\dots,x_{m}]\). Our main goal is to determine the graded Betti numbers for the symbolic powers of the edge ideal of \(K_{m}\). In order to make use of the Eliahou-Kervaire splitting technique, we need a convenient description for the minimal monomial generators of symbolic powers of \(I(K_{m})\). The following lemma will simplify this task.
**Lemma 3.1** ([2, Lemma 2.6]).: _Let \(I\subset R=k[x_{1},\dots,x_{m}]\) be a square-free monomial ideal with minimal primary decomposition \(I=P_{1}\cap\cdots\cap P_{n}\) with \(P_{\ell}=\langle x_{j_{1}},\dots,x_{j_{\alpha_{\ell}}}\rangle\) for \(\ell=1,\dots,n\). Then \(x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\in I^{(s)}\) if and only if \(a_{j_{1}}+\cdots+a_{j_{\alpha_{\ell}}}\geq s\) for \(\ell=1,\dots,n\)._
We now determine the minimal monomial generating set \(\mathcal{G}(I(K_{m})^{(s)})\) for the \(s\)-th symbolic power of \(I(K_{m})\subset R\).
**Proposition 3.2**.: _If \(I=I(K_{m})\subset R\) and \(s\geq 2\), then \(I^{(s)}\) has minimal monomial generating set_
\[\mathcal{L}:=\{x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}:\exists\ 1\leq i\leq m\text{ with } \sum_{j\neq i}a_{j}=s,a_{i}=\max_{j\neq i}\{a_{j}\}\}.\]
_That is, \(\mathcal{G}(I^{(s)})=\mathcal{L}\)._
Proof.: Let \(x=x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\in\mathcal{L}\) where \(a_{i}\geq 0\) for each \(1\leq i\leq m\). Without loss of generality, suppose that \(a_{1}+\cdots+a_{m-1}=s\) and \(a_{m}=\max\{a_{1},\ldots,a_{m-1}\}\). Note that any choice of \(m-1\) vertices of \(K_{m}\) defines a minimal vertex cover for \(K_{m}\), and so by Lemma 2.1 we can write \(I=\bigcap_{i=1}^{m}\langle x_{1},\ldots,\hat{x_{i}},\ldots,x_{m}\rangle\), which is the minimal primary decomposition for \(I\). Then, by Lemma 3.1, \(x\in I^{(s)}\) since every subset of \(\{a_{1},\ldots,a_{m}\}\) of size \(m-1\) sums to a value larger than or equal to \(s\).
Conversely, suppose that \(x=x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\in I^{(s)}\). Without loss of generality, we may assume that \(a_{m}=\max\{a_{1},\ldots,a_{m}\}\) so that \(a_{1}+\cdots+a_{m-1}\geq s\) by Lemma 3.1. Then \(x\) is clearly in the ideal \(\langle\mathcal{L}\rangle\), proving that \(\mathcal{L}\) is a generating set for \(I^{(s)}\).
To see that \(\mathcal{L}=\mathcal{G}(I^{(s)})\), suppose \(x=x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\) and \(y=x_{1}^{b_{1}}\cdots x_{m}^{b_{m}}\) are both monomials in \(\mathcal{L}\) such that \(x\) divides \(y\) (i.e. \(b_{t}\geq a_{t}\) for \(t=1,\ldots,m\)). Suppose \(1\leq i,j\leq m\) are such that
\[a_{1}+\cdots+a_{m}-a_{i}=b_{1}+\cdots+b_{m}-b_{j}=s\text{ with }a_{i}=\max\{a_{1}, \ldots,a_{m}\},\ b_{j}=\max\{b_{1},\ldots,b_{m}\}.\]
Then \(b_{j}\geq b_{i}\geq a_{i}\) and we can write \(b_{j}-a_{i}=c\geq 0\). Thus,
\[b_{1}+\cdots+b_{m}=s+b_{j}=s+a_{i}+c=a_{1}+\cdots+a_{m}+c.\]
Therefore,
\[c=\sum_{\ell=1}^{m}(b_{\ell}-a_{\ell})=(b_{j}-a_{i})+(b_{i}-a_{j})+\sum_{\ell \neq i,j}^{m}(b_{\ell}-a_{\ell})\implies(b_{i}-a_{j})+\sum_{\ell\neq i,j}^{m}( b_{\ell}-a_{\ell})=0.\]
Note that since \(a_{i}=\max\{a_{1},\ldots,a_{m}\}\), we have that \(a_{j}\leq a_{i}\leq b_{i}\) and so \(b_{i}-a_{j}\geq 0\). Then each term in the previous equation is non-negative showing that \(a_{\ell}=b_{\ell}\) for \(\ell\neq i,j\) and \(b_{i}=a_{j}\). Since \(b_{j}=\max\{b_{1},\ldots,b_{m}\}\), there must be some \(t\neq j\) such that \(b_{t}=b_{j}\) (by the definition of \(\mathcal{L}\)). Now if \(b_{j}\neq b_{i}\), then \(t\neq i\) and \(a_{t}=b_{t}=b_{j}>b_{i}\geq a_{i}=\max\{a_{1},\ldots,a_{m}\}\), a contradiction. Thus, \(b_{j}=b_{i}=a_{j}\). We conclude that \(x=y\), as required.
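Proposition 3.2 is also easy to confirm by machine on small cases. The following Python sketch (an illustration of ours, not taken from the paper) computes the minimal generators of \(I(K_{m})^{(s)}\) directly from the membership criterion of Lemma 3.1 and compares them with the set \(\mathcal{L}\); one checks directly that minimal generators have every exponent at most \(s\) and total degree at most \(2s\), which justifies the brute-force search bounds used below.

```python
from itertools import product

def in_symbolic_power(a, s):
    """Membership test of Lemma 3.1 for I(K_m): every sum of all but one exponent is >= s."""
    total = sum(a)
    return all(total - a_i >= s for a_i in a)

def divides(a, b):
    return all(x <= y for x, y in zip(a, b))

def minimal_generators(m, s):
    """Minimal monomial generators of I(K_m)^(s) (as exponent vectors), by brute force."""
    candidates = [a for a in product(range(s + 1), repeat=m)
                  if sum(a) <= 2 * s and in_symbolic_power(a, s)]
    return {a for a in candidates
            if not any(divides(b, a) and b != a for b in candidates)}

def proposition_set(m, s):
    """The set L described in Proposition 3.2."""
    L = set()
    for a in product(range(s + 1), repeat=m):
        for i in range(m):
            rest = [a[j] for j in range(m) if j != i]
            if sum(rest) == s and a[i] == max(rest):
                L.add(a)
                break
    return L

for m, s in [(3, 2), (3, 3), (4, 2), (4, 3)]:
    assert minimal_generators(m, s) == proposition_set(m, s)
print("Proposition 3.2 confirmed on the test cases")
```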
### E-K Splittings
We are now in a position to use splittings to determine the graded Betti numbers of symbolic powers of the edge ideal of a complete graph \(K_{m}\). As before, we let \(R=k[x_{1},\ldots,x_{m}]\). Observe that if we set \(G=K_{m}\) and fix \(0\leq r\leq m\), then we can view \(H=K_{r}\) as an induced subgraph of \(K_{m}\) where \(V(K_{r})=\{x_{1},\ldots,x_{r}\}\) (here \(K_{0}\) is the null subgraph and \(I(K_{0})\) is the zero ideal).
**Definition 3.3**.: _Let \(G=K_{m}\) and \(H=K_{r}\) for some fixed \(0\leq r\leq m\). If \(s\geq 2\) is an integer and \(r\neq m\), then_
\[I_{H,s}=\langle w\in\mathcal{G}(I(G)^{(s)}):x_{i}\nmid w,i=r+1,\ldots,m\rangle \;\;\text{and}\;\;I_{G\setminus H,s}=I(G)^{(s)}\cap\langle\prod_{j=r+1}^{m}x_{ j}\rangle.\]
_By convention, if \(r=m\), then we define \(I_{H,s}=I_{G\setminus H,s}=I(G)^{(s)}\)._
In general, if \(w=x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\in\mathcal{G}(I(K_{m})^{(s)})\), then \(w\in\mathcal{G}(I_{H,s})\) if \(a_{r+1}=\cdots=a_{m}=0\) and \(w\in\mathcal{G}(I_{G\setminus H,s})\) if \(a_{i}\neq 0\) for all \(i\in\{r+1,\ldots,m\}\). Also, observe that \(I_{H,s}\) can be viewed as the extension of \(I(H)^{(s)}\subset k[x_{1},\ldots,x_{r}]\) to the ring \(k[x_{1},\ldots,x_{m}]\).
**Lemma 3.4**.: _If \(m\geq 3,s\geq 2,G=K_{m}\) and \(H=K_{m-1}\), then \(I_{H,s}\cap I_{G\setminus H,s}=x_{m}I_{H,s}\)._
Proof.: Note first that \(I_{H,s}\) is an ideal contained in \(I(G)^{(s)}\), and so \(I_{H,s}\cap I(G)^{(s)}=I_{H,s}\). Thus, \(I_{H,s}\cap I_{G\setminus H,s}=I_{H,s}\cap I(G)^{(s)}\cap\langle x_{m}\rangle =I_{H,s}\cap\langle x_{m}\rangle\). Since no generator of \(I_{H,s}\) is divisible by \(x_{m}\), it is clear that \(I_{H,s}\cap\langle x_{m}\rangle=x_{m}I_{H,s}\).
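The equality in Lemma 3.4 can likewise be checked computationally on small cases. The Python sketch below (again only an illustration; the helper names are ours) uses the generators from Proposition 3.2 together with the standard fact that the intersection of two monomial ideals is generated by the pairwise least common multiples of their generators.

```python
from itertools import product

def gens_symbolic_power(m, s):
    """G(I(K_m)^(s)) as exponent vectors, via the set L of Proposition 3.2."""
    gens = set()
    for a in product(range(s + 1), repeat=m):
        for i in range(m):
            rest = [a[j] for j in range(m) if j != i]
            if sum(rest) == s and a[i] == max(rest):
                gens.add(a)
                break
    return gens

def divides(a, b):
    return all(x <= y for x, y in zip(a, b))

def lcm(a, b):
    return tuple(max(x, y) for x, y in zip(a, b))

def minimalize(mons):
    return {a for a in mons if not any(divides(b, a) and b != a for b in mons)}

def intersect(gens1, gens2):
    """Minimal generators of the intersection of two monomial ideals."""
    return minimalize({lcm(a, b) for a in gens1 for b in gens2})

for m, s in [(3, 2), (3, 3), (4, 2), (4, 3)]:
    I = gens_symbolic_power(m, s)
    x_m = tuple([0] * (m - 1) + [1])
    I_H = {a for a in I if a[-1] == 0}              # I_{H,s}: generators avoiding x_m
    I_GH = minimalize({lcm(a, x_m) for a in I})     # I_{G\H,s} = I(G)^(s) ∩ <x_m>
    lhs = intersect(I_H, I_GH)
    rhs = {a[:-1] + (1,) for a in I_H}              # x_m * I_{H,s}
    assert lhs == rhs, (m, s)
print("Lemma 3.4 verified on the test cases")
```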
We now define an E-K splitting for ideals of the form \(I_{K_{m}\setminus K_{r},s}\). The ideal \(I(K_{m})^{(s)}\) is a special case, and an E-K splitting for it will follow as a corollary of the next theorem. These results are needed as part of the induction for the next section.
**Theorem 3.5**.: _Let \(m\geq 3,s\geq 2\) and \(r\in\{1,\ldots,m\mid r\neq m-s-1\}\). If \(G=K_{m},H=K_{r}\),_
\[L_{1}=\langle w\in\mathcal{G}(I_{G\setminus H,s}):x_{r}\mid w\rangle=I_{G \setminus H,s}\cap\langle x_{r}\rangle,\;\;\;\text{and}\;\;\;L_{2}=\langle w \in\mathcal{G}(I_{G\setminus H,s}):x_{r}\nmid w\rangle,\]
_then \(I_{G\setminus H,s}=L_{1}+L_{2}\) is an Eliahou-Kervaire splitting._
Proof.: We first note that the minimal monomial generating set of \(L_{2}\) is given by all monomials \(w=x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\in\mathcal{G}(I(G)^{(s)})\) such that \(a_{r}=0\) and every exponent from \(\{a_{r+1},\ldots,a_{m}\}\) is nonzero.
To construct a splitting function that sends \(w\in\mathcal{G}(L_{1}\cap L_{2})\) to \((\phi(w),\varphi(w))\in\mathcal{G}(L_{1})\times\mathcal{G}(L_{2})\), we need to define two functions, \(\varphi:\mathcal{G}(L_{1}\cap L_{2})\to\mathcal{G}(L_{2})\) and \(\phi:\mathcal{G}(L_{1}\cap L_{2})\to\mathcal{G}(L_{1})\). We start with the function \(\varphi\). First notice that by a similar argument to Lemma 3.4, \(L_{1}\cap L_{2}=x_{r}L_{2}\) and each \(w\in\mathcal{G}(L_{1}\cap L_{2})\) can be written uniquely as \(w=x_{r}v\), where \(v\in\mathcal{G}(L_{2})\). Therefore, we define \(\varphi:\mathcal{G}(L_{1}\cap L_{2})\to\mathcal{G}(L_{2})\) by \(w=x_{r}v\mapsto v\).
Next we define \(\phi:\mathcal{G}(L_{1}\cap L_{2})\to\mathcal{G}(L_{1})\). Given \(w=x_{r}v\in\mathcal{G}(L_{1}\cap L_{2})\), let \(a_{i}\) denote the exponent of \(x_{i}\) in \(w\) for \(1\leq i\leq m\). Note that \(a_{r}=1\). Let \(j\in\{1,\ldots,m\}\setminus\{r\}\) be the smallest index such that
\[a_{1}+\cdots+a_{m}-a_{r}-a_{j}=s,\;\;\text{and}\;\;\;a_{j}=a_{max}\coloneqq \max(\{a_{1},\ldots,a_{m}\}\setminus\{a_{r},a_{j}\}).\]
Let \(t\in\{1,\ldots,m\}\setminus\{j,r\}\) be the smallest index such that \(a_{t}=a_{max}\). The indices \(j\) and \(t\) exist by Proposition 3.2. Observe that \(j\) and \(t\) are the two smallest indices in \(\{1,\ldots,m\}\setminus\{r\}\) such that \(a_{t}=a_{j}=a_{max}\) and that \(j<t\). Define \(\phi:\mathcal{G}(L_{1}\cap L_{2})\to\mathcal{G}(L_{1})\)
by
\[w=x_{r}v=x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\mapsto\begin{cases}\dfrac{w}{x_{j}}& \text{ if }\exists\,\ell\neq j,t,r\text{ with }a_{\ell}=a_{max},\\ \dfrac{w}{x_{t}x_{j}}&\text{ otherwise.}\end{cases}\]
We need to show that \(\phi\) does in fact map into \(\mathcal{G}(L_{1})\). If \(a_{t}\) is the only value in \(\{a_{1},\ldots,a_{m}\}\setminus\{a_{j},a_{r}\}\) such that \(a_{t}=a_{max}\), then let \(A^{\prime}=\{x_{1},\ldots,x_{m}\}\setminus\{x_{t}\}\) and for each \(1\leq i\leq m\), let \(a_{i}^{\prime}\) denote the exponent of \(x_{i}\) in \(\phi(w)\). Note that \(a_{j}^{\prime}=a_{j}-1,a_{t}^{\prime}=a_{t}-1,a_{r}^{\prime}=a_{r}=1\) and \(a_{l}^{\prime}=a_{l}\) for all \(l\neq j,t\). Then
\[\sum_{x_{i}\in A^{\prime}}a_{i}^{\prime}=\sum_{i=1}^{m}a_{i}^{ \prime}-a_{t}^{\prime}=\deg(\phi(w))-(a_{t}-1) =\deg(w)-2-(a_{max}-1)\] \[=\sum_{i=1}^{m}a_{i}-1-a_{max}\] \[=\sum_{i=1}^{m}a_{i}-a_{r}-a_{j}=s.\]
Also,
\[a_{max}^{\prime}\coloneqq\max(\{a_{1}^{\prime},\ldots,a_{m}^{\prime}\}\setminus \{a_{t}^{\prime}\})=a_{j}^{\prime}=a_{max}-1\ \text{ and }\ a_{t}^{\prime}=a_{t}-1=a_{max}-1=a_{max}^{\prime}.\]
Thus \(\phi(w)\in\mathcal{G}(I(G)^{(s)})\).
To show that \(\phi(w)\in\mathcal{G}(I_{G\setminus H,s})\), it suffices to verify that \(\prod_{i=r+1}^{m}x_{i}\) divides \(\phi(w)\). To this end, assume that \(\prod_{i=r+1}^{m}x_{i}\) does not divide \(\phi(w)\). Recall that \(w=x_{r}v\) for some \(v\in\mathcal{G}(L_{2})\). Since \(\mathcal{G}(L_{2})\subset\mathcal{G}(I_{G\setminus H,s})\), we know that \(\prod_{i=r+1}^{m}x_{i}\) divides \(v\). Thus, \(\prod_{i=r+1}^{m}x_{i}\) divides \(w\). Since \(\phi(w)=w/(x_{j}x_{t})\) and \(\prod_{i=r+1}^{m}x_{i}\) does not divide \(\phi(w)\), it must be that at least one of \(j\) or \(t\) is greater than or equal to \(r+1\) and \(a_{j}=a_{t}=1\). Since \(a_{j}=a_{t}=a_{max}\), this implies that \(w\) is a square-free monomial. Hence,
\[\deg(w)=a_{1}+\cdots+a_{m}=s+2\]
and so \(\phi(w)\) has degree \(s\). This is a contradiction since \(\phi(w)\in\mathcal{G}(I(G)^{(s)})\) which consists of monomials of degree \(s+1\) and higher. We conclude that \(\prod_{i=r+1}^{m}x_{i}\) divides \(\phi(w)\), and thus \(\phi(w)\in\mathcal{G}(I_{G\setminus H,s})\). Furthermore, since \(x_{r}\mid\phi(w)\), we cannot have \(\phi(w)\in\mathcal{G}(L_{2})\). Thus, \(\phi(w)\in\mathcal{G}(L_{1})\).
Similarly, if \(a_{t}\) is not the only value in \(\{a_{1},\ldots,a_{m}\}\setminus\{a_{j},a_{r}\}\) such that \(a_{t}=a_{max}\), then define \(A^{\prime}\) and each \(a_{i}^{\prime}\) as before. Note that \(a_{j}^{\prime}=a_{j}-1,a_{r}=a_{r}^{\prime}=1\) and \(a_{l}^{\prime}=a_{l}\) for all \(l\neq j\). Then
\[\sum_{x_{i}\in A^{\prime}}a_{i}^{\prime}=\sum_{i=1}^{m}a_{i}^{\prime}-a_{t}^{ \prime}=\deg(\phi(w))-a_{t}=\deg(w)-1-a_{max}=\sum_{i=1}^{m}a_{i}-a_{r}-a_{j}=s.\]
Also,
\[a^{\prime}_{max}\coloneqq\max(\{a^{\prime}_{1},\dots,a^{\prime}_{m}\}\setminus\{a^ {\prime}_{t}\})=a_{max}\quad\text{and}\quad a^{\prime}_{t}=a_{t}=a_{max}=a^{ \prime}_{max}.\]
Thus, \(\phi(w)\in\mathcal{G}(I(G)^{(s)})\).
We need to show that \(\phi(w)=w/x_{j}\in\mathcal{G}(I_{G\setminus H,s})\). Again, it suffices to verify that \(\prod_{i=r+1}^{m}x_{i}\) divides \(\phi(w)\). Arguing by contradiction, suppose that \(\prod_{i=r+1}^{m}x_{i}\) does not divide \(\phi(w)\). As above, \(w=x_{r}v\) for some \(v\in\mathcal{G}(L_{2})\) and \(\prod_{i=r+1}^{m}x_{i}\) divides \(v\) and hence \(w\). Since \(\phi(w)=w/x_{j}\), it follows that \(j\geq r+1\) and \(a_{j}=1\). Thus, since \(t>j\) and \(a_{t}=a_{max}=a_{j}\), we have \(t>r+1\) and \(1=a_{t}=a_{j}=a_{max}\). This implies that \(w\) is a square-free monomial. Further, since the exponents in \(\phi(w)\) of the variables in \(A^{\prime}\) sum to \(s\) and \(a^{\prime}_{t}=a_{t}=1\), \(\deg(\phi(w))=s+1\). By the choice of \(j\) and \(t\), this implies that for all \(1\leq i<r\) we have \(a_{i}\neq a_{max}\). Since \(w\) is square-free, we must have \(a_{i}=0\) for \(1\leq i<r\). But then \(w\) is square-free and divisible by \(x_{r}\) and \(\prod_{i=r+1}^{m}x_{i}\), and so \(w=\prod_{i=r}^{m}x_{i}\). Thus \(\deg(\phi(w))=(m-r+1)-1=m-r\), and so
\[m-r=s+1\implies m-s-1=r,\]
a contradiction to the assumption that \(m-s-1\neq r\). We conclude that \(\prod_{i=r+1}^{m}x_{i}\) divides \(\phi(w)\), and so \(\phi(w)\in\mathcal{G}(I_{G\setminus H,s})\). Again, since \(x_{r}\mid\phi(w)\), we have that \(\phi(w)\) is not in \(\mathcal{G}(L_{2})\) and thus \(\phi(w)\in\mathcal{G}(L_{1})\).
We now show that these maps define an E-K splitting. It is easy to verify the first condition by checking that \(\operatorname{lcm}(\phi(w),\varphi(w))=w\) when \(w\in\mathcal{G}(L_{1}\cap L_{2})\). To check the second condition, we need to verify that for every subset \(S\subset\mathcal{G}(J\cap K)\), both \(\operatorname{lcm}(\phi(S))\) and \(\operatorname{lcm}(\varphi(S))\) strictly divide \(\operatorname{lcm}(S)\). To this end, let \(S\subset\mathcal{G}(L_{1}\cap L_{2})\). Clearly, \(\operatorname{lcm}(\phi(S))\) and \(\operatorname{lcm}(\varphi(S))\) both divide \(\operatorname{lcm}(S)\). Since \(x_{r}\mid\operatorname{lcm}(S)\) and \(x_{r}\nmid\operatorname{lcm}(\varphi(S))\), we cannot have \(\operatorname{lcm}(\varphi(S))=\operatorname{lcm}(S)\), so \(\operatorname{lcm}(\varphi(S))\) strictly divides \(\operatorname{lcm}(S)\), as required.
To show \(\operatorname{lcm}(\phi(S))\neq\operatorname{lcm}(S)\), let \(a\) be the maximum exponent of any variable in any monomial in \(S\). Let \(i_{0}\) be the smallest index such that \(x_{i_{0}}^{a}\) divides at least one \(w\in S\). By definition of \(\phi\), \(x_{i_{0}}^{a}\nmid\phi(w)\). If there exists any \(w^{\prime}\in S\) such that \(x_{i_{0}}^{a}\mid\phi(w^{\prime})\), then either there exists an exponent in \(w^{\prime}\) that is larger than \(a\), or there exists an index \(i_{1}<i_{0}\) such that \(x_{i_{1}}^{a}\mid w^{\prime}\). Both are clear contradictions. Thus, \(x_{i_{0}}^{a}\nmid\operatorname{lcm}(\phi(S))\), yet it clearly divides \(\operatorname{lcm}(S)\), showing that \(\operatorname{lcm}(\phi(S))\neq\operatorname{lcm}(S)\) and completing the proof.
**Example 3.6**.: _The assumption that \(r\neq m-s-1\) is necessary in Theorem 3.5. For example, consider \(m=5,s=2\) and \(r=2\). By definition, \(x_{3}x_{4}x_{5}\in\mathcal{G}(L_{2})\), and so \(w=x_{2}x_{3}x_{4}x_{5}\in\mathcal{G}(L_{1}\cap L_{2})\). Here, \(a_{max}=1\). The smallest index \(j\in\{1,\dots,5\}\setminus\{2\}\) such that \(a_{j}=a_{max}\) is 3. The smallest index \(t\in\{1,\dots,5\}\setminus\{2,3\}\) such that \(a_{t}=a_{max}\) is 4. Note that \(a_{5}=a_{max}\) and \(5\neq j,t,r\). Thus, \(w/x_{j}=w/x_{3}=x_{2}x_{4}x_{5}\) is not in \(\mathcal{G}(I_{G\setminus H,s})\) as it is not divisible by \(x_{3}x_{4}x_{5}\)._
**Corollary 3.7**.: _If \(m\geq 3,s\geq 2,G=K_{m}\) and \(H=K_{m-1}\), then \(I(G)^{(s)}=I_{H,s}+I_{G\setminus H,s}\) is an E-K splitting._
Proof.: By definition, \(I(K_{m})^{(s)}=I(G)^{(s)}=I_{K_{m}\setminus K_{m},s}\). Therefore, by Theorem 3.5 with \(r=m\), we have that \(I_{K_{m}\setminus K_{m},s}=L_{1}+L_{2}\) is an E-K splitting where
\[L_{1}=I_{K_{m}\setminus K_{m},s}\cap\langle x_{m}\rangle=I_{K_{m}\setminus K_{m -1},s}\ \ \text{and}\ \ L_{2}=I_{K_{m-1}\setminus K_{m-1},s}=I(K_{m-1})^{(s)}=I_{K_{m-1},s}.\]
**Remark 3.8**.: _With these results, if \(m-s-1<0\), then we may repeatedly apply the splitting defined in Theorem 3.5 to define a function that gives the graded Betti numbers of \(I(G)^{(s)}\). In fact, it is an important observation that \(L_{1}\) from Theorem 3.5 is actually of the form \(I_{K_{m}\setminus K_{r-1},s}\), since \(I_{K_{m}\setminus K_{r-1},s}=I_{K_{m}\setminus K_{r},s}\cap\langle x_{r}\rangle\), which allows the theorem to be iteratively applied to each subsequent \(L_{1}\). Each step of this iteration applies Theorem 3.5 to \(I_{K_{m}\setminus K_{r},s}\) for decreasing \(r\), and terminates with \(I_{K_{m}\setminus K_{0},s}\)._
### Graded Betti Numbers Of Symbolic Powers Of \(I(K_{2})\) and \(I(K_{3})\)
We now use splittings and induction to determine the graded Betti numbers for the \(s\)-th symbolic powers of the edge ideal of \(K_{3}\). We begin with the following observation.
**Lemma 3.9**.: _If \(I=I(K_{2})\subset R=k[x_{1},x_{2}]\) and \(s\geq 2\), then \(\beta_{1,2s}(I_{K_{2},s})=1\) and \(\beta_{i,j}(I_{K_{2},s})=0\) for \((i,j)\neq(1,2s)\)._
Proof.: Notice that \(I^{(s)}=\langle x_{1}^{s}x_{2}^{s}\rangle=I^{s}\). It is straightforward to see that a minimal graded free resolution of \(I^{(s)}\) (over any field \(k\)) is given by \(0\to R(-2s)\to R\to R/I^{(s)}\to 0\).
**Theorem 3.10**.: _If \(i,j\in\mathbb{Z}^{+}\) and \(s\geq 1\), then we have:_
\[\beta_{1,\frac{3s}{2}}(I(K_{3})^{(s)})=1;\ \ \beta_{2,\frac{3s+3}{2}}(I(K_{3})^{(s )})=2;\ \ \beta_{1,j}(I(K_{3})^{(s)})=3\ \ \text{if}\ \frac{3s+1}{2}\leq j\leq 2s;\]
\[\beta_{2,j}(I(K_{3})^{(s)})=3\ \ \text{if}\ \frac{3s+4}{2}\leq j\leq 2s+1;\ \ \text{and}\ \ \beta_{i,j}(I(K_{3})^{(s)})=0\ \ \text{otherwise}.\]
Proof.: We induct on \(s\). The result is clear for \(s=1\) and \(s=2\) via a computation using Macaulay2. Fix \(s>2\) and suppose the function holds for all positive integers \(s^{\prime}<s\). By Corollary 3.7, there is an E-K splitting of \(I(K_{3})^{(s)}=I_{K_{2},s}+I_{K_{3}\setminus K_{2},s}\), and by Lemma 2.3 we can write
\[\beta_{i,j}(I(K_{3})^{(s)})=\beta_{i,j}(I_{K_{2},s})+\beta_{i,j}(I_{K_{3} \setminus K_{2},s})+\beta_{i-1,j}(I_{K_{2},s}\cap I_{K_{3}\setminus K_{2},s}).\]
Using Lemma 3.4, we know that \(I_{K_{2},s}\cap I_{K_{3}\setminus K_{2},s}=x_{3}I_{K_{2},s}\), and by Lemma 2.5, \(\beta_{i-1,j}(x_{3}I_{K_{2},s})=\beta_{i-1,j-1}(I_{K_{2},s})\). Therefore,
\[\beta_{i,j}(I(K_{3})^{(s)})=\beta_{i,j}(I_{K_{2},s})+\beta_{i,j}(I_{K_{3} \setminus K_{2},s})+\beta_{i-1,j-1}(I_{K_{2},s}).\]
We now write \(\beta_{i,j}(I_{K_{3}\setminus K_{2},s})\) in terms of the known graded Betti numbers coming from the induction. Let \(L_{1}\) and \(L_{2}\) be as in Theorem 3.5 with \(m=3\) and \(r=2\). Then \(I_{K_{3}\setminus K_{2},s}=L_{1}+L_{2}\) is an E-K splitting, and therefore
\[\beta_{i,j}(I_{K_{3}\setminus K_{2},s})=\beta_{i,j}(L_{1})+\beta_{i,j}(L_{2})+ \beta_{i-1,j}(L_{1}\cap L_{2}).\]
From the proof of Theorem 3.5, \(L_{1}\cap L_{2}=x_{2}L_{2}\) so that \(\beta_{i-1,j}(L_{1}\cap L_{2})=\beta_{i-1,j-1}(L_{2})\). By a change of coordinates, we have that \(\beta_{i,j}(L_{2})=\beta_{i,j}(I_{K_{2}\setminus K_{1},s})=\beta_{i,j}(I_{K_{2 },s})\). Therefore
\[\beta_{i,j}(I_{K_{3}\setminus K_{2},s})=\beta_{i,j}(L_{1})+\beta_{i,j}(I_{K_{2 },s})+\beta_{i-1,j-1}(I_{K_{2},s}).\]
It remains to show that \(\beta_{i,j}(L_{1})\) can be computed using what is known from the induction. We require one more E-K splitting. By definition, \(L_{1}=I_{K_{3}\setminus K_{1},s}\), so we can apply Theorem 3.5 one more time (using \(m=3\) and \(r=1\)) to get a splitting \(L_{1}=L_{1}^{\prime}+L_{2}^{\prime}\), where \(L_{1}^{\prime}=I(K_{3})^{(s)}\cap\langle x_{1}x_{2}x_{3}\rangle\). This yields
\[\beta_{i,j}(L_{1})=\beta_{i,j}(L_{1}^{\prime})+\beta_{i,j}(L_{2}^{\prime})+ \beta_{i-1,j}(L_{1}^{\prime}\cap L_{2}^{\prime}).\]
Using the same observations as before, notice that \(\beta_{i,j}(L_{2}^{\prime})=\beta_{i,j}(I_{K_{2}\setminus K_{0},s})=\beta_{i,j}(I_{K_{2},s})\) and \(\beta_{i-1,j}(L_{1}^{\prime}\cap L_{2}^{\prime})=\beta_{i-1,j-1}(I_{K_{2},s})\). It is not difficult to see that \(L_{1}^{\prime}=x_{1}x_{2}x_{3}I(K_{3})^{(s-2)}\) so that \(\beta_{i,j}(L_{1}^{\prime})=\beta_{i,j-3}(I(K_{3})^{(s-2)})\).
Overall we have shown that
\[\beta_{i,j}(I(K_{3})^{(s)})=3\beta_{i,j}(I_{K_{2},s})+3\beta_{i-1,j-1}(I_{K_{2 },s})+\beta_{i,j-3}(I(K_{3})^{(s-2)}).\]
By Lemma 3.9, we know that \(\beta_{i,j}(I_{K_{2},s})=1\) if \(i=1\), \(j=2s\) and \(0\) otherwise. The result follows by induction.
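The final recursion \(\beta_{i,j}(I(K_{3})^{(s)})=3\beta_{i,j}(I_{K_{2},s})+3\beta_{i-1,j-1}(I_{K_{2},s})+\beta_{i,j-3}(I(K_{3})^{(s-2)})\) can also be compared numerically with the closed formulas of the theorem. The Python sketch below (a consistency check of ours, using the \(s=1,2\) values of Theorem 3.10 as base cases) does this for small \(s\).

```python
def closed_form(s, i, j):
    """Graded Betti numbers of I(K_3)^(s) as stated in Theorem 3.10."""
    if i == 1 and 2 * j == 3 * s:
        return 1
    if i == 2 and 2 * j == 3 * s + 3:
        return 2
    if i == 1 and 3 * s + 1 <= 2 * j <= 4 * s:
        return 3
    if i == 2 and 3 * s + 4 <= 2 * j <= 4 * s + 2:
        return 3
    return 0

def recursion(s, i, j):
    """The recursion from the proof; beta(I_{K_2,s}) is 1 exactly at (1, 2s) by Lemma 3.9."""
    if s <= 2:
        return closed_form(s, i, j)                  # base cases s = 1, 2
    k2 = 1 if (i, j) == (1, 2 * s) else 0
    k2_shifted = 1 if (i - 1, j - 1) == (1, 2 * s) else 0
    lower = recursion(s - 2, i, j - 3) if j > 3 else 0
    return 3 * k2 + 3 * k2_shifted + lower

for s in range(1, 21):
    for i in (1, 2, 3):
        for j in range(1, 2 * s + 4):
            assert recursion(s, i, j) == closed_form(s, i, j), (s, i, j)
print("recursion agrees with Theorem 3.10 for s <= 20")
```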
**Remark 3.11**.: _We relied on the reduction to \(L_{1}^{\prime}=x_{1}x_{2}x_{3}I(K_{3})^{(s-2)}\) in the proof of Theorem 3.10, and this is why 3 splittings are needed in the proof. See Lemma 3.12 for a generalization._
### Graded Betti Numbers Of Symbolic Powers Of \(I(K_{m})\) In General
In Theorem 3.10, we used the fact that \(I(K_{3})^{(s)}\cap\langle x_{1}x_{2}x_{3}\rangle=x_{1}x_{2}x_{3}I(K_{3})^{(s-2)}\). The reader might wonder why we actually needed 3 splittings in the theorem. Perhaps the induction could be completed using just 2 splittings, for example, requiring a similar identification for \(L_{1}=I(K_{3})^{(s)}\cap\langle x_{2}x_{3}\rangle\). A look at the generators, however, shows that we need all of the variables present in the second ideal in the intersection to make such an identification. For example, we know that \(\mathcal{G}(I(K_{3})^{(3)})=\{x_{1}^{3}x_{2}^{3},x_{1}^{3}x_{3}^{3},x_{2}^{3}x_{3}^{3},x_{1}^{2}x_{2}^{2}x_{3},x_{1}^{2}x_{2}x_{3}^{2},x_{1}x_{2}^{2}x_{3}^{2}\}\). If we look at the terms for which \(x_{2}x_{3}\) can be factored out, we notice that the generator \(x_{1}^{2}x_{2}^{2}x_{3}\) can be factored as \(x_{2}x_{3}(x_{1}^{2}x_{2})\), but \(x_{1}^{2}x_{2}\) is not a minimal generator for \(I(K_{2})^{(s)}\) or \(I(K_{3})^{(s)}\) with any choice of \(s\). We avoid this issue by only considering generators where each variable \(x_{i}\) can be factored out. In particular, when computing graded Betti numbers for \(I(K_{m})^{(s)}\), one will need to use \(m\) E-K splittings to reduce to the case \(I_{K_{m}\setminus K_{0},s}\) and achieve a similar result to Theorem 3.10.
**Lemma 3.12**.: _We have \(I_{K_{m}\setminus K_{0},s}=I(K_{m})^{(s)}\cap\langle x_{1}\cdots x_{m}\rangle= x_{1}\cdots x_{m}I(K_{m})^{(s-m+1)}\) when \(s\geq m\geq 2\)._
Proof.: This follows from the observation that if \(x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\in\mathcal{G}(I(K_{m})^{(s)})\cap\langle x_{ 1}\cdots x_{m}\rangle\), then \(x_{1}^{a_{1}-1}\cdots x_{m}^{a_{m}-1}\in\mathcal{G}(I(K_{m})^{(s-m+1)})\).
The underlying ideas in the proof of Theorem 3.10 can now be generalized to obtain formulae for the graded Betti numbers for the \(s\)-th symbolic powers of the edge ideal of
\(K_{m}\). However, as \(m\) increases, so does the complexity in writing the formulae. For the sake of concreteness, we provide only the formulae for \(m=4\) below.
**Theorem 3.13**.: _If \(i,j\in\mathbb{Z}^{+}\) and \(s\geq 4\), then we have:_
\[\beta_{i,j}(I(K_{4})^{(s)})=6\ \ \text{if $i=1$, $j=2s$ or $i=3$, $j=2s+2$;}\ \ \beta_{2,2s+1}(I(K_{4})^{(s)})=12;\ \ and\]
\[\beta_{i,j}(I(K_{4})^{(s)})=\beta_{i,j-4}(I(K_{4})^{(s-3)})+4\beta_{i-1,j-4}(I (K_{3})^{(s-2)})+4\beta_{i,j-3}(I(K_{3})^{(s-2)})\ \ \text{otherwise}.\]
While the statement of the formulae for these graded Betti numbers seems cumbersome, they are a direct result of an inductive computation as in Theorem 3.10.
**Remark 3.14**.: _Given any fixed \(m>3\), we are able to inductively compute \(\beta_{i,j}(I(K_{m})^{(s)})\) for any \(s>m-1\). If \(s<m\), then at some point in the induction process \(r=m-s-1\) and we cannot reduce the computation any further, and are left with finitely many Betti numbers to manually compute by other means._
### Minimum Socle Degree Of Symbolic Powers Of Edge Ideals Of Complete Graphs
Having formulae for the graded Betti numbers of the symbolic powers for edge ideals of complete graphs gives us information on related invariants of the ideals. For example, we can obtain the minimum socle degrees which we now discuss. In the following, the dimension of a homogeneous ideal \(I\subseteq R=k[x_{1},\ldots,x_{m}]\) is the Krull dimension of \(R/I\). If \(A=\oplus_{i\geq 0}A_{i}\) is a graded Artinian \(k\)-algebra, then \(\mathfrak{m}=\oplus_{i\geq 1}A_{i}\) is a maximal ideal and we call the ideal quotient \(\text{Socle}(A)=0:\mathfrak{m}=\{r\in A\mid r\mathfrak{m}=0\}\) the **socle** of \(A\). It is a finite-dimensional graded \(k\)-vector space, and we write \(\text{Socle}(A)=\oplus_{i}k(-a_{i})\) where \(a_{i}\in\mathbb{Z}^{+}\) are the **socle degrees** of \(A\). The **minimum socle degree** of \(A\) is the minimum of the \(a_{i}\).
If \(I\) is a homogeneous ideal of \(R\) such that \(R/I\) is Cohen-Macaulay, then we can find a maximal regular sequence \(f_{1},\ldots,f_{n}\) of \(R/I\) where \(f_{i}\) is a homogeneous polynomial of degree \(1\) and \(n=\dim(R/I)\). Let \(\bar{I}=I+\langle f_{1},\ldots,f_{n}\rangle\). In this situation, the Artinian reduction \(A=R/\bar{I}\) is \(0\)-dimensional, and hence Artinian. It is well-known that the socle degrees of \(A\) are related to the back twists at the end of a minimal resolution of \(R/I\). More precisely, let us write the graded minimal free resolution for \(R/I\) as
\[0\to F_{m-1}=\bigoplus_{i}R(-a_{i})\to\cdots\to F_{1}\to R\to R/I\to 0.\]
The last module in the free resolution for \(R/\bar{I}\) is \(F_{m-1}(-n)\). Since it is in position \(m+n-1\) of the free resolution, the socle degrees of \(A\) are \(s_{i}=(a_{i}+n)-(m+n-1)=a_{i}-(m-1)\) by [11, Lemma 1.3]. With a slight abuse of notation, we will say that socle degrees of \(A\) are the socle degrees of \(R/I\). In particular, the minimum socle degree of \(R/I\) is just \(\min_{i}\{s_{i}\}\). This is similar to the setup in [14] and [15] except for the labelling of indices.
**Lemma 3.15**.: _If \(I=I(K_{m})\subset R=k[x_{1},\ldots,x_{m}]\), then \(R/I^{(s)}\) is Cohen-Macaulay of dimension 1._
Proof.: The Cohen-Macaulay property follows from [13, Theorem 3.6] and [18, Theorem 2.1]. It suffices to show that \(R/I\) has dimension \(1\) since \(I^{(s)}\) and \(I\) have the same height. Each vertex cover of \(K_{m}\) involves exactly \(m-1\) vertices and defines a primary component in the primary decomposition of \(I\) by Lemma 2.1. Since \(R/I\) is Cohen-Macaulay, all of the associated primes of \(I\) must have the same height, so it suffices to compute the height of just one of these ideals. One such ideal is \(\langle x_{1},\ldots,x_{m-1}\rangle\) which has height \(m-1\) (so \(R/I\) has dimension \(1\)), proving the result.
As a consequence, we can apply our technique to determine the minimum socle degree of the \(s\)-th symbolic power of an edge ideal of any complete graph for any \(s\geq 2\). For example, the minimum socle degree of \(R/I(K_{2})^{(s)}\) is \(2s-1\), and the minimum socle degree of \(R/I(K_{3})^{(3)}\) is \(4\) since, by Theorem 3.10, \(\beta_{2,6}(I(K_{3})^{(3)})=2,\beta_{1,5}(I(K_{3})^{(3)})=\beta_{1,6}(I(K_{3})^{(3)})=\beta_{2,7}(I(K_{3})^{(3)})=3\) and \(\beta_{i,j}(I(K_{3})^{(3)})=0\) otherwise.
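For instance, the \(R/I(K_{3})^{(3)}\) computation above can be reproduced mechanically from its Betti table (a small sketch of ours, applying the rule \(s_{i}=a_{i}-(m-1)\) quoted above):

```python
# Nonzero graded Betti numbers beta_{i,j}(I(K_3)^(3)) taken from Theorem 3.10
betti = {(1, 5): 3, (1, 6): 3, (2, 6): 2, (2, 7): 3}
m = 3
last = max(i for (i, j) in betti)                    # last step of the resolution
socle_degrees = sorted(j - (m - 1)
                       for (i, j), mult in betti.items() if i == last
                       for _ in range(mult))
print(socle_degrees)                                 # [4, 4, 5, 5, 5]
print("minimum socle degree:", min(socle_degrees))   # 4
```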
## 4. Parallelizations
It is natural to ask if we can determine the graded Betti numbers of edge ideals of graphs obtained by certain graph operations. One such operation is called a graph parallelization, which we now turn our attention to. The notion of a graph parallelization appears in [12, Section 2] in the discussion about polarizations and depolarizations of monomial ideals. We continue to work with undirected finite simple graphs.
**Definition 4.1**.: _Let \(G\) be a graph with vertex set \(\{x_{1},\ldots,x_{m}\}\) and fix \(\alpha=(\alpha_{1},\ldots,\alpha_{m})\in(\mathbb{Z}^{+})^{m}\). The **parallelization** of \(G\) by \(\alpha\), denoted \(G^{\alpha}\), is the graph with vertex set \(V(G^{\alpha})=\{x_{1,1},\ldots,x_{1,\alpha_{1}},\ldots,x_{m,1},\ldots,x_{m, \alpha_{m}}\}\) and edge set \(E(G^{\alpha})=\{\{x_{i,t},x_{j,\ell}\}|\{x_{i},x_{j}\}\in E(G)\}\)._
For example, the graph for \(K_{3}^{(3,1,1)}\) is obtained by duplicating \(x_{1}\) to get the vertices \(x_{1,1},x_{1,2}\) and \(x_{1,3}\) and adding edges between these vertices and \(x_{2,1}\) and \(x_{3,1}\).
In particular, when \(\alpha=(1,\ldots,1)\), we recover the original graph so that \(G^{(1,\ldots,1)}\) is the same as \(G\) (we are identifying \(x_{i,1}=x_{i}\) in general). We will denote the set of vertices of \(G^{\alpha}\) corresponding to the vertex \(x_{i}\in V(G)\) by \(V_{i}\). These are called the **duplications** of \(x_{i}\). The next lemma shows that all minimal vertex covers for \(G^{\alpha}\) come from minimal vertex covers for \(G\) replaced by the appropriate duplications \(V_{i}\). Recall that the **open neighbourhood** of a given set \(V^{\prime}\) of vertices in a graph \(G\), denoted \(N(V^{\prime})\), is the set of all vertices which are adjacent to vertices in \(V^{\prime}\), not including the vertices in \(V^{\prime}\) themselves.
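The construction is straightforward to encode. The Python sketch below (illustrative only; the dictionary-based encoding of \(\alpha\) is ours) builds the parallelization of a graph and reproduces the \(K_{3}^{(3,1,1)}\) example mentioned above.

```python
def parallelization(vertices, edges, alpha):
    """Vertex and edge sets of G^alpha; alpha maps each vertex of G to its number of duplicates."""
    V = [(v, t) for v in vertices for t in range(1, alpha[v] + 1)]
    E = set()
    for (u, w) in edges:
        for t in range(1, alpha[u] + 1):
            for l in range(1, alpha[w] + 1):
                E.add(frozenset({(u, t), (w, l)}))
    return V, E

# K_3 with alpha = (3, 1, 1): the vertex x1 is duplicated three times
V, E = parallelization(["x1", "x2", "x3"],
                       [("x1", "x2"), ("x1", "x3"), ("x2", "x3")],
                       {"x1": 3, "x2": 1, "x3": 1})
print(len(V), len(E))   # 5 vertices and 3 + 3 + 1 = 7 edges
# The duplicates (x1, 1), (x1, 2), (x1, 3) form an independent set, while each of them
# is adjacent to every duplicate of x2 and x3.
```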
**Lemma 4.2**.: _Let \(G\) be a graph on \(m\) vertices and fix \(\alpha\in(\mathbb{Z}^{+})^{m}\). Then any minimal vertex cover for \(G^{\alpha}\) has the form \(\{x_{i_{1},1},\ldots,x_{i_{1},\alpha_{i_{1}}},\ldots x_{i_{r},1}\ldots,x_{i_{r},\alpha_{i_{r}}}\}\) where \(\{x_{i_{1}},\ldots,x_{i_{r}}\}\) is a minimal vertex cover of \(G\)._
Proof.: Let \(S\) be a minimal vertex cover of \(G\) and, without loss of generality, suppose that \(S=\{x_{1},\ldots,x_{n}\}\). Let us define \(S^{\prime}=\{x_{1,1},\ldots,x_{1,\alpha_{1}},\ldots,x_{n,1},\ldots,x_{n, \alpha_{n}}\}\) as the set of vertices of \(G^{\alpha}\) obtained by replacing each vertex \(x_{i}\) in \(S\) by all of its duplicates \(V_{i}\) in \(G^{\alpha}\). We first show that \(S^{\prime}\) is a minimal vertex cover of \(G^{\alpha}\).
Let \(\{x_{i,t},x_{j,\ell}\}\) be any edge of \(G^{\alpha}\). By definition, \(\{x_{i},x_{j}\}\in E(G)\). Since \(S\) is a minimal vertex cover of \(G\), at least one of \(x_{i}\) or \(x_{j}\) is in \(S\). Without loss of generality, let us suppose that \(x_{i}\in S\). Then \(V_{i}\subset S^{\prime}\), and so \(S^{\prime}\) contains a vertex from the edge \(\{x_{i,t},x_{j,\ell}\}\). That is, \(S^{\prime}\) is a vertex cover of \(G^{\alpha}\).
To see that \(S^{\prime}\) is minimal, suppose that there exists \(x_{i,t}\in S^{\prime}\) such that \(S^{\prime}\backslash\{x_{i,t}\}\) is a vertex cover of \(G^{\alpha}\). Then necessarily \(N(x_{i,t})\subseteq S^{\prime}\). By definition of a parallelization, \(N(V_{i})=N(x_{i,t})\). Hence, if \(e\in E(G^{\alpha})\) contains some vertex \(x_{i,\ell}\), then \(S^{\prime}\) contains a vertex of \(e\) other than \(x_{i,\ell}\). That is, \(S^{\prime}\backslash\{V_{i}\}\) is a vertex cover of \(G^{\alpha}\). However, since \(N(V_{i})\subseteq S^{\prime}\), by construction of \(S^{\prime}\) it follows that \(N(x_{i})\subseteq S\). Then \(S\backslash\{x_{i}\}\) is a vertex cover of \(G\), contradicting the minimality of \(S\).
It remains to show that these are the only minimal vertex covers of \(G^{\alpha}\). Let \(C^{\prime}\) be a minimal vertex cover of \(G^{\alpha}\). Note that for any \(x_{i,t}\in C^{\prime}\), there exists an edge \(e\in E(G^{\alpha})\) such that \(e=\{x_{i,t},x_{j,\ell}\}\) (since \(C^{\prime}\) is a minimal vertex cover, \(x_{i,t}\) cannot be an isolated vertex). However, \(\{x_{i,t},x_{j,\ell}\}\in E(G^{\alpha})\) if and only if \(\{x_{i},x_{j}\}\in E(G)\). So for all \(x\in V_{i},y\in V_{j},\{x,y\}\in E(G^{\alpha})\). Furthermore, if there exists some \(V_{i}\) such that \(V_{i}\nsubseteq C^{\prime}\), then by the above, \(N(V_{i})\subseteq C^{\prime}\). Hence, since \(C^{\prime}\) is minimal, \(V_{i}\cap C^{\prime}=\emptyset\) and so either \(V_{i}\subseteq C^{\prime}\) or \(V_{j}\subseteq C^{\prime}\). Thus \(C^{\prime}=V_{1}\cup\cdots\cup V_{t}\) for some labelling of the partite sets. Since \(G\) is an induced subgraph of \(G^{\alpha}\), \(C^{\prime}\) being a minimal vertex cover implies that \(C=\{x_{1},\ldots,x_{t}\}=V(G)\cap C^{\prime}\) is a vertex cover of \(G\), and if \(C\) were not minimal then the minimal vertex cover contained in \(C\) would correspond to a minimal vertex cover of \(G^{\alpha}\) properly contained in \(C^{\prime}\), a contradiction.
**Proposition 4.3**.: _Let \(G\) be a graph with vertex set \(\{x_{1},\ldots,x_{m}\}\) and edge ideal \(I=I(G)\). Fix \(\alpha=(\alpha_{1},\ldots,\alpha_{m})\in(\mathbb{Z}^{+})^{m}\) and denote the edge ideal of \(G^{\alpha}\) by \(I_{\alpha}\). Then_
\[\mathcal{G}(I_{\alpha}^{(s)})=\{x_{1,1}^{e_{1,1}}\cdots x_{1,\alpha_{1}}^{e_{1,\alpha_{1}}}\cdots x_{m,1}^{e_{m,1}}\cdots x_{m,\alpha_{m}}^{e_{m,\alpha_{m}}}\,|\,\sum_{j=1}^{\alpha_{i}}e_{i,j}=a_{i}\text{ for each }i,\text{ where }x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\in\mathcal{G}(I^{(s)})\}.\]
Proof.: The argument is similar to that of Proposition 3.2. We use Lemma 3.1 and Lemma 4.2 to show that the set is generating. Minimality follows from the fact that \(x_{1}^{a_{1}}\cdots x_{m}^{a_{m}}\in\mathcal{G}(I^{(s)})\) and the description of minimal vertex covers.
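As a small illustration of Proposition 4.3 (ours, not part of the paper), the sketch below builds the generating set by distributing each exponent \(a_{i}\) over the \(\alpha_{i}\) duplicated variables and, for the single-edge graph \(K_{2}\) with \(\alpha=(2,1)\), checks the result against a direct brute-force computation of \(\mathcal{G}(I_{\alpha}^{(s)})\) from the vertex-cover criterion of Lemma 3.1.

```python
from itertools import product

def compositions(a, parts):
    """All ways of writing a as an ordered sum of `parts` non-negative integers."""
    if parts == 1:
        yield (a,)
        return
    for first in range(a + 1):
        for rest in compositions(a - first, parts - 1):
            yield (first,) + rest

def gens_parallelized(gens, alpha):
    """The generating set of Proposition 4.3, as exponent vectors on the duplicated variables."""
    out = set()
    for a in gens:
        pieces = [list(compositions(ai, ni)) for ai, ni in zip(a, alpha)]
        for choice in product(*pieces):
            out.add(tuple(e for piece in choice for e in piece))
    return out

def divides(a, b):
    return all(x <= y for x, y in zip(a, b))

# G = K_2, so I(G)^(s) = <x1^s x2^s>; take alpha = (2, 1) and s = 2.
# The variables of G^alpha are ordered (x_{1,1}, x_{1,2}, x_{2,1}).
s, alpha = 2, (2, 1)
P = gens_parallelized({(s, s)}, alpha)

# Brute force: the minimal vertex covers of K_2^(2,1) are {x_{2,1}} and {x_{1,1}, x_{1,2}},
# so e lies in the symbolic power iff e[2] >= s and e[0] + e[1] >= s (Lemma 3.1).
candidates = [e for e in product(range(2 * s + 1), repeat=3)
              if e[2] >= s and e[0] + e[1] >= s]
minimal = {e for e in candidates
           if not any(divides(f, e) and f != e for f in candidates)}
assert minimal == P
print(sorted(P))   # [(0, 2, 2), (1, 1, 2), (2, 0, 2)]
```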
### Graded Betti Numbers Of Parallelizations
The graded Betti numbers of parallelizations for complete graphs can be bounded below using the splittings from Corollary 3.7. This however is a special case of the next result.
**Proposition 4.4**.: _If \(G\) is finite simple graph on \(m\) vertices, \(s\geq 2\), and \(\alpha\in(\mathbb{Z}^{+})^{m}\), then for all \(i,j\geq 1\), \(\beta_{i,j}(I(G^{\alpha})^{(s)})\geq\beta_{i,j}(I(G)^{(s)})\)._
Proof.: Since we can view \(G\) as an induced subgraph of \(G^{\alpha}\), the result follows as a direct consequence of [8, Lemma 4.4] (which is a generalization of the work in [9] for edge ideals).
Proposition 4.4 naturally leads one to try to determine what classes of ideals are obtained by parallelization of graphs \(G\) where the Betti numbers of \(I(G)^{(s)}\) are known. We illustrate this useful direction with complete \(n\)-partite graphs. Recall that the **complete \(n\)-partite graph** with partite sets of size \(a_{1},\ldots,a_{n}\), denoted \(K_{a_{1},\ldots,a_{n}}\), is the graph with vertex set \(V=\{x_{1,1},\ldots,x_{1,a_{1}},\ldots,x_{n,1},\ldots,x_{n,a_{n}}\}\) and edge set \(E=\{\{x_{i,j},x_{\ell,m}\}\,|\,i\neq\ell\}\). In addition, recall that an **independent set** of vertices of a graph \(G\) is a set of vertices in which no two vertices are adjacent.
**Corollary 4.5**.: _If \(K_{a_{1},\ldots,a_{n}}\) is a complete \(n\)-partite graph, then for all \(s\geq 2\) and \(i,j\geq 1\) we have the bound_
\[\beta_{i,j}(I(K_{a_{1},\ldots,a_{n}})^{(s)})\geq\beta_{i,j}(I(K_{n})^{(s)}).\]
Proof.: The result follows by noticing that \(K_{a_{1},\ldots,a_{n}}=K_{n}^{(a_{1},\ldots,a_{n})}\). To see this, observe that the duplicates of each vertex in \(K_{n}^{(a_{1},\ldots,a_{n})}\) form an independent set, and any two of these independent sets have all possible edges between them by definition of a parallelization applied to a complete graph.
As a consequence, the results of Section 3 combined with Corollary 4.5 yields explicit lower bounds for the graded Betti numbers of symbolic powers of edge ideals of complete \(n\)-partite graphs.
Not surprisingly, the bound of Proposition 4.4 is not very effective as the entries in \(\alpha\) grow. A better lower bound would not only depend on \(I(G)^{(s)}\), but also on \(\alpha\). As a motivational example, consider \(I(K_{3})^{(2)}=\langle\epsilon_{1},\epsilon_{2},\epsilon_{3},\epsilon_{4} \rangle=\langle x_{1}^{2}x_{2}^{2},x_{1}^{2}x_{3}^{2},x_{2}^{2}x_{3}^{2},x_{1} x_{2}x_{3}\rangle\). One possible relation on these generators is \(\sigma=x_{3}\epsilon_{1}-x_{1}x_{2}\epsilon_{4}=0\). Now let \(\alpha=(2,2,2)\) and consider \(I(K_{3}^{\alpha})^{(2)}\). Then the relation demonstrated by \(\sigma\) also holds if we substitute any of the variable duplications. One possible choice is using \(x_{3,2}\) instead of \(x_{3,1}=x_{3}\). Now if \(\epsilon_{4}^{\prime}=x_{1}x_{2}x_{3,2}\), then \(\sigma^{\prime}=x_{3,2}\epsilon_{1}-x_{1}x_{2}\epsilon_{4}^{\prime}=0\) also (where \(x_{1}=x_{1,1}\) and \(x_{2}=x_{2,1}\)). The same would hold true if we replaced any of the variables \(x_{i}\) by one of their duplications \(x_{i,j}\). In this way we get groupings of relations corresponding to some choice of the duplicated variables, which is also true for higher syzygies.
We see this more generally. In particular, if \(\sigma\) is a syzygy for the \(i\)-th step of the minimal graded free resolution of \(I(G)^{(s)}\), then it is also a syzygy for the same step of the minimal graded free resolution of \(I(G^{\alpha})^{(s)}\), and substituting any variable in \(\sigma\) with any of its duplicates also yields a syzygy. We can expect \(\prod_{i}\alpha_{i}\) many blocks of such relations, and so we might guess that \(\beta_{i,j}(I(G^{\alpha})^{(s)})\geq(\prod_{i}\alpha_{i})\beta_{i,j}(I(G)^{(s)})\). Another lower bound would instead use blocks of relations coming from disjoint copies of \(G\) in \(G^{\alpha}\) (that is copies of \(G\) which partition the vertex set of \(G^{\alpha}\)). There are exactly \(\min\{\alpha_{i}\}\) many
of these, leading to a weaker bound, but likely one which is easier to prove. Instead, we will conjecture that the stronger bound holds. This bound works over any field since any potential cancellation in positive characteristic would occur both before and after any substitution by duplications.
**Conjecture 4.6**.: _Let \(G\) be a graph on \(m\) vertices and let \(\alpha=(\alpha_{1},\dots,\alpha_{m})\in(\mathbb{Z}^{+})^{m}\). For all \(i,j\in\mathbb{Z}^{+}\) and \(s\geq 2\), we have \(\beta_{i,j}(I(G^{\alpha})^{(s)})\geq(\prod_{i}\alpha_{i})\beta_{i,j}(I(G)^{(s )})\)._
One might worry that the relations might not be part of a minimal generating set for the syzygy. However, we know from Gröbner theory that there are at least \(\binom{\prod_{i}\alpha_{i}}{2}\) relations which generate the syzygy, and the conjecture simply asks that a fraction of these are part of a minimal generating set. In particular, we know how to find generators for syzygies using \(S\)-polynomials (for example by Schreyer's Theorem). Information about \(S\)-polynomials and computing syzygies can be found in [4]. To prove the conjecture, one would need to show that the \(S\)-polynomials corresponding to each choice of duplication are part of a minimal generating set.
|
2309.17271 | Spiral shocks induced in galactic gaseous disk: hydrodynamic
understanding of observational properties of spiral galaxies | We investigate the properties of spiral shocks in a steady, adiabatic,
non-axisymmetric, self-gravitating, mass-outflowing accretion disk around a
compact object. We obtain the accretion-ejection solutions in a gaseous
galactic disk and apply them to the spiral galaxies to investigate the possible
physical connections between some galaxy observational quantities. The
self-gravitating disk potential is considered following Mestel's (1963)
prescription. The spiral shock-induced accretion-ejection solutions are
obtained following the point-wise self-similar approach. We observe that the
self-gravitating disk profoundly affects the dynamics of the spiral structure
of the disk and the properties of the spiral shocks. We find that the
observational dispersion between the pitch angle and shear rate and between the
pitch angle and star formation rate in spiral galaxies contains some important
physical information. There are large differences in star formation rates among
galaxies with similar pitch angles, which may be explained by the different
star formation efficiencies caused by the distinct galactic ambient conditions. | Ramiz Aktar, Li Xue, Li-Xin Zhang, Jing-Yi Luo | 2023-09-29T14:20:36Z | http://arxiv.org/abs/2309.17271v1 | # Spiral shocks induced in galactic gaseous disk: hydrodynamic understanding of observational properties of spiral galaxies
###### Abstract
Context:We investigate the properties of spiral shocks in a steady, adiabatic, non-axisymmetric, self-gravitating, mass-outflowing accretion disk around a compact object.
Aims:We obtain the accretion-ejection solutions in a galactic disk and apply them to the spiral galaxies to investigate the possible physical connections between some galaxy observational quantities.
Methods:The self-gravitating disk potential is considered following Mestel (1963) prescription. The spiral shock induced accretion-ejection solutions are obtained following the point-wise self-similar approach (Aktar et al. 2021).
Results:We observe that the self-gravitating disk profoundly affects the dynamics of the spiral structure of the disk and the properties of the spiral shocks. We find that the observational dispersion between the pitch angle and shear rate and between the pitch angle and star formation rate in spiral galaxies contains some important physical information.
Conclusions: There are large differences in star formation rates among galaxies with similar pitch angles, which may be explained by the different star formation efficiencies caused by the distinct galactic ambient conditions.
## 1 Introduction
The spiral structure is a long-term and fascinating topic in the observational and theoretical study of accretion disks. Observationally, there are many pieces of evidence for the existence of the spiral structure in accretion disks (Steeghs et al. 1997; Neustroffer & Borisov 1998; Pala et al. 2019; Baptista & Wojcikiewicz 2020; Lee et al. 2020). It has become a general consensus that the spiral shock wave induces this spiral structure in the accretion disk. However, the origin of the shock may correspond to many different mechanisms. In theory, Michel (1984) first proposed the spiral shock in accretion disks as an effective angular momentum transfer mechanism. Sawada et al. (1986, 2016) performed two-dimensional hydrodynamic simulations of the Roche lobe overflow in a semi-detached binary system to confirm the formation of the spiral shock and the angular momentum transfer in an accretion disk. From the new millennium onwards, with the progress of enormous computational facilities, more and more three-dimensional simulations, which include the spiral shock, have been investigated for the accretion in a binary system in many different studies (Makita et al. 2000; Molteni et al. 2001; Ju et al. 2016, 2017; Xue et al. 2021).
Though the solutions of numerical simulations are closer to physical reality, the insight provided by simplified models of the intrinsic physical laws still plays an essential role in developing a theory. In the theoretical study of spiral shocks in accretion disks, Spruit (1987) first introduced the radial self-similar simplification for steady accretion flow in an inertial frame. The same simplification has also been adopted in subsequent theoretical studies of accretion disks (e.g., Chakrabarti (1990b); Narayan & Yi (1994)). It is worth mentioning that the Newtonian gravitational potential is a common feature of these studies, since it maintains the mathematical self-consistency of the self-similar solutions at different radii. In addition, Narayan & Yi (1994) pointed out that this kind of radial self-similar solution under the Newtonian potential is only piece-wise valid: it can match simulations only in the middle radial region of accretion disks, where the effects of the inner and outer boundaries are small, even though it can be applied to all available radii mathematically. Following these theoretical studies, we extended the spiral shock model presented by Spruit (1987) and further improved by Chakrabarti (1990b) from a single star in an inertial frame to a binary system in a non-inertial corotating frame, and also included the mass outflow induced by spiral shocks (Aktar et al. 2021). Accordingly, the Newtonian potential was replaced by the Roche potential together with the Coriolis force. This allows us to include the effects of the binary system on the spiral shock in our model, but our self-similar solution degenerates to being only point-wise valid, because the separation of variables can no longer be maintained at different radii.
On the other hand, the existence of shock waves in an axisymmetric accretion flow and their implication has been extensively studied in literature both analytically and numerically (Fukue 1987; Chakrabarti 1989; Lu et al. 1999; Becker & Kazanas 2001; Fukumura & Tsuruta 2004; Chakrabarti & Das 2004; Sarkar & Das 2016; Sarkar et al. 2018; Dhingia et al. 2018, 2019, 2019, 2019). Due to the shock transition, the post-shock matter becomes very dense and hot (known as post-shock corona (PSC), see Aktar et al. (2015)). As a result, a part of the accreting matter is ejected as mass outflow
from the disk due to the excess thermal gradient force across the shock. The accretion-ejection process has been widely investigated based on the shock compression model considering an axisymmetric accretion flow assumption (Chattopadhyay & Das 2007; Das & Chattopadhyay 2008; Kumar & Chattopadhyay 2013; Aktar et al. 2015, 2017, 2019). In the same spirit, Aktar et al. (2021) investigated mass outflow from the disk induced by spiral shock compression in a non-axisymmetric accretion flow.
In another astrophysical field, the spiral structure in galaxies has also been investigated for a long time. The number of spiral arms and pitch angle (PA, the angle between the tangent and azimuthal directions on the spiral arm) are both essential criteria of Hubble's scheme for classifying galaxies (Hubble 1926). Lin & Shu (1964) proposed the famous density wave theory to explain the formation and preservation of spiral arms in galaxies. Woodward (1976) performed a two-dimensional hydrodynamical simulation to demonstrate the mechanism of star formation (SF) in the density wave theory. Elmegreen (1979) proposed that the interstellar matter flows through the spiral density wave, becomes shocked, and then collapses by its self-gravity. Block et al. (1997) studied the spiral arms of M51 and found evidence that the gravitational collapse of the shocked gas triggers the SF in spiral arms.
Inspired by these studies, based on our self-similar model of spiral shocks (Aktar et al. 2021), we are encouraged to investigate the possible correlation between the star formation rate (SFR) and the characteristic quantity of spiral arms, PA, in spiral galaxies. Since the galactic gaseous disk is self-gravitating, our model must be modified to adapt to this new situation (see Section 2) and involve the SF as a special kind of mass outflow from the gaseous disk (see Section 4). Additionally, since the self-gravity of disk depends on the specific disk mass distribution, the self-similar solution of our model would be locally point-wise (radius-wise) valid, which enables us to apply some similar methodologies from our previous work (Aktar et al. 2021).
We organize the paper as follows. In section 2, we present the description of the model and governing equations. In section 3, we discuss the results of our model in detail. In section 4, we apply our model to understand the dispersion between galactic observational quantities. Finally, we draw the concluding remarks in section 5.
## 2 Model Description
We consider a steady, adiabatic, non-axisymmetric accretion flow around a compact star. Here, we assume that the gravitational effect of the disk itself is significant compared to that of the central object. Therefore, we consider a self-gravitating disk in this paper. We also adopt the spiral shock model proposed by Chakrabarti (1990a). In this work, we simultaneously solve the radial and azimuthal components of the momentum equation and assume that the accretion flow is in vertical hydrostatic equilibrium throughout the disk.
### Governing Equations
In this paper, we write the governing equations in cylindrical coordinates on the equatorial plane. The governing equations are as follows
(i) The radial momentum conservation equation:
\[v_{r}\frac{\partial v_{r}}{\partial r}+\frac{v_{\phi}}{r}\frac{\partial v_{r} }{\partial\phi}+\frac{1}{\rho}\frac{\partial P}{\partial r}-\frac{v_{\phi}^{2 }}{r}+\frac{\partial\Phi}{\partial r}=0, \tag{1}\]
(ii) The azimuthal momentum equation:
\[v_{r}\frac{\partial v_{\phi}}{\partial r}+\frac{v_{\phi}}{r}\frac{\partial v _{\phi}}{\partial\phi}+\frac{v_{\phi}v_{r}}{r}+\frac{1}{r\rho}\frac{\partial P }{\partial\phi}=0, \tag{2}\]
(iii) The continuity equation:
\[\frac{\partial}{\partial r}(hv_{r}\rho r)+\frac{\partial}{\partial\phi}(h \rho v_{\phi})=0, \tag{3}\]
and finally
(iv) The vertical pressure balance equation:
\[\frac{1}{\rho}\frac{\partial P}{\partial z}=\left(\frac{\partial\Phi}{ \partial z}\right)_{z<r}, \tag{4}\]
where \(r\), \(\phi\), \(v_{r}\), \(v_{\phi}\), \(P\), \(\rho\), and \(2h\) are the radial coordinate, the azimuthal coordinate, the radial component of the velocity, the azimuthal component of the velocity, the gas pressure, the density of the flow, and the local vertical thickness, respectively. The \(\Phi\) in equations (1) and (4) is the total gravitational potential, due to the compact object present at the center of the disk and to the self-gravity of the disk material. The expression of \(\Phi\) is given in section 2.2. We also use the adiabatic equation of state \(P=K\rho^{\gamma}\), where \(K\) is the measure of the entropy of the flow, \(\gamma=1+\frac{1}{n}\) is the adiabatic index, and \(n\) represents the polytropic index of the flow.
### Self-gravitating disk
In an accretion disk, the gravitational field is generally dominated by the central compact object, but in some cases, the disk's self-gravity can also produce a significant effect. The contribution of gravitational field due to disk depends on the matter distribution through Poisson's equation. For an infinitesimally thin disk, the relation between the surface density of the disk (\(\Sigma(r)\)) and the disk gravitational field (\(\Phi_{d}\)) can be written using the complete elliptic integrals of the first kind (Lodato 2007). The integral form is quite complicated to handle analytically. However, there is a particular simplified relation between \(\Sigma\) and \(\Phi_{d}\) at the disk midplane proposed by Mestel (1963). In this work, we consider the gravitational force due to self-gravitating disk, and is given by
\[\frac{\partial\Phi_{d}}{\partial r}=2\pi G\Sigma(r) \tag{5}\]
where, \(\Sigma=2\rho h\) is the surface density of the disk. Therefore, the total gravitational force in the presence of a self-gravitating disk, as well as the central compact object, is given by
\[\frac{\partial\Phi}{\partial r}=\frac{\partial}{\partial r}(\Phi_{c}+\sigma \Phi_{d})=\frac{GM}{r^{2}}+\sigma\ 2\pi G\Sigma(r) \tag{6}\]
where \(\Phi_{c}\) and \(\Phi_{d}\) are the gravitational potentials due to the compact object at the center of the disk and due to the self-gravity of the disk, respectively. Here, \(G\) is the gravitational constant. We also introduce a constant factor \(\sigma\): \(\sigma=0\) corresponds to a non-self-gravitating disk (Chakrabarti 1990a), while \(\sigma=1\) introduces the effect of self-gravity. In this paper, we use the unit system \(G=M=c=1\) throughout, unless stated otherwise.
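As a quick numerical illustration (not taken from the paper; all parameter values below are arbitrary, dimensionless choices), equation (6) implies a circular velocity \(v_{c}^{2}(r)=r\,\partial\Phi/\partial r=GM/r+2\pi G\Sigma(r)r\); for a Mestel-type surface density \(\Sigma\propto 1/r\) the disk contribution is constant, which is why the rotation curve flattens at large radii.

```python
import numpy as np

# Illustrative rotation curve from equation (6): v_c^2 = G*M/r + 2*pi*G*Sigma(r)*r,
# with a Mestel-type surface density Sigma(r) = Sigma_0 * r_0 / r.
# All numbers below are arbitrary, dimensionless choices (G = M = 1 as in the text).
G, M = 1.0, 1.0
Sigma_0, r_0 = 0.05, 1.0

r = np.linspace(0.5, 50.0, 200)
Sigma = Sigma_0 * r_0 / r
v_c = np.sqrt(G * M / r + 2.0 * np.pi * G * Sigma * r)

# The point-mass term falls off as r**(-1/2) while the disk term 2*pi*G*Sigma_0*r_0
# is constant, so v_c approaches a flat value of about sqrt(2*pi*G*Sigma_0*r_0).
print(v_c[0], v_c[-1], np.sqrt(2.0 * np.pi * G * Sigma_0 * r_0))
```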
It is to be emphasized that, in reality, the gravity torque generated by the spiral arm of a spiral galaxy is inevitable (Block et al. 2002, 2004; Tiret & Combes 2008). However, in the present work, we ignore the effect of the gravity torque of the spiral arm in equation 6. Further, it is to be mentioned that a spiral galactic disk is composed of visible and invisible matter such as gas, stars, dark matter, etc. In our present theoretical model, we assume the galactic disk is predominantly composed of gaseous matter, and the calculation is independent of mass. However, we consider the total mass, visible or invisible, within the radius \(r\) when we derive the observational data from the circular velocity curves of spiral galaxies (see section 4). Therefore, our calculation considers the gravity contribution from the stellar component and invisible mass implicitly.
### Flow equations in spiral coordinates using self-similar conditions
In this work, we transform the conservation equations from cylindrical coordinates to spiral coordinates. The spiral coordinate is defined as \(\psi=\phi+\beta(r)\), where \(\beta(r)\) relates the radial distance to the spirality of the disk. Now, we consider the self-similarity conditions in the spiral coordinate as (Chakrabarti 1990a; Aktar et al. 2021)
\[v_{r} =r^{-1/2}q_{1}(\psi), \tag{7a}\] \[v_{\phi} =r^{-1/2}q_{2}(\psi),\] (7b) \[a =r^{-1/2}q_{3}^{1/2}(\psi),\] (7c) \[\rho =r^{-3/2}q_{\rho}(\psi),\] (7d) \[P =r^{-5/2}q_{\Gamma}(\psi), \tag{7e}\]
and
\[\frac{\partial\beta}{\partial r}=r^{-1}B, \tag{7f}\]
where the 'spirality' is \(B=\tan\theta\), and \(\theta\) is the pitch angle (PA). The measure of entropy \(K\) remains constant along the flow between two consecutive shocks; however, it changes at the shock. Here, \(a\) represents the sound speed of the flow. Using the definition of the sound speed, we calculate the variation of \(K\) as
\[K=r^{3\gamma/2-5/2}K_{0} \tag{8}\]
where \(K_{0}=q_{3}/q_{\rho}^{\gamma-1}\) (Chakrabarti 1990b). The entropy should generally increase inward for accretion and outward for wind. In this paper, we are interested only in the accretion solution. Therefore, we always choose \(\gamma<5/3\) to analyze the accretion flow.
Now we obtain the disk height (\(h\)) from equation (4) as
\[h=\frac{r^{-1/2}q_{3}^{1/2}}{\mathcal{G}} \tag{9}\]
where \(P=\rho a^{2}\) and \(\mathcal{G}=\left(\frac{1}{r}+\sigma\alpha r^{-3/2}\right)^{1/2}\), with \(\alpha=4\pi q_{\rho}\). It is evident that, for the self-gravitating disk, the disk height depends on the density of the material present in the disk. Here, the surface density of the disk can be obtained as \(\Sigma=2\rho h\). In general, if the disk is predominantly dominated by disk gravity with negligible central object mass, the surface density follows the \(\Sigma\sim 1/r\) relation (Bertin & Lodato 1999, 2001). On the other hand, in a real spiral galactic disk, the surface density profile may be completely different, as indicated by equation 9. In our present model, we consider a point-wise self-similar approach to incorporate the spiral coordinate (Aktar et al. 2021). Our self-similarity model is valid point-wise, i.e., within a fixed radial distance (\(r\)). Therefore, it is difficult to infer the radial dependence of flow variables in the present formalism.
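A minimal numerical sketch of equation (9) is given below; it assumes the reading \(\sigma\alpha r^{-3/2}\) for the self-gravity term in \(\mathcal{G}\), with \(\alpha=4\pi q_{\rho}\), and the helper name is ours:

```python
import numpy as np

def disk_half_thickness(q3, q_rho, r, sigma=1.0):
    """Local half-thickness h of Eq. (9) in the point-wise self-similar variables."""
    alpha = 4.0 * np.pi * q_rho                           # alpha = 4*pi*q_rho
    g_cal = np.sqrt(1.0 / r + sigma * alpha * r**-1.5)    # \mathcal{G} of Eq. (9)
    return r**-0.5 * np.sqrt(q3) / g_cal
```

The surface density then follows as \(\Sigma=2\rho h\) with \(\rho=r^{-3/2}q_{\rho}\).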
Therefore, we obtain the dimensionless differential equations for \(q_{1}\), \(q_{2}\) and \(q_{3}\) from equations (1)-(4) using equations (7a)-(7f) and (9), which are given by
\[q_{\omega}\frac{dq_{1}}{d\psi}-\frac{(n_{\rho}+1)}{\gamma}q_{3}+\frac{B}{(\gamma-1)}\frac{dq_{3}}{d\psi}-\frac{q_{1}^{2}}{2}-q_{2}^{2}+1+\frac{\sigma\alpha q_{3}^{1/2}}{\mathcal{G}}=0 \tag{10}\]
\[q_{\omega}\frac{dq_{2}}{d\psi}+\frac{q_{1}q_{2}}{2}+\frac{1}{(\gamma-1)}\frac{dq_{3}}{d\psi}=0, \tag{11}\]
and
\[B\frac{dq_{1}}{d\psi}+\frac{dq_{2}}{d\psi}+\frac{(\gamma+1)q_{\omega}}{2(\gamma-1)q_{3}}\frac{dq_{3}}{d\psi}-\frac{\sigma\alpha r^{-3/2}}{2\mathcal{G}^{2}}\frac{q_{\omega}}{(\gamma-1)q_{3}}\frac{dq_{3}}{d\psi}-\frac{3}{2}q_{1}+\frac{3}{2}\frac{q_{1}r^{-3}}{\mathcal{G}^{2}}+\frac{3}{4}\frac{q_{1}\sigma\alpha r^{-3/2}}{\mathcal{G}^{2}}=0 \tag{12}\]
where, \(q_{\omega}=q_{2}+Bq_{1}\).
### Sonic point analysis
Here, we obtain the sonic point conditions by eliminating \(\frac{dq_{1}}{d\psi}\) and \(\frac{dq_{2}}{d\psi}\) from equation (12) using equations (10) and (11), which gives
\[\frac{dq_{3}}{d\psi}=\frac{N}{D}, \tag{13}\]
where,
\[N=-\frac{(n_{\rho}+1)Bq_{3}}{\gamma}-\frac{Bq_{1}^{2}}{2}-Bq_{2}^{2}+B+\frac{B\sigma\alpha q_{3}^{1/2}}{\mathcal{G}}+\frac{q_{1}q_{2}}{2}+\frac{3}{2}q_{\omega}q_{1}-\frac{3}{2}\frac{r^{-3}q_{1}q_{\omega}}{\mathcal{G}^{2}}-\frac{3}{4}\frac{\sigma\alpha r^{-3/2}q_{1}q_{\omega}}{\mathcal{G}^{2}} \tag{14}\]
and
\[D=-\frac{B^{2}}{(\gamma-1)}-\frac{1}{(\gamma-1)}+\frac{q_{\omega}^{2}(\gamma+1)}{2(\gamma-1)q_{3}}-\frac{\sigma\alpha r^{-3/2}}{2\mathcal{G}^{2}}\frac{q_{\omega}^{2}}{(\gamma-1)q_{3}}. \tag{15}\]
During the accretion process onto the compact object, the denominator (\(D\)) in equation (13) vanishes on certain surfaces, known as sonic surfaces, \(\psi=\psi_{c}\). Simultaneously, the numerator (\(N\)) also has to vanish at the sonic surfaces to maintain a smooth solution (Chakrabarti 1989). The vanishing of the denominator, \(D=0\), provides the sound speed at the sonic surface as
\[q_{3c}=\frac{q_{\omega}^{2}}{(B^{2}+1)}\frac{\Lambda}{2} \tag{16}\]
where \(\Lambda=\left[(\gamma+1)-\frac{\sigma\alpha r^{-3/2}}{\mathcal{G}^{2}}\right]\).
In the presence of shock, the velocity component perpendicular to the shock is
\[q_{\perp}=\frac{q_{2}+Bq_{1}}{(B^{2}+1)^{1/2}}, \tag{17}\]
and velocity component parallel to the shock is given by
\[q_{\parallel}=\frac{q_{1}-Bq_{2}}{(B^{2}+1)^{1/2}}. \tag{18}\]
The value of the Mach number at the sonic surface is obtained as \(M_{c}=\left(\frac{q_{\perp}}{a}\right)_{c}=\sqrt{\frac{2}{\Lambda}}\) using equations (16) and (17). It is to be noted that the Mach number at the sonic point deviates from that of the axisymmetric vertical equilibrium model for the self-gravitating disk.
On the other hand, the vanishing condition of the numerator, \(N=0\), yields the radial velocity (\(q_{1c}\)) at the sonic surface, which is given by
\[q_{1c}=\frac{-\mathcal{B}\pm\sqrt{\mathcal{B}^{2}-4\mathcal{A}\mathcal{C}}}{2\mathcal{A}} \tag{19}\]
where,
\[\mathcal{A}=-\frac{B^{3}(n_{\rho}+1)}{(B^{2}+1)}\frac{\Lambda}{2\gamma}+B-\frac{3 }{2}\frac{Br^{-3}}{\mathcal{G}^{2}}-\frac{3}{4}\frac{B\sigma\alpha r^{-3/2}}{ \mathcal{G}^{2}}\]
\[\mathcal{B}=-\frac{B^{3}q_{2}(n_{\rho}+1)}{(B^{2}+1)}\frac{\Lambda}{\gamma}+\frac{B^{2}\sigma\alpha}{\mathcal{G}}\left[\frac{\Lambda}{2(B^{2}+1)}\right]^{1/2}+2q_{2}-\frac{3}{2}\frac{r^{-3}q_{2}}{\mathcal{G}^{2}}-\frac{3}{4}\frac{\sigma\alpha r^{-3/2}q_{2}}{\mathcal{G}^{2}}\]
\[\mathcal{C}=-\frac{Bq_{2}^{2}(n_{\rho}+1)}{(B^{2}+1)}\frac{\Lambda}{2\gamma}-Bq_{2}^{2}+B+\frac{B\sigma\alpha q_{2}}{\mathcal{G}}\left[\frac{\Lambda}{2(B^{2}+1)}\right]^{1/2} \tag{20}\]
where the subscript "c" denotes quantities evaluated at the sonic surface. To obtain the derivative \(\frac{dq_{3}}{d\psi}\big|_{c}\) at the sonic surfaces, we apply l'Hôpital's rule to equation (13), in a similar way to Aktar et al. (2021).
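The sonic-surface relations (16)-(17) translate directly into a short routine; the sketch below assumes the \(\sigma\alpha r^{-3/2}\) form of the self-gravity correction and takes \(q_{\omega c}=q_{2c}+Bq_{1c}\) as an input (in practice \(q_{1c}\) comes from the quadratic (19)-(20)):

```python
import numpy as np

def sonic_surface_state(q_omega_c, pitch_deg, q_rho_c, r, gamma=4.0/3.0, sigma=1.0):
    """Sound speed squared q_3c (Eq. 16) and Mach number M_c at the sonic surface."""
    B = np.tan(np.radians(pitch_deg))                 # spirality B = tan(theta)
    alpha = 4.0 * np.pi * q_rho_c
    g_sq = 1.0 / r + sigma * alpha * r**-1.5          # \mathcal{G}^2 of Eq. (9)
    lam = (gamma + 1.0) - sigma * alpha * r**-1.5 / g_sq
    q3c = q_omega_c**2 / (B**2 + 1.0) * lam / 2.0     # Eq. (16)
    mach_c = np.sqrt(2.0 / lam)                       # M_c = sqrt(2 / Lambda)
    return q3c, mach_c
```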
### Computation of mass outflow rate from the disk
The net mass flux can be obtained from equation (3) using the self-similar conditions (equations 7a-7f). One part of the mass flux contributes to the radial inflow mass flux (\(\dot{M}_{\rm in}\)) (i.e., the accretion rate), and another part contributes to the wind flux in the azimuthal direction (Chakrabarti 1990a). It is worth noting that Chakrabarti (1990a) did not consider mass outflow from the disk. In general, if there is no spiral shock in the flow, the wind flux is zero (Chakrabarti 1990a). However, in the presence of spiral shocks, the post-shock matter is very hot and dense. The excess thermal gradient force across the shocks may drive this matter out of the disk as mass outflow along the spiral arm, similarly to the axisymmetric accretion disk model (Aktar et al. 2015, 2017). It is to be mentioned that here we assume a two-dimensional vertical equilibrium model (i.e., 2.5D). Therefore, estimating the mass flux component in the vertical direction is impossible in the present model. However, we argue that the mass flux in the azimuthal direction accumulates in the spiral arm and is ejected away as mass outflow from the disk due to the thermal gradient force across the spiral shock waves. Now, if we consider mass outflow in our model, the non-zero mass flux in the azimuthal direction must be balanced by the mass outflow rate \(\dot{M}_{\rm out}\) to maintain the mass conservation equation (3).
Therefore, the mass accretion rate in the radial direction can be obtained from equation (3) as
\[\dot{M}_{\rm in}=\int_{0}^{2\pi}q_{1}q_{\rho}q_{3}^{1/2}d\psi. \tag{21}\]
On the other hand, the mass outflow rate from the disk is obtained by equating it to the wind flux normal to the spiral shock, so as to preserve the total mass flux in the flow, i.e.,
\[\dot{M}_{\rm out}=\int_{0}^{2\pi}hq_{\rho}q_{\perp}d\psi. \tag{22}\]
Here, the ratio of the mass outflow to inflow rates can be calculated as \(R_{\rm in}=\dot{M}_{\rm out}/\dot{M}_{\rm in}\) (Aktar et al. 2015, 2017, 2021).
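Given sampled flow profiles over one full turn in \(\psi\), the rates (21)-(22) reduce to simple quadratures. The sketch below uses trapezoidal integration and takes the normal velocity component \(q_{\perp}\) of equation (17) as the wind-flux velocity; array names and shapes are illustrative:

```python
import numpy as np

def mass_rates(psi, q1, q_rho, q3, h, q_perp):
    """Radial inflow rate (Eq. 21), wind flux normal to the shock (Eq. 22),
    and their ratio R_in = Mdot_out / Mdot_in, for profiles sampled in psi."""
    mdot_in = np.trapz(q1 * q_rho * np.sqrt(q3), psi)
    mdot_out = np.trapz(h * q_rho * q_perp, psi)
    return mdot_in, mdot_out, mdot_out / mdot_in
```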
### Spiral shock conditions and solution methodology
The spiral shock conditions are given by (Chakrabarti 1990a; Aktar et al. 2021)
(1) The energy conservation:
\[\frac{q_{3+}}{\gamma-1}+\frac{q_{\perp+}^{2}}{2}=\frac{q_{3-}}{\gamma-1}+\frac{q_{\perp-}^{2}}{2} \tag{23a}\]
(2) The momentum conservation:
\[W_{+}+\Sigma_{+}q_{\perp+}^{2}=W_{-}+\Sigma_{-}q_{\perp-}^{2} \tag{23b}\]
(3) The conservation of mass flux normal to the shock:
\[h_{+}\ q_{\rho+}\ q_{\perp+}=h_{-}\ q_{\rho-}\ q_{\perp-} \tag{23c}\]
(4) The conservation of the velocity component parallel to the shock:
\[q_{1+}-Bq_{2+}=q_{1-}-Bq_{2-} \tag{23d}\]
where "\(\pm\)" implies post-shock and pre-shock quantities, respectively. Here, \(W\) represents the vertically integrated gas pressure of the flow (Matsumoto et al. 1984; Chakrabarti 1989). The shock invariant quantity (\(C_{s}\)) is obtained using equations (23a-23c) as
\[C_{s}=\frac{\left[M_{+}(3\gamma-1)+\frac{2}{M_{+}}\right]^{2}}{\left[2+(\gamma-1)M_{+}^{2}\right]}=\frac{\left[M_{-}(3\gamma-1)+\frac{2}{M_{-}}\right]^{2}}{\left[2+(\gamma-1)M_{-}^{2}\right]}. \tag{24}\]
We define the shock strength as \(\mathcal{S}=\frac{M_{+}}{M_{-}}\). The analytical expression for the shock location \(\epsilon\) can be obtained following Chakrabarti (1990a) and Aktar et al. (2021), and is given by
\[\epsilon=\frac{1}{2}+\frac{1}{\delta\psi}\left(\frac{dq_{1}/d\psi}{d^{2}q_{1}/d\psi^{2}}\right)_{c}, \tag{25}\]
where \(\delta\psi=2\pi/n_{s}\) and \(n_{s}\) is the number of shocks in the flow. The second-order derivatives at the sonic surfaces are computed in the same way as in Aktar et al. (2021); we do not reproduce the lengthy expressions here to avoid repetition. Further, we quantify the amount of specific angular momentum (\(\lambda\)) dissipated in the presence of the spiral shocks as
\[\Delta\lambda=\frac{\lambda_{+}}{\lambda_{-}}=\frac{q_{2+}}{q_{2-}}. \tag{26}\]
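Numerically, the post-shock state follows from the invariant (24) by root finding. The sketch below assumes SciPy is available and that the pre-shock Mach number is sufficiently supersonic for a subsonic counterpart to exist; the shock strength then follows from the ratio of the two Mach numbers:

```python
import numpy as np
from scipy.optimize import brentq

def shock_invariant(mach, gamma=4.0/3.0):
    # Eq. (24): C_s as a function of the Mach number.
    return (mach * (3.0 * gamma - 1.0) + 2.0 / mach) ** 2 / (2.0 + (gamma - 1.0) * mach**2)

def subsonic_counterpart(mach_super, gamma=4.0/3.0):
    """Subsonic Mach number sharing the invariant C_s with a supersonic state."""
    target = shock_invariant(mach_super, gamma)
    return brentq(lambda m: shock_invariant(m, gamma) - target, 1.0e-3, 1.0)

# e.g. a supersonic Mach number of 2 pairs with a subsonic value of about 0.49
```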
The classical self-similar solution is a common feature of the Newtonian gravitational potential, and it has been widely investigated in the literature starting with some pioneering works (Spruit 1987; Chakrabarti 1990a; Narayan & Yi 1994). The self-similar conditions make the flow equations dimensionless and independent of position. This approach can be widely applied to various physical situations in accretion physics. However, the classical self-similar approach is unable to incorporate various interesting physical scenarios, such as the non-inertial effects in the co-rotating frame of a binary, a self-gravitating disk, etc. Moreover, numerical simulations also indicate that the self-similar solution is only valid in the middle radial region of the accretion disk, where there is little effect from the inner and outer boundaries (Narayan & Yi 1994). In general, it is pointed out that the self-similar solution is only a local solution under local simplification but not a global solution. Recently, Aktar et al. (2021) considered the self-similar conditions to simplify the calculation and obtain a point-wise valid solution incorporating the physical effects of the companion's gravity, the centrifugal force, and the Coriolis force. Motivated by this, we also adopt point-wise self-similar solutions to investigate spiral shocks in a self-gravitating disk by incorporating the self-gravitating potential in our model.
Here, we adopt the same solution methodology, i.e., the point-wise self-similar approach proposed by Aktar et al. (2021). We first fix the radial distance (\(r\)) of the flow. Then, to obtain the solution, we supply the same input parameters as Chakrabarti (1990a), namely the number of shocks (\(n_{s}\)), the pitch angle (\(\theta\)), the rotational velocity at the sonic surface (\(q_{2c}\)), and the adiabatic index (\(\gamma\)) of the flow. Additionally, we need to supply the density (\(q_{\rho c}\)) at the sonic surface due to the consideration of a self-gravitating disk. We self-consistently determine the shock location (\(\epsilon\)) using equations (24) and (25). We also fix the adiabatic index \(\gamma=4/3\) throughout the paper, unless stated otherwise.
## 3 Results
In a non-axisymmetric accretion flow, the inflowing matter spirals around the compact object. During accretion, the flow might encounter several spiral shock transitions depending on the flow parameters (Chakrabarti 1990a; Aktar et al. 2021). Due to the shock transitions, the flow loses its angular momentum (see equation 26) and enters the central compact object. If the gravitational field due to the matter present in the disk is significant enough, we need to incorporate the self-gravitating effect in the governing equations. Keeping this in mind, we consider the self-gravitating effect in our present formalism. To obtain the solutions, we first need to examine the nature of the sonic surfaces, which can be determined from the quadratic expression for \(\frac{dq_{3}}{d\psi}\big|_{c}\) at the sonic surfaces. The sonic surfaces can be broadly classified into two types: physical (discriminant \(>0\)) and unphysical (discriminant \(<0\)) sonic surfaces. The physical sonic surfaces are further classified into 'saddle type', 'straight line', and 'nodal type' depending on various conditions (see Aktar et al. (2021); Chakrabarti (1990b) for details). In this work, we first identify the saddle-type sonic surfaces by supplying the inflow parameters, namely the pitch angle (\(\theta\)), the rotational velocity at the sonic surface (\(q_{2c}\)) and the density of matter (\(q_{\rho c}\)) at the radial position (\(r\)), respectively (Chakrabarti 1989, 1990a). In order to obtain solutions, we numerically integrate equations (10-12) from the saddle-type sonic surfaces by supplying the flow variables (\(\theta,q_{2c},q_{\rho c},\gamma\)) and using both slopes of \(\frac{dq_{3}}{d\psi}\big|_{c}\) at a particular radial distance (\(r\)) (see Aktar et al. (2021)). During accretion, the flow passes through spiral shocks, and due to the shock compression, a part of the accreting matter emerges from the disk as mass outflow (Aktar et al. 2015, 2017, 2021). Here, the mass outflow rates are calculated using equations (21-22). To begin with, we first investigate the comparison between non self-gravitating and self-gravitating disk accretion flows. For the purpose of comparison, we investigate the Mach number (\(M\)), rotational velocity (\(q_{2}\)), sound speed (\(q_{3}^{1/2}\)), and density of the matter (\(q_{\rho}\)) as functions of the spiral coordinate, respectively. In panel (\(a\)) of Figure 1, we compare the Mach number with the spiral coordinate (\(\psi\)). We observe that the Mach number variation is completely
Figure 1: Comparison of flow variables for the non self-gravitating and self-gravitating disks as functions of the spiral coordinate. Panels (a), (b), (c) and (d) represent the Mach number (\(M\)), rotational velocity (\(q_{2}\)), sound speed or equivalently disk height (\(q_{3}^{1/2}\)), and density of the flow (\(q_{\rho}\)), respectively. Here, we fix the flow parameters (\(\theta,q_{2c},q_{\rho c}\))\(=(50^{\circ},0.10,10^{-4})\). For the calculation of the self-gravitating disk, we fix the radial distance at \(r=25\). See the text for details.
different for the self-gravitating disk compared to the non self-gravitating disk. Moreover, the Mach number at the sonic point (\(M_{c}\)) differs from that of the non self-gravitating disk, as indicated by equations (16-17) and depicted in Figure 1a. In a similar manner, the rotational velocity (\(q_{2}\)) and the sound speed, or equivalently the disk height (\(q_{3}^{1/2}\)), vary significantly in the presence of the self-gravitating disk, as shown in Figures 1b and 1c, respectively. It is evident that, at a particular radial position, the disk height increases with the spiral coordinate for the self-gravitating disk compared to the non self-gravitating disk, as depicted in Figure 1c. On the other hand, we also observe that the density of matter (\(q_{\rho}\)) deviates from that of the non self-gravitating disk even when starting from the same sonic values, as shown in Figure 1d. Here, we fix the flow parameters (\(\theta,q_{2c},q_{\rho c}\)) as \((50^{\circ},0.10,10^{-4})\) at the radial distance \(r=25\). The solid (red) and dashed (black) curves are for the self-gravitating and non self-gravitating disks, respectively. The saddle-type sonic surfaces (\(\psi_{c}\)) are indicated in the figure. It is to be noted that the particular solution shown in Figure 1 does not exhibit spiral shocks.
Now we investigate the comparison of the solution topology between non self-gravitating and self-gravitating disks in Figure 2 for \(n_{s}=2\). During accretion, the inflowing matter passes through a sonic surface (\(\psi_{c}\)) to become supersonic. If the spiral shock conditions (equations 23a - 23d, 24, 25) are satisfied, then the flow makes a discontinuous jump to subsonic flow. Immediately, the flow picks up its velocity and again passes through another sonic surface. The shock transition happens again, and the flow loses its angular momentum. Finally, the matter enters the compact object. The vertical arrows indicate the spiral shock transitions in the flow and the solid (black) circles represent sonic surfaces. Here, the solid (red) and dashed (black) curves represent the self-gravitating and non self-gravitating disks, respectively. Interestingly, we find two spiral shocks (\(n_{s}=2\)) for the self-gravitating disk, although there are no spiral shocks in the non self-gravitating flow for the same inflow parameters. The corresponding sonic surfaces (\(\psi_{1c},\psi_{2c},\psi_{3c}\)) and shock locations (\(\psi_{s1},\psi_{s2}\)) are also indicated in the figure. Here, the shock parameters are (\(\epsilon,\mathcal{S}\)) = (0.2700, 1.9526). In a similar way, we also present the solution topology in the presence
Figure 4: Variation of (a): shock location (\(\epsilon\)), (b): shock strength (\(\mathcal{S}\)), (c): amount of angular momentum dissipation (\(\Delta\lambda\)) across the shock, and (d): mass outflow rate (\(R_{\rm in}\)) in terms of the pitch angle for various flow densities at the sonic surface (\(q_{\rho c}\)). The solid (black), dashed (red), dotted (blue), and dashed-dotted (green) curves are for \(q_{\rho c}=0.0001,0.1,0.2\) and 0.3, respectively. Here, we fix \(q_{2c}=0.60\) at the radial distance \(r=0.01\). See the text for details.
Figure 3: Representation of spiral shock transitions for the number of shocks \(n_{s}=4\) in the presence of a self-gravitating disk. The vertical arrows represent spiral shock transitions in the flow. Here, the flow parameters are (\(\theta,q_{2c},q_{\rho c},r\))= (\(45^{\circ},0.75,10^{-5},20\)). See the text for details.
Figure 2: Comparison of the solution topology for non self-gravitating and self-gravitating disks. The flow variables are (\(\theta,q_{2c},q_{\rho c}\))= (\(60^{\circ},0.86,10^{-4}\)). We also fix \(r=25\). See the text for details.
of a self-gravitating disk when the number of spiral shocks is \(n_{s}=4\), depicted in Figure 3. The corresponding four sonic surfaces \((\psi_{1c},\psi_{2c},\psi_{3c},\psi_{4c})\) and four shock locations \((\psi_{s1},\psi_{s2},\psi_{s3},\psi_{s4})\) are shown in the figure. The corresponding shock parameters are \((\epsilon,\mathcal{S})=(0.3469,2.7238)\). We fix the flow parameters \((\theta,q_{2c},q_{\rho c},r)\) as \((60^{\circ},0.86,10^{-4},25)\) and \((45^{\circ},0.75,10^{-5},20)\) for Figure 2 and Figure 3, respectively.
Further, we examine the overall behavior of the shock properties in terms of the pitch angle, fixing all other flow parameters. In Figure 4a, we show the shock location (\(\epsilon\)) as a function of the pitch angle (\(\theta\)). Here, the solid (black), dashed (red), dotted (blue), and dashed-dotted (green) curves are for different flow densities at the sonic surface, \(q_{\rho c}=0.0001,0.1,0.2\), and \(0.3\), respectively. The corresponding shock strength (\(\mathcal{S}\)) is plotted in Figure 4b. We observe that the shock strength decreases with increasing pitch angle. This implies that a tighter spiral arm exhibits stronger spiral shocks in the flow. Also, the shock strength decreases with increasing density of the flow for a particular pitch angle. Similar trends are observed for the dissipation of angular momentum (\(\Delta\lambda\)), as depicted in Figure 4c. On the other hand, the mass outflow rate (\(R_{\rm in}\)) is plotted in Figure 4d. It is found that the mass outflow rate increases with the pitch angle. This clearly indicates that gaseous matter can escape more easily from the disk due to spiral shocks for a weakly wound spiral arm compared to a tightly wound one. Also, the mass outflow rate is higher for a denser flow than for a less dense flow at a particular pitch angle, due to the availability of more matter at the disk surface. Here we fix the rotational velocity \(q_{2c}=0.6\) at the radial distance \(r=0.01\).
So far, we have compared the solution topology. Now, we investigate the overall parameter space containing spiral shocks. The shock parameter space is spanned by the pitch angle (\(\theta\)) and the rotational velocity (\(q_{2c}\)) at the sonic surface, as shown in Figure 5. Theoretically, the number of spiral shocks lies in the range \(1\leq n_{s}<\infty\) (Spruit 1987). Therefore, we compare the parameter space for a self-gravitating disk for the number of shocks \(n_{s}=2,4,\) and \(10\). Here, we fix the radial distance at \(r=10\) and the density at the sonic surface \(q_{\rho c}=10^{-4}\). We observe that the shock parameter space increases significantly from \(n_{s}=2\) to \(n_{s}=4\), and decreases again for \(n_{s}=10\). There is a clear indication that the parameter space shrinks for a higher number of shocks \(n_{s}\). It also indicates that shock solutions are less probable for a higher number of shocks. Along with that, we also plot the parameter space for the non self-gravitating disk, which is independent of radial position (Chakrabarti 1990a). Here, we choose the number of shocks \(n_{s}=4\). In Figure 5, the solid (black), dotted (red), and dashed (blue) curves are for self-gravitating disks with the number of shocks \(n_{s}=2,4,\) and \(10\), respectively. The corresponding dashed-dotted (green) curve is for the non self-gravitating disk. This parameter space of the non self-gravitating disk is the same as Figure 6 of Chakrabarti (1990a) for the accretion solution (i.e., \(\sigma=0\)).
## 4 Application to spiral galactic gaseous disks
In this section, we apply our model to spiral galactic gaseous disks. The compression due to spiral shocks drives some of the shocked gaseous matter to form stars, while some of it flows directly out of the galactic disk. However, the detailed physical processes of star formation depend on the specific galactic environment. In our present work, inferring the galactic environment and various other physical processes is beyond the scope of the model. Two parameters, namely the PA and the shear rate (SR), both play pivotal roles in spiral galaxy properties. We assume that the spiral shock wave and the spiral arm have the same PA since they are closely associated, although in reality they differ slightly. Therefore, here we consider the PA estimated from galaxy image analysis as the PA of the spiral shock wave. For the SR (\(\Gamma\)), its definition can easily be found in previous works (e.g., Seigar et al. (2005, 2006); Yu & Ho (2019)) and is given as
\[\Gamma=1-\frac{r}{V_{c}}\frac{dV_{c}}{dr}, \tag{27}\]
where \(r\) and \(V_{c}\) are the radial distance and the circular velocity around the galactic center, respectively. In our model, the azimuthal velocity \(v_{\phi}\) is identical to \(V_{c}\) in equation (27). Therefore, replacing \(V_{c}\) with equation (7b) and simplifying, we obtain from equation (27)
\[\Gamma=\frac{3}{2}-k\tan\theta, \tag{28}\]
where \(k\equiv d\ln q_{2}/d\psi\), which is one of the derivatives in our model (see equation 11) and represents the changing rate of the azimuthal velocity along the \(\psi\)-direction. Interestingly, \(\Gamma\) is equal to \(3/2\) if the Keplerian velocity \(V_{c}=\sqrt{GM/r}\) is substituted into equation (27), which corresponds to all of the mass \(M\) being concentrated inside the radius \(r\) (no mass outside \(r\)). Therefore, on the right-hand side of equation (28), \(3/2\) is the Keplerian upper limit of the SR, and the term \(k\tan\theta\) represents the effects of the disk mass and the dark matter halo.
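In practice we use equation (28) in the inverse direction: given an observed PA and SR, the derivative \(k\) follows immediately, e.g. (a trivial helper, written here in Python for illustration):

```python
import numpy as np

def k_from_observables(pitch_deg, shear_rate):
    """Invert Eq. (28): k = (3/2 - Gamma) / tan(theta)."""
    return (1.5 - shear_rate) / np.tan(np.radians(pitch_deg))

# e.g. a pitch angle of 20 degrees with a shear rate of 0.8 gives k of about 1.9
```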
The PA, \(\theta\), can be measured from galaxy images through a discrete Fourier transformation, and the SR, \(\Gamma\), can be estimated by fitting the circular velocity curve (CVC) of the galaxy. Seigar et al. (2005, 2006) supplied a set of measured PAs and SRs for a total of 45 galaxies from near-infrared/optical images and observational CVCs, respectively. Recently, Yu & Ho (2019) also provided a new data set including 79 galaxies, whose PAs were measured from their optical images in the Sloan Digital Sky Survey (SDSS) and whose SRs came from Kalinova et al. (2017) using the CVCs of the Calar Alto Legacy Integral Field Area (CALIFA) survey.
In Figure 6, we represent a plot of \(\theta\) vs. \(\Gamma\), which contains the contours of \(k\) defined in Equation (28) and the data points
Figure 5: Parameter space for different number of spiral shocks (\(n_{s}\)). Here, we fix the radial distance at \(r=10\) and the flow density as \(q_{\rm sc}=10^{-4}\). See the text for details.
of 124 galaxies collected from Seigar et al. (2005, 2006) (green triangles) and Yu & Ho (2019) (blue circles). It can easily be seen that most of the data points (green and blue) are distributed in the area between the two curves of \(k=0.5\) and \(5.0\), and the dispersion of the PA gradually contracts along the contour lines of \(k\) as the SR increases towards the limit of \(3/2\). This contraction reflects the intrinsic physical properties of the dispersion, which can be measured by \(k\) through Equation (28). We can calculate \(k\) from the observed PA and SR of a specific galaxy and fix the derivative \(d\ln q_{2}/d\psi\) in our model to constrain the properties of the spiral shocks coherent with the spiral arms in this galaxy.
Encouraged by this, we continue to analyze the correlation between the SFR and the PA. In order to achieve this goal, we first need to calculate the physical quantities in a proper unit system. Hereafter, we use the unit system \(G=V_{\rm K}=r=1\) instead of \(G=M=c=1\), which is used in the earlier part of this paper, where \(V_{\rm K}\equiv\sqrt{GM/r}\) is the Keplerian velocity at location \(r\) and the mass \(M\) includes all of the visible and invisible mass, such as the gas, dust, stars, and dark matter, inside the radius \(r\). In order to estimate \(V_{\rm K}\) for a particular galaxy, we assume that the stars are approximately rotating with the local Keplerian velocity (it should be noted that the \(v_{\phi}\) in our model is the circular velocity of the gas; because our model is a hydrodynamic model, \(v_{\phi}\) is not necessarily Keplerian and is scaled with the local Keplerian velocity). Then, we can obtain this local stellar rotation velocity at the location \(r\) by interpolation from the CVC of the galaxy. Based on the principal component analysis (PCA) of Kalinova et al. (2017), we can obtain this physical quantity accurately and conveniently from their PCA interpolating formulae. As mentioned earlier, our model solution is point-wise valid, so our analysis is restrained locally. Therefore, the location \(r\) refers to the radial distance from the galaxy's center to where these analyses and measurements occur. In this regard, we estimate \(r\) as the radius of the middle point of the range used in the Fourier analysis for the PA, i.e., \(r=r_{\rm mid}\), which is the most probable measuring location of the PA. Yu & Ho (2019) provided the radial ranges in units of arcsec for their 79 galaxies, and we can obtain the distances from Earth to these galaxies in units of Mpc from the CALIFA data set, so we can finally calculate \(r_{\rm mid}\) in units of parsec, which is listed in Table 1 (column 6) for every galaxy from Yu & Ho (2019). Moreover, we need to fix the number of spiral shocks \(n_{s}\) for our model. Here, we take the Fourier mode used to calculate the pitch angle in Yu & Ho (2019) as the number of shocks (see column 4 of Table 1). Unfortunately, we have not found any observational distances for the 45 galaxies analyzed by Seigar et al. (2005, 2006), so we cannot continue to use their sample for the subsequent analysis in this paper.
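The conversion of the measurement location to a physical radius is a small-angle calculation; the helper below (our notation) turns the mid-point of the Fourier fitting range in arcsec and the galaxy distance in Mpc into \(r_{\rm mid}\) in parsec:

```python
import numpy as np

def r_mid_parsec(r_mid_arcsec, distance_mpc):
    """Small-angle conversion: r_mid [pc] from an angular radius [arcsec]
    and a galaxy distance [Mpc] (1 arcsec at 1 Mpc is about 4.85 pc)."""
    arcsec_to_rad = np.pi / (180.0 * 3600.0)
    return r_mid_arcsec * arcsec_to_rad * distance_mpc * 1.0e6
```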
Next, we continue to estimate the SFR based on our model. As mentioned above in Section 2.5, we can calculate the outflow rate (equation 22), which is regarded as an estimate of the SFR induced by spiral shocks (arms) in our model. However, we are still unable to determine this SFR for each specific galaxy because two parameters (\(q_{2c}\) and \(q_{\rho c}\)) remain undetermined. These two parameters represent the circular velocity and density of the flow at the sonic surface, whose values might depend on the galactic ambient conditions at the shock front, and they cannot be estimated from the observational data available in this paper. We can only explore the parameter space spanned by these two parameters to determine the SFR range for each specific galaxy. In Figure 7, we show the plot of SFR vs. PA. With the SFRs from Catalan-Torrecilla et al. (2015), 79 data points for the galaxies of Yu & Ho (2019) are included with their error bars. The area between the black solid (upper limit) and dotted (lower limit) lines denotes the SFR range estimated by our model, and most of the data points lie within this area, except for several above the upper limit at low PA (the upper limit curve is the connecting line of the maximal estimated SFR of all individual galaxies, and likewise for the lower limit curve). This shows that the parameter space composed of \(q_{2c}\) and \(q_{\rho c}\) can reasonably explain the dispersion among these data points, i.e., the differences in galactic ambient conditions cause the
Figure 6: Theoretical value of \(k\) in terms of pitch angle and shear rate. The blue circles and green triangles are for observational data of Yu & Ho (2019) and Seigar et al. (2005, 2006), respectively. See the text for details.
Figure 7: Model-calculated maximum and minimum star formation rate (SFR) in terms of the pitch angle (\(\theta\)) for 79 spiral galaxies. Black solid and dotted curves are for the maximum and minimum theoretically estimated SFR at SFE=100%. The other coloured solid curves are for the maximum SFR at different SFEs. The solid blue circles represent the observed SFR (SFR\({}^{\rm obs}\)) (Catalán-Torrecilla et al. 2015). See the text for details.
differences in SFRs. This, in turn, shows the reasonableness of our model. The rising trend of SFR with increasing PA is shown by both the upper and lower limit curves, which is a rational theoretical relationship predicted by our model. However, it is difficult to see it reflected in those dispersive data points. The subsequent analysis of our model can reveal the reason for this deviation between theory and observation. The observed pitch angle (\(\theta\)), shear rate (\(\Gamma\)), number of shocks (\(n_{s}\)), local Keplerian velocity (\(V_{\rm K}\)), and disk mid radial distance (\(r_{\rm mid}\)) are shown in Table 1. The other theoretical model parameters, such as \(k\), \(q_{2c}\), and \(q_{\rho c}\), are tabulated in columns (7-9) of Table 1, respectively. We also list the corresponding pre-shock (\(q_{2-}\)) and post-shock (\(q_{2+}\)) rotational velocities in columns 10 and 11, respectively. We find that, due to the star formation, the post-shock velocity is always lower than the pre-shock velocity in all cases. It implies that the gas loses its angular momentum, shifts towards lower angular momentum orbits, and settles down after star formation (see equation 26). The maximum and minimum SFR are shown in column 12 of Table 1 for SFE = 100%. We also tabulate the available observed SFR (SFR\({}^{\rm obs}\)) for the spiral galaxies from Catalan-Torrecilla et al. (2015) in column 13 of Table 1.
By further analysis, more physical insights can be obtained from Figure 7. The dynamics of the outflowing gas from the galaxy's gaseous disk are not included in our model, so we have no way of knowing how the gas leaves, but there are only two possibilities. One is the disk wind, and the other is star formation. Unfortunately, we cannot determine or constrain the individual fractions of disk wind or SFR in the total outflow from the observational data used in this paper. In fact, directly regarding the outflow rate calculated with equation (22) as the SFR is equivalent to assuming that 100% of the compressed outflow gas forms stars. Obviously, it is impossible to achieve a star formation efficiency (SFE) of 100% in reality; thus, we also draw a series of upper limit curves for different SFEs (the other colored curves paralleling the black curve) in Figure 7. This shows the differences in SFEs among these galaxies, which may be caused by the galactic ambient conditions manifested by the parameters \(q_{2c}\) and \(q_{\rho c}\) in our model. We speculate that, compared with the SFR induced by spiral arms, those galaxies close to the 1% curve might have a large disk wind launched from the arms, while those beyond the 100% curve (even including those beyond 50%) might have other, stronger star formation mechanisms. These speculations need further observations and simulations to confirm, but that is beyond the scope of our study in this paper.
## 5 Discussions and Conclusions
In this paper, we consider a non-axisymmetric, inviscid, self-gravitating accretion flow around a compact object. We calculate the spiral shocks in the flow following the prescription of Chakrabarti (1990a). We also adopt the same solution methodology, i.e., a point-wise self-similar approach, based on our earlier work (Aktar et al. 2021). In general, the matter distribution in the disk should be treated via Poisson's equation. However, this is challenging to handle analytically when including the self-gravitating effect. As a result, we consider a simplified relation between the disk surface density and the gravitational potential due to self-gravity (Mestel 1963; Lodato 2007). Our self-gravitating model immediately reduces to the Chakrabarti (1990a) model in the absence of self-gravity, i.e., when \(\sigma=0\) (see equation 6). First, we compare the flow variables of the accretion in terms of the spiral coordinate for non self-gravitating and self-gravitating disks, as shown in Figure 1. We observe that the evolution of the flow variables is completely different in the presence of a self-gravitating disk. In the same spirit, we compare the solution topology in Figure 2. We find that the flow exhibits spiral shocks for the self-gravitating disk even though there is no shock for the non self-gravitating disk with the same set of flow parameters. We also observe that two-shock and four-shock solutions are possible in the presence of a self-gravitating disk (see Figure 2 and Figure 3). Moreover, we observe that the mass outflow rate increases with increasing pitch angle, which indicates that gaseous matter can escape more easily from the disk through spiral shocks for a weakly wound spiral arm, as depicted in Figure 4.
Further, we compare and examine the overall shock parameter space separated by pitch angle (\(\theta\)) and rotational velocity at sonic surface (\(q_{2c}\)) by varying the number of shocks (\(n_{s}\)). We observe that the shock parameter space shrinks with the increase of the number of shocks, shown in Figure 5. Finally, we attempt to calculate SFR for 79 spiral galaxies based on our accretion-ejection model. Interestingly, we observe that mass outflow triggered by spiral shock waves serves as one of the essential physical mechanisms for SFR, depicted in Figure 7. Moreover, our model-calculated SFR is consistent with the observed SFR for various spiral galaxies.
Looking back at the analysis of the PA-SR and SFR-PA correlations, depicted in Figure 6 and Figure 7, both appear as very dispersive relationships in the observational data. We conclude that the dispersion of the data also contains rich physical information, which needs to be extracted by appropriate theoretical models, and ours is a simple attempt in this regard. We also admit that the physical mechanism of star formation is extremely complex. There are various other mechanisms, such as AGN feedback, supernovae, etc., that may trigger star formation in a galaxy (Salome et al. 2016; Padoan et al. 2017; Mukherjee et al. 2018; Cosentino et al. 2022).
In this work, we neglect any dissipation mechanisms in the disk. However, in a realistic situation, various dissipation mechanisms are present in the flow. For a complete scenario, we would need to incorporate the viscous effect along with various cooling mechanisms, which would change the flow dynamics (Chakrabarti & Das 2004; Aktar et al. 2017). Moreover, it has already been pointed out that the dynamics of spiral shocks are significantly affected in the presence of radiative losses in the disk (Spruit 1987). Also, radiative processes are very significant in galactic disks. The present work investigates the spiral shock properties in radiatively inefficient (i.e., adiabatic) galactic disks. On the other hand, gravitational instabilities redistribute the angular momentum in the disk (Binney & Tremaine 1987; Bertin & Lodato 1999). Further, we do not consider the gravity torque effect in our model, which is important in spiral galactic disks (Block et al. 2002, 2004; Tiret & Combes 2008). Moreover, one of the major limitations of the point-wise self-similar approach is that it is impossible to investigate the global radial variations of the flow variables. To analyze this more rigorously, we need a time-dependent simulation study. This kind of study is beyond the scope of the present formalism. We hope to address these issues in the future.
## Acknowledgments
We thank the anonymous referee for very useful comments and suggestions that improved the quality of the paper. The authors also want to express their humble gratitude to Si-Yue Yu for various fruitful discussions during the preparation of the manuscript. The work was supported by the Natural Science Foundation of Fujian Province of China (No. 2023J01008). |
2309.07062 | Large Language Models for Compiler Optimization | We explore the novel application of Large Language Models to code
optimization. We present a 7B-parameter transformer model trained from scratch
to optimize LLVM assembly for code size. The model takes as input unoptimized
assembly and outputs a list of compiler options to best optimize the program.
Crucially, during training, we ask the model to predict the instruction counts
before and after optimization, and the optimized code itself. These auxiliary
learning tasks significantly improve the optimization performance of the model
and improve the model's depth of understanding.
We evaluate on a large suite of test programs. Our approach achieves a 3.0%
improvement in reducing instruction counts over the compiler, outperforming two
state-of-the-art baselines that require thousands of compilations. Furthermore,
the model shows surprisingly strong code reasoning abilities, generating
compilable code 91% of the time and perfectly emulating the output of the
compiler 70% of the time. | Chris Cummins, Volker Seeker, Dejan Grubisic, Mostafa Elhoushi, Youwei Liang, Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Kim Hazelwood, Gabriel Synnaeve, Hugh Leather | 2023-09-11T22:11:46Z | http://arxiv.org/abs/2309.07062v1 | # Large Language Models for Compiler Optimization
###### Abstract
We explore the novel application of Large Language Models to code optimization. We present a 7B-parameter transformer model trained from scratch to optimize LLVM assembly for code size. The model takes as input unoptimized assembly and outputs a list of compiler options to best optimize the program. Crucially, during training, we ask the model to predict the instruction counts before and after optimization, and the optimized code itself. These auxiliary learning tasks significantly improve the optimization performance of the model and improve the model's depth of understanding.
We evaluate on a large suite of test programs. Our approach achieves a 3.0% improvement in reducing instruction counts over the compiler, outperforming two state-of-the-art baselines that require thousands of compilations. Furthermore, the model shows surprisingly strong code reasoning abilities, generating compilable code 91% of the time and perfectly emulating the output of the compiler 70% of the time.
## I Introduction
There is increasing interest in Large Language Models (LLMs) for software engineering domains such as code generation [1, 2, 3, 4, 5, 6, 7, 8, 9], code translation [10, 11, 12], and code testing [13, 14, 15]. Models such as Code Llama [9], Codex [8], and ChatGPT [16] have a good statistical understanding of code and suggest likely completions for unfinished code, making them useful for editing and creating software. However, it appears they have not been trained specifically to optimize code. ChatGPT, for instance, will make minor tweaks to a program such as tagging variables to be stored as registers, and will even attempt more substantial optimizations like vectorization, though it easily gets confused and makes mistakes, frequently resulting in incorrect code.
Prior works on machine learning-guided code optimization have used hand-built features [17, 18, 19], all the way to graph neural networks (GNNs) [20, 21]. However, in all cases, the way the input program is represented to the machine learning algorithm is incomplete, losing some information along the way. For example, MLGO [17] uses numeric features to provide hints for function inlining, but cannot faithfully reproduce the call graph or control flow, etc. ProGraML [21] forms graphs of the program to pass to a GNN, but it excludes the values for constants and some type information which prevents reproducing instructions with fidelity.
In this work, we ask: can Large Language Models learn to optimize code? LLMs can accept source programs, as is, with a complete, lossless representation. Using text as the input and output representation for a machine learning optimizer has desirable properties: text is a universal, portable, and accessible interface, and unlike prior approaches is not specialized to any particular task.
We started our investigation into the code-optimizing power of LLMs by replicating the optimizing transformations present in compilers, targeting the industry standard LLVM [22] compiler. LLVM's optimizer is extremely complex and contains thousands of rules, algorithms, and heuristics in over 1M lines of C++ code. Our expectation was that while LLMs have shown great progress in natural language translation and code generation tasks, they would be incapable of emulating such a complex system. Understanding and applying compiler optimizations require multiple levels of reasoning, arithmetic computation capabilities, and applying complex data structure and graph algorithms, which are capabilities LLMs have shown to lack [23, 24].
We thought this would be a paper about the obvious failings of LLMs that would serve as motivation for future clever ideas to overcome those failings. We were entirely taken by surprise to find that in many cases a sufficiently trained LLM can not only predict the best optimizations to apply to an input code, but it can also directly perform the optimizations without resorting to the compiler at all!
Our approach is simple. We begin with a 7B-parameter LLM architecture, taken from Llama 2 [25], and initialize it from scratch. We then train it on millions of examples of LLVM assembly, coupled with the best compiler options found by a search for each assembly, as well as the resulting assembly from performing those optimizations. From these examples alone the model learns to optimize code with remarkable accuracy.
Our singular contribution is the first application of LLMs to optimizing code. We construct LLMs solely for the purpose of compiler optimization and show that they achieve a single-compile 3.0% improvement in code size reduction over the compiler versus a search-based approach which achieves 5.0% with \(2.5e^{9}\) compilations and versus state-of-the-art ML approaches that cause regressions and require thousands of compilations. We provide auxiliary experiments and code examples to further characterize the potential and limits of LLMs for code reasoning. Overall we find their efficacy remarkable and think that these results will be of interest to the community.
## II Pass Ordering with LLMs
In this work we target compiler pass ordering. The pass ordering task is to select from the set of optimizing transformation passes available in a compiler the list of passes that will produce the best result for a particular input code. Manipulating pass orders has been shown to have a considerable impact on both runtime performance and code size [19, 26].
Machine learning approaches to this task have shown good results previously, but struggle with generalizing across different programs [27]. Previous works usually need to compile new programs tens or hundreds of times to try out different configurations and find out the best-performing option, making them impractical for real-world use. We hypothesized that a large language model with sufficient reasoning power would be able to learn to make good optimization decisions without needing this.
Most prior work on LLMs for code operates on source languages such as Python. Instead, for the pass ordering problem we require reasoning at the lower level of compiler assembly, known as the Intermediate Representation (IR). While there exist curated datasets of source languages for pretraining LLMs (e.g. [28, 29, 30]), compiler IRs do not make up a significant portion of these datasets, and though models like ChatGPT show some promise of understanding, their ability to reason about IR is far inferior to source languages.
We target optimizing LLVM pass orders for code size as in prior works [17, 27], using IR instruction count as an (imperfect) proxy for binary size. The approach is agnostic to the chosen compiler and optimization metric, and we intend to target runtime performance in the future. For now, optimizing for code size simplifies the collection of training data.
### _Prompts_
We present the model with an unoptimized LLVM-IR (such as emitted by the _clang_ frontend) and ask it to produce a list of optimization passes that should be applied to it. Figure 1 shows the format of the input prompt and output text.
In this work, we target LLVM 10 and use the optimization flags from opt. There are 122 optimization passes to choose
|  | n functions | unoptimized instruction count | -Oz instruction count |
| --- | --- | --- | --- |
| AI-SOCO [31] | 8,929 | 97,800 | 47,578 |
| ExeBench [32] | 26,806 | 386,878 | 181,277 |
| POJ-104 [33] | 310 | 8,912 | 4,492 |
| Transcoder [12] | 17,392 | 289,689 | 129,611 |
| CSmith [34] | 33,794 | 647,815 | 138,276 |
| YARPGen [35] | 12,769 | 285,360 | 144,539 |
| Total | 100,000 | 1,716,354 | 645,773 |

Table II: Test data.
|  | n functions | unoptimized instruction count | size on disk | n tokens |
| --- | --- | --- | --- | --- |
| Handwritten | 610,610 | 8,417,799 | 653.5 MB | 214,746,711 |
| Synthetic | 389,390 | 13,775,149 | 352.3 MB | 158,435,151 |
| Total | 1,000,000 | 16,411,249 | 1.0 GB | 373,181,862 |

Table I: Training data. Each LLVM-IR function is autotuned and used to create a (Prompt, Answer) pair. The n tokens column shows the number of tokens when the prompt is encoded using the Llama 2 [25] tokenizer.
Figure 1: Overview of our approach, showing the model input (Prompt) and output (Answer) during training and inference. The prompt contains unoptimized code. The answer contains an optimization pass list, instruction counts, and the optimized code. During inference we generate only the optimization pass list which we feed into the compiler, ensuring that the optimized code is correct.
from and passes can be selected more than once in a single sequence. We also include the 6 meta-flags (-O0, -O1, -O2, -O3, -Oz, and -Os) that may each occur only once per pass list. Pass lists can be any length, though in our experiments we found typically up to 9 passes long, for a combinatorial search space of around \(10^{18}\).
As shown in Figure 1, we also include two auxiliary tasks: i) generating the instruction counts of the code before and after the optimizations are applied and ii) generating the output IR after the optimizations are applied. We hypothesize that these would enable better pass-ordering decisions by forcing a deep understanding of the mechanics of code optimization. We verify this experimentally in Section V-B.
While the model is trained to generate instruction counts and optimized IR, we do not need those auxiliary tasks for deployment. All we need to do is generate the pass list which we then execute using the compiler. We thus sidestep the problems of correctness that plague techniques that require the output of the model to be trustworthy [10, 11, 36, 12].
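Concretely, deployment only requires feeding the generated flags back into opt. The sketch below (Python, with illustrative helper names) applies a predicted pass list using LLVM's legacy pass syntax and counts instructions in the textual IR as a crude proxy for code size; a production setup would use the compiler's own statistics instead:

```python
import subprocess

def apply_pass_list(ir_path, pass_list, out_path):
    """Run opt with a model-predicted pass list, e.g. ["-mem2reg", "-simplifycfg"]."""
    subprocess.run(["opt", "-S", *pass_list, ir_path, "-o", out_path], check=True)

def count_ir_instructions(ir_text):
    """Rough instruction count: non-empty body lines that are not labels or comments."""
    count, in_function = 0, False
    for line in ir_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("define"):
            in_function = True
        elif stripped == "}":
            in_function = False
        elif in_function and stripped and not stripped.startswith(";") and not stripped.endswith(":"):
            count += 1
    return count
```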
### _LLVM-IR Normalization_
We normalize the LLVM-IR that is used for training the LLM using the following rules: we discard comments, debug metadata and attributes, and ensure consistent whitespace by feeding the IR through a custom lexer that retains newlines but standardizes other whitespace and strips indentation. We do this to reduce the length of the LLVM-IR to make maximum use of the limited input size of the LLM (Section III-A). The code in Figure 1 has been processed in this manner.
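A rough sketch of this normalization is shown below; the regular expressions are simplified stand-ins for the custom lexer described above and will not handle every corner of the IR grammar (e.g. semicolons inside string constants):

```python
import re

def normalize_ir(ir_text):
    """Strip comments, metadata and attribute references, and standardize whitespace."""
    out = []
    for line in ir_text.splitlines():
        line = line.split(";", 1)[0]                      # drop comments
        line = re.sub(r",?\s*!\w+\s+!\d+", "", line)      # drop metadata refs, e.g. !dbg !42
        line = re.sub(r"\s#\d+", "", line)                # drop attribute group refs, e.g. #0
        line = re.sub(r"\s+", " ", line).strip()          # standardize whitespace
        if line and not line.startswith(("!", "attributes")):
            out.append(line)                              # keep one statement per line
    return "\n".join(out)
```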
## III The Model
We use the ubiquitous transformer architecture [37]. The transformer is an artificial neural network that employs self-attention over a fixed-size context window.
The input text is first tokenized into words and subword units. These are embedded into continuous vector representations and provided as input to the transformer's encoder, where self-attention mechanisms capture contextual relationships between tokens to encourage the model to understand and process the input text's semantic structure.
The output text is produced by iteratively generating one token at a time. The decoder takes the encoded input along with any previously generated tokens and uses self-attention to predict the next token in the sequence. We greedily sample during decoding to select the most likely token sequence. This process continues until an end-of-sequence token is generated or a predefined maximum length is reached.
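Schematically, greedy decoding amounts to the loop below; the model and tokenizer interfaces are assumed, not those of any particular library:

```python
def greedy_decode(model, tokenizer, prompt, max_new_tokens=2048):
    """Generate one token at a time, always taking the most likely next token."""
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        next_token = int(model(tokens).argmax())   # logits over the vocabulary
        if next_token == tokenizer.eos_id:         # stop at end-of-sequence
            break
        tokens.append(next_token)
    return tokenizer.decode(tokens)
```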
### _Model Architecture_
We use the same model architecture and Byte Pair Encoding (BPE) [38] tokenizer as Llama 2 [25], but train our model from scratch. We use the smallest of the Llama 2 configurations: 32 attention heads, 4,096 hidden dimensions, and 32 layers, for a total of 7B parameters.
The maximum length of a (prompt, answer) pair is defined by the sequence length. In this work, we use a sequence length of 2,048 tokens. The Llama 2 tokenizer achieves an average of 2.02 characters per token when encoding LLVM-IR, so this provides an approximate upper limit on the longest LLVM-IR we can train on at 2KB (since 2KB prompt and 2KB answer \(\approx\) 2,048 tokens).
### _Training Data_
We assembled a large corpus of unoptimized LLVM-IR functions, summarized in Table I. We extracted the functions from datasets of publicly available handwritten C/C++ code and supplemented this with synthetic code generated by C/C++ compiler test generators. In total, our training corpus comprises 1,000,000 deduplicated IR functions, totaling 373M training tokens. We operate at the level of individual IR functions rather than entire modules to maximize the amount of data we can fit inside a 2,048-token sequence length.
To find the list of optimization passes that will produce the smallest instruction count we employ _autotuning_. Our autotuner combines random search and all-to-all results broadcasting between functions, inspired by the work of Liang et al. [20].
Figure 2: Performance on holdout validation set during training. We evaluate performance every 250 training steps (131M train tokens). Parity with -Oz is reached at 393M tokens and peak performance at 10.9B tokens.
For each function we run random search for a fixed amount of time (780 seconds) and then minimize the best pass list by iteratively removing individual randomly chosen passes to see if they contribute to the instruction count. If not, they are discarded. After performing this on each of the functions we aggregate the set of unique best pass lists and broadcast them across all other functions. Thus, if a pass list was found to work well on one function it is tried on all others.
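The per-function search can be sketched as follows, assuming an `instruction_count(ir_path, passes)` helper such as the one in Section II-A; the time budget and maximum pass-list length mirror the values used above, and everything else is illustrative:

```python
import random
import time

def minimize_pass_list(ir_path, passes, instruction_count):
    """Iteratively drop randomly chosen passes that do not help the instruction count."""
    passes, best = list(passes), instruction_count(ir_path, passes)
    improved = True
    while improved:
        improved = False
        for i in random.sample(range(len(passes)), len(passes)):
            trial = passes[:i] + passes[i + 1:]
            count = instruction_count(ir_path, trial)
            if count <= best:                      # pass i contributes nothing; discard it
                passes, best, improved = trial, count, True
                break
    return passes, best

def autotune_function(ir_path, all_passes, instruction_count, budget_s=780, max_len=9):
    """Random search within a fixed time budget, then minimize the best pass list."""
    best_passes, best = ["-Oz"], instruction_count(ir_path, ["-Oz"])
    deadline = time.time() + budget_s
    while time.time() < deadline:
        trial = random.choices(all_passes, k=random.randint(1, max_len))
        count = instruction_count(ir_path, trial)
        if count < best:
            best_passes, best = trial, count
    return minimize_pass_list(ir_path, best_passes, instruction_count)
```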
In total, the autotuner compiled each training program an average of 37,424 times, achieving a 5.8% improvement in instruction count reduction over the baseline fixed pass ordering in the compiler provided by -Oz. For our purposes, this autotuning serves as a gold standard for the optimization of each function. While the instruction count savings discovered by the autotuner are significant, the computational cost to reach these wins was 9,016 CPU days. The goal of this work is to achieve some fraction of the performance of the autotuner using a predictive model that does not require running the compiler thousands of times.
### _Training_
Starting from randomly initialized weights, we trained the model for 30,000 steps on 64 V100s for a total training time of 620 GPU days. We use the AdamW optimizer [40] with \(\beta_{1}\) and \(\beta_{2}\) values of 0.9 and 0.95. We use a cosine learning rate schedule with 1,000 warm-up steps, a peak learning rate of \(1e{-5}\), and a final learning rate of 1/10th of the peak. We used a batch size of 256 and each batch contains 524,288 tokens for a total of 15.7B training tokens. The full 30,000 steps of training is 7.7 epochs (iterations over the training corpus).
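One common way to realize the learning-rate schedule described above is sketched below; the hyperparameter values are those stated in the text:

```python
import math

def learning_rate(step, warmup=1000, total_steps=30000, peak=1e-5, final_frac=0.1):
    """Linear warm-up to the peak, then cosine decay to one tenth of the peak."""
    if step < warmup:
        return peak * step / warmup
    progress = (step - warmup) / (total_steps - warmup)
    floor = peak * final_frac
    return floor + 0.5 * (peak - floor) * (1.0 + math.cos(math.pi * progress))
```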
During training, we evaluated the model on a holdout validation set of 1,000 unseen IRs that were processed in the same manner as the training set. We evaluate every 250 steps.
## IV Evaluation
In this section, we evaluate the ability of the model to generate pass lists for unseen code and to correctly perform optimization.
### _Training Results_
Figure 2 shows the performance during training when evaluated on a holdout validation set of 1,000 unseen LLVM-IR functions. Peak validation performance was achieved by the model at 10.9B training tokens.
At peak performance, the code optimized using model-generated pass sequences contains 4.4% fewer instructions than when optimized using the compiler's built-in pass ordering (-Oz). The autotuner achieves a greater instruction count reduction of 5.6%, but this required 27 million compilations of the validation set. The model makes its predictions without invoking the compiler once.
Figure 2b shows the error of predicted input and output instruction counts. Prediction of instruction counts for unoptimized code rapidly approaches near-perfect accuracy. Prediction of output instruction count proves more challenging, reaching a Mean Average Percentage Error (MAPE) of 5.9%.
Figure 2c evaluates the quality of the generated code using three metrics. The _BLEU_[41] score shows the similarity between the model-generated code and a reference ground-truth code produced by the compiler using the generated pass list. _Code compiles_ is the frequency that model-generated code compiles without error. _Exact match_ tracks the frequency that the model-generated code is a character-by-character match of the compiler-generated code when optimized using the generated pass list (i.e. how many times BLEU=1).
At peak performance, the model achieves an impressive 90.5% rate of generating code that compiles without errors. Furthermore, a BLEU score of 0.952 shows that the model-optimized code closely approximates that of the compiler, and the exact match frequency is 70%. For comparison, a baseline that simply copies the unoptimized code to the output would achieve a BLEU score of 0.531 and an exact match frequency of 0%, demonstrating that significant manipulation of the input code is required to achieve such high scores.
By the end of training, performance on the validation set had plateaued. We use the best-performing checkpoint and switch to a \(100\times\) larger-scale evaluation for the remainder of our experiments.
### _Comparison to State-of-the-Art_
In this experiment, we perform a large-scale evaluation of the LLM's ability to predict pass lists in comparison to baselines.
| | additional compilations | overall improvement |
| --- | --- | --- |
| AutoPhase [39] | 4,600,000 | 1.02% |
| Coreset-NVP [20] | 542,747 | 2.55% |
| Our Approach | 5,721 | 3.52% |

Table IV: Extending the models in Table III with “-Oz backup”. If a model predicts a pass list _other than_ -Oz, it also evaluates -Oz and selects the best. This prevents regressions _w.r.t._ -Oz at the expense of additional compilations.
| | additional compilations | functions improved | functions regressed | instructions saved | instructions regressed | overall improvement |
| --- | --- | --- | --- | --- | --- | --- |
| Autotuner | 2,522,253,069 | 6,764 | 0 | 30,948 | 0 | 5.03% |
| AutoPhase [39] | 4,500,000 | 1,558 | 8,400 | 6,522 | 32,357 | -3.85% |
| Coreset-NVP [20] | 442,747 | 3,985 | 6,072 | 16,064 | 28,405 | -1.88% |
| Our Approach | 0 | 4,136 | 526 | 21,935 | 3,095 | 3.01% |

Table III: Performance of different approaches to pass ordering on a test set of unseen LLVM-IR functions from Table II. All metrics are _w.r.t._ -Oz. _Instructions saved_ is summed over _functions improved_; _instructions regressed_ is summed over _functions regressed_. _Overall improvement_ is the total instruction count savings _w.r.t._ -Oz. The autotuner achieves the best performance but requires 2.5B additional compilations (949 CPU-days). Our approach achieves 60% of the gains of the autotuner without invoking the compiler once.
**Datasets** We aggregate a broad suite of benchmark datasets for evaluation, summarized in Table II. We deduplicate and exclude IR functions identical to those we trained on. Our test data comprises code from a variety of domains including coding competitions (AI-SOCO [31], POJ-104 [33]), compiler test case generators (CSmith [34], YARPGen [35]), and miscellaneous publicly available code (ExeBench [32], Transcoder [12]).
**Baselines** We compare our approach to three baselines: AutoPhase [39], Coreset-NVP [20], and the Autotuner.
AutoPhase [39] is a reinforcement learning approach in which an agent is trained using Proximal Policy Optimization [42] to select the sequence of optimization passes that will maximize cumulative instruction count savings over a fixed-length episode. At each step, the program being optimized is represented to the agent as a 56-dimensional vector of instruction counts and other properties. We replicate the environment of [39] but use the implementation and expanded training regime from [27] in which the agent is trained for 100,000 episodes. We train the agent on the same data as our language model (Table I) and evaluate agent performance periodically during training on a holdout validation set. As in prior works, we use an action space and episode length of 45.
Coreset-NVP [20] is a technique that combines iterative search with a learned cost model. First, a greedy search is run on 17,500 benchmarks to determine a _Core set_ of best pass lists. Then a _Neural Value Prediction_ (NVP) model is trained on the results of this search, using ProGraML [21] graphs processed by a Graph Convolutional Network as program representation. At inference, Coreset-NVP predicts the normalized reward and tries the first few pass sequences with the highest normalized reward. The total number of passes it is allowed to try for each benchmark is 45, following prior works. We use author-provided model weights to perform inference on our test set.

Figure 3: Frequency that passes occur in the pass list for each of the 100,000 test programs (left), and the length of pass lists (right). -Oz is the starting point for the autotuner and is the dominant result, being the best-found result for 93.2% of autotuned test programs and appearing in an additional 0.6% of pass lists as part of a longer sequence. The model-generated pass distribution tracks the autotuner but slightly overpredicts -Oz (94.3%) and includes 9 passes that the autotuner used on the training set but not on the test set. Results are ordered by decreasing autotuner frequency.
Finally, we compare against the autotuner that we used to generate training data. We autotuned the test dataset in the same manner as the training data, as described in Section III-B.
**Results** Table III summarizes the results. Our approach outperforms -Oz, AutoPhase, and Coreset-NVP across all datasets. Overall, the thousands of optimization attempts that are afforded to the autotuner enable it to discover the best-performing pass lists.
AutoPhase and Coreset-NVP are both able to identify pass lists that outperform -Oz but have an overall net negative impact on instruction count due to a large number of regressions. We propose a simple "-Oz backup" extension to overcome this: if a model predicts a pass list _other than_ -Oz, we also run -Oz and select the best of the two options. This prevents regressions _w.r.t._ -Oz, but increases the number of additional compilations by the number of times the model predicts a pass list other than -Oz. Table IV shows the results of the techniques when evaluated in this manner. While this does not help the models find further improvements, the lack of regressions means that AutoPhase and Coreset-NVP now achieve overall improvements over -Oz, though still less than the LLM with or without the -Oz backup.
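For clarity, the backup rule amounts to the following sketch; `instruction_count` is a hypothetical compile-and-count helper, not an API from the paper's toolchain.

```python
def with_oz_backup(function, predicted_pass_list, instruction_count):
    """Select the better of the model-predicted pass list and plain -Oz.

    `instruction_count(function, pass_list)` stands in for compiling
    `function` with `pass_list` and counting instructions.
    """
    if predicted_pass_list == ["-Oz"]:
        return ["-Oz"]                                    # nothing extra to evaluate
    predicted = instruction_count(function, predicted_pass_list)
    baseline = instruction_count(function, ["-Oz"])       # one extra compilation
    return predicted_pass_list if predicted < baseline else ["-Oz"]
```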
### _Evaluation of Generated Pass Lists_
Figure 3 shows the frequency with which passes are selected by the autotuner and our model from the previous experiment. The distribution of passes selected by the model broadly tracks the autotuner. -Oz is the most frequently optimal pass. Excluding -Oz, model-generated pass lists have an average length of 3.4 (max 10), and autotuner pass lists have an average length of 3.1 (max 9). 105 of the pass lists generated by the model never appear in the training data.
In 710 cases the model-generated pass lists outperform the autotuner on the test set, though improvements are typically small. Listing 1 shows an example where the model-generated pass list simplifies control flow to fewer blocks, saving one further instruction.
Figure 4 breaks down the improvement of each approach to pass ordering by benchmark dataset. The biggest improvements over -Oz are found in the POJ-104 and Transcoder datasets, which both aggregate large amounts of handwritten code, while YARPGen, a random program generator for testing compilers, has the fewest opportunities for improving over -Oz.
We discovered that there is a strong correlation between the input program size and the potential performance improvement over -Oz that is found by both the autotuner and the model. Figure 5 plots this trend, showing clearly that larger programs have more opportunities to improve over -Oz.
### _Evaluation of Generated Code_
In this section, we evaluate the quality of model-generated code. To do this we ran the auxiliary training task of generating optimized code for all 100k functions in the test set. Note that this is not required to generate the pass lists evaluated in the previous section. We have made minor edits to the code samples in this section for brevity such as omitting superfluous statements and shortening identifier names.
In 90.3% of cases, the model-generated optimized IR compiles, and in 68.4% of cases the output IR matches character-for-character the ground truth generated by the compiler. We taxonomize the different classes of errors for the 9.7% of cases where the generated IR does not compile in Table V, and Listing 2 provides code examples.
| error category | \(n\) |
| --- | --- |
| type error | 5,777 |
| instruction forward referenced | 1,521 |
| undefined value | 1,113 |
| invalid redefinition | 616 |
| syntax error | 280 |
| invalid value for constant | 144 |
| undefined function | 112 |
| index error | 98 |
| other | 83 |
| Total | 9,744 |

Table V: Compiler errors of model-optimized code on 100,000 unseen inputs.
Figure 4: Improvement over -Oz by dataset. Handwritten code optimizes more.
Figure 5: Improvement over -Oz by input size. Larger codes optimize more.
error: '%15' defined with type 'i32' but expected 'i1'
%or.cond = or i1 %14, %15
(a) The model defined %15 as an integer but later tried to use it as a bool (_type error_).
error: constant expression type mismatch
@.str = private unnamed_addr constant [493 x i8] c"...492 chars...", align 1
(b) The model omitted a single character when transcribing a 493-character string-literal from the input code (_type error_).
error: floating point constant invalid for type
%1 = tail call i32 @fl(float -0.47799998483256463, float -1.8159999847412109)
(c) LLVM requires exact decimal values for floating-point constants. These model-generated values have repeating decimals in binary so are rejected (_invalid value for constant_).
Listing 3 shows an example in which the model recognizes that the expression can be calculated at compile time but fails to compute the correct value. This type of mathematical reasoning is a known weakness of LLMs [24].
Sometimes the model generates correctly-optimized code but fails to produce the pass list needed to achieve it. Listing 4 shows one such example. A further class of error is when the model makes unsafe optimizations by failing to analyze the input code. Listing 5 shows an example.
We observe an interesting connection between the quality of pass lists and the corresponding optimized code, shown in Figure 6. When the model produces a poor-performing pass list, the quality of the generated code is lower.
## V Additional Experiments
In the previous section, we evaluated the performance of an LLM trained to optimize LLVM-IR for code size. In this section, we build additional models to better understand the properties of LLMs for code optimization. All models use the same architecture and parameters as in Section III.
### _Ablation of Dataset Size_
We ablate the contribution of dataset size by training two additional models and varying the amount of the training data from 50% (500k examples) down to 25% (250k examples) by random dropout. Figure 8 shows progress during the training of the models. For dataset sizes of 50% and 25%, the models begin to overfit the training set after around 8B training tokens. Table VI shows the peak performance of each configuration. With 50% and 25% of the training data, downstream performance falls by 21% and 24%, respectively.
### _Ablation of Code Optimization Task_
We train the model to generate not just a pass list but also the optimized code resulting from this pass list. One may expect this to degrade model performance - not only must it learn to predict good pass lists, but also how to produce correctly optimized code, a more difficult task. In fact, we believe this to be crucial to model performance. By forcing LLMs to learn the semantics of LLVM-IR we enable them to make better optimization decisions.
To ablate this we trained a model to generate only pass lists without the corresponding optimized code. We kept the data mix and all other parameters the same. Figure 8 and Table VI show that without training the model to generate optimized code, downstream performance falls by 16%.
| \(n\) training examples | generate optimized code? | overall improvement |
| --- | --- | --- |
| 1,000,000 | ✓ | 4.95% (—) |
| 500,000 | ✓ | 3.91% (-21%) |
| 250,000 | ✓ | 3.74% (-24%) |
| 1,000,000 | × | 4.15% (-16%) |

Table VI: Ablation experiments. We evaluate the impact of varying training data size and of training the model to generate the optimized code. We train each model for 30k steps and report performance of the best model checkpoint on a holdout validation set of 1,000 unseen IR functions.
Figure 8: Ablating the impact of training data size and the auxiliary co-training task of generating optimized code (denoted _No Aux_). Data size is measured as a number of training examples. The graph shows performance on a holdout validation set during training.
Figure 7: Training a model to predict single optimization passes. The top subplot evaluates the quality of the generated code for the corresponding pass (ordered by BLEU score). The bottom subplot shows the frequency that the corresponding pass contributed to an improvement or regression of instruction count over -Oz.
### _Evaluation of Single Pass Translation_
In previous sections we trained LLMs to orchestrate optimization passes to produce the best-optimized code. In this section, we evaluate the ability of LLMs to emulate the individual optimization passes themselves. For this experiment, the model input is an unoptimized IR and the name of an optimization pass to apply; the output is the IR after applying this pass.
**Dataset** We generate a new dataset for this task using 60 optimization passes and applying them randomly to the programs from Table I. We augment the dataset of unoptimized code with partially optimized code by first running a sequence of randomly selected passes on unoptimized IRs before the desired target pass. We collect 10,000 unique (prompt, answer) examples for each of the 60 passes for a total of 600k examples.
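In outline, the dataset construction can be sketched as below; `apply_passes` is a hypothetical wrapper around the compiler, and the choice of up to three prefix passes is our illustrative assumption rather than the authors' exact recipe.

```python
import random

def make_pass_translation_examples(functions, passes, apply_passes, per_pass=10_000):
    """Build (input IR, pass name, output IR) examples for single-pass translation.

    `apply_passes(ir, pass_list)` stands in for running the compiler and
    returning the resulting IR. Inputs are augmented with partially optimized
    code by first applying a short, randomly chosen pass sequence.
    """
    examples = []
    for target in passes:
        seen = set()
        while len(seen) < per_pass:
            ir = random.choice(functions)
            prefix = random.sample(passes, k=random.randint(0, 3))
            before = apply_passes(ir, prefix)          # partially optimized input
            after = apply_passes(before, [target])     # apply the target pass
            if (before, after) not in seen:
                seen.add((before, after))
                examples.append({"input": before, "pass": target, "output": after})
    return examples
```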
**Model** We trained a new model from scratch on this pass translation dataset. It reached peak performance after 11B training tokens (74 GPU days).
**Results** Figure 7 summarizes model performance. The average BLEU score over all passes is 0.846, with exact character-by-character matches 73.7% of the time and compilable code 82.3% of the time. We also plot the frequency with which each of the optimizations appears in a model-generated pass list from Table III that improved or regressed performance over -Oz. We find no correlation between a pass's code quality metrics and its frequency in generated pass lists.
As can be seen, many passes are learned near-perfectly while others prove more challenging. Of the passes that perform poorly, some hint at simple improvements to the representation while others result from deeper limitations of the model's reasoning. Listing 6(a) shows an example from the -name-anon-globals pass, which is a simple utility pass that renames anonymous global variables using a hash of the module name. Since we do not provide the module name in the prompt, the LLM is forced to hallucinate random values. We will add the module name to prompts to address this.
Listing 6(b) shows an example from the -instcombine pass. This is a complex pass that is implemented in over 4.5k lines of C++ code in LLVM. We see that the model correctly identifies the instructions to combine, but makes an error in data flow analysis and substitutes an incorrect value. This is an important optimization that frequently occurs in pass lists that outperform -Oz. We will explore an active learning approach in which more examples are provided for complex and difficult passes.

Listing 6: Example failures from the pass translation experiment. We combine the model input (red), ground-truth (blue), and model-generated (green) texts into a single unified diff for brevity. Black text is common to all three.

Listing 7: Example of correct generation of optimized IR. The model performed several complex optimizations including control-flow simplification and replacing if-then-else code blocks with instructions.
Finally, we present an example of correct model optimization in Listing 7. The example combines several non-trivial code manipulations: register allocation, control flow graph simplification, and instruction combining. We visualize the control- and data-flow graphs to help interpret the changes that the model made. Even on the scale of these small IR functions, we find the sophisticated grasp of LLVM-IR semantics demonstrated by the LLM remarkable. The model has learned to perform these optimizations entirely from examples, without access to the compiler implementation.
## VI Discussion
We have shown that LLMs can near-perfectly emulate many compiler optimizations and outperform prior approaches, but there are limitations. This section aims to provide a pragmatic discussion of limits and directions for future research.
### _Context Window_
The main limitation of LLMs is the limited sequence length of inputs (context window). In this work we target 2k-token context windows and split IRs into individual functions to maximize the amount of code we can fit into the context window. This is undesirable for a number of reasons. First, it limits the context available to the model when making optimization decisions; second, it prevents inter-procedural optimization; third, we cannot optimize code that does not fit within the context window. Figure 5 suggests that larger programs have more interesting optimization opportunities.
Researchers are adopting ever-increasing context windows [45], but finite context windows remain a common concern with LLMs. As new techniques for handling long sequences continue to evolve, we plan to incorporate them and apply them to code optimization, e.g. Code Llama's [9] variant of positional interpolation [46], which is RoPE base period scaling, or recent length extrapolation techniques [47].
### _Math Reasoning and Logic_
Compilers perform lots of arithmetic. Whenever possible expressions are evaluated at compile time to minimize work at runtime and to expose further opportunities for optimization. We see examples of LLMs struggling with this type of reasoning, e.g. failed constant folding (Listing 3) and failed data-flow analysis (Listing 6b).
We think that a chain-of-thought approach [48] in which models are taught to decompose complex reasoning problems into incremental steps will prove fruitful. We took the first step in this direction by breaking optimizations down into individual passes in Section V-C. We also plan to focus training on a curriculum of arithmetic and logic, and train LLMs that use tools to compute intermediate results [49, 50].
### _Inference Speed_
Compilers are fast. It takes two orders of magnitude more time for the model to generate a pass list than it does for the compiler to execute it. While this is much faster than the autotuner it is trained on, it remains an overhead that may prove prohibitive for some applications. That is to say nothing of the difference in compute resources needed to evaluate compiler heuristics vs. a 7B-parameter LLM running on multiple GPUs.
In addition to aggressive batching and quantization [51], significant inference speedups can be achieved by specializing the vocabulary to a use case. For example, we can reduce entire subsequences of passes to single vocabulary elements using Byte Pair Encoding so that at inference time fewer tokens need to be generated.
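As an illustration of this idea (our sketch, not a feature of the released model), a BPE-style merge over pass sequences could look like:

```python
from collections import Counter

def merge_frequent_pass_pairs(pass_lists, num_merges=100):
    """Repeatedly fuse the most frequent adjacent pair of pass tokens into a
    single vocabulary element, so common sub-sequences are emitted as one token."""
    sequences = [list(p) for p in pass_lists]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for seq in sequences:
            pairs.update(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        fused = a + " " + b                 # new vocabulary element
        for k, seq in enumerate(sequences):
            out, j = [], 0
            while j < len(seq):
                if j + 1 < len(seq) and seq[j] == a and seq[j + 1] == b:
                    out.append(fused)
                    j += 2
                else:
                    out.append(seq[j])
                    j += 1
            sequences[k] = out
    return merges, sequences
```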
## VII Related Work
Compiler pass ordering for performance has been explored for decades [52, 26, 53]. Over the years there have been several approaches using machine learning [18, 19, 20, 39, 54, 55]. The application of machine learning in compilers is not limited to pass ordering and has been applied to many other problems [56, 57, 58, 17, 59]. No one has applied LLMs to the problem of pass ordering; we are the first to do so.
_Neural machine translation_ is an emerging field that uses language models to transform code from one language to another. Prior examples include compiling C to assembly [11], assembly to C [60, 36], and source-to-source transpilation [10]. In these works code correctness cannot be guaranteed. In our work we use code generation solely as an auxiliary learning task - correctness is supplied by the compiler.
Language models have found broad adoption for coding tasks, though few operate at the level of compiler IR. Gallagher et al. train a RoBERTa architecture on LLVM-IR for the purpose of code weakness identification [61], and Transcoder-IR [12] uses LLVM-IR as a pivot point for source-to-source translation. Neither uses LLMs for optimization as we do.
Many language models have been trained on source code including CodeBERT [62], GraphCodeBERT [63], and CodeT5 [64] which are trained to perform multiple tasks including code search, code summarization, and documentation generation. LLMs trained on source code have also been used for program fuzzing [13, 14, 65], test generation [15], and automated program repair [66, 67, 68]. A large number of useful applications have been explored for language models, however, this is the first work where an LLM is used specifically for optimizing code.
Most LLMs are trained at least partly on code [5, 69, 25, 3]. Some LLMs are trained similarly to general models but especially target programming languages and can be used for code completion, such as Codex [8] which powers Copilot [70]. The introduction of fill-in-the-middle capabilities is especially useful for real-world code completion use cases and has become common in recent code models such as InCoder [6], SantaCoder [4], StarCoder [1], and Code Llama [9]. Code Llama was also trained to follow instructions and generate code as well as explain its functionalities.
While the multi-terabyte training corpora for these models contain some assembly, we believe that a focused exploration of the value of LLMs in the domain of compilers will be of value to the community. This paper aims to provide that.
## VIII Conclusions
We present the first steps towards LLMs for code optimization. We construct a model that can predict good optimization strategies for unseen LLVM-IR. Results are promising, though we face challenges in sequence length which limits us to operating over small program fragments, and in arithmetic reasoning which limits the ability of the model to predict the outcome of optimizations. We hope to inspire the research community to push beyond LLMs for simple max-likelihood code generation and into performance-aware code optimization.
|
2301.13635 | Active Learning-based Domain Adaptive Localized Polynomial Chaos
Expansion | The paper presents a novel methodology to build surrogate models of
complicated functions by an active learning-based sequential decomposition of
the input random space and construction of localized polynomial chaos
expansions, referred to as domain adaptive localized polynomial chaos expansion
(DAL-PCE). The approach utilizes sequential decomposition of the input random
space into smaller sub-domains approximated by low-order polynomial expansions.
This allows approximation of functions with strong nonlinearties,
discontinuities, and/or singularities. Decomposition of the input random space
and local approximations alleviates the Gibbs phenomenon for these types of
problems and confines error to a very small vicinity near the non-linearity.
The global behavior of the surrogate model is therefore significantly better
than existing methods as shown in numerical examples. The whole process is
driven by an active learning routine that uses the recently proposed $\Theta$
criterion to assess local variance contributions. The proposed approach
balances both \emph{exploitation} of the surrogate model and \emph{exploration}
of the input random space and thus leads to efficient and accurate
approximation of the original mathematical model. The numerical results show
the superiority of the DAL-PCE in comparison to (i) a single global polynomial
chaos expansion and (ii) the recently proposed stochastic spectral embedding
(SSE) method developed as an accurate surrogate model and which is based on a
similar domain decomposition process. This method represents general framework
upon which further extensions and refinements can be based, and which can be
combined with any technique for non-intrusive polynomial chaos expansion
construction. | Lukáš Novák, Michael D. Shields, Václav Sadílek, Miroslav Vořechovský | 2023-01-31T13:49:52Z | http://arxiv.org/abs/2301.13635v1 | # Highlights
###### Abstract
We propose a novel method for sequential decomposition of the input random space and construction of local approximations, driven by an active learning methodology.
# Active Learning-based Domain Adaptive Localized Polynomial Chaos Expansion
Lukas Novak
[email protected] Brno University of Technology, Brno, Czech Republic
Michael D. Shields
Johns Hopkins University, Baltimore, USA
Vaclav Sadilek
Miroslav Vorechovsky
Brno University of Technology, Brno, Czech Republic
###### Abstract
The paper presents a novel methodology to build surrogate models of complicated functions by an active learning-based sequential decomposition of the input random space and construction of localized polynomial chaos expansions, referred to as domain adaptive localized polynomial chaos expansion (DAL-PCE). The approach utilizes sequential decomposition of the input random space into smaller sub-domains approximated by low-order polynomial expansions. This allows approximation of functions with strong nonlinearities, discontinuities, and/or singularities. Decomposition of the input random space and local approximations alleviates the Gibbs phenomenon for these types of problems and confines error to a very small vicinity near the non-linearity. The global behavior of the surrogate model is therefore significantly better than existing methods as shown in numerical examples. The whole process is driven by an active learning routine that uses the recently proposed \(\Theta\) criterion to assess local variance contributions [1]. The proposed approach balances both _exploitation_ of the surrogate model and _exploration_ of the input random space and thus leads to efficient and accurate approximation of the original mathematical model. The numerical results show the superiority of the DAL-PCE in comparison to (i) a single global polynomial chaos expansion and (ii) the recently proposed stochastic spectral embedding (SSE) method [2] developed as an accurate surrogate model and which is based on a similar domain decomposition process. This method represents a general framework upon which further extensions and refinements can be based, and which can be combined with any technique for non-intrusive polynomial chaos expansion construction.
keywords: Polynomial Chaos Expansion, Adaptive Sampling, Sequential Sampling, Local Approximations, Active Learning, Stochastic Spectral Embedding +
Footnote †: journal: Computer Methods in Applied Mechanics and Engineering
## 1 Introduction
The Polynomial Chaos Expansion (PCE), originally proposed by Norbert Wiener [3] and further investigated in the context of engineering problems by many researchers, e.g. [4; 5], is a preferred method for uncertainty quantification (UQ) and surrogate modeling in industrial applications [6; 7] thanks to its efficiency and powerful post-processing. Once a PCE is available for a given problem, the constructed explicit function can be exploited
to directly estimate important properties of the original problem including its statistical moments, response probability distribution or sensitivity indices (without additional sampling [(8)]), which brings significant efficiency for surrogate modeling, sensitivity analysis, uncertainty quantification and reliability analysis [(9)].
The PCE, in its non-intrusive form, offers a convenient way to perform probabilistic analysis of any black-box model, e.g. finite element models representing complex physical systems in engineering. There are generally two types of non-intrusive methods to calculate the deterministic PCE coefficients: spectral projection and linear regression. The spectral projection approach utilizes the orthogonality of the multivariate polynomials and calculates the coefficients using inner products. The spectral projection leads to an explosion of computational complexity referred to as the _curse of dimensionality_. Therefore, the non-intrusive approach based on linear regression is often preferred. Although it is typically less expensive than the spectral projection (the number of samples should be at least \(\mathcal{O}(P\,\ln(P))\), where \(P\) is the number of terms in the PCE [(10; 11)]), it suffers from the _curse of dimensionality_ as well, since the number of PCE terms grows rapidly with both dimension and maximum polynomial order. Therefore, it becomes necessary to employ advanced adaptive techniques to construct sparse PCEs that yield efficient solutions for real-world physical systems.
Regression-based PCE can be significantly affected by the selected sampling scheme, as was recently shown in an extensive review paper [(12)] comparing several general statistical sampling techniques. However, PCE construction as a linear regression model is a very problem specific task and it can be highly beneficial to use methods that exploit information from the given mathematical model and sequentially update the surrogate model - referred to as _active learning_. Active learning is a common approach for surrogate-based reliability analysis, wherein an initial experimental design is iteratively updated based on the current estimate of the limit-state surface [(13; 14; 15)]. Active learning for reliability analysis with PCE was used e.g. in [(16; 17; 18)]. For general UQ studies, some recent studies have focused on general sequential sampling for PCE based on space-filling criteria or alphabetical optimality [(19; 20)]. However, it is beneficial to use both _exploitation_ (leveraging model behavior) criteria and _exploration_ (space filling) criteria to define an optimally balanced criterion [(21)]. Such sequential sampling for sparse Bayesian learning PCE combining both aspects - epistemic uncertainty of the statistical inference (exploration) together with quadratic loss function (local exploitation) - was recently proposed in [(22)]. However, its application is limited to PCE built by sparse Bayesian learning only.
The authors of this paper recently proposed a general active learning method based on sequential adaptive variance-based sampling [(1)], which is an efficient tool for accurate surrogate modeling that is sufficiently general for further extension [(23)]. Although this approach leads to superior results in comparison to standard approaches without active learning, it is limited by the inherently smooth nature of the PCE. More specifically, polynomial basis functions are not able to approximate functions with discontinuities or singularities. Moreover, it is necessary to use high-order polynomials to approximate functions with local non-linearities, even when the rest of the input random space could be easily approximated by a low-order PCE. This can lead to spurious oscillations in the approximation and over-fitting. To overcome this limitation, we propose a method to construct localized PCEs based on the concept of _divide-and-conquer_, i.e. decomposition of the input random space to sub-domains approximated by many low-order PCEs instead of a single high-order global PCE. Although this concept is not entirely new in stochastic finite elements [(24)] and stochastic collocation [(25; 26)], there is no such approach for non-intrusive PCE. However there are two primary techniques based on similar concepts as described in the following section.
### Related Developments
Stochastic Spectral Embedding (SSE) [(2)] is a general approximation technique based on a decomposition of the input random space and the construction of embedded local approximations. Although it is generally possible to use any spectral approximation technique, it is beneficially coupled with PCE. SSE is based on a novel idea of _embedding_ - instead of constructing local approximations of the original mathematical model, local surrogates are constructed to approximate the _residuals_ between the model and approximation from the previous level of the decomposed space. Although such an approach can lead to significant improvement in comparison to a single global approximation [(2)], it is not a sequential approach based on active learning and thus it does not iteratively reflect new information obtained from the previous steps of the algorithm. Active learning is crucial in analysis of functions with discontinuity or singularity because it allows for the aforementioned exploration and exploitation necessary to find and resolve these features. For the sake of completeness, active learning for SSE has been
proposed for reliability analysis [27], but it does not lead to an accurate approximation over the entire input random space. Its accuracy is limited to regions around the limit surface, which are important for an estimation of failure probability.
The second related technique is Multi-element generalized Polynomial Chaos Expansion (ME-gPC) [28]. ME-gPC was developed as an extension of generalized PCE based on the Wiener-Askey scheme [29] allowing analysis of models with arbitrary distribution of the input random vector. The ME-gPC method consists of three main parts: decomposition of the input random space, numerical construction of locally orthogonal polynomials and an adaptive procedure based on the decay rate of local error in estimated variance derived from local PCE. ME-gPC applies an \(h\)-type mesh refinement procedure akin to mesh refinement in finite element methods. By doing so, it introduces a structured grid of uniform points in each new element and solves for the PCE coefficients. This can be cumbersome and does not afford the flexibility to adaptively select sparse and near-optimal training points. Moreover, we note that the ME-gPC was created mainly for uncertainty propagation in models with arbitrary input distributions, and thus in contrast to SSE, its objective is not necessarily to construct the best possible surrogate model using adaptive algorithms, but rather to minimize errors in response statistics. This is a subtle, but important difference that distinguishes its use as a predictive tool from that of a tool for statistical estimation.
### Contributions of this paper
This paper describes a novel method, termed Domain Adaptive Localized PCE (DAL-PCE), that applies adaptive sequential decomposition of the input random space and adaptive sequential sampling within the subdomains. Both of these features are based on a recently proposed criterion for variance-based sequential statistical sampling, developed specifically for PCE in [30]. In the context of the previously described SSE and ME-gPC methods, the proposed novel approach can be thought of as lying between them. Like SSE, it is developed specifically for the construction of accurate surrogate models, especially for functions with high non-linearity or discontinuity. But the decomposition of the input random space is rather similar to ME-gPC. The uniqueness of our proposal lies in the combination of active learning, sequential sampling, sequential decomposition of the input space and regression-based PCE using sparse solvers such as Least Angle Regression (LARS), allowing adaptivity and learning in each iteration of the proposed algorithm.
## 2 Polynomial Chaos Expansion
Assume a probability space \((\Omega,\mathcal{F},\mathcal{P})\), where \(\Omega\) is an event space, \(\mathcal{F}\) is a \(\sigma\)-algebra on \(\Omega\) and \(\mathcal{P}\) is a probability measure on \(\mathcal{F}\). If the input variable of a mathematical model, \(Y=f(X)\), is a random variable \(X(\omega),\omega\in\Omega\), the model response \(Y(\omega)\) is also a random variable. Assuming that \(Y\) has a finite variance, PCE represents the output variable \(Y\) as a function of another random variable \(\xi\), called the germ, with a known distribution
\[Y=f(X)=f^{\text{PCE}}(\xi), \tag{1}\]
and represents the function \(f(X)\) via an infinite polynomial expansion. A set of polynomials, orthogonal with respect to the distribution of the germ, is used as a basis of the Hilbert space \(L^{2}\)\((\Omega,\mathcal{F},\mathcal{P})\) of all real-valued random variables of finite variance, where \(\mathcal{P}\) takes over the meaning of the probability distribution. The orthogonality condition is given by the inner product of \(L^{2}\)\((\Omega,\mathcal{F},\mathcal{P})\) defined for any two functions \(\psi_{j}\) and \(\psi_{k}\) for all \(j\neq k\) with respect to the weight function \(p_{\xi}\) (probability density function of \(\xi\)) as:
\[\langle\psi_{j},\psi_{k}\rangle=\int\psi_{j}(\xi)\psi_{k}(\xi)p_{\xi}(\xi)\, \mathrm{d}\xi=0. \tag{2}\]
This means that there are specific orthogonal polynomials associated with the corresponding distribution of the germ via its weighting function. For example, Hermite polynomials orthogonal to the Gaussian measure are associated with normally distributed germs. Orthogonal polynomials corresponding to other distributions can be chosen according to the Wiener-Askey scheme [29] or constructed numerically [31]. For further processing, it is beneficial to use normalized polynomials (orthonormal), where the inner product of the \(j\)th and \(k\)th polynomials is equal to the Kronecker delta \(\delta_{jk}\), i.e. \(\delta_{jk}=1\) if and only if \(j=k\), and \(\delta_{jk}=0\) otherwise.
In the case of \(\mathbf{X}\) and \(\mathbf{\xi}\) being vectors containing \(M\) independent random variables, the polynomial \(\Psi(\mathbf{\xi})\) is multivariate and it is built up as a tensor product of univariate orthonormal polynomials, i.e.
\[\Psi_{\mathbf{a}}(\mathbf{\xi})=\prod_{i=1}^{M}\psi_{a_{i}}(\xi_{i}), \tag{3}\]
where \(\mathbf{a}\in\mathbb{N}^{M}\) is a set of integers called the _multi-index_ reflecting polynomial degrees associated to each \(\xi_{i}\). The quantity of interest (QoI), i.e. the response of the mathematical model \(Y=f(\mathbf{X})\), can then be represented as [5]
\[Y=f(\mathbf{X})=\sum_{\mathbf{a}\in\mathbb{N}^{M}}\beta_{\mathbf{a}}\Psi_{\mathbf{a}}(\mathbf{\xi}), \tag{4}\]
where \(\beta_{\mathbf{a}}\) are deterministic coefficients and \(\Psi_{\mathbf{a}}\) are multivariate orthonormal polynomials.
### Non-intrusive computation of PCE coefficients
For practical computation, the PCE expressed in Eq. (4) must be truncated to a finite number of terms \(P\). One can generally choose any truncation rule (e.g. tensor product of polynomials up to the selected order \(p\)), but the most common truncation is achieved by retaining only terms whose total degree \(|\mathbf{a}|\) is less than or equal to a given \(p\), in which case the truncated set of PCE terms is then defined as
\[\mathcal{A}^{M,p}=\left\{\mathbf{a}\in\mathbb{N}^{M}:|\mathbf{a}|=\sum_{i=1}^{M}a_{i}\leq p\right\}. \tag{5}\]
The cardinality of the truncated _index set_\(\mathcal{A}^{M,p}\) is given by
\[\text{card }\mathcal{A}^{M,p}=\frac{(M+p)!}{M!\,p!}\equiv P\,. \tag{6}\]
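For example, Eq. (6) can be evaluated directly; the following short Python snippet illustrates how quickly the basis grows with \(M\) and \(p\):

```python
from math import comb

def pce_basis_size(M, p):
    """Cardinality of the total-degree truncation set A^{M,p}, Eq. (6)."""
    return comb(M + p, p)

# M = 5 input variables: p = 3 gives 56 basis terms, p = 10 already gives 3,003.
print(pce_basis_size(5, 3), pce_basis_size(5, 10))
```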
When the PCE is truncated to a finite number of terms, there is an error \(\varepsilon\) in the approximation such that
\[Y=f(\mathbf{X})=\sum_{\mathbf{a}\in\mathcal{A}}\beta_{\mathbf{a}}\Psi_{\mathbf{a}}(\mathbf{\xi})+ \varepsilon\,.\]
From a statistical point of view, PCE is a simple linear regression model with intercept. Therefore, it is possible to use _ordinary least squares_ (OLS) regression to minimize the error \(\varepsilon\).
Knowledge of vector \(\mathbf{\beta}\) fully characterizes the approximation via PCE. To solve for \(\mathbf{\beta}\), first it is necessary to create \(N_{\text{sim}}\) realizations of the input random vector \(\mathbf{X}\) and the corresponding results of the original mathematical model \(\mathcal{Y}\), together called the experimental design (ED). Then, the vector of \(P\) deterministic coefficients \(\mathbf{\beta}\) can be determined by OLS as
\[\mathbf{\beta}=(\Psi^{T}\Psi)^{-1}\ \Psi^{T}\mathcal{Y}, \tag{7}\]
where \(\Psi\) is the data matrix
\[\mathbf{\Psi}=\left\{\Psi_{ij}=\Psi_{j}(\mathbf{\xi}^{(i)}),\ i=1,\ldots,N_{\text{sim }},\ j=0,\ldots,P-1\right\}. \tag{8}\]
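As a minimal illustration of Eqs. (7)-(8), assuming uniform inputs on \([-1,1]\) and normalized Legendre polynomials (one possible choice of germ and basis, not the only one supported by the methodology), the OLS step can be sketched in Python as:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_design_matrix(xi, alphas):
    """Data matrix Psi of Eq. (8) for normalized Legendre polynomials.

    xi     : (N_sim, M) samples of the germ, uniform on [-1, 1]
    alphas : list of multi-indices, each of length M
    """
    N, M = xi.shape
    Psi = np.ones((N, len(alphas)))
    for j, alpha in enumerate(alphas):
        for i in range(M):
            deg = alpha[i]
            coeffs = np.zeros(deg + 1)
            coeffs[deg] = 1.0
            # sqrt(2*deg + 1) normalizes P_deg to unit variance w.r.t. the
            # uniform density on [-1, 1]
            Psi[:, j] *= np.sqrt(2 * deg + 1) * legendre.legval(xi[:, i], coeffs)
    return Psi

def ols_pce_coefficients(xi, y, alphas):
    """Ordinary least squares solution of Eq. (7)."""
    Psi = legendre_design_matrix(xi, alphas)
    beta, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return beta
```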
A well-known problem, the _curse of dimensionality_, states that \(P\) is highly dependent on the number of input random variables \(M\) and the maximum total degree of polynomials \(p\), which is clear from Eq. (6). Considering that estimation of \(\mathbf{\beta}\) by regression requires at least \(\mathcal{O}(P\ \ln(P))\) number of samples for stable solution [10; 11], the problem can become computationally highly demanding in case of a large or strongly non-linear stochastic models. Although one can use advanced model selection algorithms such as Least Angle Regression (LAR) [32; 4], orthogonal matching pursuit [33] or Bayesian compressive sensing [34] to find an optimal set of PCE terms, and thus reduce the number of samples needed to compute the unknown coefficients, the benefit of these techniques is significant only if the true coefficient vector is sparse or compressible. The sparse set of basis functions obtained by any adaptive algorithm is further denoted by \(\mathcal{A}\) for the sake of clarity.
### Approximation Error Estimation
Once the PCE is constructed, it is crucial to estimate its accuracy. Further, the PCE accuracy can be used to directly compare several PCEs to choose the best surrogate model. Ideally the ED should be divided into validation and training sets, but this might be extremely computationally demanding in engineering applications with complex numerical models. Therefore in the field of uncertainty quantification (UQ) of engineering models, it is preferred to estimate the approximation error directly from the training set, without any additional sampling of the original model. A common choice is the coefficient of determination \(R^{2}\), which is well-known from machine learning or statistics. However, \(R^{2}\) may lead to over-fitting and thus advanced methods should be used. One of the most widely-used methods is the leave-one-out cross-validation (LOO-CV) error \(Q^{2}\). The LOO-CV is based on residuals between the original surrogate model and the surrogate model built with the ED while excluding one realization. This approach is repeated for all realizations in the ED and the average error is estimated. Although the calculation of \(Q^{2}\) is typically highly time-consuming, it is possible to obtain results analytically from a single PCE as follows [35]:
\[Q^{2}=\frac{\frac{1}{N_{\text{sim}}}\sum_{i=1}^{N_{\text{sim}}}\left[\frac{g \left(\mathbf{x}^{(i)}\right)-g^{\text{PCE}}\left(\mathbf{x}^{(i)}\right)}{1-h_{i}} \right]^{2}}{\sigma_{Y,\text{ED}}^{2}}, \tag{9}\]
where \(\sigma_{Y,\text{ED}}^{2}\) is the variance of the ED calculated using the original mathematical model and \(h_{i}\) represents the \(i\)th diagonal term of matrix \(\mathbf{H}=\mathbf{\Psi}\left(\mathbf{\Psi}^{\top}\mathbf{\Psi}\right)^{-1}\mathbf{\Psi}^{\top}\).
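Using the same data matrix, the analytical LOO estimate of Eq. (9) is inexpensive to evaluate; a minimal sketch:

```python
import numpy as np

def loo_error(Psi, y, beta):
    """Analytical leave-one-out cross-validation error Q^2 of Eq. (9).

    Psi  : (N_sim, P) data matrix of basis evaluations
    y    : (N_sim,) responses of the original model (the ED)
    beta : (P,) PCE coefficients obtained by OLS
    """
    H = Psi @ np.linalg.inv(Psi.T @ Psi) @ Psi.T   # hat matrix
    h = np.diag(H)
    residuals = (y - Psi @ beta) / (1.0 - h)
    return np.mean(residuals**2) / np.var(y)
```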
### Statistical Moments Derived from PCE
The form of PCE as a linear summation over orthonormal polynomials allows for powerful and efficient post-processing. In particular, once a PCE approximation is created, it is possible to directly estimate statistical moments of the output from the expansion.
The first statistical moment (the mean value) is simply the first deterministic coefficient of the expansion \(\mu_{Y}=\left\langle Y^{1}\right\rangle=\beta_{\mathbf{0}}\). The second raw statistical moment, \(\left\langle Y^{2}\right\rangle\), can be estimated by
\[\left\langle Y^{2}\right\rangle =\int\left[\sum_{\mathbf{a}\in\mathcal{A}}\beta_{\mathbf{a}}\Psi_{\mathbf{a}} \left(\mathbf{\xi}\right)\right]^{2}p_{\mathbf{\xi}}\left(\mathbf{\xi}\right)\,\mathrm{d} \mathbf{\xi}=\sum_{\mathbf{a}_{1}\in\mathcal{A}}\sum_{\mathbf{a}_{2}\in\mathcal{A}}\beta_{ \mathbf{a}_{1}}\beta_{\mathbf{a}_{2}}\int\Psi_{\mathbf{a}_{1}}\left(\mathbf{\xi}\right)\Psi_{ \mathbf{a}_{2}}\left(\mathbf{\xi}\right)p_{\mathbf{\xi}}\left(\mathbf{\xi}\right)\,\mathrm{d} \mathbf{\xi} \tag{10}\] \[=\sum_{\mathbf{a}\in\mathcal{A}}\beta_{\mathbf{a}}^{2}\int\Psi_{\mathbf{a}} \left(\mathbf{\xi}\right)^{2}p_{\mathbf{\xi}}\left(\mathbf{\xi}\right)\,\mathrm{d}\mathbf{\xi }=\sum_{\mathbf{a}\in\mathcal{A}}\beta_{\mathbf{a}}^{2}\left\langle\Psi_{\mathbf{a}},\Psi_ {\mathbf{a}}\right\rangle.\]
Considering the orthonormality of the polynomials, it is possible to obtain the variance \(\sigma_{Y}^{2}=\left\langle Y^{2}\right\rangle-\mu_{Y}^{2}\) as the sum of all squared deterministic coefficients except the intercept (which represents the mean value), i.e.
\[\sigma_{Y}^{2}=\sum_{\begin{subarray}{c}\mathbf{a}\in\mathcal{A}\\ \mathbf{a}\neq\mathbf{0}\end{subarray}}\beta_{\mathbf{a}}^{2}. \tag{11}\]
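A minimal sketch of this post-processing for the first two moments, Eqs. (10)-(11), is given below; it assumes the coefficients are stored in a dictionary keyed by multi-index tuples, which is an illustrative layout rather than the data structure of any particular implementation:

```python
def pce_mean_and_variance(coeffs):
    """First two moments of a PCE with an orthonormal basis, Eqs. (10)-(11).

    coeffs : dict mapping multi-index tuples alpha -> coefficient beta_alpha
    """
    M = len(next(iter(coeffs)))      # number of input random variables
    zero = (0,) * M
    mean = coeffs.get(zero, 0.0)                                   # beta_0
    variance = sum(b ** 2 for a, b in coeffs.items() if a != zero)
    return mean, variance

# example: a 2-variable PCE with three retained terms
beta = {(0, 0): 1.2, (1, 0): 0.5, (0, 2): -0.25}
print(pce_mean_and_variance(beta))   # -> (1.2, 0.3125)
```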
Note that the computation of higher statistical central moments, specifically skewness \(\gamma_{Y}\) (\(3^{\text{rd}}\) moment) and kurtosis \(\kappa_{Y}\) (\(4^{\text{th}}\) moment), is more complicated, since it requires triple and quadruple products of the basis functions. These can be obtained analytically only for certain polynomial families; e.g., formulas for Hermite and Legendre polynomials (and their combination) can be found in [30].
## 3 Active Learning-based Domain Adaptive Localized PCE (DAL-PCE)
In this section, we propose a novel methodology to construct localized PCEs designed for highly non-linear functions, termed Domain Adaptive Localized PCE (DAL-PCE). Instead of increasing the maximum polynomial order \(p\) (\(p\)-adaptivity), which brings high computational requirements due to the _curse of dimensionality_, we
propose to decompose the input random space into several sub-domains approximated by low-order PCEs (\(h\)-adaptivity). Although this idea is not entirely new, we use this approach in combination with novel active learning methods to identify domains for refinement and for sequential sample selection and regression-based PCEs. This allows us to use any sparse adaptive solver (e.g. LAR) and thus it can be easily implemented into the existing software packages [36; 37]. In the following sections, we define the requisite components of the proposed method and provide an algorithm (Algorithm 1) for its implementation.
### Variance-based Adaptive Sequential Sampling
The decomposition of the input random space is a sequential process coupled with adaptive sampling assuring optimal coverage of the sub-domains of interest. The whole process thus consists of two steps: (i) identification of an important sub-domain, that is, a domain that is either large compared to other sub-domains or that is associated with a high local variance; and (ii) identification of the best positions for additional samples extending the current ED in the selected sub-domain. Each of these steps must be based on a criterion that balances _exploration_ of the input random space with _exploitation_ of the surrogate model, which in our case is in the form of a PCE. The \(\Theta\)-criterion for adaptive sequential sampling, which is driven by the output variance and its approximation via local variance using PCE[1], is employed for both steps. We will first discuss the process for adaptive sequential sampling within a specified sub-domain in this section. This will be followed by the process for refinement of the domain in the subsequent sections.
Consider a pool of candidate samples containing realizations of the random vector \(\mathbf{\xi}\) generated by an arbitrary sampling technique, e.g., Latin Hypercube Sampling (LHS) [38; 39] or Coherence sampling [40; 41; 10]. From this pool of candidates, we select the best sample using a method inspired by the sequential sampling proposed in [21] and based on Koksma-Hlawka inequality [42]. The \(\Theta\)-criterion for PCE, which accounts for both variation of the function and discrepancy of the samples, was proposed as follows [1]:
\[\Theta(\mathbf{\xi}^{(c)})\equiv\Theta^{c}=\underbrace{\sqrt{\sigma_{\mathcal{A}}^ {2}(\mathbf{\xi}^{(c)})\cdot\sigma_{\mathcal{A}}^{2}(\mathbf{\xi}^{(s)})}}_{\text{ave variance density}}l_{\text{c,s}}^{M}\equiv\sqrt{\sigma_{\text{c}}^{2}\cdot\sigma_{\text{s}}^{2}}l_{ \text{c,s}}^{M}. \tag{12}\]
The criterion is a product of two terms - the _exploitation_ term (denoted as "ave variance density") and the _exploration_ part (the distance term \(l_{\text{c,s}}\) raised to the domain dimension) - which are multiplied to maintain an optimal balance between exploration and exploitation [1].
The _exploration_ aspect is maintained by accounting for the distance \(l_{\text{c,s}}\) between a candidate \(\mathbf{\xi}^{(c)}\) and its nearest neighboring realization from the existing ED, \(\mathbf{\xi}^{(s)}\) as
\[l_{\text{c,s}}=\sqrt{\sum_{i=1}^{M}|\xi_{i}^{(c)}-\xi_{i}^{(s)}|^{2}}. \tag{13}\]
If the criterion was reduced to this term only, sequential filling of the greatest empty regions would occur, converging to uniform space coverage in the spirit of the space-filling "miniMax criterion" [43; 44; 45].
The _exploitation_ component is motivated by the desire to sample points in regions with the greatest contributions to the total variance of the QoI \(\sigma_{Y}^{2}\), i.e. at points with the highest _variance density_. Once the PCE has been established at any given stage of the algorithm, the _variance density_ is computationally cheap to evaluate for any location \(\mathbf{\xi}\) as
\[\sigma_{\mathcal{A}}^{2}(\mathbf{\xi})=\big{[}\sum_{\begin{subarray}{c}\mathbf{\alpha }\in\mathcal{A}\\ \mathbf{\alpha}\neq\mathbf{0}\end{subarray}}\beta_{\mathbf{\alpha}}\Psi_{\mathbf{\alpha}} \left(\mathbf{\xi}\right)\big{]}^{2}p_{\mathbf{\xi}}\left(\mathbf{\xi}\right). \tag{14}\]
The local variance is therefore estimated directly using the basis functions and coefficients \(\beta\) of the PCE. When considering a candidate "c", an estimate of the variance contribution of the region between the candidate and its nearest neighbor "s" may be obtained by averaging the local variance densities between the two. Therefore, we can say that the candidate with the greatest \(\Theta^{c}\) criterion is the one that represents the largest amount of total variance to be refined by its selection.
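A possible implementation of this selection step is sketched below; the `variance_density` callable stands for Eq. (14), and the function and array names are illustrative assumptions rather than an existing API:

```python
import numpy as np

def theta_criterion(candidates, existing_ed, variance_density):
    """Evaluate the Theta^c criterion of Eq. (12) for every candidate point.

    candidates       : (n_c, M) candidate realizations of xi
    existing_ed      : (n_s, M) points already contained in the ED
    variance_density : callable xi -> local variance density, Eq. (14)
    """
    n_c, M = candidates.shape
    theta = np.empty(n_c)
    for c, xi_c in enumerate(candidates):
        # distance to the nearest existing ED point, Eq. (13)
        dists = np.linalg.norm(existing_ed - xi_c, axis=1)
        s = np.argmin(dists)
        # geometric average of the variance densities at the candidate and its neighbour
        avg_density = np.sqrt(variance_density(xi_c) * variance_density(existing_ed[s]))
        theta[c] = avg_density * dists[s] ** M
    return theta

# the candidate with the largest Theta^c would be added to the ED, e.g.:
# best = candidates[np.argmax(theta_criterion(candidates, ed, var_density))]
```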
A significant advantage of this method is the ability to add candidates into an existing ED one-by-one. Thus, it can be employed at any moment of the PCE construction process. Moreover, this learning function can be
combined with any sampling algorithm for the construction of the initial ED and candidates for extension. The ideas behind the \(\Theta\) criterion will now be used in the proposed domain decomposition and ED extension algorithm.
### Decomposition of Input Random Space
The core of the proposed approach is a sequential decomposition of the input random space \(\mathcal{D}\) for the construction of local approximations. This approach assumes that the original mathematical model can be approximated by piecewise low-order PCEs that are valid only in individual sub-domains of \(\mathcal{D}\). Therefore, in the proposed approach, the input random space is sequentially decomposed into \(n_{\mathcal{D}}\) smaller non-overlapping sub-domains \(\mathcal{D}_{i}\subset\mathcal{D}\) that collectively fill the full input random space \(\mathcal{D}\), i.e.
\[\bigcup_{i=1}^{n_{\mathcal{D}}}\mathcal{D}_{i}=\mathcal{D}\quad\text{such that}\quad\mathcal{D}_{i}\cap\mathcal{D}_{j}=\emptyset\quad\forall i\neq j. \tag{15}\]
In each iteration of the algorithm, a single sub-domain \(\mathcal{D}_{i}\) (referred to as the parent) is identified for refinement and divided by a plane perpendicular to the direction of one selected input random variable. Specifically, \(\mathcal{D}_{i}\) is divided into a refinement-child \(\mathcal{D}_{i}\), which is further processed, and an inheriting-child \(\mathcal{D}_{i}^{*}\) adopting the PCE from the parent, as illustrated for a one-dimensional function in Fig. 1. In this case, the space is divided into two subdomains: in the left one (the refinement-child) a new PCE is constructed, while in the right one (the inheriting-child) the original PCE is retained. This process ensures an exhaustive decomposition of the parent into disjoint subsets, i.e. \(\mathcal{D}_{i}=\mathcal{D}_{i}\oplus\mathcal{D}_{i}^{*}\). This sequential domain decomposition is illustrated in Fig. 2, which depicts the original input random space and the first four iterations of the decomposition process.
In contrast to SSE [2], the selection of a single sub-domain for refinement in each iteration is based on an active learning approach, the details of which are provided in subsequent sections. Importantly, actively integrating information from the original mathematical model leads to a significantly more effective decomposition of the space and thus assures accurate approximations, even for small-size EDs. On the other hand, the identified decomposition and the associated ED are directly connected to the given mathematical model and therefore might be inefficient for general statistical analysis.
The complete surrogate model is assembled from the \(n_{\mathcal{D}}\) local PCEs associated with all sub-domains \(\mathcal{D}_{i}\) as:
\[Y\approx\sum_{i=1}^{n_{\mathcal{D}}}\sum_{\mathbf{a}_{i}\in\mathcal{A}_{i}}\beta_ {\mathbf{a}_{i}}\Psi_{\mathbf{a}_{i}}(\mathbf{\xi})\mathbb{1}_{\mathcal{D}_{i}}(\mathbf{\xi}), \tag{16}\]
where \(\mathbb{1}_{\mathcal{D}_{i}}(\mathbf{\xi})\) represents the indicator function, i.e. \(\mathbb{1}_{\mathcal{D}_{i}}(\mathbf{\xi})=1\) only if \(\mathbf{\xi}\in\mathcal{D}_{i}\) and \(\mathbb{1}_{\mathcal{D}_{i}}(\mathbf{\xi})=0\) otherwise. In other words, to approximate the original model at any point, it suffices to determine the one relevant sub-domain and use the corresponding local PCE. Each such local PCE has its own set of basis functions \(\mathcal{A}_{i}\) and corresponding coefficients \(\beta_{\mathbf{a}_{i}}\), which can be obtained by any model-selection algorithm. In this paper, the LAR and OLS algorithms are employed, but generally any non-intrusive technique can be used.
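The following sketch illustrates how the assembled surrogate of Eq. (16) could be evaluated point-wise, assuming each sub-domain is stored as an axis-aligned hyper-rectangle together with a callable local PCE (an illustrative layout, not the authors' implementation):

```python
import numpy as np

def dal_pce_predict(xi, subdomains):
    """Evaluate the assembled localized surrogate of Eq. (16) at a point xi.

    subdomains : list of tuples (lower, upper, local_pce), where lower/upper are
                 (M,) arrays bounding the hyper-rectangle D_i and local_pce is a
                 callable returning the local PCE prediction at xi.
    """
    for lower, upper, local_pce in subdomains:
        # indicator function of D_i: contributes only if xi lies in the sub-domain
        if np.all(xi >= lower) and np.all(xi <= upper):
            return local_pce(xi)
    raise ValueError("xi lies outside the decomposed input random space")
```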
Figure 1: The first iteration of the algorithm: the original sub-domain is split and the new local PCE is constructed in \(\mathcal{D}_{i}\) (red background), while the second part in \(\mathcal{D}_{i}^{*}\) inherits the PCE approximation from the original domain.
### Domain Selection via Modified Variance-based Criterion
The selection process to identify the "best" subdomain for possible division is governed by extending the \(\Theta\)-criterion from Eq. (12) as follows:
\[\Theta_{i}=\underbrace{\mathcal{W}_{i}\cdot\exp(Q_{i}^{2})}_{\text{weight of subdomain}}\cdot\underbrace{\sqrt{\sigma_{\mathcal{A}_{i}}^{2}(\mathbf{\xi}^{(c)}) \cdot\sigma_{\mathcal{A}_{i}}^{2}(\mathbf{\xi}^{(s)})}\,l_{\text{c,s}}^{M}}_{\Theta^{c}\text{ in the }i\text{th subdomain}}. \tag{17}\]
This extended criterion aims to identify sub-domains of the input random space associated with the maximum value of \(\Theta^{c}\), while simultaneously accounting for the size of each subdomain and the accuracy of the existing local PCE. The variance term \(\Theta^{c}\) is calculated using Eq. (12) for a rich pool of global screening candidates, while the size and accuracy are measured by the volume of each sub-domain \(\mathcal{W}_{i}\) and the LOO-CV error \(Q_{i}^{2}\), respectively. The LOO-CV term, \(\exp(Q_{i}^{2})\), can be thought of as artificially inflating the domain volume as a penalization for an inaccurate approximation. When the approximation is perfect (\(Q_{i}^{2}=0\)), the true volume of the sub-domain is used, whereas a poor approximation with \(Q_{i}^{2}=1\) inflates the volume roughly \(2.72\) times.
The three terms featured in Eq. (17) aim at different aspects affecting the accuracy of the final surrogate model: large sub-domains are preferred by \(\mathcal{W}_{i}\), sub-domains containing poor PCE approximation are promoted via \(\exp(Q_{i}^{2})\) and finally, \(\Theta^{c}\) prefers sub-domains with high concentration of variance. Note that \(\Theta^{c}\) is calculated for a rich pool of screening candidates, and \(\mathcal{W}_{i}\) and \(\exp(Q_{i}^{2})\) are calculated directly from the geometry of existing sub-domain and the local PCE model, respectively. The product of all three terms in the extended criterion therefore maintains the desired balance and assures the selection of the sub-domain, \(\mathcal{D}_{i}\), that currently seems to be the most important for increasing the accuracy of the PCE surrogate model.
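A sketch of how Eq. (17) could be evaluated for one sub-domain is given below; it reuses the `theta_criterion` helper sketched in Section 3.1 and takes the maximum \(\Theta^{c}\) over the screening candidates falling inside the sub-domain, which is one plausible reading of the criterion (all names are illustrative):

```python
import numpy as np

def extended_theta(lower, upper, q2_local, screening_candidates, existing_ed,
                   variance_density):
    """Extended selection criterion Theta_i of Eq. (17) for one sub-domain."""
    volume = np.prod(upper - lower)                                   # W_i
    inside = np.all((screening_candidates >= lower) &
                    (screening_candidates <= upper), axis=1)
    local = screening_candidates[inside]
    if local.shape[0] == 0:
        return 0.0
    # Theta^c of Eq. (12) evaluated on the screening candidates inside this
    # sub-domain, reusing the theta_criterion helper sketched in Section 3.1
    theta_c = theta_criterion(local, existing_ed, variance_density)
    return volume * np.exp(q2_local) * theta_c.max()
```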
The sub-domain \(\mathcal{D}_{i}\) with the greatest \(\Theta_{i}\) is selected and one of the operations described in detail in Sec. 3.6 is performed, depending on whether \(\mathcal{D}_{i}\) contains a critical number of ED points. Two scenarios can occur:
* \(\mathcal{D}_{i}\) contains a sufficient number of ED points (\(n_{i}\geq n_{\text{sim}}\)) to ensure accuracy of a PCE on the domain. Therefore, it becomes a parent \(\mathcal{D}_{i}\) (bold boundaries in Fig. 2) and is divided into two parts by a selected rule. The child domain containing the decisive candidate with the greatest \(\Theta^{c}\) becomes the refinement-child \(\mathcal{D}_{i}\) (see the red subdomains in steps \(1-4\) in Fig. 2). The remaining volume becomes an inheriting-child denoted \(\mathcal{D}_{i}^{*}\) (see the green subdomains in Fig. 2), which retains the PCE from the parent. Division occurs by a cutting plane, oriented perpendicular to the selected direction (blue arrows in Fig. 2) and naturally, the coordinates of the cutting plane are restricted to the bounding box of the selected parent \(\mathcal{D}_{i}\), see Sec. 3.6. If needed, the refinement-child domain \(\mathcal{D}_{i}\) is sequentially filled with additional ED points (according to \(\Theta^{c}\)) to reach \(n_{i}=n_{\text{sim}}\) needed to construct a new PCE approximation.
* \(\mathcal{D}_{i}\) does _not_ contain a sufficient number of ED points (\(n_{i}<n_{\text{sim}}\)). The domain is not divided because the suggestion for division is based on insufficient information. Instead, new ED points are sequentially added to \(\mathcal{D}_{i}\), again using the \(\Theta^{c}\) criterion. Note that this scenario practically arises when the selected domain was an inheriting-child in the previous iteration. In this case, the selected domain has inherited a PCE model that was constructed over a larger domain. When that domain was divided, it was left with an insufficient number of points from which to construct a new PCE.
Figure 2: The first four steps of the decomposition of a 3D space of input random variables. The thick black lines outline the parent domain selected for division. The red and green boxes inside it represent the two newly created refinement-child \(\mathcal{D}_{i}\) (red) and inheriting-child \(\mathcal{D}_{i}^{*}\) (green) sub-domains created by splitting the parent domain \(\mathcal{D}_{i}\) (bold boundaries), selected via Eq. (17), by the cutting plane (blue). The cutting plane is perpendicular to the variable selected for splitting (blue arrow).
### PCE Basis Functions
Without loss of generality, the proposed method operates on the \(M\)-dimensional unit hypercube with uniform distributions of input random variables, i.e. \(\mathbf{X}\sim\mathcal{U}[0,1]^{M}\). In the case of a general joint probability distribution of \(\mathbf{X}\), it is always possible to transform input random vector to the unit hypercube by Rosenblatt transformation [46], Nataf transformation [47] or various methods based on copulas [48]. Standard normalized Legendre polynomials, orthonormal to the uniform distribution, can thus be used as basis functions for the PCE. However, due to the decomposition of the input random space to smaller sub-domains, each with lower bound \(a_{i}\) and upper bound \(b_{i}\), it is necessary to use univariate scaled orthonormal Legendre polynomials of \(n\)th order \(\tilde{\psi}_{n}(\xi)\) defined as follows:
\[\tilde{\psi}_{n}(\xi)=\psi_{n}\left(\frac{2\xi-a_{i}-b_{i}}{b_{i}-a_{i}}\right), \tag{18}\]
where \(\psi_{n}\) represents standard orthonormal Legendre polynomials. Naturally, the transformation of the original input random vector to the unit hypercube might introduce additional non-linearity, and thus one might prefer the direct construction of polynomials locally orthonormal to the given original probability measure, as proposed in the ME-gPC [28]. While certainly possible, this brings additional computational demands and thus it is not employed here.
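A minimal sketch of Eq. (18) using NumPy's Legendre module is shown below; the factor \(\sqrt{2n+1}\) normalizes the standard Legendre polynomials so that they are orthonormal with respect to the uniform density on the sub-domain (the helper name is illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def scaled_legendre(n, a_i, b_i):
    """Univariate orthonormal Legendre polynomial of order n, rescaled to the
    sub-domain [a_i, b_i] as in Eq. (18)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = np.sqrt(2 * n + 1)   # orthonormal standard Legendre polynomial psi_n
    def psi(xi):
        # affine map of the sub-domain [a_i, b_i] onto the reference interval [-1, 1]
        t = (2.0 * np.asarray(xi) - a_i - b_i) / (b_i - a_i)
        return legval(t, coeffs)
    return psi

# example: second-order basis function on the sub-domain [0.25, 0.5]
psi2 = scaled_legendre(2, 0.25, 0.5)
print(psi2(0.375))   # evaluation at the sub-domain midpoint
```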
### Local and Global Statistical Estimates from DAL-PCE
The significant advantage of PCE is that analytical post-processing of the expansion yields highly efficient estimates of statistical moments [30], sensitivity indices [8] and the LOO-CV error [4]. In the proposed DAL-PCE, since the original domain \(\mathscr{D}\) is decomposed into a set of sub-domains (see Eq. (15)), standard analytical post-processing can be applied locally, and global characteristics can be obtained by simple weighted summations that converge to the true values as \(n_{\mathscr{D}}\) increases. Specifically, the global mean value and variance of a QoI are obtained from the localized PCEs (denoted by the subscript \(\mathscr{D}_{i}\)) as follows:
\[\mu_{Y}=\sum_{i=1}^{n_{\mathscr{D}}}\mathcal{W}_{i}\beta_{0_{i}}=\sum_{i =1}^{n_{\mathscr{D}}}\mathcal{W}_{i}\mu_{\mathscr{D}_{i}}, \tag{19}\]
\[\sigma_{Y}^{2}=\sum_{i=1}^{n_{\mathscr{D}}}\mathcal{W}_{i}\sum_{ \begin{subarray}{c}\mathbf{a}_{i}\in\mathcal{A}_{i}\\ \mathbf{a}_{i}\neq\mathbf{0}\end{subarray}}\beta_{\mathbf{a}_{i}}^{2}=\sum_{i=1}^{n_{ \mathscr{D}}}\mathcal{W}_{i}\sigma_{\mathscr{D}_{i}}^{2}. \tag{20}\]
where the local mean \(\mu_{\mathscr{D}_{i}}\) and variance \(\sigma_{\mathscr{D}_{i}}^{2}\) are obtained as described in Section 2.3.
Local Sobol' indices, \(S_{\mathscr{D}_{i}}\), of any order can be derived directly from localized PCEs and their first-order (main effect) estimates are given by
\[S_{\mathscr{D}_{i}}^{X_{j}}=\frac{1}{\sigma_{\mathscr{D}_{i}}^{2}}\sum_{\mathbf{a }_{i}\in\mathcal{A}_{i}^{X_{j}}}\beta_{\mathbf{a}_{i}}^{2}\quad\mathcal{A}_{i}^{X_ {j}}=\left\{\mathbf{a}_{i}\in\mathcal{A}_{i}:\alpha_{i}^{j}>0,\alpha_{i}^{k\neq j}= 0\right\}. \tag{21}\]
These local Sobol' indices are used in the DAL-PCE to determine the cut direction (see Section 3.6). Likewise, global Sobol' indices can be obtained easily from weighted summation of local contributions to partial variances normalized by \(\sigma_{Y}^{2}\) as follows:
\[S_{X_{j}}=\frac{\sum_{i=1}^{n_{\mathscr{D}}}\mathcal{W}_{i}\sum_{\mathbf{a}_{i}\in \mathcal{A}_{i}^{X_{j}}}\beta_{\mathbf{a}_{i}}^{2}}{\sigma_{Y}^{2}}. \tag{22}\]
Similarly, global LOO-CV, \(Q^{2}\), of a QoI can be approximated by the weighted summation of the local contributions as
\[Q^{2}=\sum_{i=1}^{n_{\mathscr{D}}}\mathcal{W}_{i}Q_{\mathscr{D}_{i}}^{2}, \tag{23}\]
where \(Q_{\mathscr{D}_{i}}^{2}\) are obtained from each local PCE using Eq. (9).
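For illustration, the weighted summations of Eqs. (19), (20) and (23) can be written compactly as in the following sketch, assuming the local quantities have already been extracted from each local PCE (all names are illustrative):

```python
import numpy as np

def global_estimates(weights, local_means, local_variances, local_q2):
    """Global mean, variance and LOO error assembled from the local PCEs,
    Eqs. (19), (20) and (23).

    weights : (n_D,) sub-domain volumes W_i (probability masses of the
              sub-domains for a uniform input on the unit hypercube)
    """
    w = np.asarray(weights)
    mu = np.sum(w * np.asarray(local_means))        # Eq. (19)
    var = np.sum(w * np.asarray(local_variances))   # Eq. (20), weighted sum of local variances
    q2 = np.sum(w * np.asarray(local_q2))           # Eq. (23)
    return mu, var, q2
```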
These estimates are used throughout the proposed DAL-PCE, as described in detail next.
### Numerical Algorithm
Based on the presented theoretical background, we now present the numerical algorithm for the domain adaptive localized PCE. As mentioned above, the whole process can be divided to two iterative tasks: (i) decomposition of the input random space and (ii) construction of localized PCEs. Both of these tasks are described in the following paragraphs with specific reference to the steps in Algorithm 1.
```
0: maximum local polynomial order \(p\), number of screening global candidates \(n_{c,g}\), number of local candidates \(n_{c,l}\), number of iterations \(n_{\text{iter}}\)
1: set the minimum number of realizations for local PCE construction \(n_{\text{sim}}\in\langle P,2P\rangle\)
2: generate a rich pool of \(n_{c,g}\) screening candidates
3: generate the initial ED (size \(n_{\text{sim}}\)) and construct the initial global PCE
4:for\(1\) to \(n_{\text{iter}}\)do
5: identify the sub-domain \(\mathcal{D}_{i}\) with the highest \(\Theta_{i}\) based on screening candidates
6:\(n_{i}\leftarrow\) number of ED samples existing in \(\mathcal{D}_{i}\)
7:if\(n_{i}\geq n_{\text{sim}}\)then
8: the identified sub-domain \(\mathcal{D}_{i}\) becomes a parent \(\mathcal{D}_{i}\)
9: identify the direction of the highest first-order Sobol' index \(S_{\mathcal{D}_{i}}\) of the parent \(\mathcal{D}_{i}\)
10: restrict coordinates of \(\mathcal{D}_{i}\rightarrow\mathcal{D}_{i}\) and create \(\mathcal{D}_{i}^{*}\)
11:\(n_{i}\leftarrow\) number of ED samples existing in \(\mathcal{D}_{i}\)
12:endif
13: generate \(n_{c,l}\) local candidates in \(\mathcal{D}_{i}\)
14:while\(n_{i}<n_{\text{sim}}\)do
15: extend size of local ED \(n_{i}\) using the local \(\Theta^{c}\) criterion
16:endwhile
17: reconstruct local PCEs in the \(\mathcal{D}_{i}\)
18:endfor
19: output: list of sub-domains and corresponding local PCEs
```
**Algorithm 1** DAL-PCE: Active Domain Decomposition and Construction of Localized PCEs
The first task identifies the important sub-domain \(\mathcal{D}_{i}\) that should be divided and over which low-order local PCE should be constructed. The sub-domain \(\mathcal{D}_{i}\) is specifically identified using the \(\Theta_{i}\) criterion from Eq. (17), which again incorporates three important characteristics for accurate surrogate modeling - the size of the sub-domain \(\mathcal{W}_{i}\), the accuracy of the existing local PCE measured by \(Q^{2}_{\mathcal{D}_{i}}\), and the original \(\Theta^{c}\) criterion measuring the variance contribution in \(\mathcal{D}_{i}\). While \(\mathcal{W}_{i}\) and \(Q^{2}_{\mathcal{D}_{i}}\) are computed for the whole sub-domain, \(\Theta^{c}\) is computed at specific realizations of input random vector. Therefore, it is necessary to cover the sub-domains by a sufficiently large number of screening candidates, such that the total global number of screening candidates is given by \(n_{c,g}\). Based on numerical experiments, we recommend \(n_{c,g}\geq 1000\,M\) to ensure that each sub-domain contains a sufficient number of screening candidates. Note that the screening candidates are used only to identify \(\mathcal{D}_{i}\) [_step 5_]. They are not used for the ED, and thus even high \(n_{c,g}\) does not bring any additional computational demand.
Once \(\mathcal{D}_{i}\) is identified, it is necessary to check whether there are enough samples to construct a PCE inside the sub-domain. We start with finding out how many points belong to the selected domain \(\mathcal{D}_{i}\) [_step 6_]. If the number of samples in the identified sub-domain, \(n_{i}\), is greater than (or equal to) \(n_{\text{sim}}\) [_step 7_], a local PCE already exists for \(\mathcal{D}_{i}\). The subdomain is then assigned as a parent \(\mathcal{D}_{i}\) for division [_step 8_] and the first-order Sobol' indices are estimated by Eq. (22) [_step 9_]. This identified parent \(\mathcal{D}_{i}\) is divided in the direction of the highest first-order Sobol' index \(S_{\mathcal{D}_{i}}^{X_{i}}\). The new restricted coordinates of refinement-child \(\mathcal{D}_{i}\) are identified and the inheriting-child \(\mathcal{D}_{i}^{*}\) is created [_step 10_]. Further, the number of ED samples \(n_{i}\) in the refinement-child \(\mathcal{D}_{i}\) is determined [_step 11_]. On the other hand, if the identified sub-domain \(\mathcal{D}_{i}\) does not contain enough samples (i.e. \(n_{i}<n_{\text{sim}}\)), the inherited PCE from the previous iteration is not sufficiently local (it was trained over a domain that has since been divided) and it is necessary to add new samples to \(\mathcal{D}_{i}\) before constructing a new local PCE.
The second task of the proposed algorithm is sequential sampling and adaptive PCE construction in sub-domain \(\mathcal{D}_{i}\). Recall that this domain may be either
1. a refinement-child that was just created by division but does not yet contain a sufficient number of points (\(n_{i}<n_{\text{sim}}\)), or
2. an inheriting-child that now does not contain at least \(n_{\text{sim}}\) ED samples.
Next, a set of local candidates is generated in region \(\mathcal{D}_{i}\) [_step 13_]. To ensure sufficient assessment of the coverage of the domain, the number of local candidates is empirically recommended as \(n_{c,l}\in\langle 3P,5P\rangle\) [1]. From these candidates, the standard \(\Theta^{c}\) criterion in Eq. (12) is used to iteratively select the best candidates until there are \(n_{\text{sim}}\) samples in \(\mathcal{D}_{i}\) [_steps 14-16_]. This sequential extension of the sample in \(\mathcal{D}_{i}\) is adaptive in the sense that the pairwise distances in Eq. (12) between candidates and existing ED points are updated after the addition of each new point. However, because \(n_{i}<n_{\text{sim}}\), the local variance densities are estimated from the previously existing PCE, which cannot be updated until a sufficient number of samples is available in \(\mathcal{D}_{i}\).
The last step of each iteration is to construct the local PCE using scaled Legendre polynomials as basis functions (see Eq. (18)) [_step 17_]. Any non-intrusive technique can be used to estimate the coefficients \(\boldsymbol{\beta}\); we use LARS and OLS for an adaptive construction of the local PCEs in this paper. At the end of the iteration, all sub-domains are re-numbered and a list of sub-domains with corresponding PCEs can be exported or the next iteration can be started.
### Adaptivity in PCE Construction and Domain Decomposition
Adaptivity is central to the proposed DAL-PCE. In the proposed algorithm, there are two types of adaptivity employed:
1. adaptivity in PCE construction (selection of the optimal set of basis functions), and
2. adaptivity in domain decomposition
Since the PCE can be constructed by any regression technique in each sub-domain, PCE adaptivity is incorporated through sparse solvers and best-model-selection algorithms, e.g. Least Angle Regression [32], orthogonal matching pursuit [33] or Bayesian compressive sensing [34]. Although sparse solvers are often used for PCE with high \(p\), this adaptivity is also important for reducing the number of basis functions (and thus the minimum number of ED samples) in high-dimensional examples or, in our case, for the very small EDs in each \(\mathcal{D}_{i}\) approximated by a low-\(p\) local PCE.
The second type of adaptivity is the proposed adaptivity in the domain decomposition. At any point in the iterative process, the existing ED samples can be used to construct local PCEs or a single global PCE. The DAL-PCE is not guaranteed to provide a better approximation than the global PCE. This can be measured via \(Q^{2}\), specifically by computing \(Q^{2}_{\text{local}}\) from Eq. (23) and \(Q^{2}_{\text{global}}\) from a single global PCE according to Eq. (9). If \(Q^{2}_{\text{local}}>Q^{2}_{\text{global}}\) at a given iteration, the domain decomposition is deemed to be poor and the whole decomposition process is _re-started_. That is, the complete geometrical decomposition is forgotten and all existing ED points are taken as the initial ED for a brand new run of the algorithm. This is illustrated in Fig. 3, which shows the decomposition (top) and the associated error (bottom): a) right before the restart at \(N_{\text{sim}}=181\), b) the new decomposition and the error drop right after the restart, and c) the final decomposition and error, which show a significant improvement over the global PCE. These histories show the error defined in Eq. (24). It is not necessary to check this criterion at every iteration, but it is suggested to check it periodically, every \(n_{r}\) steps, to ensure adequate local refinement.
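A sketch of this periodic restart check is given below; `fit_global_pce` is a hypothetical helper that fits a single global PCE on the accumulated ED and returns its LOO error according to Eq. (9), and the aggregated local error follows Eq. (23):

```python
def should_restart(weights, local_q2, ed_x, ed_y, fit_global_pce):
    """Decide whether the current domain decomposition should be discarded."""
    # aggregated error of the localized surrogate, Eq. (23)
    q2_local = sum(w * q for w, q in zip(weights, local_q2))
    # LOO error of a single global PCE trained on the same ED, Eq. (9)
    q2_global = fit_global_pce(ed_x, ed_y)
    # restart if the decomposition performs worse than one global expansion
    return q2_local > q2_global
```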
### Stopping Criteria
The proposed DAL-PCE algorithm can be fully automated by adding an adequate stopping criterion. A simple but practical stopping criterion is based on computational budget, i.e. once the total number of model evaluations \(N_{\text{sim}}\) or number of iterations \(n_{\text{iter}}\) have reached a critical level/budget. One may also use a stopping criterion based on decomposition pattern, e.g. the smallest or the largest volumes of any subdomain, to ensure a desired resolution. Valuable stopping criterion can be also obtained directly from \(Q^{2}\), corresponding to a target/threshold level of achieved accuracy. Regardless of the selected stopping criteria, it can easily be applied before _step_ 5 of the proposed algorithm (start of each iteration).
## 4 Numerical Experiments
The proposed DAL-PCE is demonstrated on four numerical examples of increasing complexity, which illustrate different aspects of the approach. The obtained results are compared (a) to the standard global PCE approach with adaptive maximum order \(p\in[5,25]\) and (b) to SSE [2], as a current state-of-the-art non-intrusive surrogate modeling technique based on domain decomposition. The PCE is constructed using the UQPy package [36] and the original implementation of SSE is used from the UQLab package [37]. To compare the methods, the relative mean squared errors \(\epsilon\) are calculated for all three approximations \(\tilde{f}\) on a validation set containing a large pool of \(10^{6}\) integration points generated by crude Monte Carlo according to:
\[\epsilon(\mathbf{X})\coloneqq\frac{\mathbb{E}\Big{[}\big{(}f(\mathbf{X})-\tilde{f}( \mathbf{X})\big{)}^{2}\Big{]}}{\mathbb{D}\Big{[}f(\mathbf{X})\Big{]}}, \tag{24}\]
where \(\mathbb{E}[]\) and \(\mathbb{D}[]\) are the mean value and variance operators, respectively.
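For reference, the validation error of Eq. (24) can be estimated by crude Monte Carlo as in the following sketch (a hypothetical helper; the original model and the surrogate are passed as callables defined on the unit hypercube):

```python
import numpy as np

def relative_mse(model, surrogate, M, n_mc=10**6, seed=0):
    """Relative mean squared error of Eq. (24), estimated by crude Monte Carlo."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_mc, M))                       # X ~ U[0, 1]^M
    y_true = np.apply_along_axis(model, 1, x)       # original model f(X)
    y_hat = np.apply_along_axis(surrogate, 1, x)    # surrogate prediction
    return np.mean((y_true - y_hat) ** 2) / np.var(y_true)
```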
To show representative results of the proposed DAL-PCE algorithm, the calculations were repeated 100 times, and the same settings of the algorithm were used for all examples: maximum local polynomial degree \(p=2\), number of global candidates \(n_{c,g}=1000\,M\), number of local candidates \(n_{c,l}=5P\), minimum number of samples for local PCE construction \(n_{\text{sim}}=1.5P\), minimum number of iterations before checking for restart \(n_{r}=20\), and \(\mathbf{\beta}\) obtained by the LARS and OLS algorithms. The minimum number of samples in sub-domains required to justify an expansion for SSE was set identically to DAL-PCE, and the polynomial order is adaptively selected in the range \(p\in[2,6]\). Since SSE is not a sequential approach, the presented results were obtained for 10 discrete sample sets of increasing size to compare the convergence of the methods. Note that all samples and candidates are generated by LHS for all compared approaches, though it was shown in [1] that, for variance-based sequential sampling, it is significantly better to use advanced techniques such as Coherence D-optimal sampling [41].
Figure 3: Illustration of domain decomposition restart. a) decomposition and error evolution prior to restart, b) rebuilt decomposition and error drop right after the restart, c) final decomposition and error showing that the restart unlocks a dramatic decrease in approximation error.
### One-dimensional Toy Example
The first example involves a simple 1D function [2] that is extremely difficult to approximate with PCE due to the third, highly nonlinear "exp" term:
\[f(X)=-X+0.1\sin(30X)+\exp(-(50(X-0.65))^{2}),\quad X\sim\mathcal{U}[0,1]. \tag{25}\]
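The benchmark of Eq. (25) can be coded directly, e.g.:

```python
import numpy as np

def f_1d(x):
    """1D benchmark of Eq. (25): smooth trend, oscillation and a sharp local peak."""
    x = np.asarray(x, dtype=float)
    return -x + 0.1 * np.sin(30 * x) + np.exp(-(50 * (x - 0.65)) ** 2)
```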
The poor performance of a single global PCE learned from 200 samples is depicted by the blue line in Fig. 4c where it is clear that a single global PCE is not able to accurately approximate the function even for a high number of samples and high maximum polynomial order \(p\in[5,25]\). This function was originally developed to demonstrate the efficiency of SSE based on domain decomposition and thus it is a natural choice for comparison of the proposed DAL-PCE and SSE.
Fig. 4a-b show a typical realization of the DAL-PCE where the algorithm sequentially decomposes the domain and adds additional samples to the ED. Specifically shown are the 4th and 11th iterations. The boundaries of sub-domains are represented by blue vertical lines and red dots show the positions of samples in the ED. Once the algorithm discovers the highly nonlinear region (the steep peak caused by exp), it progressively refines this region and adds more samples there as a result of the high variance density. Of course, these figures show only one realization of the algorithm and the decomposition is dependent on the initial ED. Therefore, it is necessary to repeat the algorithm many times with random initial ED to assess convergence.
Fig. 4d shows the convergence of the error \(\epsilon\) from 100 repeated trials. The single global PCE is unable to accurately approximate the original function even when using high \(p\), and thus \(\epsilon\) does not converge, as expected. Both methods based on domain decomposition (DAL-PCE and SSE) achieve high accuracy already for 200 samples. However, the DAL-PCE consistently has 1-2 orders of magnitude higher accuracy than SSE for a given number of samples. Moreover, the variance of \(\epsilon\) grows more slowly for DAL-PCE than for SSE; the fast growth of the SSE variance can also be seen in the original paper [2]. Finally, we again observe that convergence is continuous with DAL-PCE, whereas convergence of SSE can only be assessed at discrete sample sizes through a new analysis. All of these
Figure 4: (a), (b) The adapted domain and ED before (iteration 4) and after (iteration 11) exploration and discovery of the exponential part of the mathematical model. (c) Final surrogate models from global PCE and DAL-PCE. (d) Convergence plot comparing the mean square error for global PCE SSE, and DAL-PCE. The convergence plots for Global PCE and DAL-PCE show continuous mean value \(\pm\sigma\) intervals from 100 repeated trials, while those for SSE are plotted for several discrete ED sizes.
advantages of the DAL-PCE can be attributed to the active learning, which both explores the space and exploits the behavior of the function to decompose the domain and add samples. Although active learning might lead to lower accuracy (higher \(\epsilon\)) initially (for small \(n_{\text{sim}}=10\)-\(20\)), as it is dominated by exploration, it rapidly improves once it identifies important features and begins to favor exploitation.
### Two-dimensional Singularity
The second example involves a 2D function with mirrored quarter-circle arc line singularities [1]. The form of the function is given by:
\[f(\mathbf{X})=\frac{1}{|0.3-X_{1}^{2}-X_{2}^{2}|+\delta}-\frac{1}{|0.3-(1-X_{1})^{ 2}-(1-X_{2})^{2}|+\delta},\quad\mathbf{X}\sim\mathcal{U}[0,1]^{2}, \tag{26}\]
where the strength of the singularities is controlled by the parameter \(\delta\), which we set as \(\delta=0.1\). The singularities in this example represent a challenging task for a global PCE even with high order, due to the well-known Gibbs phenomenon [49]. It is thus beneficial to identify the location of the singularity, locally decompose the domain, and construct low-order local PCEs.
Fig. 5 illustrates the decomposition and DAL-PCE approximation at a given stage of the computation. Panel a) visualizes the true values of the function via a background color. The same coloring scheme is used in panel b) for the pointwise information available in the current ED (small circles) and for the function approximation via DAL-PCE by the background color. Panels b) and c) show also the final domain decomposition. The symmetry
Figure 5: Results for the 2-dimensional Singularity function: a) original mathematical model, b) approximation via DAL-PCE (background color), current domain division and the corresponding ED, c) local LOO-CV \(Q_{\mathcal{D}_{i}}^{2}\) and \(\Theta_{i}\) value for each sub-domain, d) convergence plots for DAL-PCE, Global PCE, and SSE showing the mean value and \(\pm\sigma\) interval. Convergence plots for SSE show the mean \(\pm\sigma\) at discrete sample sizes.
in the decomposition reflects the good convergence of the DAL-PCE, thanks to the adaptive decomposition described in the previous section. Plot c) shows the local \(Q_{\mathcal{D}_{i}}^{2}\) error in each individual sub-domain (a darker color corresponds to a higher local error). These local errors clearly show localization of the prediction error to very small areas near the singularities, which are continually being refined. The color of the small solid squares in the center of each sub-domain shows the \(\Theta_{i}\) value for that sub-domain.
Finally, the convergence plot in Fig. 5d) shows that both DAL-PCE and SSE outperform the global PCE, as expected. The SSE performs comparable to or slightly better than DAL-PCE for small \(N_{\text{sim}}\), but the DAL-PCE begins to outperform SSE as \(N_{\text{sim}}\) grows thanks to the active learning approach that targets samples in the vicinity of the singularities. Note that the error converges for both SSE and DAL-PCE as we approach 1000 samples and does not seem to substantially reduce after this. This is due to the fundamental limitation of trying to approximate this singularity, even locally, with low-order polynomials.
### \(M\)-dimensional Discontinuity
The third example investigates the role of dimensionality on the performance of the proposed DAL-PCE. The following discontinuous function is defined for an arbitrary number of input random variables \(M\)[26]:
\[f(\mathbf{X})=\begin{cases}\sin\left(X_{1}\pi\right)\sin\left(X_{2}\pi\right)& \text{if }x_{1}\leq 0.5\text{ and }x_{2}\leq 0.5\\ \sum_{i=3}^{M}X_{i}&\text{otherwise}\end{cases},\quad\mathbf{X}\sim\mathcal{U}[0,1 ]^{M}. \tag{27}\]
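The benchmark of Eq. (27) can be coded directly, e.g.:

```python
import numpy as np

def f_discontinuous(x):
    """M-dimensional benchmark of Eq. (27); x is a vector in [0, 1]^M with M >= 2
    (for M = 2 the 'otherwise' branch is an empty sum and returns 0)."""
    x = np.asarray(x, dtype=float)
    if x[0] <= 0.5 and x[1] <= 0.5:
        return np.sin(np.pi * x[0]) * np.sin(np.pi * x[1])
    return float(np.sum(x[2:]))
```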
This function has a discontinuity in the first two input random variables, which can be seen in Fig. 6a. A single global PCE cannot accurately approximate the function because of the discontinuity, although the function \(f(\mathbf{X})\)
Figure 6: Results for the 2-dimensional discontinuity function: a) original mathematical model, b) approximation via DAL-PCE and ED, c) local LOO-CV \(Q_{\mathcal{D}_{i}}^{2}\) and \(\Theta_{i}\) value for each sub-domain, d) convergence plots for DAL-PCE, Global PCE, and SSE showing the mean value and \(\pm\sigma\) interval. Convergence plots for SSE show the mean \(\pm\sigma\) at discrete sample sizes.
can be easily approximated by two separate PCEs in the two regions for which the definitions differ. But, this requires _a priori_ knowledge of the discontinuity location. Since the location of the discontinuity is assumed to be unknown, this function is a good example for domain adaptation using DAL-PCE.
The detailed results for a 2D version of this problem are depicted in Fig. 6 in an identical form as in the previous example. Note that the local \(Q_{i}^{2}\) errors in Fig. 6c show perfect accuracy in the part of the input random space where \(f(\textbf{X})=0\), and thus the associated sub-domains are not preferred for further decomposition. The convergence plot in Fig. 6d confirms that a single global PCE is not able to create an accurate approximation, and adding more points to the ED does not lead to significant improvements in the approximation. The mean values of the errors \(\epsilon\) associated with the proposed DAL-PCE approach are significantly lower (by 1-2 orders of magnitude) in comparison to SSE, similarly to the first example, though the convergence trend is similar for both methods. SSE, however, uses a random splitting routine. This can lead to very high variance of the results, since the accuracy is highly dependent on the pattern of the decomposed input random space. This clearly shows the advantage of an active learning approach.
The influence of dimensionality \(M\) on convergence of the DAL-PCE, SSE, and global PCE is studied in Fig. 7 for a) 3, b) 5, c) 6, and d) 8 input random variables. As the domain dimension increases, the linear part of the function \(f(\textbf{X})\) occupies an increasing proportion of the domain while the discontinuity remain low-dimensional. The proposed DAL-PCE greatly improves the convergence because it is able to identify an ideal decomposition and local samples to resolve the discontinuity. For low-dimensions (\(M=2,3\)), SSE error \(\epsilon\) shows a decreasing trend that is better than global PCE but has an extremely high variance. This is caused by a lack of control in sample placement. The domain decomposition in SSE is a product of sample location and without active learning to guide sample placement, SSE will sometimes produce a very good decomposition and sometimes a very poor decomposition. Meanwhile, the proposed DAL-PCE errors have comparably low variance for low-dimensions and consistently have accuracy comparable to, or better than, the best SSE realizations.
As the dimension, \(M\), increases the DAL-PCE is able to maintain a very high level of accuracy, while the accuracy degrades completely for the SSE such that it is comparable to the global PCE. The DAL-PCE is able to maintain its low error because the discontinuity remains low-dimensional and the active learning process is able to target this region for domain refinement and sampling. This means that the DAL-PCE remains largely independent of the problem dimension, and instead depends predominantly on the intrinsic dimension of the
Figure 7: Convergence plots for the \(M\)-dimensional function: a) 3-dimensional version, b) 5-dimensional version, c) 6-dimensional version, and d) 8-dimensional version. Convergence plots for the DAL-PCE and global PCE show the mean value \(\pm\sigma\) interval. Convergence plots for SSE also show the mean \(\pm\sigma\), but at discrete sample sizes.
discontinuous/nonlinear features of the model. The performance of SSE, on the other hand, degrades with dimension because its domain decomposition depends only on a set of _a priori_ specified points that are not selected in a way that is aware of the important features of the model. Consequently, as the dimension increases the algorithm becomes less likely to refine the domain appropriately around an embedded low-dimensional feature. We remark that this desirable scalable convergence trend of the DAL-PCE is not likely a universal property, as the trend may break down in problems where the intrinsic dimension of the discontinuity/nonlinearity is high or where the discontinuity occupies a very small proportion of the domain - in which case exploration of the space to find the important feature may take a very large number of samples.
In the present example, the discontinuity in the function given in Eq. (27) lies at \(x_{1}=0.5\) and \(x_{2}=0.5\), which corresponds to the exact location where the domain will be split for both SSE and during the early iterations of the DAL-PCE. One might argue that this presents an unreasonable advantage for the proposed algorithm. We therefore modified the function such that the discontinuity lies at \(x_{1}=0.61\) and \(x_{2}=0.61\). Fig. 8 shows the convergence of the DAL-PCE and SSE for this modified function with varying dimension \(M\). The errors \(\epsilon\) decrease more slowly, especially for dimensions \(M=3\) and \(M=5\). However, the proposed active learning still leads to superior results (especially for higher dimensions, as in the previous case). Note that there are visible spikes in the DAL-PCE convergence graph for the 3-dimensional example. Although the results were statistically processed, these spikes are caused by the restart adaptivity occurring at the same \(N_{\rm sim}\) in each replication. In this case, the optimal decomposition pattern is very complicated and therefore the algorithm activates the restart adaptivity frequently (after multiples of \(n_{r}\) steps), until it finds a suitable pattern to continue convergence. SSE in the 3- and 5-dimensional cases has a higher mean error and significantly lower variance in comparison to the previous example. This is caused by the fact that the modified discontinuity location no longer lies along the boundary of the domain decomposition. In the previous example, some SSE realizations achieved near-perfect accuracy because the domain was coincidentally divided along the discontinuity.
This phenomenon is investigated more closely in Fig. 9, which compares the number of outliers in both versions of the 3D example. In addition to the mean \(\pm\sigma\) seen previously, the figure also shows standard boxplots for SSE (median along with lower and upper quartiles) and the corresponding number of "extreme" realizations producing very high accuracy (top axis) for a) the original position of the discontinuity; and b) the discontinuity at \(x_{1}=0.61\)
Figure 8: Convergence plots for the modified \(M\)-dimensional function: a) 3-dimensional version, b) 5-dimensional version, c) 6-dimensional version, and d) 8-dimensional version. Convergence plots for the DAL-PCE and global PCE show the mean value \(\pm\sigma\) interval. Convergence plots for SSE also show the mean \(\pm\sigma\), but at discrete sample sizes.
and \(x_{2}=0.61\). As can be seen in panel a), there are many outliers with extremely low errors (\(\epsilon<10^{-7}\)), which effectively decreases the mean relative to the median while also significantly increasing the variance. In contrast, DAL-PCE has no outliers and leads to very consistent results. In panel b), there are no outliers for either SSE or DAL-PCE and the results are thus consistent, with low variance for both methods.
### Asymmetric shallow von Mises truss
In this section, we demonstrate the relevance of the proposed method for a representative engineering example exhibiting discontinuous response. Consider the shallow two-bar planar truss subjected to a vertical load at its top joint, as presented in [50] and illustrated in Fig. 10a.
The truss is formed by two prismatic bars made of a hard wood (density 800 kg/m\({}^{3}\), modulus of elasticity \(E=12\) GPa). There are two variables in the studied von Mises truss: (i) the vertical loading force \(F\), and (ii) a half sine-wave imperfection of the left bar having magnitude \(\delta\), see the sketch in Fig. 10a. The load is applied dynamically as a step function at time zero for an unlimited duration. The structure is modeled as illustrated in Fig. 10b. In particular, the mass of the bars is concentrated in 21 mass points, including the supports and the loading point. These mass points are connected via \(10+10\) translational springs representing the normal stiffness of the two bars. The pairs of axial members are connected via rotational springs having zero moment for a zero angle between adjacent bars. The only exceptions are the loading and support points, where there are no rotational springs attached (hinges). The damping is associated with the mass points via a linear viscous damping coefficient set to \(11\) N \(\cdot\) s/(kg \(\cdot\) m), approximating a relative damping of about 3%. The explicit dynamics solver FyDiK [51; 52] was used to solve the equations of equilibrium at the mass points. The numerical solution runs for up to two seconds, which is the time needed for almost complete stabilization of the solution (the kinetic energy drops below a negligible threshold).
Since the structure is very shallow, sudden application of the vertical force can cause snap-through buckling, wherein the loading point drops down between the supports and the members switch from a state of compression to tensile stresses in the final stable state. We specifically study the vertical coordinate \(y_{F}\) of the loading point after the dynamic response stabilizes to the final deformed shape. The force \(F\in(31.6,772.6)\) kN and the initial imperfection \(\delta\in(-0.4,0.4)\) m are treated as uniform random variables mapped to the unit square such that the model input \(\mathbf{X}\sim\mathcal{U}[0,1]^{2}\). Because of the potential snap-through buckling, the solution is discontinuous, as illustrated in Fig. 10c. On each side of the discontinuity, the solution \(y_{F}\) is smooth and slowly varying, with values near \(+1\) m and \(-1\) m, respectively. Note that the output is _not symmetric_ with respect to \(\delta=0\) because the dynamical response evolves differently for concave and convex initial displacements.
The sharp boundary between the buckled and unbuckled regions, shown in Fig. 11a, causes the global PCE to produce poor approximations that are vulnerable to the Gibbs phenomenon, similar to the example in subsection 4.2.
Figure 9: Convergence plots for DAL-PCE and SSE with additional boxplots for SSE showing the median, lower and upper quartiles and outliers for: a) the 3D example with discontinuity at \(x_{1}=0.5\) and \(x_{2}=0.5\), b) the 3D example with discontinuity at \(x_{1}=0.61\) and \(x_{2}=0.61\).
This is shown by the convergence plots in Fig. 11d comparing global PCE, DAL-PCE, and SSE. Clearly, the complexity of this example and the complicated shape of the discontinuity limit the accuracy of all the surrogate models. The proposed DAL-PCE achieves low accuracy for small sample sizes because the corresponding small number of sub-domains and low-order PCEs are unable to sufficiently approximate the boundary. Therefore, the global PCE and SSE (with a low number of embedding levels) are initially better. With an increasing number of samples, the proposed DAL-PCE approach leads to superior results because the active learning is able to resolve the discontinuity, as illustrated in Fig. 11b, which shows the domain decomposition and approximation after 2000 samples. Fig. 11c shows the corresponding LOO-CV errors for each subdomain, demonstrating that the errors are confined to small, localized regions near the boundary.
Figure 11: Results for the von Mises truss example: a) original mathematical model (numerical solution), b) approximation via DAL-PCE and ED, c) local LOO-CV \(Q_{\mathcal{D}_{i}}^{2}\) and \(\Theta_{i}\) value for each sub-domain, d) convergence plots for DAL-PCE, Global PCE, and SSE showing the mean value and \(\pm\sigma\) interval; convergence plots for SSE show the mean \(\pm\sigma\) at discrete sample sizes.
Figure 10: Asymmetric shallow von Mises truss. a) Initial geometry with two random variables \(F\) and \(\delta\); b) illustrative sketch of the discrete dynamical model and the meaning of output variable \(y_{F}\), c) illustration of the discontinuous response function of the two input variables.
## 5 Discussion & Future Work
The proposed DAL-PCE approach is a general methodology for the decomposition of the input random space and construction of localized PCEs using active learning. The proposed active learning is based on a novel \(\Theta\) criterion that optimally balances global _exploration_ with local _exploitation_ of the model. Although this paper presents one specific learning algorithm, the methodology is general and amenable to modifications to reflect the specific user's needs. The whole process can be divided into two tasks: A) decomposition of the input random space and B) construction of localized PCEs; and both can be easily modified as discussed further:
* The most important sub-domain \(\mathcal{D}_{i}\) is identified by the extended \(\Theta\) criterion according to Eq. (17), evaluated for a large number of global candidates. In this paper, we use standard LHS for candidate generation, but it may be beneficial to use different sampling methods that produce more uniform coverage of the whole input random space (see e.g. [53; 54; 45]). Although it is generally possible to generate a large number of candidates, it might be challenging to uniformly cover the entire input random space, especially in high dimensions. Thus, one can use any sampling technique suitable for a specific example, e.g. [55]. Once \(\mathcal{D}_{i}\) is identified via Eq. (17), it is either divided (provided it contains enough ED points) or the sample is extended inside it, to achieve a better PCE approximation. The simplest division occurs by splitting the volume into two parts of identical hypervolume in the direction of the highest first-order Sobol' index. However, the algorithm can accommodate various different approaches. For example, it is possible to divide \(\mathcal{D}_{i}\) into a higher number of sub-domains, not just two. Moreover, instead of splitting the domain into parts of equal hypervolume, other criteria can be used. For example, the cutting plane can be positioned so as to split the domain variance into equal parts.
* The user can choose to employ any existing method to construct the non-intrusive PCEs, including various sparse solvers or adaptive algorithms, which may be preferable for certain applications [12]. For example, we use LARS with OLS. However, it is generally more efficient to use active learning based on the \(\Theta\) criterion for PCE as shown in [1], which employs variance-based sequential sampling. This improvement can be integrated within the DAL-PCE to make the local PCE more efficient in each subdomain, thereby improving the overall convergence. This can be compounded by the use of advanced sampling techniques within the subdomains, such as Coherence D-optimal sampling [40; 41].
As seen from the previous paragraphs, the whole algorithm can be adapted for specific needs reflecting the characteristics of a given mathematical model, such as dimensionality, sparsity, non-linearity etc., by simply exchanging components of the proposed algorithm for suitable existing (or new) techniques. Note that even after the modification, the whole methodology based on \(\Theta\) criterion is still valid and can be used for uncertainty quantification and surrogate modelling as described in this paper. Moreover, in comparison to SSE, the DAL-PCE sequentially adds points and divides the sub-domains one-by-one based on information obtained from the previous iteration.
Another significant advantage of the DAL-PCE is that it provides estimates of the local errors, \(Q_{\mathcal{D}_{i}}^{2}\), associated with each sub-domain. Since localized PCEs are constructed independently, local errors estimate the local accuracy of the surrogate model directly, and can be assembled to provide global error measures. Naturally, local accuracy is very important information that can be used for further probabilistic analysis and active learning. Although this paper does not propose any specific approach for further processing of this information, it could serve as a main ingredient for various active learning algorithms. For example, it could be directly used to predict uncertainty in industrial applications and possibly extend the ED in a sub-domain of interest.
Finally, an important topic of further research is to study the behavior of the proposed criterion in higher dimensions. In particular, the geometrical terms \(l_{c,s}^{M}\) and \(\mathcal{W}_{i}\) likely cause poor convergence in high dimensions. Although some preliminary investigation of \(l_{c,s}^{M}\) in high dimensions was performed in the paper [1] proposing the original \(\Theta\) criterion, it is still necessary to perform an extensive study of its behavior, as well as to investigate the influence of \(\mathcal{W}_{i}\), which may need to be reformulated for high dimensions.
## 6 Conclusion
The paper presented a novel approach, the domain adaptive localized PCE (DAL-PCE), for the adaptive sequential construction of localized PCEs based on active learning and decomposition of the input random space. It combines adaptive sequential sampling, based on the recently proposed \(\Theta\) criterion that maintains the balance between exploration of the input random space and exploitation of the current characteristics of the PCE, with an adaptive sequential decomposition of the input random space creating sub-domains approximated by local surrogate models. The methodology offers a general technique that can be easily adapted or modified for specific functions, extending its applicability. The performance of the proposed methodology was validated on several numerical examples of increasing complexity investigating different aspects of the algorithm and leading to superior results in comparison to a single global PCE and the recently proposed SSE.
## Acknowledgments
The first author acknowledges financial support provided by the Czech Science Foundation under project number 22-00774S. Additionally, the major part of this research was conducted during the research stay of the first author at Johns Hopkins University, supported by the project International Mobility of Researchers of Brno University of Technology, Czechia, under project No. EF18_053/0016962.
2306.00188 | Multi-environment lifelong deep reinforcement learning for medical imaging | Guangyao Zheng, Shuhao Lai, Vladimir Braverman, Michael A. Jacobs, Vishwa S. Parekh | 2023-05-31T21:06:42Z | http://arxiv.org/abs/2306.00188v1

# Multi-environment lifelong deep reinforcement learning for medical imaging
###### Abstract
Deep reinforcement learning(DRL) is increasingly being explored in medical imaging. However, the environments for medical imaging tasks are constantly evolving in terms of imaging orientations, imaging sequences, and pathologies. To that end, we developed a Lifelong DRL framework, SERIL to continually learn new tasks in changing imaging environments without catastrophic forgetting. SERIL was developed using selective experience replay based lifelong learning technique for the localization of five anatomical landmarks in brain MRI on a sequence of twenty-four different imaging environments. The performance of SERIL, when compared to two baseline setups: MERT(multi-environment-best-case) and SERT(single-environment-worst-case) demonstrated excellent performance with an average distance of \(9.90\pm 7.35\) pixels from the desired landmark across all 120 tasks, compared to \(10.29\pm 9.07\) for MERT and \(36.37\pm 22.41\) for SERT(\(p<0.05\)), demonstrating the excellent potential for continuously learning multiple tasks across dynamically changing imaging environments.
## 1 Main
The field of radiology is rapidly adapting artificial intelligence and machine learning techniques for clinical decision support. Deep reinforcement learning (DRL) is a particularly interesting subarea of machine learning methods, as they learn by experiencing and exploring the environment, identical to the human learning process. Diverse fields, including radiology, have benefited significantly from the use of DRL models [1, 2, 3, 4]. The unique ability of DRL to learn from exploration makes it capable of being used in the Decision Support Systems in radiology. DRL can train to identify and map pathological, anatomical, and structural relationships between distinct radiological images. Researchers are exploring new methods for using DRL in anatomical landmark localization, image segmentation, registration, treatment planning, and assessment in radiology, holding promise for improving radiological diagnosis and treatment [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15].
Reinforcement learning consists of one or more agents exploring and learning about their environments through a finite or infinite set of actions interacting with the environment. The goal of their exploration is to maximize certain reward functions in which the environments provide feedback after agents' every move [16]. To achieve this goal, DRL builds a policy model, which informs an agent of the action to take at a certain state. DRL leverages the strong learning capabilities of deep learning algorithms, for example, Convolutional Neural Networks (CNN) to learn a representation of the complex environment and simulate the best policy. In the context of medical imaging, the environment will be the 2D images or 3D volumes. However, since medical images are diverse in modality (PET, MRI, X-ray), pathology (benign or malignant tumors), or imaging orientation (axial, coronal, sagittal), the same task may be encountered in very different environments, in terms of difficulty, location, and environment structure. Training multiple DRLs across different anatomical regions and radiological applications would not only increase the space and time complexity of the application but would also be difficult to translate into a clinical workflow as this would result in hundreds of models due to the diversity in body regions and diseases.
We hypothesize that a single DRL model trained on a diverse set of environments would have an equivalent performance to multiple single-environment models trained on each individual environment, therefore resulting in a practical and computationally efficient solution. However, the field of medical imaging is constantly evolving, wherein a new modality or pathology might present itself at a future time point. Therefore, a DRL model that is trained on a predefined set of tasks and environments may not work well on newer unseen tasks and environments. The model can be fine-tuned to work in the newer environment, but that would potentially result in catastrophic forgetting [17], meaning the model fails in the original environment it was trained in, as illustrated in Figure 1. Lifelong learning refers to models that are able to overcome catastrophic forgetting and continuously learn new tasks. Therefore, it is important to integrate lifelong learning capabilities into the existing DRL framework for medical imaging to continually learn different tasks in newer imaging environments without forgetting the old environments. To that end, we developed a selective experience replay based lifelong reinforcement learning framework (SERIL) to train a single model in a system of continuously evolving multiple tasks and multiple environments. Specifically, the SERIL framework uses selective experience replay, introduced by Isele et al. [18], to retain the important experience
replay buffers, which allows agents to perform lifelong learning.
We trained and evaluated SERIL for the task of anatomical localization of five distinct landmarks in the brain across twenty-four different imaging environments from the 2017 BRATS dataset, consisting of a combination of different MRI sequences, diagnostic pathologies, and imaging orientations. The performance of the SERIL model was compared to two baseline setups: multi-environment (MERT) and single environment (SERT). The MERT setup represents the all-knowing best-case model that has access to the complete set of all twenty-four environments and five landmarks. In contrast, the SERT setup corresponds to the collection of multiple SERT models, each optimized on a single environment.
## 2 Results
We trained SERIL across twenty-four distinct environments and five distinct landmarks (top left ventricle, top right ventricle, bottom left ventricle, bottom right ventricle, and center ventricle). For comparison, we trained twenty-four single-environment multi-agent models (SERT), one for each environment, and a single multi-environment (MERT) model. The MERT, SERT, and SERIL models were compared for their performance and generalizability across different environments. Figure 2 illustrates the performance of the MERT and SERIL models across 12 different environments (four different imaging sequences: T1, T2, FLAIR, and T1CE, and three different imaging orientations:
Figure 1: Illustration of catastrophic forgetting in dynamically evolving medical imaging environments. (A) Baseline deep reinforcement learning model trained for ventricle localization in the brain on an environment consisting of T1-weighted pre- and post-contrast enhanced images in Sagittal orientation. (B) The trained model encounters a new environment or new dataset consisting of T2-weighted and FLAIR MRI in the Coronal orientation. (C) The baseline model fails in the new environment due to lack of similar data during training. (D) The baseline model was fine-tuned on the new dataset. (E) The fine-tuning results in catastrophic forgetting where the fine-tuned model no longer works in the original environment.
Figure 2: Illustration of the 120 task-environment pairs in our dataset. The true landmark location is annotated by the red bounding box, and the predicted landmark locations of the MERT (left) and SERIL (right) models on each of these environment pairs are annotated by the yellow bounding box.
axial, coronal, and sagittal).
The SERIL model demonstrated excellent generalization performance with an average distance of \(9.90\pm 7.35\) pixels from the desired landmark across all 120 tasks, compared to \(10.29\pm 9.07\) for the MERT model. In contrast, the SERT models demonstrated poor generalizability across all 24 environments (120 task-environment pairs) with an average Euclidean distance of \(36.37\pm 22.41\) pixels from the desired landmark. The comparison of the overall performance of the MERT and SERIL models to each of the twenty-four SERT models is illustrated in Figure 3.
We compared the SERIL model to the best-performing SERT model for each environment across all task-environment pairs. Figure 4 illustrates the overall performance comparison between the twenty-four best-performing SERT models against a single MERT model and a single SERIL model for each landmark localization task, and we see that there is no significant performance difference (except for one task), meaning that SERIL is able to learn each task without forgetting previous tasks. These results are detailed in Table 1.
## 3 Discussion
Landmark localization using deep reinforcement learning in radiology has been primarily explored in the context of single image sequences/single modalities without integration of lifelong learning capabilities. Previous works such as DeepNavNet[19] or Deep Learning-Based Regression and Classification[20] have shown great results in single environment and single task landmark localization.
Figure 3: Average Euclidean distance between the predictions of SERT models(Agent \(0\sim 23\)) and MERT(Agent X in red) and SERIL(Agent M in yellow) compared to target landmark locations. The MERT and SERIL outperform all single-environment models in terms of the average distance from the target landmark.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline & Task 1 (Top left ventricle) & Task 2 (Top right ventricle) & Task 3 (bottom left ventricle) & Task 4 (bottom right ventricle) & Task 5 (center) \\ \hline
**Aggregate of best SERT model for each environment (total = 24 models)** & 8.77\(\pm\)6.49 & 9.08\(\pm\)9.11 & 9.87\(\pm\)7.63 & 9.56\(\pm\)6.27 & 8.33\(\pm\)6.48 \\ \hline
**MERT model** & 8.34\(\pm\)7.26 & 9.59\(\pm\)7.82 & 11.82\(\pm\)9.89 & 12.36\(\pm\)12.46 & 9.32\(\pm\)6.68 \\ \hline
**SERIL model** & 8.15\(\pm\)5.42 & 10.31\(\pm\)8.71 & 11.13\(\pm\)8.28 & 11.04\(\pm\)7.23 & 8.88\(\pm\)6.65 \\ \hline
**t-test (SERIL, best SERT)** & 0.26 & 0.14 & 0.09 & 0.02 & 0.35 \\
**t-test (SERIL, MERT)** & 0.67 & 0.14 & 0.22 & 0.14 & 0.33 \\ \hline \end{tabular}
\end{table}
Table 1: Comparative evaluation of SERT, MERT, and SERIL performance using the average Euclidean distance error on five tasks: localizing the top left ventricle, top right ventricle, bottom left ventricle, bottom right ventricle, and center ventricle. The bottom two rows report p-values from pairwise t-tests comparing SERIL against the best SERT models and against MERT.
Figure 4: Average Euclidean distance between the predictions of SERT models (Agent 0 \(\sim\) 23 in blue), MERT (in yellow), and SERIL (in red), compared to target landmark locations. There is no significant difference between the three groups, except task 4, where the 24 SERT models outperform the single MERT and SERIL models in terms of the average distance from the target landmark.
Similarly, the End-to-End Coordinate Regression Model[21] has also shown great results in localizing anatomical landmarks in a 3D medical imaging setup. Although these frameworks can provide excellent performance on their respective tasks, they are limited to the specific image dataset that they were trained and tested on. For example, Deep Learning-Based Regression and Classification[20] trained eight different models to localize eight landmarks for eight different tasks. In [22], the authors trained a single model for multiple tasks across different imaging environments, but it did not have the lifelong learning capability of dynamically integrating newer environments over time.
The selective experience replay based multi-task/multi-agent deep reinforcement learning model, termed SERIL, developed in this work has shown outstanding performance in generalizing across various image environments. Moreover, the performance of SERIL was equivalent to that of the best-performing optimized single-environment models (SERT) trained specifically for each image environment.
These frameworks would be very beneficial for translation of medical imaging AI systems, with important practical implications. For example, multiple tasks with multiple environments may need to be trained in a clinical setup. Normally, the models would need to be trained one by one, and the correct model would then need to be selected for computation when an image and a task are identified. Furthermore, the data in a clinical setup may not be aggregated at once. Usually, a patient comes in and their images are acquired. Rather than having to wait for all images to be collected and then train, or retraining the entire model every time a new patient's data is entered, the SERIL framework allows the model to update every time new images are acquired, making use of the data faster and without the huge computational repetition that retraining requires.
There are certain limitations to the SERIL framework. For example, it requires higher computational complexity to learn multiple tasks in multiple environments at the same time. In the future, we plan to optimize the hyperparameters and the deep neural network to achieve state-of-the-art performance with fewer epochs and iterations and lower hardware GPU requirements. In conclusion, the SERIL framework demonstrated excellent potential for continuously learning multiple tasks across dynamically changing imaging environments.
## 4 Materials and Methods
### Deep reinforcement learning
In this study, we utilized a deep Q-network (DQN) algorithm to create a multi-agent deep learning framework, which is depicted in Figure 5. The multi-agent DQN model used in this study was modified from previously published works such as [1, 9, 12, 22]. The DQN is composed of a central convolutional block with four 3D convolutional layers followed by N fully connected blocks, each comprising 3 layers, as shown in Figure 5 (A). The fully connected blocks represent the task-specific blocks for localizing different landmarks. The number of fully connected blocks would dynamically increase with the number of landmarks in the framework. The DRL setup for the multi-agent DRL used in this work is shown in Figure 5 (B). There are three crucial components for DRL: the state, the action, and the reward. In the DRL environment, which is the medical images,
the agent is represented as a 3D bounding box in the environment. The state is a snapshot of the current location or a sequence of locations of the agent. The actions that allow the agent to interact with the environment are moving in the positive or negative direction along the x, y, and z-axis. Each action will result in the transition from one state to another state. The reward is measured by the difference in distance to the target location before and after a transaction. The DRL agent interacts with the environment according to an \(\epsilon\)-greedy policy, where an action is taken uniformly at random with probability \(\epsilon\) at each step. Otherwise, the action with the highest reward is chosen. An experience replay buffer (ERB) is produced at the end of each training session, containing a collection of state-reward-action-resulting state tuples, denoted as \([s,a,r,s^{\prime}]\), which are generated during the training session using its interaction with the environment across multiple episodes. The DQN algorithm is not only capable of training on the medical images, but also the ERBs as well.
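The interaction loop described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the state is reduced to the bounding-box centre (whereas the actual state is the image patch inside the box), the Q-values are placeholders that would come from the DQN, and all names are illustrative.

```python
import random
import numpy as np

# Six actions: one-voxel moves in the positive/negative x, y, z directions.
ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(position, action, target):
    """Move the bounding-box centre by one voxel; reward the change in
    distance to the target landmark (positive when the agent gets closer)."""
    new_position = tuple(p + a for p, a in zip(position, action))
    reward = (np.linalg.norm(np.subtract(position, target))
              - np.linalg.norm(np.subtract(new_position, target)))
    return new_position, reward

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    return int(np.argmax(q_values))

# One interaction, stored in the experience replay buffer as a (s, a, r, s') tuple.
state, target = (40, 60, 20), (45, 55, 22)
a = epsilon_greedy(np.zeros(len(ACTIONS)))      # placeholder Q-values
next_state, r = step(state, ACTIONS[a], target)
erb = [(state, a, r, next_state)]
```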
#### 4.1.1 Multi-environment multi-agent deep reinforcement learning model (MERT)
A DRL agent's environment corresponds to the 3D imaging volume that the agent operates in and is characterized by the patient's pathology and image acquisition parameters. As a result, there could be potentially many imaging environments that the agent may encounter during deployment, as shown in Figure 5. Therefore, we integrated different imaging environments available during training to train a single multi-environment multi-agent deep reinforcement learning model (MERT). However, the multiplicity of the large set of training environments may potentially result in sub-optimal performance for the model across a certain subset of environments.
#### 4.1.2 Selective experience replay based multi-task/multi-agent deep reinforcement learning model (SERIL)
To perform lifelong learning, we implemented a selective experience replay buffer to collect a trajectory of experience samples across the SERIL model's training history. The SERIL model attempts to learn a generalized representation of its current and previous tasks by sampling a batch of experience from both its current task's experience replay buffer (ERB) as well as from its history of previous tasks' experience replays during training. To compare the performance, we trained single-environment multi-agent deep reinforcement learning models (SERT) on each of the environments.
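A minimal sketch of the selective replay idea described above, mixing the current task's ERB with experience retained from earlier environments. The 50/50 split, the buffer layout, and the function names are illustrative assumptions rather than the exact SERIL sampling scheme.

```python
import random

def sample_mixed_batch(current_erb, past_erbs, batch_size=48, past_fraction=0.5):
    """Draw a training batch that mixes current-task experience with experience
    retained from previously seen environments (a sketch of selective replay)."""
    n_past = int(batch_size * past_fraction) if past_erbs else 0
    n_current = batch_size - n_past
    batch = random.sample(current_erb, min(n_current, len(current_erb)))
    if n_past:
        pooled = [t for erb in past_erbs for t in erb]   # pool retained experience
        batch += random.sample(pooled, min(n_past, len(pooled)))
    random.shuffle(batch)
    return batch
```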
### Experimental Setup
#### 4.2.1 Clinical data
To assess the performance of the MERT and SERIL and SERT models, we utilized the brain tumor segmentation (BRATS) dataset which includes MRI images in the Axial orientation of 285 patients with various imaging sequences: longitudinal relaxation time (T1) pre-contrast, T1 post-contrast, transverse relaxation time (T2), and Fluid Attenuated Inversion Recovery (FLAIR). We randomly sampled 100 patients out of the 285 total patients for this experiment. Patients also have different pathologies. Out of the 100 patients, 60 patients have high-grade glioma (HGG) and 40 patients have low-grade glioma (LGG). We split the dataset 80:20, resulting in \(48\ \text{HGG}+32\ \text{LGG}\)
Figure 5: Illustration of the multi-agent deep reinforcement learning framework. (a) The deep Q-network (DQN) architecture (b) A schematic of the lifelong deep reinforcement learning setup for training multi-agent deep reinforcement learning models. ERB=Experience Replay Buffer
patients for training and \(12\ \text{HGG}+8\ \text{LGG}\) patients for testing. We also artificially generated the images in the Coronal and Sagittal orientations from the original Axial orientation. Overall, we were able to compile a dataset that consists of twenty-four unique image environments: \(4\text{ sequences}\times 2\text{ pathologies}\times 3\text{ orientations}=24\). We used five landmarks (top left ventricle, top right ventricle, bottom left ventricle, bottom right ventricle, and center ventricle) as localization tasks. As a result, we have 120 different task-environment pairs, as illustrated in Figure 2.
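The bookkeeping of environments and task-environment pairs follows directly from the Cartesian product described above; a short sketch (the factor names are written out only for illustration):

```python
from itertools import product

sequences    = ["T1", "T1CE", "T2", "FLAIR"]
pathologies  = ["HGG", "LGG"]
orientations = ["Axial", "Coronal", "Sagittal"]
landmarks    = ["top left ventricle", "top right ventricle",
                "bottom left ventricle", "bottom right ventricle",
                "center ventricle"]

environments = list(product(sequences, pathologies, orientations))
pairs = list(product(environments, landmarks))
print(len(environments), len(pairs))   # 24 environments, 120 task-environment pairs
```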
#### 4.2.2 Training protocol
The MERT model was trained for the localization of all five anatomical landmarks across all twenty-four imaging environments. The MERT model was trained for twenty epochs with a batch size of forty-eight, determined empirically. The agent's state was represented as a bounding box of size \(45\text{x}45\text{x}11\) with a frame history length of four. The SERT models were trained for the localization of all five landmarks in each imaging environment, resulting in a total of 24 SERT models. The SERT models were trained for four epochs with a batch size of forty-eight with the same representation for the agent's state as the MERT model. The SERIL model was iteratively trained for the localization of all five anatomical landmarks in one imaging environment at a time. Each iteration was trained for four epochs with a batch size of forty-eight with the same representation for the agent's state as the MERT model.
All the models were trained and evaluated on NVIDIA DGX-1.
#### 4.2.3 Performance Evaluation
The performance metric was set as the terminal Euclidean distance between the agent's prediction and the target landmark. A prediction is better if it is closer to the target landmark in terms of Euclidean distance. Thus we determined empirically that the prediction for a task is adequate if the average Euclidean distance of the prediction is less than 15 pixels away from the target landmark. We also performed pairwise t-tests to compare the performance of the MERT model with the SERT models. The p-value for statistical significance was set to \(p\leq 0.05\).
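A minimal sketch of this evaluation protocol: Euclidean error per prediction, the empirical 15-pixel adequacy threshold, and a t-test between two models' error arrays. The use of a paired (related-samples) t-test and the SciPy call are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

def euclidean_errors(preds, targets):
    """Per-sample Euclidean distance between predicted and target landmarks."""
    return np.linalg.norm(np.asarray(preds) - np.asarray(targets), axis=-1)

def evaluate(preds, targets, threshold=15.0):
    """Mean/std error and the empirical adequacy check (mean error < 15 pixels)."""
    err = euclidean_errors(preds, targets)
    return err.mean(), err.std(), bool(err.mean() < threshold)

def compare(errors_a, errors_b, alpha=0.05):
    """Paired t-test between two models evaluated on the same task-environment pairs."""
    t, p = stats.ttest_rel(errors_a, errors_b)
    return p, p <= alpha
```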
|
2309.08351 | Headless Language Models: Learning without Predicting with Contrastive
Weight Tying | Self-supervised pre-training of language models usually consists in
predicting probability distributions over extensive token vocabularies. In this
study, we propose an innovative method that shifts away from probability
prediction and instead focuses on reconstructing input embeddings in a
contrastive fashion via Contrastive Weight Tying (CWT). We apply this approach
to pretrain Headless Language Models in both monolingual and multilingual
contexts. Our method offers practical advantages, substantially reducing
training computational requirements by up to 20 times, while simultaneously
enhancing downstream performance and data efficiency. We observe a significant
+1.6 GLUE score increase and a notable +2.7 LAMBADA accuracy improvement
compared to classical LMs within similar compute budgets. | Nathan Godey, Éric de la Clergerie, Benoît Sagot | 2023-09-15T12:20:00Z | http://arxiv.org/abs/2309.08351v1 | # Headless Language Models: Learning without Predicting
###### Abstract
Self-supervised pre-training of language models usually consists in predicting probability distributions over extensive token vocabularies. In this study, we propose an innovative method that shifts away from probability prediction and instead focuses on reconstructing input embeddings in a contrastive fashion via _Contrastive Weight Tying_ (CWT). We apply this approach to pretrain Headless Language Models in both monolingual and multilingual contexts. Our method offers practical advantages, substantially reducing training computational requirements by up to 20 times, while simultaneously enhancing downstream performance and data efficiency. We observe a significant +1.6 GLUE score increase and a notable +2.7 LAMBADA accuracy improvement compared to classical LMs within similar compute budgets.
## 1 Introduction
Natural Language Processing (NLP) has seen tremendous progress in recent years thanks to the development of large-scale neural language models. These models have been shown to be effective in a wide range of NLP tasks such as text classification, question answering, and machine translation, either in fine-tuning, few-shot and zero-shot settings. These approaches usually involve a self-supervised pre-training step, based on tasks requiring predictions of contextual probability distributions over a large vocabulary of tokens. This method allows the model to learn from large amounts of unlabeled data, which is much easier to obtain than labeled data.
However, this approach has some limitations such as the need for a language modeling projection head which requires additional memory, slows down training and impedes scaling up to large token vocabularies. In this paper, we propose a novel approach called Headless Language Modeling, which removes the need to predict probability distributions and instead focuses on leveraging contrastive learning to reconstruct sequences of input embeddings. Instead of adding a projection head towards a high-dimensional vocabulary space in order to make a prediction about a given token, we teach those models to contrastively output static embeddings corresponding to this token. The static embeddings we use for this are the model's own input embeddings. Due to its resemblance with the well-established weight-tying trick Press and Wolf (2017); He et al. (2023), we call this pre-training technique _Contrastive Weight Tying_ (CWT).
We find that our approach outperforms usual language modeling counterparts in several aspects and by substantial margins. First, it drastically speeds up training by freeing up GPU memory and avoiding the costly language modeling projection, thus allowing up to 2\(\times\) acceleration of the training throughput, and up to 20\(\times\) less compute requirements to achieve similar performance. Moreover, given the same amount of training tokens, headless language models (HLMs) significantly outperform their classical counterparts on downstream tasks, as shown by a 2.7 gain in LAMBADA accuracy for our headless generative model. Finally, given similar compute budgets, HLMs bring substantial
Figure 1: Masked Headless Language Modeling (HLM) using Contrastive Weight Tying. The CWT objective aims to contrastively predict masked input representations using in-batch negative examples.
gains for NLU tasks, with our BERT reproduction scoring 1.6 points above its classical counterpart on the GLUE benchmark. We also show that headless models can benefit from larger token vocabularies at a much more reasonable cost than classical models.
In terms of implementation, our approach can be used as a drop-in replacement in usual pretraining codebases, as it only requires a change in the loss computation that can be applied to any kind of language model.
Overall, we make several contributions in this article:
* We introduce a pretraining objective that replaces cross-entropy, thus removing the need to project on the vocabulary high-dimensional space and instead learning to contrastively predict latent representations of tokens;
* Using this technique, we pretrain encoder models in English and multilingual settings, and decoder models in English;
* We show the various benefits of headless training, in terms of data-efficiency, compute-efficiency, and performance;
* We explore the effects of some pretraining hyperparameters, such as micro-batch size and vocabulary size, on downstream performance.
## 2 Related Work
**Efficient pre-training.** With the dawn of pre-trained language models, such as BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), GPT-2 Radford et al. (2019) or T5 Raffel et al. (2020), improving training efficiency has become an important stake in NLP.
Subsequent works have focused on changing the training objectives to improve performance. ELECTRA Clark et al. (2020) uses Replaced Token Detection as the unsupervised training task, and substantially improves both data-efficiency and compute-efficiency and downstream performance. Their work has also been extended using energy-based models Clark et al. (2020). Building upon this work, the DeBERTa models He et al. (2020) further improve over ELECTRA by disentangling weight sharing.
**Contrastive learning.** The Contrastive Predictive Coding loss van den Oord et al. (2019) initiated the use of pretraining approaches based on a contrastive learning objective, an idea that has obtained success in many modalities over the years Sermanet et al. (2018); Schneider et al. (2019); Baevski et al. (2020).
In NLP, contrastive learning has proven efficient in the training of sentence-level models Gao et al. (2021); Yan et al. (2021); Klein and Nabi (2023). Token-level approaches rely on contrastive auxiliary objectives that are added to the usual cross-entropy loss. SimCTG Su et al. (2022) introduces a token-level contrastive objective using in-batch output representations as negative samples, and adds this objective to a sentence-level contrastive loss and a regular causal LM loss. TaCL Su et al. (2022) relies on a similar technique for encoder models, where a teacher model is used to produce negative samples. ContraCLM Jain et al. (2023) uses an auxiliary contrastive loss for code generation.
**Tokenization and frequency.** The importance of tokenization for language models has been discussed by several works Rust et al. (2021); Zouhar et al. (2023). As discussed in Zouhar et al. (2023), tokenization choices impact token probability distributions both at contextual and general scales. It has been shown that skewed token distributions can impact the quality of representations Gao et al. (2019); Zhou et al. (2021); Puccetti et al. (2022); Yu et al. (2022). Removing the language modeling head could mitigate these issues.
In the case of multilingual models, Liang et al. (2023) have shown that increasing the vocabulary size leads to better performance, at the cost of added time and memory complexity.
## 3 Method
### Classical framework
We consider a batch \(X=(x_{i,j})_{i\in[1,N],j\in[1,L]}\) of \(N\) token sequences of length \(L\). We also produce a slightly altered version of these sequences \(\tilde{X}=(\tilde{x}_{i,j})_{i\in[1,N],j\in[1,\tilde{L}]}\), optionally using masking or random replacement for instance, as some pretraining objectives require. We introduce an embedding matrix \(e_{\theta}\in\mathbb{R}^{V\times D}\) where \(V\) is the token vocabulary size and \(D\) is the hidden dimension, and a sequence-to-sequence model \(T_{\theta}:\mathbb{R}^{N\times L\times D}\rightarrow\mathbb{R}^{N\times L \times D}\) both based on a set of parameters \(\theta\in\mathbb{R}^{P}\).
A classical language modeling approach consists
in selecting a subset of tokens \(X_{\mathcal{S}}=(x_{i,j})_{i,j\in\mathcal{S}}\), and then estimating a probability distribution over the token vocabulary for these tokens from the \((\tilde{x}_{i,j})\) sequences, using \(e_{\theta}\) and \(T_{\theta}\). Learning occurs as \(X_{\mathcal{S}}\) is partially altered in \((\tilde{x}_{i,j})\) (e.g. in Masked Language Modeling) or internally in \(T_{\theta}\) (e.g. decoder models), and contextual information is essential for \(e_{\theta}\) and \(T_{\theta}\) to accurately estimate the tokens in \(X_{\mathcal{S}}\).
A trick that has been used in many such approaches relies on using \(e_{\theta}\)'s transpose (\(e_{\theta}^{T}\)) as a projection from the output space of \(T_{\theta}\) to \(\mathbb{R}^{V}\). This approach, called weight tying, can be written for a given sequence at index \(i\in[1,N]\) as:
\[\hat{p}_{i,j}=softmax(e_{\theta}^{T}(T_{\theta}(e_{\theta}(\tilde{x}_{i}))_{j}))\]
where \(\hat{p}_{i,j}\) is the estimated distribution for the \(j\)-th word of the sequence. Weight tying has been shown to improve performance while reducing the number of parameters (Clark et al., 2020). Cross-entropy loss is then used as an objective function:
\[\mathcal{L}(\theta,X,\tilde{X})=-\frac{1}{|\mathcal{S}|}\sum_{i,j\in\mathcal{ S}}\mathbf{1}_{x_{i,j}}\cdot\log(\hat{p}_{i,j})\]
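As a reference point for the next subsection, the classical weight-tied objective above can be written in a few lines of PyTorch. Shapes and names are illustrative: only the \(|\mathcal{S}|\) selected positions are assumed to be passed in.

```python
import torch
import torch.nn.functional as F

def weight_tied_lm_loss(outputs, embedding_weight, target_ids):
    """outputs:          (K, D) transformer outputs at the |S| selected positions
       embedding_weight: (V, D) input embedding matrix e_theta
       target_ids:       (K,)   indices of the original tokens x_{i,j}"""
    logits = outputs @ embedding_weight.t()      # (K, V): the e_theta^T projection
    return F.cross_entropy(logits, target_ids)   # softmax + negative log-likelihood
```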
### Headless modeling
While weight tying does not use additional parameters, the projection \(e_{\theta}^{T}\) actually has a non-negligible computational cost, which increases as the token vocabulary grows. Like Gao et al. (2019), we advocate that the weight tying approach tends to maximize the scalar product between the input embedding of the original token \(e_{\theta}(x_{i,j})\) and the output representation at the same position \(o_{i,j}^{\theta}=T_{\theta}(e_{\theta}(\tilde{x}_{i}))_{j}\), under the contrastive regularization of the softmax function.
Based on this understanding, we design an objective that directly optimizes this scalar product while not requiring the computation of the \(e_{\theta}^{T}\) projection. As we do not use this projection, we cannot rely on softmax regularization anymore, and instead introduce a contrastive loss using the in-batch samples from \(\mathcal{S}\) as negatives. All in all, our contrastive loss can be written as:
\[\mathcal{L}_{c}(\theta,X,\tilde{X})=-\frac{1}{|\mathcal{S}|}\sum_{i,j\in\mathcal{S}}\log\frac{e^{o_{i,j}^{\theta}\cdot e_{\theta}(x_{i,j})}}{\sum_{k,l\in\mathcal{S}}e^{o_{i,j}^{\theta}\cdot e_{\theta}(x_{k,l})}}\]
We call this objective _Contrastive Weight Tying_ (CWT), as weight sharing is not used _per se_ but is set as a contrastive objective. Across the paper, we _do not combine_ this loss function with the classical cross-entropy objective as in Su et al. (2022), and rather use it as the only pretraining objective. To the best of our knowledge, this work stands as the first attempt to train language models using an explicit contrastive loss as the sole objective.
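A minimal PyTorch sketch of the CWT objective: each output representation is scored against the input embeddings of all tokens selected in the batch, and only the embedding of its own token is treated as the positive. Realizing the loss as a cross-entropy over the \(K\times K\) similarity matrix is one straightforward implementation of the formula above, not necessarily the authors' exact code; names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def cwt_loss(outputs, embedding_weight, target_ids):
    """outputs:          (K, D) output representations o_{i,j} at the |S| positions
       embedding_weight: (V, D) input embedding matrix e_theta
       target_ids:       (K,)   token indices of the positions in S"""
    positives = embedding_weight[target_ids]     # (K, D): e_theta(x_{i,j})
    logits = outputs @ positives.t()             # (K, K): scalar products, in-batch negatives
    labels = torch.arange(outputs.size(0), device=outputs.device)
    return F.cross_entropy(logits, labels)       # -log softmax over in-batch candidates
```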
Figure 2: Schematic comparison of the classical weight tying approach and the Contrastive Weight Tying loss.
### Theoretical considerations
In this section, we discuss theoretical differences between our approach and classical language modeling.
First, in terms of time and memory complexity, Headless Language Models (HLMs) are more efficient than classical language models under usual conditions. If we focus on the computation of the loss _on a single device_ from \(|\mathcal{S}|=K\) output representations, a neural probabilistic LM requires \(O(KDV)\) operations while our headless approach performs \(O(K^{2}D)\) operations1. Hence, when \(K<V\), which is very common for micro-batch sizes that fit on one device, our CWT loss is more computationally efficient than cross-entropy.
Footnote 1: We could extend our CWT loss by picking a separate set \(\mathcal{S}_{N}\) of negative samples. This allows to tune the number of negative samples, which is important in Contrastive Learning. However, for the sake of simplicity, and to avoid extensive hyperparameter tuning, we set \(\mathcal{S}_{N}=\mathcal{S}\).
In terms of memory requirements, our CWT loss is also more efficient than its classical counterpart. On the one hand, the cross-entropy loss with weight tying stores the outputs of the \(e_{\theta}^{T}\) projection of dimension \(K\times V\) in the forward pass. On the other hand, our CWT loss stores the scalar product matrix of dimension \(K\times N\), which is again smaller when \(K<V\).
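A small back-of-the-envelope illustration of the two costs discussed above, with D, V, and K chosen only for illustration (e.g. a 50k-token vocabulary and hidden size 768); these are operation counts, not measurements.

```python
# Loss-level cost per device: the classical projection is O(K*D*V) multiply-adds,
# while the CWT similarity matrix is O(K*K*D).
D, V = 768, 50_000
for K in (128, 512, 2048):
    classical = K * D * V
    headless = K * K * D
    print(f"K={K:5d}  classical={classical:.2e}  headless={headless:.2e}  "
          f"ratio={classical / headless:.1f}x")
```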
In Figure 3, we provide an empirical analysis of the speed and memory improvements when training a BERT-base model using original hyperparameters, i.e. sequences of 512 tokens and 15% masking. We use HuggingFace's implementation for the Transformers blocks, and run experiments on a single RTX 8000 GPU. We observe that training latency is significantly reduced by roughly 25% for all batch sizes, and that the engine can handle a larger batch size due to the improvement in memory consumption.
## 4 Experiments
We use the Contrastive Weight Tying objective for medium-scale pre-training experiments in different contexts. We focus on monolingual encoder and decoder architectures, but we also train one multilingual encoder as we believe the uniformity brought by our contrastive objective may improve cross-lingual alignment. We compare our HLMs with classical language models that we pretrain on the same data with roughly similar compute budgets.
### Headless Monolingual Encoder
We pretrain BERT-base architectures (110M parameters) for English on the OpenWebText2 dataset extracted from The Pile Gao et al. (2020). We use the tokenizer from the Pythia suite Biderman et al. (2023), which was trained on The Pile and uses a 50k tokens vocabulary. We mostly use hyperparameters from BERT Devlin et al. (2019), although we remove the NSP objective as in RoBERTa Liu et al. (2019). For the sake of simplicity, we use a sequence length of 128 for the whole training. We give a detailed overview of the hyperparameters in Appendix A.1.
We pretrain all models using 8 A100 GPUs, with a budget of roughly 1,000 hours each. To optimize training, we use memory-efficient self-attention as implemented in xFormers Lefaudeux et al. (2022) for all experiments. For the vanilla MLM, we set a micro-batch size of 32 for each A100 GPU, then accumulate to the original 256 batch size at optimization level, and train on 1 million batches. For our headless approach, we observed that we
Figure 3: Comparison of time and memory complexities of a BERT-base model on a single RTX 8000 GPU.
could remain within compute budget when using a micro-batch size of 64. Hence, we use an effective batch size of 512 for the headless MLM (HMLM). Although the HMLM uses more pretraining sequences, it does not gain additional information compared to the vanilla MLM as both models perform several epochs on the OpenWebText2 dataset.
We evaluate on the GLUE benchmark, where we exclude the RTE dataset due to high standard deviations in the obtained scores. We fine-tune our models for 10 epochs on every dataset, and compute validation metrics once every fine-tuning epoch. We use the AdamW optimizer with a learning rate of \(10^{-5}\), a weight decay of \(0.01\) and a balanced cross-entropy loss objective. See Appendix B for more details.
In Table 1, we compare our headless MLM with the classical MLM on the GLUE benchmark. To ensure fair comparison, we display evaluations at similar amounts of tokens seen during pre-training, and at similar training durations on the same hardware. In both cases, the headless MLM outperforms the vanilla MLM by significant margins, showing that our CWT loss is both more data-efficient and compute-efficient in this setup.
We extend this analysis at various intervals along pretraining, and plot results in Figure 4.
It shows that the headless MLM outperforms the downstream performance of its vanilla counterpart after using 25% of its training compute. We notice that the performance gap is relatively constant across pretraining steps.
### Headless Monolingual Decoder
We pretrain Pythia-70M architectures for English, sticking to the Pythia procedure [1] as much as possible. We use OpenWebText2 as a pretraining dataset. We train on 143,000 batches of 1,024 sequences of length 2,048 split over 16 V100 GPUs. We use exactly the same hyperparameters as in the Pythia suite. The micro-batch size is set to 32 in both cases.
We can easily adapt the Causal Language Modeling (CLM) objective using the Contrastive Weight Tying approach. Negative samples correspond to every input embedding at a different position in the batch. However, the resulting model is not directly able to generate text, as it has no projection head towards \(\mathbb{R}^{V}\). A naive way to retrieve language generation capacities is to use the input embedding matrix transpose \(e_{\theta}^{T}\) as a projection head.
Nevertheless, we observe that this approach yields poor performance. Instead, we find that fine-tuning the headless model and a language modeling head using the predictive CLM objective on a small portion (\(<\)2%) of the pre-training dataset allows recovering an effective language model that outperforms the vanilla CLM on zero-shot language generation. More precisely, we fine-tune our headless models with an LM head initialized with \(e_{\theta}^{T}\) for 10000 steps using an effective batch size of 256 (4\(\times\) smaller that during pretraining), a learning rate of \(10^{-4}\), and a constant learning rate schedule with 2000 linear warm-up steps. All other hyperparameters are kept similar to pretraining.
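A sketch of the head-recovery step just described: attach a vocabulary projection whose weights are initialized from the input embedding matrix (so the logits are \(e_{\theta}^{T}o\)), then fine-tune with the usual predictive CLM loss. Module and function names are illustrative, not the authors' code.

```python
import torch.nn as nn

def attach_lm_head(embedding: nn.Embedding) -> nn.Linear:
    """Create a V-dimensional output projection initialized with e_theta^T,
    turning a headless decoder back into a generative language model."""
    V, D = embedding.weight.shape
    lm_head = nn.Linear(D, V, bias=False)
    lm_head.weight.data.copy_(embedding.weight.data)   # rows of W = input embeddings
    return lm_head

# During the short predictive fine-tuning stage described above, logits are then
# lm_head(hidden_states) and the standard cross-entropy CLM loss is applied.
```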
We evaluate our models on the LAMBADA dataset and report accuracy and perplexity for zero-shot generation in Figure 5.
We find that the HLM fine-tuned for predictive language modeling outperforms the vanilla model by a significant margin along training. We report language generation results in Table 3. We observe that despite having a higher validation perplexity even after fine-tuning, the HLM is improving the zero-shot perplexity on the LAMBADA dataset.
Figure 4: Comparison of GLUE average scores along pretraining.
We also study the zero-shot performance of the causal models on datasets taken from the LM Evaluation Harness. At this model scale, many tasks are not relevant and thus discarded, as the results do not always significantly outperform a random baseline. We also discarded tasks where the sample size was below 1000 or where comparison was not meaningful due to low performance gaps compared to the variance level. Hence, a subset of tasks where comparison is relevant is shown in Table 4.
In Table 4, we find that the fine-tuned HLM outperforms the vanilla causal model by significant margins on BoolQ Clark et al. (2019), PubMedQA Jin et al. (2019) and QASPER Dasigi et al. (2021). Although we observe less statistically significant gaps for the other datasets, we still note that our HLM performs at least comparably to the vanilla baseline.
We also note that the HLM seems slightly less prone to stereotypes as measured by the CrowS-Pairs benchmark Nangia et al. (2020).
Overall, using the Contrastive Weight Tying loss in the context of causal LM allows obtaining models on par with vanilla counterparts at a lower compute cost. We notice that the resulting models can get surprisingly good results in challenging datasets, hence showing language understanding capabilities, while being outclassed in language generation benchmarks (before predictive fine-tuning). We believe that this study shows that language generation needs to be considered as a _downstream task_ for HLMs, as they are designed to generate representations instead of words.
## 5 Multilingual Encoder
In this section, we pretrain small multilingual MLMs and evaluate their performance on the XNLI
\begin{table}
\begin{tabular}{c c c|c c c c c c c} \hline \hline MLM type & Tokens (B) & GPU hours & MRPC & COLA & STS-B & SST2 & QNLI & QQP & MNLI & **Avg.** \\ \hline Vanilla & 4.1 & 989 & 85.87 & 54.66 & 83.7 & 92.45 & 88.38 & 89.57 & 82.4 & 82.43 (\(\pm\)0.12) \\ Headless & 4.1 & 444 & 85.31 & 58.35 & 84.54 & **93.23** & 89.49 & 89.62 & 82.54 & 83.29 (\(\pm\)0.15) \\ Headless & 8.2 & 888 & **86.89** & **60.72** & **85.98** & 92.56 & **89.75** & **89.81** & **82.87** & **84.08** (\(\pm\)0.14) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of Masked Language Models (MLMs) on the dev sets of the GLUE benchmark. Best results are **bold** and second best are underlined. We compare models at similar amounts of pre-training tokens, and at similar pre-training durations. We report Matthews’ correlation for COLA, Spearman correlation for STS-B, and accuracy elsewhere. MNLI validation datasets are concatenated. All scores are averaged over 3 different seeds.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{LM type} & Validation & LAMBADA \\ \cline{2-5} & Ppl. & Ppl. & Acc. \\ \hline Vanilla & **3.143** & 170.23 & 19.52 \\ Headless & - & 524.44 & 18.26 \\ Headless + FT & 3.283 & **153.5** & **22.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of the causal language models on the validation set after training, and on the LAMBADA dataset.
Figure 5: Comparison of LAMBADA metrics along pre-training. We display results for vanilla causal language modeling and headless models before and after causal LM fine-tuning. The pretraining token count for the fine-tuned HLM takes fine-tuning tokens into account.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline MLM type & BoolQ & CB & COPA & WiC & Avg. \\ \hline Vanilla & 68.8 & **77.8** & 60.2 & 64.9 & 67.9 (\(\pm\)0.4) \\ Headless & **69.8** & 74.7 & **62.7** & **67.2** & **68.6** (\(\pm\)0.6) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of Masked Language Models (MLMs) on the dev sets of datasets from the SuperGLUE benchmark. We report accuracy for all tasks. The scores are averaged over 10 fine-tuning runs.
dataset (Conneau et al., 2018).
Due to compute limitations, we consider architectures similar to the distilled multilingual BERT2 trained by Sanh et al. (2019). This model has 137M parameters, and uses a vocabulary of 119k tokens. As in Subsection 4.1, we train a vanilla MLM and a headless counterpart. However, we share training hyperparameters such as batch size and total number of steps between both models, without compute considerations. For both experiments, we pretrain our models on 400k batches of 64 sequences of 128 tokens taken from the multilingual Wikipedia dataset using a single RTX8000 GPU. We select 90 million entries from 10 languages (Arabic, German, English, Spanish, French, Hindi, Italian, Japanese, Korean, and Chinese). Training hyperparameters can be found in Appendix A.3.
Footnote 2: Available at [https://huggingface.co/distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased)
Models are then fine-tuned on the XNLI dataset, for both cross-lingual zero-shot transfer from English and target language fine-tuning. Fine-tuning hyperparameters can be found in Appendix B.4.
We display final results in Table 5. We find that the headless approach leads to significantly better performance for every language in both cross-lingual transfer and language-specific fine-tuning. On average, the headless MLM outperforms its vanilla counterpart by 2 accuracy points in the cross-lingual scenario, and by 2.7 points in the language-specific fine-tuning experiments.
In Figure 6, we evaluate the models at intermediate checkpoints along pretraining, and we plot the XNLI average score as a function of used GPU hours. We observe that our HLM finishes training within 45% of the time required by the vanilla model. Moreover, our model outperforms the performance level of the fully trained vanilla model after only using 5% as much compute in Figure 5(a), and 22% in Figure 5(b).
## 6 Discussion
**Token vocabulary.** Training language models without output vocabulary projection makes using large vocabularies more affordable in terms of compute. As a matter of fact, the time complexity of HLMs during training is theoretically constant as we increase the vocabulary size. With input embedding lookup tables that do not require fully loading the \(e_{\theta}\) weights, the memory complexity can also be kept constant with respect to the size of the vocabulary. This property could be useful to improve the training speeds of multilingual models relying on considerable vocabulary sizes, such as XLM-V (Liang et al., 2023).
To verify this hypothesis, we pretrain models for different vocabulary sizes using the BERT-Small architecture from Turc et al. (2019). We use the CC-News dataset (Hamborg et al., 2017), and more
\begin{table}
\begin{tabular}{c c|c c c c c c c} \hline \hline LM type & GPU hours & ARC (easy) & ARC (chal.) & BoolQ & CrowS-Pairs \(\downarrow\) & RACE & SciQ & PubMedQA & QASPER \\ \hline Vanilla & 1712(\(\downarrow\)) & **40.2**(\(\pm\)1) & 17.4 (\(\pm\)1.1) & 47.8 (\(\pm\)0.9) & 57.3 (\(\pm\)1.2) & 23.7 (\(\pm\)1.3) & **66.4**(\(\pm\)1.5) & 43.8 (\(\pm\)1.6) & 41.9 (\(\pm\)4.8) \\ HLM + FT & 1052 (61\%) & 38.9 (\(\pm\)1) & **18.6**(\(\pm\)1.0) & **53.0**(\(\pm\)0.9) & **56.0**(\(\pm\)1.2) & **26.0**(\(\pm\)1.4) & 64.5 (\(\pm\)1.5) & **47.5**(\(\pm\)1.6) & **66.0**(\(\pm\)3.1) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zero-shot evaluation of monolingual causal language models on datasets from the LM Evaluation Harness. We report the stereotype percentage for CrowS-Pairs and accuracy elsewhere. \({}^{\dagger}\): best scores that are significantly better than the second best score according to a one-tailed t-test with power 0.95.
Figure 6: Comparison of XNLI average scores along pretraining for different setups. Models are fine-tuned/evaluated in Arabic, German, English, Spanish, French, Hindi and Chinese. We display the standard error across seeds.
details on hyperparameters can be found in Appendix A.5. For each vocabulary size, we train a BPE tokenizer similar to the BERT tokenizer, and pretrain a vanilla MLM and a headless MLM. We then compare average GLUE results, excluding RTE, MRPC and COLA, due to high variance at that model scale.
Figure 7 shows that HLMs can actually benefit from larger token vocabularies up to a certain extent, and that they outperform their vanilla counterparts for every vocabulary size. Figure 7b demonstrates that increasing the vocabulary size comes at almost no decrease in training speed for the HLMs, contrary to vanilla MLMs. However, we observe a sudden throughput increase between 85k and 100k token vocabularies for both vanilla and headless models, which we attribute to a different handling of GPU memory and operations as the models get bigger.
**Batch size.** As discussed in Subsection 3.3, the micro-batch size used to compute the CWT loss is rather important as it impacts the training complexity by increasing the number of negative samples. Recent work on Contrastive Learning shows that there usually exists an optimal number of negative samples in terms of model performance (Awasthi et al., 2022; Ash et al., 2022). As a consequence, increasing the batch size when using a contrastive loss based on in-batch negative samples may not always be beneficial.
To study the impact of batch size on downstream performance, we pretrain small decoder models using different batch sizes. Our models are inspired by the smallest GPT-2 architecture (Radford et al., 2019), with many hyperparameters divided by 4. More details about the pretraining procedure of these models can be found in Appendix A.4. HLMs are fine-tuned as in Subsection 4.2.
In Figure 8, we observe that increasing batch size leads to better performance for our HLMs. While smaller batch sizes train even faster, the headless model with the greatest batch size (128) is the only one that is able to significantly outperform its vanilla counterpart at the end of training.
**Modeling considerations.** From a linguistic point of view, we hypothesize that an important difference between our approach and classical predictive modeling is the fact that _headless modeling mostly pushes for discrimination between co-occurring
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline MLM type & ar & de & en & es & fr & hi & zh & Avg. \\ \hline \multicolumn{10}{l}{_Fine-tuned on English only_} \\ \hline Vanilla & 46.83 & 56.71 & 71.66 & 59.93 & 58.34 & 43.16 & 50.99 & 55.37 (\(\pm\)0.11) \\ Headless & **48.06** & **57.32** & **74.03** & **62.72** & **62** & **45.25** & **52.15** & **57.36 (\(\pm\)0.2)** \\ \hline \multicolumn{10}{l}{_Fine-tuned on target language_} \\ \hline Vanilla & 51.32 & 64.09 & 70.4 & 66.98 & 65.88 & 55.95 & 64.63 & 62.87 (\(\pm\)0.2) \\ Headless & **54.25** & **66.95** & **73.96** & **69.14** & **67.22** & **60.04** & **67.22** & **65.54 (\(\pm\)0.22)** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation of multilingual models on the XNLI benchmark. We report dev accuracy, averaged over 3 runs.
Figure 7: Comparison of downstream performance and training speed for small models trained using different token vocabulary sizes.
tokens_, instead of imposing a contextual hierarchy over the whole vocabulary. For instance, in the case of synonyms A and B, each occurrence of A (or B) is pushing the input representations of A and B apart for predictive modeling, due to weight tying. For headless modeling, an occurrence of A will only push the representations apart if B appears in the same batch. Hence, the CWT objective could let models identify A and B as synonyms more easily. We provide empirical evidence of this phenomenon in Appendix C. Another advantage of pushing discrimination between co-occurring tokens only may be an improved feedback quality, as we expect distinguishing between co-occurring tokens to be more linguistically relevant than distinguishing between all tokens. We leave a thorough investigation of these hypotheses for future work.
## Conclusion
In this paper, we present a new pretraining approach called headless language modeling, that removes the need to predict probability distributions over token vocabulary spaces and instead focuses on learning to reconstruct representations in a contrastive fashion. Our method only relies on changing the objective function, allowing for straightforward adaptations of classical language modeling pretraining objectives.
Using our contrastive objective, we pretrain headless monolingual and multilingual encoders, and a headless monolingual decoder. We demonstrate that headless pretraining is significantly more compute-efficient, data-efficient, and performant than classical predictive methods.
A major advantage of our approach is that it enables the use of very large token vocabularies at virtually no increased cost.
We believe that this paper paves the way for the exploration of contrastive techniques as a replacement of cross-entropy based pretraining objectives for NLP.
## Limitations
One key limitation of this paper is the scale of the used architectures. In recent months, the dawn of Large Language Models using billions of parameters reshaped the language modeling paradigm. The research process that led to this paper is empirical and required extensive experimentation that could not be done at large scale in our academic compute budget. We believe that the results presented in this paper are still sufficiently promising to be communicated and useful to the community. We leave the scaling of these techniques to future work.
One could object that, as architectures grow in size, the proportion of compute associated with the output vocabulary projection shrinks. While we acknowledge that this effect may reduce the advantage of HLMs in terms of training throughput, our experiments show that HLMs are more performant for a given number of pretraining steps.
We chose not to compare with other efficient encoder architectures such as ELECTRA or DeBERTa in this paper. We also chose not to apply our method to encoder-decoder architectures, or to subtle masking methods such as SpanBERT Joshi et al. (2020). As a matter of fact, we argue that our work could be combined to these methods, and we thus believe that comparison is not relevant as these works are orthogonal to ours. We leave the intersection of these approaches for future work.
Finally, we decided to pick English for all monolingual experiments. Different behaviors could be observed for other languages, although our multilingual experiments gave no sign of such discrepancies.
## Ethics Statement
To the best of our knowledge, this paper does not raise any specific ethical concern that is not already inherent to the open-data pre-training paradigm. Our results on the CrowS-Pairs dataset indicate that headless language modeling may mitigate some of the biases that are measured in this task. Due to considerations that are discussed in Zhou et al. (2021),
Figure 8: LAMBADA accuracy along pretraining for different batch sizes.
and for reasons evoked in Section 6, we believe that alternatives to cross-entropy as an objective for language modeling could mitigate some of the biases that are observed in LLMs, and hope that our work can pave the way for such alternatives.
## Acknowledgements
We thank our colleagues Arij Riabi and Roman Castagne for their advice and for the helpful discussions. We are grateful to Robin Algayres for his enlightening question _"But what is the difference with softmax?"_, in the hope that this paper is a satisfying answer.
This work was funded by the last author's chair in the PRAIRIE institute funded by the French national agency ANR as part of the "Investissements d'avenir" programme under the reference ANR-19-P3IA-0001.
This work was granted access to the HPC resources of IDRIS under the allocation 2023-AD011013680R1 made by GENCI.
|
2309.11600 | Importance-aware Co-teaching for Offline Model-based Optimization | Offline model-based optimization aims to find a design that maximizes a
property of interest using only an offline dataset, with applications in robot,
protein, and molecule design, among others. A prevalent approach is gradient
ascent, where a proxy model is trained on the offline dataset and then used to
optimize the design. This method suffers from an out-of-distribution issue,
where the proxy is not accurate for unseen designs. To mitigate this issue, we
explore using a pseudo-labeler to generate valuable data for fine-tuning the
proxy. Specifically, we propose \textit{\textbf{I}mportance-aware
\textbf{C}o-\textbf{T}eaching for Offline Model-based
Optimization}~(\textbf{ICT}). This method maintains three symmetric proxies
with their mean ensemble as the final proxy, and comprises two steps. The first
step is \textit{pseudo-label-driven co-teaching}. In this step, one proxy is
iteratively selected as the pseudo-labeler for designs near the current
optimization point, generating pseudo-labeled data. Subsequently, a co-teaching
process identifies small-loss samples as valuable data and exchanges them
between the other two proxies for fine-tuning, promoting knowledge transfer.
This procedure is repeated three times, with a different proxy chosen as the
pseudo-labeler each time, ultimately enhancing the ensemble performance. To
further improve accuracy of pseudo-labels, we perform a secondary step of
\textit{meta-learning-based sample reweighting}, which assigns importance
weights to samples in the pseudo-labeled dataset and updates them via
meta-learning. ICT achieves state-of-the-art results across multiple
design-bench tasks, achieving the best mean rank of $3.1$ and median rank of
$2$, among $15$ methods. Our source code can be found here. | Ye Yuan, Can Chen, Zixuan Liu, Willie Neiswanger, Xue Liu | 2023-09-20T19:26:32Z | http://arxiv.org/abs/2309.11600v2 | # Importance-aware Co-teaching for Offline Model-based Optimization
###### Abstract
Offline model-based optimization aims to find a design that maximizes a property of interest using only an offline dataset, with applications in robot, protein, and molecule design, among others. A prevalent approach is gradient ascent, where a proxy model is trained on the offline dataset and then used to optimize the design. This method suffers from an out-of-distribution issue, where the proxy is not accurate for unseen designs. To mitigate this issue, we explore using a pseudo-labeler to generate valuable data for fine-tuning the proxy. Specifically, we propose _Importance-aware \(\mathbf{Co}\)-\(\mathbf{Teaching}\) for Offline Model-based Optimization_ (**ICT**). This method maintains three symmetric proxies with their mean ensemble as the final proxy, and comprises two steps. The first step is _pseudo-label-driven co-teaching_. In this step, one proxy is iteratively selected as the pseudo-labeler for designs near the current optimization point, generating pseudo-labeled data. Subsequently, a co-teaching process identifies small-loss samples as valuable data and exchanges them between the other two proxies for fine-tuning, promoting knowledge transfer. This procedure is repeated three times, with a different proxy chosen as the pseudo-labeler each time, ultimately enhancing the ensemble performance. To further improve accuracy of pseudo-labels, we perform a secondary step of _meta-learning-based sample reweighting_, which assigns importance weights to samples in the pseudo-labeled dataset and updates them via meta-learning. ICT achieves state-of-the-art results across multiple design-bench tasks, achieving the best mean rank of \(3.1\) and median rank of \(2\), among \(15\) methods. Our source code can be found here.
## 1 Introduction
A primary goal in many domains is to design or create new objects with desired properties [1]. Examples include the design of robot morphologies [2], protein design, and molecule design [3; 4]. Numerous studies obtain new designs by iteratively querying an unknown objective function that maps a design to its corresponding property score. However, in real-world scenarios, evaluating the objective function can be expensive or risky [3; 4; 5; 6; 7]. As a result, it is often more practical to assume access only to an offline dataset of designs and their property scores. This type of problem is referred to as offline model-based optimization (MBO) [1]. The goal of MBO is to find a design that maximizes the unknown objective function using solely the offline dataset.
Gradient ascent is a common approach to address the offline MBO problem. For example, as illustrated in Figure 2 (a), the offline dataset may consist of three robot size and robot speed pairs \(p_{1,2,3}\). A simple DNN model, referred to as the _vanilla proxy_ and represented as \(f_{\mathbf{\theta}}(\cdot)\), is trained to fit the offline dataset as an approximation to the unknown objective function. Gradient ascent is subsequently applied to existing designs with respect to the vanilla proxy \(f_{\mathbf{\theta}}(\cdot)\), aiming to generate a new design with a higher score. However, the gradient ascent method suffers from an out-of-distribution issue, where the vanilla proxy cannot accurately estimate data outside of the training distribution, leading to a significant gap between the vanilla proxy and the ground-truth function, as shown in Figure 2 (a). As a consequence, the scores of new designs obtained via gradient ascent can be erroneously high [8; 9].
To mitigate the out-of-distribution issue, recent studies have suggested applying regularization techniques to either the proxy itself [8; 9; 10] or the design under consideration [11; 12]. These methods improve the proxy's robustness and generalization ability. However, a yet unexplored approach in this domain is using a pseudo-labeler to assign pseudo-labels to designs near the current point. Fine-tuning the proxy on this pseudo-labeled dataset can lead to improvement, provided that we can identify the valuable portion of the pseudo-labeled dataset.
Inspired by this, we propose _Importance-aware Co-Teaching for Offline Model-based Optimization_ (**ICT**). This approach maintains three symmetric proxies, and their mean ensemble acts as the final proxy. ICT consists of two main steps with the **first step** being _pseudo-label-driven co-teaching_ as illustrated in Figure 1. During this step, one proxy is iteratively selected as the pseudo-labeler, followed by a co-teaching process [13] that facilitates the exchange of valuable data between the other two proxies for fine-tuning. As depicted in Figure 1, there are three symmetric proxies, \(f_{\mathbf{\theta}_{1}}(\cdot)\), \(f_{\mathbf{\theta}_{2}}(\cdot)\), and \(f_{\mathbf{\theta}_{3}}(\cdot)\). The entire learning cycle (the larger triangle) can be divided into three symmetric parts (sub-triangles), with one proxy chosen to be the pseudo-labeler in turn. Taking the top triangle as an example, we select \(f_{\mathbf{\theta}_{1}}(\cdot)\) as the pseudo-labeler to generate pseudo labels for a set of points in the neighborhood of the current optimization point \(\mathbf{x}_{t}\). The other two proxies, \(f_{\mathbf{\theta}_{2}}(\cdot)\) and \(f_{\mathbf{\theta}_{3}}(\cdot)\), then receive the pseudo-labeled dataset. They compute the sample loss for each entry in the dataset and exchange small-loss samples between them for fine-tuning. This co-teaching process encourages knowledge transfer between the two proxies, as small losses are typically indicative of valuable knowledge. The symmetric nature of the three proxies allows the above process to repeat three times, with each proxy--\(f_{\mathbf{\theta}_{1}}(\cdot)\), \(f_{\mathbf{\theta}_{2}}(\cdot)\), and \(f_{\mathbf{\theta}_{3}}(\cdot)\)--taking turns as the pseudo-label generator. This learning cycle promotes the sharing of valuable knowledge among the three symmetric proxies, allowing them to collaboratively improve the ensemble performance in handling out-of-distribution designs.
Figure 1: Pseudo-label-driven co-teaching.
Figure 2: Meta-learning-based sample reweighting.
Despite the efforts made in the first step, small-loss data may still contain inaccurate labels. During the first step, small-loss data (\(p_{a}\) and \(p_{b}\)) from the pseudo-labeled dataset produced by \(f_{\mathbf{\theta}_{1}}(\cdot)\) are identified based on the predictions of proxy \(f_{\mathbf{\theta}_{3}}(\cdot)\) and fed to proxy \(f_{\mathbf{\theta}_{2}}(\cdot)\). However, as shown in Figure 2 (a), the less accurate point \(p_{b}\) deviates noticeably from the ground-truth, causing the fine-tuned proxy \(f_{\mathbf{\theta}_{2}}(\cdot)\) to diverge from the ground-truth function. To address this, we introduce the **second step** of ICT, _meta-learning-based sample reweighting_, which aims to assign higher weights to more accurate points like \(p_{a}\) and lower weights to less accurate ones like \(p_{b}\). To accomplish this, we assign an importance weight for every sample yielded by the first step (\(\mathbf{\omega}_{a}\) for \(p_{a}\) and \(\mathbf{\omega}_{b}\) for \(p_{b}\)) and propose a meta-learning framework to update these sample weights (\(\mathbf{\omega}_{a}\) and \(\mathbf{\omega}_{b}\)) automatically by leveraging the supervision signals from the offline dataset \(p_{1,2,3}\). Specifically, the proxy fine-tuned on the weighted small-loss data (\(p_{a}\) and \(p_{b}\)) is expected to perform well on the offline dataset, provided the weights are accurate, i.e., large \(\mathbf{\omega}_{a}\) and small \(\mathbf{\omega}_{b}\). We can optimize the sample weights by minimizing the loss on the offline dataset as a function of the sample weights. As illustrated in Figure 2 (b), the weight of \(p_{a}\) is optimized to be high, while the weight of \(p_{b}\) is optimized to be low. Consequently, the proxy \(f^{(b)}_{\mathbf{\theta}_{2}}(\cdot)\) fine-tuned on the weighted samples in Figure 2 (b) is brought closer to the ground-truth objective function \(f(\cdot)\), compared to the case where the fine-tuned proxy \(f^{(a)}_{\mathbf{\theta}_{2}}(\cdot)\) is far from \(f(\cdot)\) in Figure 2 (a). Through extensive experiments across various tasks [1], ICT proves effective at mitigating out-of-distribution issues, delivering state-of-the-art results.
In summary, our paper presents three main contributions:
* We introduce _Importance-aware **Co-Teaching**_ (**ICT**) for offline MBO. ICT consists of two steps. In the _pseudo-label-driven co-teaching_ step, a proxy is iteratively chosen as the pseudo-labeler, initiating a co-teaching process that facilitates knowledge exchange between the other two proxies.
* The second step, _meta-learning-based sample reweighting_, is introduced to alleviate potential inaccuracies in pseudo-labels. In this step, pseudo-labeled samples are assigned importance weights, which are then optimized through meta-learning.
* Extensive experiments demonstrate ICT's effectiveness in addressing out-of-distribution issues, yielding state-of-the-art results in multiple MBO tasks. Specifically, ICT secures the best mean rank of \(3.1\) and median rank of \(2\), among \(15\) methods.
## 2 Preliminaries
Offline model-based optimization (MBO) targets a variety of optimization problems with the goal of maximizing an unknown objective function using an offline dataset. Consider the design space \(\mathcal{X}=\mathbb{R}^{d}\), where \(d\) represents the design dimension. Formally, the offline MBO can be expressed as:
\[\mathbf{x}^{*}=\arg\max_{\mathbf{x}\in\mathcal{X}}f(\mathbf{x}), \tag{1}\]
where \(f(\cdot)\) denotes the unknown objective function, and \(\mathbf{x}\in\mathcal{X}\) denotes a candidate design. In this scenario, an offline dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) is available, where \(\mathbf{x}_{i}\) represents a specific design, such as robot size, and \(y_{i}\) represents the corresponding score, like robot speed. In addition to robot design, similar problems also include protein and molecule design.
A common strategy for tackling offline MBO involves approximating the unknown objective function \(f(\cdot)\) using a proxy function, typically represented by a deep neural network (DNN) \(f_{\mathbf{\theta}}(\cdot)\), which is trained on the offline dataset:
\[\mathbf{\theta}^{*}=\arg\min_{\mathbf{\theta}}\frac{1}{N}\sum_{i=1}^{N}\left(f_{\mathbf{ \theta}}(\mathbf{x}_{i})-y_{i}\right)^{2}. \tag{2}\]
With the trained proxy, design optimization is performed using gradient ascent steps:
\[\mathbf{x}_{t}=\mathbf{x}_{t-1}+\eta\nabla_{\mathbf{x}}f_{\mathbf{\theta}}(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{x}_{t-1}},\quad\text{for }t\in[1,T]. \tag{3}\]
Here, \(T\) denotes the number of steps, and \(\eta\) signifies the learning rate. The optimal design \(\mathbf{x}^{*}\) is acquired as \(\mathbf{x}_{T}\). This gradient ascent approach is limited by an _out-of-distribution issue_, as the proxy \(f_{\mathbf{\theta}}(\mathbf{x})\) may not accurately predict scores for unseen designs, leading to suboptimal solutions.
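As a concrete reference point, the vanilla baseline of Eqs. (2)-(3) can be summarized in a few lines. The following is a minimal, illustrative PyTorch sketch (not the authors' code); the network width, learning rates, and step counts are placeholders rather than the settings used in the experiments.

```python
import torch
import torch.nn as nn

def train_proxy(X, y, epochs=200, lr=1e-3):
    """Fit a vanilla proxy f_theta to the offline dataset by minimizing MSE (Eq. 2)."""
    proxy = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(proxy.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((proxy(X).squeeze(-1) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return proxy

def gradient_ascent(proxy, x0, steps=50, eta=0.05):
    """Push a starting design uphill on the proxy prediction (Eq. 3)."""
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        score = proxy(x).sum()
        grad, = torch.autograd.grad(score, x)
        x = (x + eta * grad).detach().requires_grad_(True)
    return x.detach()
```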
## 3 Method
In this section, we introduce _Importance-aware **Co-Teaching**_ (**ICT**), which consists of two steps. We maintain three symmetric proxies and compute the mean ensemble as the final proxy. In Sec 3.1, we describe the first step, _pseudo-label-driven co-teaching_. This step involves iteratively selecting one proxy as the pseudo-label generator and implementing a co-teaching process to facilitate the exchange of valuable data between the remaining two proxies. Nevertheless, the samples exchanged during co-teaching might still contain inaccurate labels, which necessitates the second step _meta-learning-based sample reweighting_ in Sec 3.2. During this step, each sample from the previous step is assigned an importance weight and updated via meta-learning. Intuitively, the ICT process can be likened to an enhanced paper peer review procedure between three researchers preparing for submission. Each researcher, acting as an author, presents his/her paper to the other two. These two serve as reviewers and co-teach each other important points to better comprehend the paper, ultimately providing their feedback to the author. A detailed depiction of the entire algorithm can be found in Algorithm 1.
### Pseudo-label-driven Co-teaching
Vanilla gradient ascent, as expressed in Eq. (3), is prone to out-of-distribution issues in offline model-based optimization. One potential yet unexplored solution is using a pseudo-labeler to provide pseudo-labels to designs around the optimization point. By fine-tuning the proxy using the valuable portion of the pseudo-labeled dataset, we can enhance the proxy's performance. To achieve this, we maintain three proxies simultaneously, computing their mean ensemble as the final proxy, and iteratively select one proxy to generate pseudo-labeled data. The other two proxies exchange knowledge estimated to have high value, by sharing small-loss data. Due to the symmetric nature of the three proxies, this process can be repeated three times for sharing valuable knowledge further.
**Pseudo-label.** We initially train three proxies \(f_{\mathbf{\theta}_{1}}(\cdot)\), \(f_{\mathbf{\theta}_{2}}(\cdot)\), and \(f_{\mathbf{\theta}_{3}}(\cdot)\) on the whole offline dataset using Eq. (2) with different initializations, and conduct gradient ascent with their mean ensemble,
\[\mathbf{x}_{t}=\mathbf{x}_{t-1}+\eta\nabla_{\mathbf{x}}\frac{1}{3}(f_{\mathbf{\theta}_{1}}(\mathbf{x}_{t-1})+f_{\mathbf{\theta}_{2}}(\mathbf{x}_{t-1})+f_{\mathbf{\theta}_{3}}(\mathbf{x}_{t-1})), \tag{4}\]
where \(\eta\) is the gradient ascent learning rate. Given the current optimization point \(\mathbf{x}_{t}\), we sample \(M\) points \(\mathbf{x}_{t,1},\mathbf{x}_{t,2},\ldots,\mathbf{x}_{t,M}\) around \(\mathbf{x}_{t}\) as \(\mathbf{x}_{t,m}=\mathbf{x}_{t}+\gamma\epsilon\), where \(\gamma\) is the noise coefficient and \(\epsilon\) is drawn from the standard Gaussian distribution. An alternative way is to directly sample the \(M\) points around the offline dataset, rather than the current optimization point. We detail this option in Appendix A.1. We iteratively choose one proxy, for example \(f_{\mathbf{\theta}_{1}}(\cdot)\), to label these points, creating a pseudo-labeled dataset \(\mathcal{D}_{1}=\{(\mathbf{x}_{t,j},f_{\mathbf{\theta}_{1}}(\mathbf{x}_{t,j}))\}_{j=1}^{M}\). Lines \(5\) to \(6\) of Algorithm 1 detail the implementation of this segment.
**Co-teaching.** For each sample in the pseudo-labeled dataset \(\mathcal{D}_{1}\), we compute the sample loss for \(f_{\mathbf{\theta}_{2}}(\cdot)\) and \(f_{\mathbf{\theta}_{3}}(\cdot)\). Specifically, the losses are calculated as \(\mathcal{L}_{2,i}=(f_{\mathbf{\theta}_{2}}(\mathbf{x}_{t,i})-f_{\mathbf{\theta}_{1}}(\mathbf{ x}_{t,i}))^{2}\) and \(\mathcal{L}_{3,i}=(f_{\mathbf{\theta}_{3}}(\mathbf{x}_{t,i})-f_{\mathbf{\theta}_{1}}(\mathbf{ x}_{t,i}))^{2}\), respectively. Small-loss samples typically contain valuable knowledge, making them ideal for enhancing proxy robustness [13]. Proxies \(f_{\mathbf{\theta}_{2}}(\cdot)\) and \(f_{\mathbf{\theta}_{3}}(\cdot)\) then exchange the top \(K\) small-loss samples as valuable data to teach each other where \(K\) is a hyperparameter. The co-teaching process enables the exchange of valuable knowledge between proxies \(f_{\mathbf{\theta}_{2}}(\cdot)\) and \(f_{\mathbf{\theta}_{3}}(\cdot)\). This part is implemented as described in Lines \(7\) to \(8\) of Algorithm 1. The symmetric design of the three proxies, \(f_{\mathbf{\theta}_{1}}(\cdot),f_{\mathbf{\theta}_{2}}(\cdot)\), and \(f_{\mathbf{\theta}_{3}}(\cdot)\), enables the entire process to be iterated three times with one proxy chosen as the pseudo-labeler every time.
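A hedged sketch of the small-loss exchange follows; the per-sample squared errors and the top-\(K\) selection mirror Lines 7-8 of Algorithm 1, while the function and variable names are ours.

```python
def select_small_loss(proxy, X_pseudo, y_pseudo, K=16):
    """Return the K pseudo-labeled samples on which this proxy has the smallest loss."""
    with torch.no_grad():
        losses = (proxy(X_pseudo).squeeze(-1) - y_pseudo) ** 2
    idx = torch.topk(losses, K, largest=False).indices  # indices of the smallest losses
    return X_pseudo[idx], y_pseudo[idx]

# Proxy 2 selects data to teach proxy 3, and proxy 3 selects data to teach proxy 2;
# each proxy is then fine-tuned on the set it receives.
# data_for_proxy3 = select_small_loss(f_theta2, X_pseudo, y_pseudo)
# data_for_proxy2 = select_small_loss(f_theta3, X_pseudo, y_pseudo)
```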
### Meta-learning-based Sample Reweighting
While the previous step effectively selects samples for fine-tuning, these samples may still contain inaccuracies. To mitigate this, we introduce a _meta-learning-based sample reweighting_ step. In this step, each sample obtained from the prior step is assigned an importance weight, which is then updated using a meta-learning framework. Without loss of generality, we use \(f_{\mathbf{\theta}}(\cdot)\) to represent any of \(f_{\mathbf{\theta}_{1}}(\cdot)\), \(f_{\mathbf{\theta}_{2}}(\cdot)\) and \(f_{\mathbf{\theta}_{3}}(\cdot)\) as this step applies identically to all three proxies. The top \(K\) small-loss samples selected from the previous step for fine-tuning \(f_{\mathbf{\theta}}(\cdot)\) are denoted as \(\mathcal{D}_{s}=\{(\mathbf{x}_{i}^{s},\bar{y}_{i}^{s})\}_{i=1}^{K}\).
**Sample Reweighting.** We assign an importance weight \(\mathbf{\omega}_{i}\) to the \(i^{th}\) selected sample and initialize these importance weights to ones. We expect smaller importance weights for less accurate samples
and larger importance weights for more accurate samples to improve proxy fine-tuning. With these weights, we can optimize the proxy parameters as follows:
\[\mathbf{\theta}^{*}(\mathbf{\omega})=\arg\min_{\mathbf{\theta}}\frac{1}{K}\sum_{i=1}^{K}\mathbf{ \omega_{i}}(f_{\mathbf{\theta}}(\mathbf{x}_{i}^{s})-\bar{y}_{i}^{s})^{2}. \tag{5}\]
Since we only want to perform fine-tuning based on \(\mathcal{D}_{s}\), we can adopt one step of gradient descent:
\[\mathbf{\theta}^{*}(\mathbf{\omega})=\mathbf{\theta}-\frac{\alpha}{K}\sum_{i=1}^{K}\mathbf{ \omega_{i}}\frac{\partial(f_{\mathbf{\theta}}(\mathbf{x}_{i}^{s})-\bar{y}_{i}^{s})^{2} }{\partial\mathbf{\theta}^{\top}}, \tag{6}\]
where \(\alpha\) is the learning rate for fine-tuning. This part is presented in Line \(10\) in Algorithm 1.
**Meta-learning.** The challenge now is finding a group of proper weights \(\mathbf{\omega}\). We achieve this by leveraging the supervision signals from the offline dataset, which are generally accurate. If the sample weights are accurate, the proxy fine-tuned on the weighted samples is expected to perform well on the offline dataset. This is because the weighted samples aim to reflect the underlying ground-truth function that the offline dataset already captures, and both sets of data share common patterns. We can optimize the sample weights by minimizing the loss of the offline dataset in a meta-learning framework. The loss on the offline dataset can be written as:
\[\mathcal{L}(\mathbf{\theta}^{*}(\mathbf{\omega}))=\frac{1}{N}\sum_{i=1}^{N}(f_{\mathbf{\theta}^{*}(\mathbf{\omega})}(\mathbf{x}_{i})-y_{i})^{2},\qquad\mathbf{\omega}^{*}=\arg\min_{\mathbf{\omega}}\mathcal{L}(\mathbf{\theta}^{*}(\mathbf{\omega})). \tag{7}\]
The sample weight \(\mathbf{\omega}_{i}\) for the \(i^{th}\) sample can be updated by gradient descent:
\[\begin{split}\mathbf{\omega}_{i}^{{}^{\prime}}&=\mathbf{ \omega}_{i}-\beta\frac{\partial\mathcal{L}(\mathbf{\theta}^{*}(\mathbf{\omega}))}{ \partial\mathbf{\theta}}\frac{\partial\mathbf{\theta}^{*}(\mathbf{\omega})}{\partial\mathbf{ \omega}_{i}}\\ &=\mathbf{\omega}_{i}+\frac{\alpha\beta}{K}\frac{\partial\mathcal{L} (\mathbf{\theta}^{*}(\mathbf{\omega}))}{\partial\mathbf{\theta}}\frac{\partial(f_{\mathbf{ \theta}}(\mathbf{x}_{i}^{s})-\bar{y}_{i}^{s})^{2}}{\partial\mathbf{\theta}^{\top}}, \end{split} \tag{8}\]
where \(\beta\) is the learning rate for the meta-learning framework. From Eq. (8), it is worth mentioning that \(\frac{\partial\mathcal{L}(\mathbf{\theta}^{*}(\mathbf{\omega}))}{\partial\mathbf{\theta}} \frac{\partial(f_{\mathbf{\theta}}(\mathbf{x}_{i}^{s})-\bar{y}_{i}^{s})^{2}}{\partial \mathbf{\theta}^{\top}}\) represents the similarity between the gradient of the offline dataset and the gradient of the \(i^{th}\) sample. This implies that a sample with a gradient similar to the offline dataset will receive a higher weight and vice versa, revealing the inner mechanism of this framework. By applying the updated sample weights to Eq. (6) for fine-tuning, we improve the proxy's performance. This process is iteratively applied to each proxy, yielding a stronger ensemble. Lines \(11\) to \(13\) of Algorithm 1 showcase the execution of this part.
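The following is a simplified sketch of the weight update in Eq. (8). Rather than differentiating through the fine-tuning step of Eq. (6), it applies the closed form of Eq. (8) directly: each weight is increased in proportion to the inner product between the offline-dataset gradient and that sample's gradient, both evaluated here at the current parameters for brevity. All names and step sizes are illustrative assumptions.

```python
def reweight(proxy, X_s, y_s, X_off, y_off, omega, alpha=1e-3, beta=1e-1):
    """Update importance weights omega following Eq. (8) (simplified sketch)."""
    params = list(proxy.parameters())
    # Gradient of the offline-dataset loss (Eq. 7) with respect to the proxy parameters.
    off_loss = ((proxy(X_off).squeeze(-1) - y_off) ** 2).mean()
    g_off = torch.cat([g.reshape(-1) for g in torch.autograd.grad(off_loss, params)])
    K = X_s.shape[0]
    for i in range(K):
        # Gradient of the i-th pseudo-labeled sample's loss.
        s_loss = ((proxy(X_s[i:i + 1]).squeeze(-1) - y_s[i:i + 1]) ** 2).sum()
        g_i = torch.cat([g.reshape(-1) for g in torch.autograd.grad(s_loss, params)])
        # Samples whose gradients align with the offline data receive larger weights.
        omega[i] = omega[i] + (alpha * beta / K) * torch.dot(g_off, g_i)
    return omega
```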
## 4 Experimental Results
### Dataset and Evaluation
**Dataset and Tasks.** In this study, we conduct experiments on four continuous tasks and three discrete tasks. The continuous tasks include: (a) Superconductor (SuperC)[5], where the objective is to develop a superconductor with \(86\) continuous components to maximize critical temperature, using \(17,010\) designs; (b) Ant Morphology (Ant)[1; 14], where the aim is to design a quadrupedal ant with \(60\) continuous components to improve crawling speed, based on \(10,004\) designs; (c) D’Kitty Morphology (D’Kitty)[1; 15], where the focus is on shaping a quadrupedal D’Kitty with \(56\) continuous components to enhance crawling speed, using \(10,004\) designs; (d) Hopper Controller (Hopper)[1], where the aim is to identify a neural network policy with \(5,126\) weights to optimize return, using \(3,200\) designs. Additionally, our discrete tasks include: (e) TF Bind \(8\) (TF8)[6], where the goal is to discover an \(8\)-unit DNA sequence that maximizes binding activity score, utilizing \(32,898\) designs; (f) TF Bind \(10\) (TF10)[6], where the aim is to find a \(10\)-unit DNA sequence that optimizes binding activity score, using \(50,000\) designs; (g) NAS [16], where the objective is to find the optimal neural network architecture to enhance test accuracy on the CIFAR-10 [17] dataset, using \(1,771\) designs.
**Evaluation and Metrics.** In accordance with the evaluation protocol used in [1; 11], we identify the top \(128\) designs from the offline dataset for each approach and report the \(100^{th}\) percentile normalized
ground-truth score. This score is computed as \(y_{n}=\frac{y-y_{min}}{y_{max}-y_{min}}\), where \(y_{min}\) and \(y_{max}\) represent the minimum and maximum scores within the entire unobserved dataset, respectively. The \(50^{th}\) percentile (median) normalized ground-truth scores are included in Appendix A.2. For a better comparison, we report the best design in the offline dataset, denoted as \(\mathcal{D}(\textbf{best})\). We also provide mean and median rankings across all seven tasks for a broad performance assessment.
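For concreteness, the normalization above corresponds to the following one-liner, with \(y_{min}\) and \(y_{max}\) taken from the full unobserved dataset of each task.

```python
def normalized_score(y, y_min, y_max):
    # y_n = (y - y_min) / (y_max - y_min), as defined above.
    return (y - y_min) / (y_max - y_min)
```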
### Comparison Methods
We compare our approach with two categories of baselines: (1) those that use generative models for sampling purposes, and (2) those that apply gradient updates derived from existing designs. The generative model-based methods learn and sample from the distribution of high-scoring designs, including: **(i)** MIN [18], which maps scores to designs and searches this map for optimal designs; **(ii)** CbAS [19], which uses a VAE model to adapt the design distribution towards high-scoring areas; **(iii)** Auto.CbAS [20], which employs importance sampling to retrain a regression model based on CbAS.
The second category encompasses: **(i)** Grad: carries out a basic gradient ascent on existing designs to generate new ones; **(ii)** Grad. Min: optimizes the lowest prediction from an ensemble of learned objective functions; **(iii)** Grad. Mean: optimizes the ensemble's mean prediction; **(iv)** ROMA [8]: applies smoothness regularization on the DNN; **(v)** COMs [9]: uses regularization to assign lower scores to designs obtained through gradient ascent; **(vi)** NEMO [10]: constrains the gap between the proxy and the ground-truth function via normalized maximum likelihood before performing gradient ascent; **(vii)** BDI [11] uses forward and backward mappings to distill knowledge from the offline dataset to the design; **(viii)** IOM [21]: enforces representation invariance between the training dataset and the optimized designs.
We also compare with traditional methods in [1]: **(i)** CMA-ES [22]: gradually adjusts the distribution towards the optimal design by modifying the covariance matrix. **(ii)** BO-qEI [23]: executes Bayesian Optimization to maximize the proxy, suggests designs through the quasi-Expected-Improvement acquisition function, and labels the designs using the proxy function. **(iii)** REINFORCE [24]: optimizes the distribution over the input space using the learned proxy.
### Training Details
We adopt the training settings from [1] for all comparison methods unless otherwise specified. We use a \(3\)-layer MLP (MultiLayer Perceptron) with ReLU activation for all gradient updating methods, and set the hidden size to \(2048\). Additional hyperparameter details are elaborated in Appendix A.3.
One of the top 128 designs from the offline dataset is iteratively selected as the starting point, as outlined in Line 2 of Algorithm 1. We reference results from [1] for non-gradient-ascent methods such as BO-qEI, CMA-ES, REINFORCE, CbAS, and Auto.CbAS. For gradient-based methods, we run each setting over \(8\) trials and report the mean and standard error. All experiments are run on a single NVIDIA GeForce RTX \(3090\) GPU.
### Results and Analysis
**Performance in Continuous Tasks.** Table 1 presents the results across different continuous domains. In all four continuous tasks, our ICT method achieves the top performance. Notably, it surpasses the basic gradient ascent, Grad, demonstrating its ability to mitigate the out-of-distribution issue. The superior performance of Grad.mean over Grad can be attributed to the ensemble model's robustness in making predictions [25]. Furthermore, ICT generally outperforms ensemble methods and other gradient-based techniques such as COMs and ROMA, demonstrating the effectiveness of our strategy. Generative model-based methods, such as CbAS and MINs, however, struggle with the high-dimensional task Hopper Controller. Interestingly, ICT necessitates only three standard proxies and avoids the need for training a generative model, which can often be a challenging task. These results indicate that ICT is a simple yet potent baseline for offline MBO.
**Performance in Discrete Tasks.** Table 2 showcases the outcomes across various discrete domains. ICT attains top performances in two out of the three tasks, TF Bind \(8\) and TF Bind \(10\). These results suggest that ICT is a powerful method in the discrete domain. However, in NAS, the performance of ICT is not as strong, which can be attributed to two factors. Firstly, the neural network design in NAS,
\begin{table}
\begin{tabular}{c c c c c} \hline Method & \multicolumn{1}{c}{Superconductor} & \multicolumn{1}{c}{Ant Morphology} & \multicolumn{1}{c}{D’Kitty Morphology} & \multicolumn{1}{c}{Hopper Controller} \\ \hline \(\mathcal{D}(\textbf{best})\) & \(0.399\) & \(0.565\) & \(0.884\) & \(1.0\) \\ BO-qEI & \(0.402\pm 0.034\) & \(0.819\pm 0.000\) & \(0.896\pm 0.000\) & \(0.550\pm 0.018\) \\ CMA-ES & \(0.465\pm 0.024\) & \(\textbf{1.214\pm 0.732}\) & \(0.724\pm 0.001\) & \(0.604\pm 0.215\) \\ REINFORCE & \(0.481\pm 0.013\) & \(0.266\pm 0.032\) & \(0.562\pm 0.196\) & \(-0.020\pm 0.067\) \\ CbAS & \(\textbf{0.503}\pm\textbf{0.069}\) & \(0.876\pm 0.031\) & \(0.892\pm 0.008\) & \(0.141\pm 0.012\) \\ Auto.CbAS & \(0.421\pm 0.045\) & \(0.882\pm 0.045\) & \(0.906\pm 0.006\) & \(0.137\pm 0.005\) \\ MIN & \(0.499\pm 0.017\) & \(0.445\pm 0.080\) & \(0.892\pm 0.011\) & \(0.424\pm 0.166\) \\ \hline Grad & \(0.483\pm 0.025\) & \(0.920\pm 0.044\) & \(\textbf{0.954\pm 0.010}\) & \(\textbf{1.791\pm 0.182}\) \\ Mean & \(0.497\pm 0.011\) & \(0.943\pm 0.012\) & \(\textbf{0.961\pm 0.012}\) & \(\textbf{1.815\pm 0.111}\) \\ Min & \(\textbf{0.505}\pm\textbf{0.017}\) & \(0.910\pm 0.038\) & \(0.936\pm 0.006\) & \(0.543\pm 0.010\) \\ COMs & \(0.472\pm 0.024\) & \(0.828\pm 0.034\) & \(0.913\pm 0.023\) & \(0.658\pm 0.217\) \\ ROMA & \(\textbf{0.510}\pm\textbf{0.015}\) & \(0.917\pm 0.030\) & \(0.927\pm 0.013\) & \(1.740\pm 0.188\) \\ NEMO & \(0.502\pm 0.002\) & \(0.952\pm 0.002\) & \(\textbf{0.950\pm 0.001}\) & \(0.483\pm 0.005\) \\ BDI & \(\textbf{0.513}\pm\textbf{0.000}\) & \(0.906\pm 0.000\) & \(0.919\pm 0.000\) & \(\textbf{1.993\pm 0.000}\) \\ IOM & \(\textbf{0.520}\pm\textbf{0.018}\) & \(0.918\pm 0.031\) & \(0.945\pm 0.012\) & \(1.176\pm 0.452\) \\ \hline \(\textbf{ICT}_{\rm(ours)}\) & \(\textbf{0.503}\pm\textbf{0.017}\) & \(\textbf{0.961}\pm\textbf{0.007}\) & \(\textbf{0.968}\pm\textbf{0.020}\) & \(\textbf{2.104}\pm\textbf{0.357}\) \\ \hline \end{tabular}
\end{table}
Table 1: Experimental results on continuous tasks for comparison.
\begin{table}
\begin{tabular}{c c c c|c c} \hline Method & TF Bind \(8\) & TF Bind \(10\) & NAS & Rank Mean & Rank Median \\ \hline \(\mathcal{D}(\textbf{best})\) & \(0.439\) & \(0.467\) & \(0.436\) & & \\ BO-qEI & \(0.798\pm 0.083\) & \(0.652\pm 0.038\) & \(\textbf{1.079\pm 0.059}\) & \(9.9/15\) & \(11/15\) \\ CMA-ES & \(\textbf{0.953}\pm\textbf{0.022}\) & \(0.670\pm 0.023\) & \(0.985\pm 0.079\) & \(6.1/15\) & \(3/15\) \\ REINFORCE & \(\textbf{0.948}\pm\textbf{0.028}\) & \(0.663\pm 0.034\) & \(-1.895\pm 0.000\) & \(11.3/15\) & \(15/15\) \\ CbAS & \(0.927\pm 0.051\) & \(0.651\pm 0.060\) & \(0.683\pm 0.079\) & \(9.1/15\) & \(9/15\) \\ Auto.CbAS & \(0.910\pm 0.044\) & \(0.630\pm 0.045\) & \(0.506\pm 0.074\) & \(11.6/15\) & \(12/15\) \\ MIN & \(0.905\pm 0.052\) & \(0.616\pm 0.021\) & \(0.717\pm 0.046\) & \(11.0/15\) & \(12/15\) \\ \hline Grad & \(0.906\pm 0.024\) & \(0.635\pm 0.022\) & \(0.598\pm 0.034\) & \(7.7/15\) & \(9/15\) \\ Mean & \(0.899\pm 0.025\) & \(0.652\pm 0.020\) & \(0.666\pm 0.062\) & \(6.6/15\) & \(6/15\) \\ Min & \(0.939\pm 0.013\) & \(0.638\pm 0.029\) & \(0.705\pm 0.011\) & \(7.3/15\) & \(8/15\) \\ COMs & \(0.452\pm 0.040\) & \(0.624\pm 0.008\) & \(0.810\pm 0.029\) & \(10.3/15\) & \(12/15\) \\ ROMA & \(0.924\pm 0.040\) & \(0.666\pm 0.035\) & \(0.941\pm 0.020\) & \(5.1/15\) & \(5/15\) \\ NEMO & \(0.941\pm 0.000\) & \(\textbf{0.705}\pm\textbf{0.000}\) & \(0.734\pm 0.015\) & \(5.0/15\) & \(4/15\) \\ BDI & \(0.870\pm 0.000\) & \(0.605\pm 0.000\) & \(0.722\pm 0.000\) & \(7.9/15\) & \(8/15\) \\ IOM & \(0.878\pm 0.069\) & \(0.648\pm 0.023\) & \(0.274\pm 0.021\) & \(7.6/15\) & \(6/15\) \\ \hline \(\textbf{ICT}_{\rm(ours)}\) & \(\textbf{0.958}\pm\textbf{0.008}\) & \(\textbf{0.691}\pm\textbf{0.023}\) & \(0.667\pm 0.091\) & **3.1/15** & **2/15** \\ \hline \end{tabular}
\end{table}
Table 2: Experimental results on discrete tasks, and ranking on all tasks for comparison.
represented by a \(64\)-length sequence of \(5\)-categorical one-hot vectors, has a higher dimensionality than TF Bind \(8\) and TF Bind \(10\), making the optimization process more complex. Furthermore, the simplistic encoding-decoding strategy in design-bench may not accurately capture the intricacies of the neural network's accuracy, which can only be determined after training on CIFAR10.
**Summary.** ICT attains the highest rankings with a mean of \(3.1/15\) and median of \(2/15\) as shown in Table 2 and Figure 3, and also secures top performances in \(6\) out of the \(7\) tasks. We have further run a Welch's t-test between our method and the second-best method, obtaining p-values of \(0.437\) on SuperC, \(0.004\) on Ant, \(0.009\) on D'Kitty, \(0.014\) on Hopper, \(0.000\) on TF8, \(0.045\) on TF10, \(0.490\) on NAS. This demonstrates statistically significant improvement in \(5\) out of \(7\) tasks, reaffirming the effectiveness of our method.
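The significance test mentioned above is a standard Welch's t-test; a small sketch of how such a comparison can be run is shown below. The two score lists are placeholders standing in for the \(8\) per-trial scores of ICT and the runner-up on one task, not the reported numbers.

```python
from scipy.stats import ttest_ind

ict_scores = [0.96, 0.97, 0.95, 0.96, 0.97, 0.96, 0.95, 0.96]        # placeholder trials
runner_up_scores = [0.94, 0.95, 0.93, 0.94, 0.94, 0.95, 0.93, 0.94]  # placeholder trials
t_stat, p_value = ttest_ind(ict_scores, runner_up_scores, equal_var=False)  # Welch's t-test
print(f"p-value: {p_value:.3f}")
```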
### Ablation Studies
To better understand the impact of pseudo-label-driven co-teaching (co-teaching) and meta-learning-based sample reweighting (reweighting) on the performance of our proposed ICT method, we conduct ablation studies by removing either co-teaching or reweighting from the full ICT approach. Table 3 presents the results. Beyond just assessing these performance indicators, we also verify the accuracy of the samples chosen by co-teaching, as well as the efficacy of the sample weights we have calculated. We do this by referring to the ground truth, with further details provided in Appendix A.4. Our reweighting module is also compared with the recently proposed RGD method [26] as detailed in the Appendix A.5.
For two of the discrete tasks (TF\(8\) and TF\(10\)), the ICT method consistently exceeds the performance of both its ablated versions. This highlights the efficacy of the two steps when handling discrete tasks. Conversely, the exclusion of the co-teaching in NAS leads to an increase in performance. This could be attributed to the fact that the encoding-decoding strategy of NAS in design-bench is unable to accurately capture the inherent complexity of neural networks. As such, the co-teaching step, reliant on this strategy, might not be as effective. For the continuous tasks (SuperC, Ant, D'Kitty, and Hopper), we observe that the complete ICT method consistently achieves superior performance. This underlines the effectiveness of the two steps when dealing with continuous tasks. The performance gains are particularly substantial in the Hopper task when the complete ICT method is compared with the ablated versions, illustrating the power of the two steps in managing high-dimensional continuous tasks. Overall, our ablation studies demonstrate that the inclusion of both co-teaching and reweighting in the ICT method generally enhances performance across diverse tasks and input dimensions, underscoring their integral role in our approach.
### Hyperparameter Sensitivity
We first assess the robustness of our ICT method by varying the number of samples (\(K\)) selected during the co-teaching process on the continuous D'Kitty Morphology task. For this analysis, \(K\) is varied among \(K=8,16,32,64\). In Figure 4 (a), we illustrate the \(100^{th}\) percentile normalized ground-truth score as a function of time step \(T\), for each of these \(K\) values. The results demonstrate that the performance of ICT is resilient to variations in \(K\), maintaining performances within a certain range. Additionally, ICT is capable of generating high-scoring designs early on in the process, specifically achieving such designs around the time step \(t=50\), and sustains this performance thereafter, demonstrating its robustness against the number of optimization steps \(T\).
We further evaluate the robustness of our ICT method against the learning rate (\(\beta\)) for the meta-learning framework. As depicted in Figure 4 (b), ICT's performance remains relatively consistent across a variety of \(\beta\) values, further demonstrating ICT's robustness with respect to the hyperparameter \(\beta\). We explore the fine-tuning learning rate \(\alpha\) and conduct further experiments and analysis on TF Bind 8. Details can be found in Appendix A.6.
\begin{table}
\begin{tabular}{c c c c c} \hline Task & D & ICT & w/o co-teaching & w/o reweighting \\ \hline TF8 & 8 & **0.958 \(\pm\) 0.008** & \(0.905\pm 0.042\) & \(0.910\pm 0.024\) \\ TF10 & 10 & **0.691 \(\pm\) 0.023** & \(0.653\pm 0.018\) & \(0.654\pm 0.023\) \\ NAS & 64 & \(0.667\pm 0.091\) & **0.779 \(\pm\) 0.071** & \(0.666\pm 0.090\) \\ \hline SuperC & 86 & **0.503 \(\pm\) 0.017** & \(0.500\pm 0.017\) & \(0.501\pm 0.017\) \\ Ant & 60 & **0.961 \(\pm\) 0.007** & \(0.927\pm 0.033\) & \(0.914\pm 0.015\) \\ D’Kitty & 56 & **0.968 \(\pm\) 0.020** & \(0.962\pm 0.021\) & \(0.959\pm 0.013\) \\ Hopper & 5126 & **2.104 \(\pm\) 0.357** & \(1.453\pm 0.734\) & \(1.509\pm 0.166\) \\ \hline \end{tabular}
\end{table}
Table 3: Ablation studies on two core steps of ICT.
## 5 Related Works
**Offline Model-based Optimization.** Contemporary offline model-based optimization methods can be generally classified into two primary groups: (i) generating novel designs through generative models, and (ii) conducting gradient ascent on existing designs. The former methods learn and sample from the distribution of high-scoring designs including MIN [18], CbAS [19], Auto.CbAS [20] and BootGen [27]. Recently, gradient-based methods have gained popularity due to their ability to leverage deep neural networks (DNNs) for improved design generation. These methods apply regularization techniques to either the proxy itself [8; 9; 10] or the design under consideration [11; 12], enhancing the proxy's robustness and generalization capabilities. An interesting subfield of offline MBO includes biological sequence design, which has potential applications such as designing drugs for treating diseases [27; 28]. In particular, the work [27] also adopts a proxy as a pseudo-labeler and aligns the generator with the proxy, a technique that resonates with our method. ICT falls under this category, but adopts a unique approach to improve proxy performance: it incorporates valuable knowledge from a pseudo-labeled dataset into other proxies for fine-tuning, thereby enhancing the ensemble performance. Notably, while the concurrent work of parallel mentoring [29] also employs pseudo-labeling, it focuses on pairwise comparison labels, potentially sacrificing some information due to its discrete nature.
**Sample Reweighting.** Sample reweighting is commonly utilized to address the issue of label noise [30; 31], where each sample is assigned a larger weight if it is more likely to be accurate, using a carefully designed function. Recent studies [32; 33; 34] suggest using a meta-set to guide the learning of sample weights, which can enhance model training. Such an approach is grounded in a meta-learning framework which can be used to learn hyperparameters [35; 36; 37; 38; 39; 40; 41; 42]. Inspired by distributionally robust optimization, recent work [26] proposes a re-weighted gradient descent algorithm that provides an efficient and effective means of reweighting. In this paper, the pseudo-labeled dataset generated by co-teaching may still contain some inaccuracies, while the offline dataset is generally accurate. We propose a sample reweighting framework to reduce the inaccuracies in the pseudo-labeled dataset by leveraging the supervision signals from the offline dataset.
**Co-teaching.** Co-teaching [13] is an effective technique for mitigating label noise by leveraging insights from peer networks. It involves the concurrent training of two proxies where one proxy identifies small-loss samples within a noisy mini-batch for fine-tuning the other. Co-teaching bears similarities to decoupling [43] and co-training [44], as they all involve the interaction between two models to enhance the training process. In this study, we adapt co-teaching to work with a pseudo-labeled dataset generated by a trained proxy, instead of relying on a noisy original dataset. Specifically, we employ one proxy to select accurate samples from this pseudo-labeled dataset for fine-tuning the other, and vice versa.
## 6 Conclusion and Discussion
In this study, we introduce the ICT (Importance-aware Co-Teaching) method for mitigating the out-of-distribution issue prevalent in offline model-based optimization. ICT is a two-step approach. The first step is pseudo-label-driven co-teaching, which iteratively selects a proxy to generate pseudo-labeled data. Valuable data are identified by co-teaching to fine-tune other proxies. This process,
repeated three times with different pseudo-labelers, facilitates knowledge transfer. In the second step, meta-learning-based sample reweighting assigns importance weights to the samples selected by the co-teaching process and updates them, further improving proxy fine-tuning. Our experimental findings demonstrate the success of ICT. We discuss its limitations in Appendix A.7.
**Future Work.** Though we initially design ICT with three proxies, the method's inherent scalability and flexibility make it applicable to scenarios involving \(N\) proxies. In such a scenario, we can iteratively select one proxy out of \(N\) as the pseudo-labeler to generate data. Then, each of the remaining \(N-1\) proxies could select small-loss samples from its perspective and provide these samples to the other \(N-2\) proxies for fine-tuning. This process enhances knowledge transfer and facilitates cooperative learning among the proxies. Looking to the future, we plan to conduct further research into the dynamics of such an expanded ensemble of proxies.
**Negative Impact.** It is crucial to recognize that ICT's potential benefits come with possible negative consequences. Advanced optimization techniques can be applied for both constructive and destructive purposes, depending on their use. For example, while drug development and material design can have a positive impact on society, these techniques could also be misused to create harmful substances or products. As researchers, we must remain attentive and strive to ensure that our work is employed for the betterment of society while addressing any potential risks and ethical concerns.
## 7 Acknowledgement
This research was empowered in part by the computational support provided by Compute Canada (www.computecanada.ca).
|
2309.16609 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | 2023-09-28T17:07:49Z | http://arxiv.org/abs/2309.16609v1 | # Qwen Technical Report
###### Abstract
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen1, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
Footnote 1: Qwen is a moniker of Qianwen, which means “thousands of prompts” in Chinese. The pronunciation of “Qwen” can vary depending on the context and the individual speaking it. Here is one possible way to pronounce it: /kwen/.
###### Contents
* 1 Introduction
* 2 Pretraining
* 2.1 Data
* 2.2 Tokenization
* 2.3 Architecture
* 2.4 Training
* 2.5 Context Length Extension
* 2.6 Experimental Results
* 3 Alignment
* 3.1 Supervised Finetuning
* 3.1.1 Data
* 3.1.2 Training
* 3.2 Reinforcement Learning from Human Feedback
* 3.2.1 Reward Model
* 3.2.2 Reinforcement Learning
* 3.3 Automatic and Human Evaluation of Aligned Models
* 3.4 Tool Use, Code Interpreter, and Agent
* 4 Code-Qwen: Specialized Model for Coding
* 4.1 Code Pretraining
* 4.2 Code Supervised Fine-Tuning
* 4.3 Evaluation
* 5 Math-Qwen: Specialized Model for Mathematics Reasoning
* 5.1 Training
* 5.2 Evaluation
* 6 Related Work
* 6.1 Large Language Models
* 6.2 Alignment
* 6.3 Tool Use and Agents
* 6.4 LLM for Coding
* 6.5 LLM for Mathematics
* 7 Conclusion
* A Appendix
* A.1 More Training Details
* A.1.1 Data Format for Qwen-Chat
* A.2 Evaluation
* A.2.1 Automatic Evaluation
* A.2.2 Human Evaluation
* A.3 Analysis of Code Interpreter
## 1 Introduction
Large language models (LLMs) (Radford et al., 2018; Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020; OpenAI, 2023; Chowdhery et al., 2022; Anil et al., 2023; Thoppilan et al., 2022; Touvron et al., 2023a;b) have revolutionized the field of artificial intelligence (AI) by providing a powerful foundation for complex reasoning and problem-solving tasks. These models have the ability to compress vast knowledge into neural networks, making them incredibly versatile agents. With a chat interface, LLMs can perform tasks that were previously thought to be the exclusive domain of humans, especially those involving creativity and expertise (OpenAI, 2022; Ouyang et al., 2022; Anil et al., 2023; Google, 2023; Anthropic, 2023a;b). They can engage in natural language conversations with humans, answering questions, providing information, and even generating creative content such as stories, poems, and music. This has led to the development of a wide range of applications, from chatbots and virtual assistants to language translation and summarization tools.
LLMs are not just limited to language tasks. They can also function as a generalist agent (Reed et al., 2022; Bai et al., 2022; Wang et al., 2023a; AutoGPT, 2023; Hong et al., 2023), collaborating with external systems, tools, and models to achieve the objectives set by humans. For example, LLMs can understand multimodal instructions (OpenAI, 2023; Bai et al., 2023; Liu et al., 2023; Ye et al., 2023; Dai et al., 2023; Peng et al., 2023b), execute code (Chen et al., 2021; Zheng et al., 2023; Li et al., 2023d), use tools (Schick et al., 2023; LangChain, Inc., 2023; AutoGPT, 2023), and more. This opens up a whole new world of possibilities for AI applications, from autonomous vehicles and robotics to healthcare and finance. As these models continue to evolve and improve, we can expect to see even more innovative and exciting applications in the years to come. Whether it's helping us solve complex problems, creating new forms of entertainment, or transforming the way we live and work, LLMs are poised to play a central role in shaping the future of AI.
Despite their impressive capabilities, LLMs are often criticized for their lack of reproducibility, steerability, and accessibility to service providers. In this work, we are pleased to present and release the initial version of our LLM series, Qwen. Qwen is a moniker that derives from the Chinese phrase Qianwen, which translates to "thousands of prompts" and conveys the notion of embracing a wide range of inquiries. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. The model series include the base pretrained language models, chat models finetuned with human alignment techniques, i.e., supervised finetuning (SFT), reinforcement learning with human feedback (RLHF), etc., as well as specialized models in coding and math. The details are outlined below:
Figure 1: **Model Lineage of the Qwen Series. We have pretrained the language models, namely Qwen, on massive datasets containing trillions of tokens. We then use SFT and RLHF to align Qwen to human preference and thus we have Qwen-Chat and specifically its improved version Qwen-Chat-RLHF. Additionally, we also develop specialized models for coding and mathematics, such as Code-Qwen, Code-Qwen-Chat, and Math-Qwen-Chat based on Qwen with similar techniques. Note that we previously released the multimodal LLM, Qwen-VL and Qwen-VL-Chat (Bai et al., 2023), which are also based on our Qwen base models.**
1. The base language models, namely Qwen, have undergone extensive training using up to \(3\) trillion tokens of diverse texts and codes, encompassing a wide range of areas. These models have consistently demonstrated superior performance across a multitude of downstream tasks, even when compared to their more significantly larger counterparts.
2. The Qwen-Chat models have been carefully finetuned on a curated dataset relevant to task performing, chat, tool use, agent, safety, etc. The benchmark evaluation demonstrates that the SFT models can achieve superior performance. Furthermore, we have trained reward models to mimic human preference and applied them in RLHF for chat models that can produce responses preferred by humans. Through the human evaluation of a challenging test, we find that Qwen-Chat models trained with RLHF are highly competitive, still falling behind GPT-4 on our benchmark.
3. In addition, we present specialized models called Code-Qwen, which includes Code-Qwen-7B and Code-Qwen-14B, as well as their chat models, Code-Qwen-14B-Chat and Code-Qwen-7B-Chat. Specifically, Code-Qwen has been pre-trained on extensive datasets of code and further fine-tuned to handle conversations related to code generation, debugging, and interpretation. The results of experiments conducted on benchmark datasets, such as HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and HumanEvalPack (Muennighoff et al., 2023), demonstrate the high level of proficiency of Code-Qwen in code understanding and generation.
4. This research additionally introduces Math-Qwen-Chat specifically designed to tackle mathematical problems. Our results show that both Math-Qwen-7B-Chat and Math-Qwen-14B-Chat outperform open-sourced models in the same sizes with large margins and are approaching GPT-3.5 on math-related benchmark datasets such as GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021).
5. Besides, we have open-sourced Qwen-VL and Qwen-VL-Chat, which have the versatile ability to comprehend visual and language instructions. These models outperform the current open-source vision-language models across various evaluation benchmarks and support text recognition and visual grounding in both Chinese and English languages. Moreover, these models enable multi-image conversations and storytelling. Further details can be found in Bai et al. (2023).
Now, we officially open-source the 14B-parameter and 7B-parameter base pretrained models Qwen and aligned chat models Qwen-Chat2. This release aims at providing more comprehensive and powerful LLMs at developer- or application-friendly scales.
Footnote 2: GitHub: [https://github.com/QwenLM/Qwen](https://github.com/QwenLM/Qwen).
The structure of this report is as follows: Section 2 describes our approach to pretraining and results of Qwen. Section 3 covers our methodology for alignment and reports the results of both automatic evaluation and human evaluation. Additionally, this section describes details about our efforts in building chat models capable of tool use, code interpreter, and agent. In Sections 4 and 5, we delve into specialized models of coding and math and their performance. Section 6 provides an overview of relevant related work, and Section 7 concludes this paper and points out our future work.
## 2 Pretraining
The pretraining stage involves learning vast amount of data to acquire a comprehensive understanding of the world and its various complexities. This includes not only basic language capabilities but also advanced skills such as arithmetic, coding, and logical reasoning. In this section, we introduce the data, the model design and scaling, as well as the comprehensive evaluation results on benchmark datasets.
### Data
The size of data has proven to be a crucial factor in developing a robust large language model, as highlighted in previous research (Hoffmann et al., 2022; Touvron et al., 2023). To create an effective pretraining dataset, it is essential to ensure that the data are diverse and cover a wide range
of types, domains, and tasks. Our dataset is designed to meet these requirements and includes public web documents, encyclopedia, books, codes, etc. Additionally, our dataset is multilingual, with a significant portion of the data being in English and Chinese.
To ensure the quality of our pretraining data, we have developed a comprehensive data preprocessing procedure. For public web data, we extract text from HTML and use language identification tools to determine the language. To increase the diversity of our data, we employ deduplication techniques, including exact-match deduplication after normalization and fuzzy deduplication using MinHash and LSH algorithms. To filter out low-quality data, we employ a combination of rule-based and machine-learning-based methods. Specifically, we use multiple models to score the content, including language models, text-quality scoring models, and models for identifying potentially offensive or inappropriate content. We also manually sample texts from various sources and review them to ensure their quality. To further enhance the quality of our data, we selectively up-sample data from certain sources, to ensure that our models are trained on a diverse range of high-quality content. In recent studies (Zeng et al., 2022; Aribandi et al., 2021; Raffel et al., 2020), it has been demonstrated that pretraining language models with multi-task instructions can enhance their zero-shot and few-shot performance. To further enhance the performance of our model, we have incorporated high-quality instruction data into our pretraining process. To safeguard the integrity of our benchmark assessment, we have adopted a similar approach as Brown et al. (2020) and meticulously eliminated any instruction
samples that exhibit a 13-gram overlap with any data present in the test sets utilized in our evaluation. Given the large number of downstream tasks, it is not feasible to repeat this filtering process for all tasks. Instead, we have made sure that the instruction data for the reported tasks have undergone our filtering process to ensure their accuracy and reliability. Finally, we have built a dataset of up to \(3\) trillion tokens.

Figure 2: **Performance of GPT-4, GPT-3.5, the previous 13B SOTA, as well as Qwen-14B.** We demonstrate the results on \(12\) datasets covering multiple domains, including language understanding, knowledge, reasoning, etc. Qwen significantly outperforms the previous SOTA of similar model sizes, but still lags behind both GPT-3.5 and GPT-4.
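As an illustration only (this is not the actual Qwen pipeline), a 13-gram decontamination of this kind can be sketched as follows: any instruction sample that shares a 13-gram with an evaluation test set is dropped. The whitespace tokenization and function names below are assumptions made for the sketch.

```python
def ngrams(tokens, n=13):
    """All contiguous n-grams of a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def decontaminate(instruction_samples, test_samples, n=13):
    """Drop instruction samples that overlap with any test sample by an n-gram."""
    test_set = set()
    for t in test_samples:
        test_set |= ngrams(t.split(), n)
    return [s for s in instruction_samples if not (ngrams(s.split(), n) & test_set)]
```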
### Tokenization
The design of vocabulary significantly impacts the training efficiency and the downstream task performance. In this study, we utilize byte pair encoding (BPE) as our tokenization method, following GPT-3.5 and GPT-4. We start with the open-source fast BPE tokenizer, tiktoken (Jain, 2022), and select the vocabulary cl100k base as our starting point. To enhance the performance of our model on multilingual downstream tasks, particularly in Chinese, we augment the vocabulary with commonly used Chinese characters and words, as well as those in other languages. Also, following Touvron et al. (2023a;b), we have split numbers into single digits. The final vocabulary size is approximately \(152\)K.
The performance of the Qwen tokenizer in terms of compression is depicted in Figure 3. In this comparison, we have evaluated Qwen against several other tokenizers, including XLM-R (Conneau et al., 2019), LLaMA (Touvron et al., 2023a), Baichuan (Inc., 2023a), and InternLM (InternLM Team, 2023). Our findings reveal that Qwen achieves higher compression efficiency than its competitors in most languages. This implies that the cost of serving can be significantly reduced since a smaller number of tokens from Qwen can convey more information than its competitors. Furthermore, we have conducted preliminary experiments to ensure that scaling the vocabulary size of Qwen does not negatively impact the downstream performance of the pretrained model. Despite the increase in vocabulary size, our experiments have shown that Qwen maintains its performance levels in downstream evaluation.
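As a rough illustration of how such compression numbers can be measured (a sketch, not the evaluation script), fewer tokens per byte of text indicates better compression. Only the cl100k_base starting point mentioned above is loaded here via tiktoken; the released Qwen tokenizer itself ships with the open-source models and is not reproduced in this snippet.

```python
import tiktoken

def tokens_per_byte(texts, encoding_name="cl100k_base"):
    """Average number of tokens produced per UTF-8 byte of input text."""
    enc = tiktoken.get_encoding(encoding_name)
    n_tokens = sum(len(enc.encode(t)) for t in texts)
    n_bytes = sum(len(t.encode("utf-8")) for t in texts)
    return n_tokens / n_bytes
```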
Figure 3: **Encoding compression rates of different models. We randomly selected \(1\) million document corpora of each language to test and compare the encoding compression rates of different models (with XLM-R (Conneau et al., 2019), which supports \(100\) languages, as the base value \(1\), not shown in the figure). As can be seen, while ensuring the efficient decoding of Chinese, English, and code, Qwen also achieves a high compression rate for many other languages (such as th, he, ar, ko, vi, ja, tr, id, pl, ru, nl, pt, it, de, es, fr, etc.), equipping the model with strong scalability as well as high training and inference efficiency in these languages.**

### Architecture

Qwen is designed using a modified version of the Transformer architecture. Specifically, we have adopted the recent open-source approach of training large language models, LLaMA (Touvron et al., 2023a), which is widely regarded as the top open-source LLM. Our modifications to the architecture include:
* **Embedding and output projection**. Based on preliminary experimental findings, we have opted for the untied embedding approach instead of tying the weights of input embedding and output projection. This decision was made in order to achieve better performance at the price of higher memory costs.
* **Positional embedding**. We have chosen RoPE (Rotary Positional Embedding) (Su et al., 2021) as our preferred option for incorporating positional information into our model. RoPE has been widely adopted and has demonstrated success in contemporary large language models, notably PaLM (Chowdhery et al., 2022; Anil et al., 2023) and LLaMA (Touvron et al., 2023a;b). In particular, we have opted to use FP32 precision for the inverse frequency matrix, rather than BF16 or FP16, in order to prioritize model performance and achieve higher accuracy.
* **Bias**. For most layers, we remove biases following Chowdhery et al. (2022), but we add biases in the QKV layer of attention to enhance the extrapolation ability of the model (Su, 2023b).
* **Pre-Norm & RMSNorm**. In modern Transformer models, pre-normalization is the most widely used approach, which has been shown to improve training stability compared to post-normalization. Recent research has suggested alternative methods for better training stability, which we plan to explore in future versions of our model. Additionally, we have replaced the traditional layer normalization technique described in (Ba et al., 2016) with RMSNorm (Jiang et al., 2023). This change has resulted in equivalent performance while also improving efficiency.
* **Activation function**. We have selected SwiGLU (Shazeer, 2020) as our activation function, a combination of Swish (Ramachandran et al., 2017) and Gated Linear Unit (Dauphin et al., 2017). Our initial experiments have shown that activation functions based on GLU generally outperform other baseline options, such as GeLU (Hendrycks and Gimpel, 2016). As is common practice in previous research, we have reduced the dimension of the feed-forward network (FFN) from \(4\) times the hidden size to \(\frac{8}{3}\) times the hidden size. A sketch of the RoPE, RMSNorm, and SwiGLU components is given after this list.
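The following PyTorch sketch illustrates three of the components listed above: rotary position embeddings with the inverse-frequency table kept in FP32, RMSNorm, and a SwiGLU feed-forward block whose inner dimension is roughly \(\frac{8}{3}\) of the hidden size. It is a minimal illustration of the general techniques, not Qwen's actual implementation, and all shapes are placeholder values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rope_tables(head_dim, max_len, base=10000.0):
    """Precompute cos/sin tables; the inverse frequencies stay in float32."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    t = torch.arange(max_len, dtype=torch.float32)
    freqs = torch.outer(t, inv_freq)                       # (max_len, head_dim / 2)
    return freqs.cos(), freqs.sin()

def apply_rope(x, cos, sin):
    """Apply the rotation to x of shape (batch, seq, heads, head_dim), GPT-NeoX style."""
    seq = x.shape[1]
    cos = torch.cat((cos[:seq], cos[:seq]), dim=-1)[None, :, None, :]
    sin = torch.cat((sin[:seq], sin[:seq]), dim=-1)[None, :, None, :]
    x1, x2 = x.chunk(2, dim=-1)
    rotated = torch.cat((-x2, x1), dim=-1)
    return x * cos + rotated * sin

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        scale = torch.rsqrt(x.float().pow(2).mean(-1, keepdim=True) + self.eps)
        return (x.float() * scale).type_as(x) * self.weight

class SwiGLU(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        ffn_dim = int(8 * hidden_size / 3)                 # 8/3 x hidden size instead of 4x
        self.gate = nn.Linear(hidden_size, ffn_dim, bias=False)
        self.up = nn.Linear(hidden_size, ffn_dim, bias=False)
        self.down = nn.Linear(ffn_dim, hidden_size, bias=False)

    def forward(self, x):
        return self.down(F.silu(self.gate(x)) * self.up(x))
```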
### Training
To train Qwen, we follow the standard approach of autoregressive language modeling, as described in Radford et al. (2018). This involves training the model to predict the next token based on the context provided by the previous tokens. We train models with context lengths of \(2048\). To create batches of data, we shuffle and merge the documents, and then truncate them to the specified context lengths. To improve computational efficiency and reduce memory usage, we employ Flash Attention in the attention modules (Dao et al., 2022). We adopt the standard optimizer AdamW (Kingma and Ba, 2014; Loshchilov and Hutter, 2017) for pretraining optimization. We set the hyperparameters \(\beta_{1}=0.9\), \(\beta_{2}=0.95\), and \(\epsilon=10^{-8}\). We use a cosine learning rate schedule with a specified peak learning rate for each model size. The learning rate is decayed to a minimum learning rate of \(10\%\) of the peak learning rate. All the models are trained with BFloat16 mixed precision for training stability.
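As a minimal sketch of the optimization setup described above, the snippet below configures AdamW with \(\beta_{1}=0.9\), \(\beta_{2}=0.95\), \(\epsilon=10^{-8}\) and a learning rate schedule that warms up and then decays along a cosine to \(10\%\) of the peak. The model, peak learning rate, warm-up length, and step counts are placeholders, not the values used for Qwen.

```python
import math
import torch

def lr_at(step, peak_lr, warmup_steps, total_steps, min_ratio=0.1):
    """Linear warm-up followed by cosine decay to `min_ratio` of the peak learning rate."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    floor = peak_lr * min_ratio
    return floor + 0.5 * (peak_lr - floor) * (1.0 + math.cos(math.pi * progress))

model = torch.nn.Linear(16, 16)                      # stand-in for the language model
peak_lr, warmup_steps, total_steps = 3e-4, 2000, 100_000
optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                              betas=(0.9, 0.95), eps=1e-8)

for step in range(5):                                # training loop skeleton
    for group in optimizer.param_groups:
        group["lr"] = lr_at(step, peak_lr, warmup_steps, total_steps)
    # ... forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad()
```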
### Context Length Extension
Transformer models have a significant limitation in terms of the context length for their attention mechanism. As the context length increases, the quadratic-complexity computation leads to a drastic increase in both computation and memory costs. In this work, we have implemented simple training-free techniques that are solely applied during inference to extend the context length of the model. One of the key techniques we have used is NTK-aware interpolation (bloc97, 2023).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \# of Params & Hidden size & Heads & Layers & Learning rate & Batch size & Training tokens \\ \hline
1.8B & 2048 & 16 & 24 & \(3.0\times 10^{-4}\) & 4M & 2.2T \\
7B & 4096 & 32 & 32 & \(3.0\times 10^{-4}\) & 4M & 2.4T \\
14B & 5120 & 40 & 40 & \(3.0\times 10^{-4}\) & 4M & 3.0T \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Model sizes, architectures, and optimization hyper-parameters.**
Unlike position interpolation (PI) (Chen et al., 2023a) which scales each dimension of RoPE equally, NTK-aware interpolation adjusts the base of RoPE to prevent the loss of high-frequency information in a training-free manner. To further improve performance, we have also implemented a trivial extension called dynamic NTK-aware interpolation, which is later formally discussed in (Peng et al., 2023a). It dynamically changes the scale by chunks, avoiding severe performance degradation. These techniques allow us to effectively extend the context length of Transformer models without compromising their computational efficiency or accuracy.
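A minimal sketch of the idea behind (dynamic) NTK-aware interpolation is shown below: rather than rescaling positions, the RoPE base is enlarged as the sequence grows past the training length, which preserves high-frequency information. The exponent follows the commonly used NTK-aware formula; this is an illustration, not Qwen's exact inference-time code.

```python
import torch

def ntk_inv_freq(head_dim, seq_len, train_len=2048, base=10000.0):
    """Inverse RoPE frequencies with an NTK-aware base adjustment."""
    scale = max(1.0, seq_len / train_len)            # "dynamic": recomputed for the current length
    adjusted_base = base * scale ** (head_dim / (head_dim - 2))
    exponents = torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim
    return 1.0 / (adjusted_base ** exponents)

# Lower frequencies than the original 2048-token table, computed without any retraining.
print(ntk_inv_freq(head_dim=128, seq_len=8192)[:4])
```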
Qwen additionally incorporates two attention mechanisms: LogN-Scaling (Chiang and Cholak, 2022; Su, 2023a) and window attention (Beltagy et al., 2020). LogN-Scaling rescales the dot product of the query and key by a factor that depends on the ratio of the context length to the training length, ensuring that the entropy of the attention values remains stable as the context length grows. Window attention restricts attention to a limited context window, preventing the model from attending to tokens that are too far away.
We also observed that the long-context modeling ability of our model varies across layers, with lower layers being more sensitive to context length extension than higher layers. To leverage this observation, we assign a different window size to each layer, using shorter windows for lower layers and longer windows for higher layers.
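The sketch below illustrates the two inference-time mechanisms just described: a LogN scaling factor applied to the attention logits once the context exceeds the training length, and a simple layer-wise window schedule with shorter windows in lower layers. The specific window sizes are arbitrary placeholders.

```python
import math

def logn_scale(context_len, train_len=2048):
    """Scale attention logits by log_{train_len}(context_len) once the context grows."""
    if context_len <= train_len:
        return 1.0
    return math.log(context_len) / math.log(train_len)

def window_sizes(num_layers, short=2048, long=8192):
    """Assign shorter attention windows to lower layers and longer windows to higher ones."""
    return [short + (long - short) * layer // max(1, num_layers - 1)
            for layer in range(num_layers)]

print(logn_scale(16384))   # > 1, applied multiplicatively to the q·k scores
print(window_sizes(8))     # monotonically increasing from 2048 to 8192
```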
### Experimental Results
To evaluate the zero-shot and few-shot learning capabilities of our models, we conduct a thorough benchmark assessment using a series of datasets. We compare Qwen with the most recent open-source base models, including LLaMA (Touvron et al., 2023a), Llama 2 (Touvron et al., 2023b), MPT (Mosaic ML, 2023), Falcon (Almazrouei et al., 2023), Baichuan2 (Yang et al., 2023), ChatGLM2 (ChatGLM2 Team, 2023), InternLM (InternLM Team, 2023), XVERSE (Inc., 2023b), and StableBeluga2 (Stability AI, 2023). Our evaluation covers a total of 7 popular benchmarks,
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Model** & **Params** & **MMLU** & **C-Eval** & **GSM8K** & **MATH** & **HumanEval** & **MBPP** & **BBH** \\ & & 5-shot & 5-shot & 8-shot & 4-shot & 0-shot & 3-shot & 3-shot \\ \hline \multirow{3}{*}{MPT} & 7B & 30.8 & 23.5 & 9.1 & 3.0 & 18.3 & 22.8 & 35.6 \\ & 30B & 47.9 & - & 15.2 & 3.1 & 25.0 & 32.8 & 38.0 \\ \hline \multirow{3}{*}{Falcon} & 7B & 27.8 & - & 6.8 & 2.3 & - & 11.2 & **28.0** \\ & 40B & 57.0 & - & 19.6 & 5.5 & - & 29.8 & 37.1 \\ \cline{1-1} \cline{2-10} ChatGLM2 & 6B & 47.9 & 51.7 & 32.4 & 6.5 & - & - & **33.7** \\ \cline{1-1} \cline{2-10} InternLM & 7B & 51.0 & 53.4 & 31.2 & 6.3 & 10.4 & 14.0 & **37.0** \\ & 20B & 62.1 & 58.8 & 52.6 & 7.9 & 25.6 & 35.6 & 52.5 \\ \cline{1-1} \cline{2-10} Baichuan2 & 7B & 54.7 & 56.3 & 24.6 & 5.6 & 18.3 & 24.2 & **41.6** \\ & 13B & 59.5 & 59.0 & 52.8 & 10.1 & 17.1 & 30.2 & **49.0** \\ \hline \multirow{3}{*}{LLaMA} & 7B & 35.6 & 27.3 & 11.0 & 2.9 & 12.8 & 17.7 & **33.5** \\ & 13B & 47.7 & 31.8 & 20.3 & 4.2 & 15.8 & 22.0 & **37.9** \\ & 33B & 58.7 & 37.5 & 42.3 & 7.1 & 21.7 & 30.2 & 50.0 \\ & 65B & 63.7 & 40.4 & 54.4 & 10.6 & 23.7 & 37.7 & 58.4 \\ \hline \multirow{3}{*}{LLama 2} & **7B** & **46.8** & **32.5** & **16.7** & **3.3** & **12.8** & **20.8** & **38.2** \\ & 13B & **55.0** & **41.4** & **29.6** & **5.0** & **18.9** & **30.3** & **45.6** \\ & 34B & 62.6 & - & 42.2 & 6.2 & 22.6 & 33.0 & 44.1 \\ & 70B & 69.8 & 50.1 & 63.3 & 13.5 & 29.9 & 45.0 & 64.9 \\ \hline \multirow{3}{*}{StableBeluga2} & 70B & 68.6 & 51.4 & 69.6 & 14.6 & 28.0 & 11.4 & 69.3 \\ \cline{1-1} \cline{2-10} & 1.8B & **44.6** & **54.7** & **21.2** & **5.6** & **17.1** & **14.8** & **28.2** \\ \cline{1-1} & **7B** & 58.2 & **63.5** & **51.7** & **11.6** & **29.9** & **31.6** & **45.0** \\ \cline{1-1} & 14B & **66.3** & **72.1** & **61.3** & **24.8** & **32.3** & **40.8** & **53.4** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Overall performance on widely-used benchmarks compared to open-source base models. Our largest Qwen model with 14 billion parameters outperforms previous 13B SoTA models on all datasets.**
which are MMLU (5-shot) (Hendrycks et al., 2020), C-Eval (5-shot) (Huang et al., 2023), GSM8K (8-shot) (Cobbe et al., 2021), MATH (4-shot) (Hendrycks et al., 2021), HumanEval (0-shot) (Chen et al., 2021), MBPP (0-shot) (Austin et al., 2021), and BBH (Big Bench Hard) (3-shot) (Suzgun et al., 2022). We aim to provide a comprehensive summary of the overall performance of our models across these benchmarks.
In this evaluation, we focus on the base language models without alignment and collect the baselines' best scores from their official results and OpenCompass (OpenCompass Team, 2023). The results are presented in Table 2.
Our experimental results demonstrate that the three Qwen models exhibit exceptional performance across all downstream tasks. It is worth noting that even the larger models, such as LLaMA2-70B, are outperformed by Qwen-14B in \(3\) tasks. Qwen-7B also performs admirably, surpassing LLaMA2-13B and achieving comparable results to Baichuan2-13B. Notably, despite having a relatively small number of parameters, Qwen-1.8B is capable of competitive performance on certain tasks and even outperforms larger models in some instances. The findings highlight the impressive capabilities of the Qwen models, particularly Qwen-14B, and suggest that smaller models, such as Qwen-1.8B, can still achieve strong performance in certain applications.
To evaluate the effectiveness of context length extension, Table 3 presents the test results on arXiv data in terms of perplexity (PPL). These results demonstrate that by combining NTK-aware interpolation, LogN-Scaling, and layer-wise window assignment, we can effectively maintain the performance of our models at context lengths of over \(8192\) tokens.
Footnote 3: The dataset contains academic papers from https://arxiv.org.
## 3 Alignment
Pretrained large language models have been found to be not aligned with human behavior, making them unsuitable for serving as AI assistants in most cases. Recent research has shown that the use of alignment techniques, such as supervised finetuning (SFT) and reinforcement learning from human feedback (RLHF), can significantly improve the ability of language models to engage in natural conversation. In this section, we will delve into the details of how Qwen models have been trained using SFT and RLHF, and evaluate their performance in the context of chat-based assistance.
### Supervised Finetuning
To gain an understanding of human behavior, the initial step is to carry out SFT, which finetunes a pretrained LLM on chat-style data, including both queries and responses. In the following sections, we will delve into the details of data construction and training methods.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**Sequence Length**} \\ \cline{2-6} & 1024 & 2048 & 4096 & 8192 & 16384 \\ \hline Qwen-7B & 4.23 & 3.78 & 39.35 & 469.81 & 2645.09 \\ + dynamic\_ntk & 4.23 & 3.78 & 3.59 & 3.66 & 5.71 \\ + dynamic\_ntk + logn & 4.23 & 3.78 & 3.58 & 3.56 & 4.62 \\ + dynamic\_ntk + logn + window\_attn & 4.23 & 3.78 & 3.58 & 3.49 & 4.32 \\ \hline Qwen-14B & - & 3.46 & 22.79 & 334.65 & 3168.35 \\ + dynamic\_ntk + logn + window\_attn & - & 3.46 & 3.29 & 3.18 & 3.42 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Results of Qwen on long-context inference using various techniques. Our experimental findings reveal that the application of our crucial techniques enables the model to consistently achieve low perplexity as the context length increases. This suggests that these techniques play a significant role in enhancing the model’s ability to comprehend and generate lengthy texts.**
#### 3.1.1 Data
To enhance the capabilities of our supervised finetuning datasets, we have annotated conversations in multiple styles. While conventional datasets (Wei et al., 2022) contain a vast amount of data prompted with questions, instructions, and answers in natural language, our approach takes it a step further by annotating human-style conversations. This practice, inspired by Ouyang et al. (2022), aims at improving the model's helpfulness by focusing on natural language generation for diverse tasks. To ensure the model's ability to generalize to a wide range of scenarios, we specifically excluded data formatted in prompt templates that could potentially limit its capabilities. Furthermore, we have prioritized the safety of the language model by annotating data related to safety concerns such as violence, bias, and pornography.
In addition to data quality, we have observed that the training format can significantly impact the final performance of the model. To this end, we utilized the ChatML-style format (OpenAI, 2022), which is a versatile meta language capable of describing both the metadata (such as roles) and the content of a turn. This format enables the model to effectively distinguish between various types of information, including system setup, user inputs, and assistant outputs, among others. By leveraging this approach, we can enhance the model's ability to accurately process and analyze complex conversational data.
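For illustration, a single training example rendered in a ChatML-style layout might look like the string below; the special tokens and the conversation content are shown only as an assumed example of the format, not as data taken from the actual training set.

```python
# A hypothetical ChatML-style rendering of one conversation turn pair.
example = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Summarize the following paragraph in one sentence: ...<|im_end|>\n"
    "<|im_start|>assistant\n"
    "The paragraph argues that ...<|im_end|>\n"
)
print(example)
```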
#### 3.1.2 Training
Consistent with pretraining, we also apply next-token prediction as the training task for SFT. We apply loss masks to the system and user inputs, so that the loss is computed only on the assistant's outputs. More details are provided in Section A.1.1.
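A minimal sketch of such loss masking is shown below: tokens belonging to the system and user turns receive an ignore index so that only the assistant's tokens contribute to the next-token loss. The role annotations are assumed inputs for illustration.

```python
IGNORE_INDEX = -100  # the label value ignored by PyTorch's cross-entropy loss

def build_labels(input_ids, roles):
    """roles[i] is 'system', 'user', or 'assistant' for the token at position i."""
    return [tok if role == "assistant" else IGNORE_INDEX
            for tok, role in zip(input_ids, roles)]

labels = build_labels([11, 42, 7, 99], ["system", "user", "assistant", "assistant"])
print(labels)  # [-100, -100, 7, 99]
```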
The model's training process utilizes the AdamW optimizer, with the following hyperparameters: \(\beta_{1}\) set to \(0.9\), \(\beta_{2}\) set to \(0.95\), and \(\epsilon\) set to \(10^{-8}\). The sequence length is limited to \(2048\), and the batch size is \(128\). The model undergoes a total of \(4000\) steps, with the learning rate gradually increased over the first \(1430\) steps, reaching a peak of \(2\times 10^{-6}\). To prevent overfitting, weight decay is applied with a value of \(0.1\), dropout is set to \(0.1\), and gradient clipping is enforced with a limit of \(1.0\).
### Reinforcement Learning from Human Feedback
While SFT has proven to be effective, we acknowledge that its generalization and creativity capabilities may be limited, and it is prone to overfitting. To address this issue, we have implemented Reinforcement Learning from Human Feedback (RLHF) to further align SFT models with human preferences, following the approaches of Ouyang et al. (2022); Christiano et al. (2017). This process involves training a reward model and using Proximal Policy Optimization (PPO) (Schulman et al., 2017) to conduct policy training.
#### 3.2.1 Reward Model
To create a successful reward model, as with building a large language model (LLM), it is crucial to first conduct pretraining and then finetuning. This pretraining process, also known as preference model pretraining (PMP) (Bai et al., 2022), requires a vast dataset of comparison data. This dataset consists of sample pairs, each containing two distinct responses for a single query and their corresponding preferences. Similarly, finetuning is also conducted on this type of comparison data, but with higher quality due to the presence of quality annotations.
During the fine-tuning phase, we gather a variety of prompts and adjust the reward model based on human feedback for responses from the Qwen models. To ensure the diversity and complexity of user prompts are properly taken into account, we have created a classification system with around \(6600\) detailed tags and implemented a balanced sampling algorithm that considers both diversity and complexity when selecting prompts for annotation by the reward model (Lu et al., 2023). To generate a wide range of responses, we have utilized Qwen models of different sizes and sampling strategies, as diverse responses can help reduce annotation difficulties and enhance the performance of the reward model. These responses are then evaluated by annotators following a standard annotation guideline, and comparison pairs are formed based on their scores.
In creating the reward model, we utilize the same-sized pre-trained language model Qwen to initiate the process. It is important to mention that we have incorporated a pooling layer into the original
Qwen model to extract the reward for a sentence based on a specific end token. The learning rate for this process has been set to a constant value of \(3\times 10^{-6}\), and the batch size is \(64\). Additionally, the sequence length is set to \(2048\), and the training process lasts for a single epoch.
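The following sketch illustrates the kind of pooling head described above: the hidden state at a designated end token is mapped to a scalar reward. The tensor shapes and the way end positions are supplied are assumptions for illustration, not the actual reward-model code.

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """Scalar reward computed from the hidden state of a designated end token."""
    def __init__(self, hidden_size):
        super().__init__()
        self.value_head = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states, end_positions):
        # hidden_states: (batch, seq, hidden); end_positions: (batch,) index of the end token
        idx = end_positions.view(-1, 1, 1).expand(-1, 1, hidden_states.size(-1))
        end_states = hidden_states.gather(1, idx).squeeze(1)   # (batch, hidden)
        return self.value_head(end_states).squeeze(-1)         # (batch,)

head = RewardHead(hidden_size=32)
rewards = head(torch.randn(2, 10, 32), torch.tensor([9, 4]))
print(rewards.shape)  # torch.Size([2])
```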
We adopted the accuracy on the test dataset as an important but not exclusive evaluation metric for the reward model. In Table 4, we report the test pairwise accuracy of PMP and reward models on diverse human preference benchmark datasets (Bai et al., 2022; Stiennon et al., 2020; Ethayarajh et al., 2022; Lightman et al., 2023). Specifically, Qwen Helpful-base and Qwen Helpful-online are our proprietary datasets. The responses in Qwen Helpful-base are generated from Qwen without RLHF, whereas Qwen Helpful-online includes responses from Qwen with RLHF. The results show that the PMP model demonstrates high generalization capabilities on out-of-distribution data, and the reward model demonstrates significant improvement on our Qwen reward datasets.
#### 3.2.2 Reinforcement Learning
Our Proximal Policy Optimization (PPO) process involves four models: the policy model, value model, reference model, and reward model. Before starting the PPO procedure, we pause the policy model's updates and focus solely on updating the value model for \(50\) steps. This approach ensures that the value model can adapt to different reward models effectively.
During the PPO operation, we use a strategy of sampling two responses for each query simultaneously. This strategy has proven to be more effective based on our internal benchmarking evaluations. We set the KL divergence coefficient to \(0.04\) and normalize the reward based on the running mean.
The policy and value models have learning rates of \(1\times 10^{-6}\) and \(5\times 10^{-6}\), respectively. To enhance training stability, we utilize value loss clipping with a clip value of \(0.15\). For inference, the policy top-p is set to \(0.9\). Our findings indicate that although the entropy is slightly lower than when top-p is set to \(1.0\), there is a faster increase in reward, ultimately resulting in consistently higher evaluation rewards under similar conditions.
Additionally, we have implemented a pretrained gradient to mitigate the alignment tax. Empirical findings indicate that, with this specific reward model, the KL penalty is adequately robust to counteract the alignment tax in benchmarks that are not strictly code or math in nature, such as those that test common sense knowledge and reading comprehension. It is imperative to utilize a significantly larger volume of the pretrained data in comparison to the PPO data to ensure the effectiveness of the pretrained gradient. Additionally, our empirical study suggests that an overly large value for this coefficient can considerably impede the alignment to the reward model, eventually compromising the ultimate alignment, while an overly small value would only have a marginal effect on alignment tax reduction.
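To make the reward shaping and the pretrained-gradient mixing concrete, the sketch below applies a per-token KL penalty against the reference model, centers the score with a running mean, and mixes a language-modeling loss on pretraining data into the policy objective. The coefficient names and the running-mean update are illustrative assumptions, not the exact formulation used in our training.

```python
def shaped_reward(env_reward, logprob_policy, logprob_ref, running_mean, kl_coef=0.04):
    """Reward used by PPO: centered score minus a KL penalty against the reference model."""
    kl = logprob_policy - logprob_ref
    return (env_reward - running_mean) - kl_coef * kl

def total_policy_loss(ppo_loss, pretrain_lm_loss, ptx_coef):
    """Mix a language-modeling loss on pretraining data into the PPO objective."""
    return ppo_loss + ptx_coef * pretrain_lm_loss

print(shaped_reward(1.2, -0.8, -1.0, running_mean=0.5))
print(total_policy_loss(0.3, 2.1, ptx_coef=0.1))
```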
### Automatic and Human Evaluation of Aligned Models
To showcase the effectiveness of our aligned models, we conduct a comparison with other aligned models on well-established benchmarks, including MMLU (Hendrycks et al., 2020), C-Eval (Huang et al., 2023), GSM8K (Cobbe et al., 2021), HumanEval (Chen et al., 2021), and BBH (Suzgun et al., 2022). Besides the widely used few-shot setting, we test our aligned models in the zero-shot setting to demonstrate how well the models follow instructions. The prompt in a zero-shot setting consists of an instruction and a question without any previous examples in the context. The results of the baselines are collected from their official reports and OpenCompass (OpenCompass Team, 2023).
The results in Table 5 demonstrate the effectiveness of our aligned models in understanding human instructions and generating appropriate responses. Qwen-14B-Chat outperforms all other models
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Dataset & Qwen & Qwen & Anthropic & Anthropic & OpenAI & Stanford & OpenAI \\ & Helpful-base & Helpful-online & Helpful-base & Helpful-online & Summ. & SHP & PRM800K \\ \hline PMP & 62.68 & 61.62 & 76.52 & 65.43 & 69.60 & 60.05 & 70.59 \\ RM & 74.78 & 69.71 & 73.98 & 64.57 & 69.99 & 60.10 & 70.52 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test Accuracy of Qwen preference model pretraining (PMP) and reward model (RM) on diverse human preference benchmark datasets.
except ChatGPT (OpenAI, 2022) and Llama 2-Chat-70B (Touvron et al., 2023b) in all datasets, including MMLU (Hendrycks et al., 2020), C-Eval (Huang et al., 2023), GSM8K (Cobbe et al., 2021), HumanEval (Chen et al., 2021), and BBH (Suzgun et al., 2022). In particular, Qwen's performance in HumanEval, which measures the quality of generated codes, is significantly higher than that of other open-source models.
Moreover, Qwen's performance is consistently better than that of open-source models of similar size, such as LLAMA2 (Touvron et al., 2023b), ChatGLM2 (ChatGLM2 Team, 2023), InternLM (InternLM Team, 2023), and Baichuan2 (Yang et al., 2023). This suggests that our alignment approach, which involves fine-tuning the model on a large dataset of human conversations, has been effective in improving the model's ability to understand and generate human-like language.
Despite this, we have reservations about the ability of traditional benchmark evaluation to accurately measure the performance and potential of chat models trained with alignment techniques in today's landscape. The results mentioned earlier provide some evidence of our competitive standing, but we believe that it is crucial to develop new evaluation methods specifically tailored to aligned models.
We believe that human evaluation is crucial, which is why we have created a carefully curated dataset for this purpose. Our process involved collecting \(300\) instructions in Chinese that covered a wide range of topics, including knowledge, language understanding, creative writing, coding, and mathematics. To evaluate the performance of different models, we chose the SFT version of Qwen-Chat-7B and the SFT and RLHF versions of Qwen-Chat-14B, and added two strong baselines, GPT-3.5 and GPT-4, for comparison. For each instruction, we asked three annotators to rank the model responses by the overall score of helpfulness, informativeness, validity, and other relevant factors. Our dataset and evaluation methodology provide a comprehensive and rigorous assessment of the capabilities of different language models in various domains.
Footnote 4: To obtain the results from the models, we use the OpenAI APIs of GPT-3.5-turbo-0613 and GPT-4-0613.
Figure 4 illustrates the win rates of the various models. For each model, we report the percentage of wins, ties, and losses against GPT-3.5, with the segments of each bar from bottom to top representing these statistics. The experimental results clearly demonstrate that the RLHF model outperforms the SFT models by significant margins, indicating that RLHF can encourage the model to generate responses that are more preferred by humans. In terms of overall performance, we find that the RLHF model significantly outperforms the SFT models while still falling behind GPT-4. This indicates the effectiveness of RLHF for aligning to human preference. To provide a more comprehensive understanding of the models' performance, we include a case study with examples from different models in Appendix A.2.2. Nonetheless, it remains difficult to accurately capture the gap between our
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
**Model** & **Params** & \multicolumn{2}{c}{**MMLU**} & \multicolumn{2}{c}{**C-Eval**} & \multicolumn{2}{c}{**GSM8K**} & \multicolumn{2}{c}{**HumanEval**} & \multicolumn{2}{c}{**BBH**} \\ & & 0-shot / 5-shot & 0-shot / 5-shot & 0-shot / 8-shot & 0-shot & 0-shot & 0-shot / 3-shot \\ \hline \hline \multicolumn{10}{c}{_Propiferative models_} \\ \hline GPT-3.5 & - & - & / 69.1 & - / 52.5 & - & / 78.2 & 73.2 & - / 70.1 \\ GPT-4 & - & - / **83.0** & - & / **69.9** & - & / **91.4** & **86.6** & - / **86.7** \\ \hline \hline \multicolumn{10}{c}{_Open-source models_} \\ \hline ChatGLM2 & 6B & 45.5 / 46.0 & 50.1 / 52.6 & - & / 28.8 & 11.0 & - / 32.7 \\ InternLM-Chat & 7B & - & / 51.1 & - & / 53.6 & - & / 33.0 & 14.6 & - / 32.5 \\ Baichuan2-Chat & 7B & - & / 52.9 & - & / 55.6 & - & / 32.8 & 13.4 & - / 35.8 \\ & 13B & - & / 57.3 & - & / 56.7 & - & / 55.3 & 17.7 & - / 49.9 \\ \hline \multirow{3}{*}{Llama 2-Chat} & 7B & - & / 46.2 & - & / 31.9 & - & 26.3 & 12.2 & - / 35.6 \\ & 13B & - & / 54.6 & - & / 36.2 & - & / 37.1 & 18.9 & - / 40.1 \\ & 70B & - & / 63.8 & - & / 44.3 & - & / 59.3 & 32.3 & - / 60.8 \\ \hline \multirow{3}{*}{Qwen-Chat} & 1.8B & 42.4 / 43.9 & 50.7 / 50.3 & 27.8 / 19.5 & 14.6 & 27.1 / 25.0 \\ & 7B & 55.8 / 57.0 & 59.7 / 59.3 & 50.3 / 54.1 & 37.2 & 39.6 / 46.7 \\ \cline{1-1} & 14B & 64.6 / **66.5** & 69.8 / **71.7** & **60.1 /** 59.3 & **43.9** & 46.9 / **58.7** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Performance of aligned models on widely-used benchmarks.** We report both zero-shot and few-shot performance of the models.
models and the proprietary models. As such, a more extensive and rigorous assessment is required for the chat models.
### Tool Use, Code Interpreter, and Agent
The Qwen models, which are designed to be versatile, have the remarkable ability to assist with (semi-)automating daily tasks by leveraging their skills in tool-use and planning. As such, they can serve as agents or copilots to help streamline various tasks. We explore Qwen's proficiency in the following areas:
* Utilizing unseen tools through ReAct prompting (Yao et al., 2022) (see Table 6).
* Using a Python code interpreter to enhance math reasoning, data analysis, and more (see Table 7 and Table 8).
* Functioning as an agent that accesses Hugging Face's extensive collection of multimodal models while engaging with humans (see Table 9).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Params** & **Tool Selection (Acc.\(\uparrow\))** & **Tool Input (Rouge-L\(\uparrow\))** & **False Positive Error (\%)\(\downarrow\)** \\ \hline GPT-4 & - & 95 & 90 & 15.0 \\ GPT-3.5 & - & 85 & 88 & 75.0 \\ \hline \multirow{3}{*}{Qwen-Chat} & 1.8B & 92 & 89 & 19.3 \\ & 7B & **98** & 91 & 7.3 \\ \cline{1-1} & 14B & **98** & **93** & **2.4** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance of Qwen on the in-house Chinese benchmark that evaluates its ability to use unseen tools via ReAct prompting.
Figure 4: **Results of the human evaluation for chat models. We compare Qwen-7B (SFT), Qwen-14B (SFT), Qwen-14B (RLHF), as well as GPT-4 against GPT-3.5. Each bar segment represents the percentage of wins, ties, and losses, from bottom to top. On average, the RLHF model outperforms the SFT model. The dataset consists of 300 Chinese instructions.**
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Params**} & \multicolumn{4}{c}{**Category**} \\ \cline{3-6} & & Math (\%) & Vis.-Hard (\%) & Vis.-Easy (\%) & Vis.-All (\%) \\ \hline GPT-4 & - & 82.8 & 66.7 & 60.8 & 63.8 \\ GPT-3.5 & - & 47.3 & 33.3 & 55.7 & 44.2 \\ \multirow{2}{*}{Llama 2-Chat} & 7B & 3.9 & 14.3 & 39.2 & 26.4 \\ & 13B & 8.3 & 8.3 & 40.5 & 23.9 \\ \multirow{2}{*}{Code LLAMA-Instruct} & 7B & 14.3 & 26.2 & 60.8 & 42.9 \\ & 13B & 28.2 & 27.4 & 62.0 & 44.2 \\ \multirow{2}{*}{InterLM-Chat} & 7B v1.1 & 28.5 & 4.8 & 40.5 & 22.1 \\ & 20B & 34.6 & 21.4 & 45.6 & 33.1 \\ \hline \multirow{3}{*}{Qwen-Chat} & 1.8B & 14.7 & 3.6 & 20.3 & 11.7 \\ & 7B & 41.9 & 40.5 & 54.4 & 47.2 \\ \multirow{3}{*}{Llama 2-Chat} & 14B & 58.4 & 53.6 & 59.5 & 56.4 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Correctness of the final response on the in-house evaluation benchmark for Code Interpreter. Visualization-Hard tasks involve planning multiple steps, while Visualization-Easy tasks do not. Visualization-All measures both types of tasks. Code LLAMA excels in performing Visualization-Easy tasks but tends to underperform in Visualization-Hard tasks, due to its inclination to hallucinate non-existent columns based on the name of a CSV file (see Figure 5).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Params**} & \multicolumn{4}{c}{**Category**} \\ \cline{3-6} & & Math (\%) & Vis.-Hard (\%) & Vis.-Easy (\%) & Vis.-All (\%) \\ \hline GPT-4 & - & 82.8 & 66.7 & 60.8 & 63.8 \\ GPT-3.5 & - & 47.3 & 33.3 & 55.7 & 44.2 \\ \multirow{2}{*}{Llama 2-Chat} & 7B & 3.9 & 14.3 & 39.2 & 26.4 \\ & 13B & 8.3 & 8.3 & 40.5 & 23.9 \\ \multirow{2}{*}{Code LLAMA-Instruct} & 7B & 14.3 & 26.2 & 60.8 & 42.9 \\ & 13B & 28.2 & 27.4 & 62.0 & 44.2 \\ \multirow{2}{*}{InterLM-Chat} & 7B v1.1 & 28.5 & 4.8 & 40.5 & 22.1 \\ & 20B & 34.6 & 21.4 & 45.6 & 33.1 \\ \hline \multirow{3}{*}{Qwen-Chat} & 1.8B & 14.7 & 3.6 & 20.3 & 11.7 \\ & 7B & 41.9 & 40.5 & 54.4 & 47.2 \\ \multirow{3}{*}{InterLM-Chat} & 14B & 58.4 & 53.6 & 59.5 & 56.4 \\ \hline \hline \end{tabular}
\end{table}
Table 7: The proportion of code generated by Qwen that is executable on the in-house evaluation benchmark for Code Interpreter. This benchmark examines Qwen’s coding proficiency in math problem solving, data visualization, and general purposes. Code LLAMA underperforms on visualization tasks because it hallucinates non-existent columns solely based on CSV file names (see Figure 5).
To enhance Qwen's capabilities as an agent or copilot, we employ the self-instruct (Wang et al., 2023c) strategy for SFT. Specifically, we utilize the in-context learning capability of Qwen for self-instruction. By providing a few examples, we can prompt Qwen to generate more relevant queries and generate outputs that follow a specific format, such as ReAct (Yao et al., 2022). We then apply rules and involve human annotators to filter out any noisy samples. Afterwards, the samples are incorporated into Qwen's training data, resulting in an updated version of Qwen that is more dependable for self-instruction. We iterate through this process multiple times until we gather an ample number of samples that possess both exceptional quality and a wide range of diversity. As a result, our final collection consists of around \(2000\) high-quality samples.
During the finetuning process, we mix these high-quality samples with all the other general-purpose SFT samples, rather than introducing an additional training stage. By doing so, we are able to retain essential general-purpose capabilities that are also pertinent for constructing agent applications.
**Using Tools via ReAct Prompting.** We have created and made publicly available a benchmark for evaluating Qwen's ability to call plugins, tools, functions, or APIs using ReAct Prompting (see Qwen Team, Alibaba Group, 2023b). To ensure fair evaluation, we have excluded any plugins that were included in Qwen's training set from the evaluation set. The benchmark assesses the model's accuracy in selecting the correct plugin from a pool of up to five candidates, as well as the plausibility of the parameters passed into the plugin and the frequency of false positives. In this evaluation, a false positive occurs when the model incorrectly invokes a plugin in response to a query, despite not being required to do so.
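For reference, a ReAct-style exchange of the kind evaluated here typically interleaves thoughts, tool calls, and observations, as sketched below; the tool name, arguments, and exact wording are invented for illustration and are not the benchmark's actual prompts.

```python
# A hypothetical ReAct trace for a single tool-use query.
react_trace = (
    "Question: What will the weather be in Beijing tomorrow?\n"
    "Thought: I should look this up with the weather tool.\n"
    "Action: weather_api\n"
    'Action Input: {"city": "Beijing", "date": "tomorrow"}\n'
    "Observation: Sunny, high of 24°C.\n"
    "Thought: I now know the final answer.\n"
    "Final Answer: Tomorrow in Beijing is expected to be sunny with a high of 24°C.\n"
)
print(react_trace)
```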
The results presented in Table 6 demonstrate that Qwen consistently achieves higher accuracy in identifying the relevance of a query to the available tools as the model size increases. However, the table also highlights that beyond a certain point, there is little improvement in performance when it comes to selecting the appropriate tool and providing relevant arguments. This suggests that the current preliminary benchmark may be relatively easy and may require further enhancement in future iterations. It is worth noting that GPT-3.5 stands out as an exception, displaying suboptimal performance on this particular benchmark. This could potentially be attributed to the fact that the benchmark primarily focuses on the Chinese language, which may not align well with GPT-3.5's capabilities. Additionally, we observe that GPT-3.5 tends to attempt to use at least one tool, even if the query cannot be effectively addressed by the provided tools.
**Using Code Interpreter for Math Reasoning and Data Analysis.** The Python code interpreter is widely regarded as a powerful tool for augmenting the capabilities of an LLM agent. It is
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Params**} & \multicolumn{4}{c}{**Metric**} \\ \cline{4-6} & & & Tool Selection \(\uparrow\) & Tool Used \(\uparrow\) & Code Correctness \(\uparrow\) \\ \hline \multirow{8}{*}{Run Mode} & GPT-4 & - & 100 & 100 & 97.4 \\ \cline{2-6} & GPT-3.5 & - & 95.4 & 96.3 & 87.0 \\ \cline{1-1} \cline{2-6} & Starcoder-Base & 15B & 86.1 & 87.0 & 68.9 \\ \cline{1-1} \cline{2-6} & Starcoder & 15B & 87.0 & 88.0 & 68.9 \\ \cline{1-1} \cline{2-6} & & 1.8B & 85.2 & 84.3 & 61.1 \\ \cline{1-1} \cline{2-6} & Qwen-Chat & 7B & 87.0 & 87.0 & 71.5 \\ \cline{1-1} \cline{2-6} & & 14B & 93.5 & 94.4 & 87.0 \\ \hline \multirow{8}{*}{Chat Mode} & GPT-4 & - & 97.9 & 97.9 & 98.5 \\ \cline{1-1} \cline{2-6} & GPT-3.5 & - & 97.3 & 96.8 & 89.6 \\ \cline{1-1} \cline{2-6} & Starcoder-Base & 15B & 97.9 & 97.9 & 91.1 \\ \cline{1-1} \cline{2-6} & Starcoder & 15B & 97.9 & 97.9 & 89.6 \\ \cline{1-1} \cline{2-6} & & 1.8B & 93.6 & 93.6 & 73.2 \\ \cline{1-1} \cline{2-6} & Qwen-Chat & 7B & 94.7 & 94.7 & 85.1 \\ \cline{1-1} \cline{2-6} & & 14B & 97.9 & 97.9 & 95.5 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Results of Qwen-Chat on the Hugging Face Agent benchmark.
worth investigating whether Qwen can harness the full potential of this interpreter to enhance its performance in diverse domains, such as mathematical reasoning and data analysis. To facilitate this exploration, we have developed and made publicly available a benchmark that is specifically tailored for this purpose (see Qwen Team, Alibaba Group, 2023a).
The benchmark encompasses three primary categories of tasks: math problem-solving, data visualization, and other general-purpose tasks like file post-processing and web crawling. Within the visualization tasks, we differentiate between two levels of difficulty. The easier level can be achieved by simply writing and executing a single code snippet without the need for advanced planning skills. However, the more challenging level requires strategic planning and executing multiple code snippets in a sequential manner. This is because the subsequent code must be written based on the output of the previous code. For example, an agent may need to examine the structure of a CSV file using one code snippet before proceeding to write and execute additional code to create a plot.
Regarding evaluation metrics, we consider both the executability and correctness of the generated code. To elaborate on the correctness metrics, for math problems, we measure accuracy by verifying if the ground truth numerical answer is present in both the code execution result and the final response. When it comes to data visualization, we assess accuracy by utilizing Qwen-VL (Bai et al., 2023), a powerful multimodal language model. Qwen-VL is capable of answering text questions paired with images, and we rely on it to confirm whether the image generated by the code fulfills the user's request.
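A minimal sketch of the math-accuracy check described above is given below: an answer counts as correct only if the ground-truth number appears both in the code-execution output and in the final reply. The string matching here is deliberately simplistic and only illustrates the idea.

```python
def math_answer_correct(ground_truth, execution_output, final_response):
    """True iff the ground-truth numeric answer appears in both the tool output and the reply."""
    gt = str(ground_truth).strip()
    return gt in execution_output and gt in final_response

print(math_answer_correct(42, "The script printed: 42", "The answer is 42."))   # True
print(math_answer_correct(42, "The script printed: 41", "The answer is 42."))  # False
```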
The results regarding executability and correctness are presented in Table 7 and Table 8, respectively. It is evident that Code LLAMA generally outperforms LLAMA 2, its generalist counterpart, which is not surprising since this benchmark specifically requires coding skills. However, it is worth noting that specialist models that are optimized for code synthesis do not necessarily outperform generalist models. This is due to the fact that this benchmark encompasses various skills beyond coding, such as abstracting math problems into equations, understanding language-specified constraints, and responding in the specified format such as ReAct. Notably, Qwen-7B-Chat and Qwen-14B-Chat surpass all other open-source alternatives of similar scale significantly, despite being generalist models.
**Serving as a Hugging Face Agent.** Hugging Face provides a framework called the Hugging Face Agent or Transformers Agent (Hugging Face, 2023), which empowers LLM agents with a curated set of multimodal tools, including speech recognition and image synthesis. This framework allows an LLM agent to interact with humans, interpret natural language commands, and employ the provided tools as needed.
To evaluate Qwen's effectiveness as a Hugging Face agent, we utilized the evaluation benchmarks offered by Hugging Face. The results are presented in Table 9. The evaluation results reveal that Qwen performs quite well in comparison to other open-source alternatives, only slightly behind the proprietary GPT-4, demonstrating Qwen's competitive capabilities.
## 4 Code-Qwen: Specialized Model for Coding
Training on domain-specific data has been shown to be highly effective, particularly in the case of code pretraining and finetuning. A language model that has been reinforced with training on code data can serve as a valuable tool for coding, debugging, and interpretation, among other tasks. In this work, we have developed a series of generalist models using pretraining and alignment techniques. Building on this foundation, we have created domain-specific models for coding by leveraging the base language models of Qwen: the continued-pretrained model Code-Qwen and the supervised-finetuned model Code-Qwen-Chat. Both models are available in \(14\)-billion- and \(7\)-billion-parameter versions.
### Code Pretraining
We believe that relying solely on code data for pretraining can result in a significant loss of the ability to function as a versatile assistant. Unlike previous approaches that focused solely on pretraining on code data (Li et al., 2022; 2023d), we take a different approach (Roziere et al., 2023) by starting with our base models Qwen trained on a combination of text and code data, and then continuing to
pretrain on the code data. We continue to pretrain the models on a total of around \(90\) billion tokens. During the pre-training phase, we initialize the model using the base language models Qwen. Many applications that rely on specialized models for coding may encounter lengthy contextual scenarios, such as tool usage and code interpretation, as mentioned in Section 3.4. To address this issue, we train our models with context lengths of up to \(8192\). Similar to base model training in Section 2.4, we employ Flash Attention (Dao et al., 2022) in the attention modules, and adopt the standard optimizer AdamW (Kingma and Ba, 2014; Loshchilov and Hutter, 2017), setting \(\beta_{1}=0.9\), \(\beta_{2}=0.95\), and \(\epsilon=10^{-8}\). We set the learning rate as \(6.0\times 10^{-5}\) for Code-Qwen-14B and \(3.0\times 10^{-5}\) for Code-Qwen-7B, with \(3\%\) warm up iterations and no learning rate decays.
### Code Supervised Fine-Tuning
After conducting a series of empirical experiments, we have determined that the multi-stage SFT strategy yields the best performance compared to other methods. In the supervised fine-tuning stage, the model Code-Qwen-Chat, initialized from the code foundation model Code-Qwen, is optimized by the AdamW (Kingma and Ba, 2014; Loshchilov and Hutter, 2017) optimizer (\(\beta_{1}=0.9\), \(\beta_{2}=0.95\), \(\epsilon=10^{-8}\)) with a learning rate of \(2.0\times 10^{-6}\) and \(1.0\times 10^{-5}\) for the \(14\)B and \(7\)B models, respectively. The learning rate increases to the peak value with the cosine learning rate schedule (\(3\%\) warm-up steps) and then remains constant.
### Evaluation
Our Code-Qwen models have been compared with both proprietary and open-source language models, as shown in Tables 10 and 11. These tables present the results of our evaluation on the test sets of HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and the multi-lingual code generation benchmark HumanEvalPack (Muennighoff et al., 2023). The comparison is based on the pass@1 performance of the models on these benchmark datasets.
Our analysis reveals that specialized models, specifically Code-Qwen and Code-Qwen-Chat, significantly outperform previous baselines with similar parameter counts, such as OctoGeeX (Muennighoff et al., 2023), InstructCodeT5+ (Wang et al., 2023), and CodeGeeX2 (Zheng et al., 2023). In fact, these models even rival the performance of larger models like Starcoder (Li et al., 2023).
When compared to some of the extremely large-scale closed-source models, Code-Qwen and Code-Qwen-Chat demonstrate clear advantages in terms of pass@1. However, it is important to note that these models fall behind the state-of-the-art methods, such as GPT-4, in general. Nonetheless, with the continued scaling of both model size and data size, we believe that this gap can be narrowed in the near future.
It is crucial to emphasize that the evaluations mentioned previously are insufficient for grasping the full extent of the strengths and weaknesses of the models. In our opinion, it is necessary to develop more rigorous tests to enable us to accurately assess our relative performance in comparison to GPT-4.
## 5 Math-Qwen: Specialized Model for Mathematics Reasoning
We have created a mathematics-specialized model series called Math-Qwen-Chat, which is built on top of the Qwen pretrained language models. Specifically, we have developed assistant models that are specifically designed to excel in arithmetic and mathematics and are aligned with human behavior. We are releasing two versions of this model series, Math-Qwen-14B-Chat and Math-Qwen-7B-Chat, which have \(14\) billion and \(7\) billion parameters, respectively.
### Training
We carry out math SFT on our augmented math instructional dataset for mathematics reasoning, and therefore we obtain the chat model, Math-Qwen-Chat, directly. Owing to shorter average lengths of the math SFT data, we use a sequence length of \(1024\) for faster training. Most user inputs in the math SFT dataset are examination questions, and it is easy for the model to predict the input
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **Params** & **HumanEval** & **MBPP** \\ \hline \multicolumn{4}{c}{_Proprietary models_} \\ \hline PaLM & 540B & 26.2 & 36.8 \\ \hline PaLM-Coder & 540B & 36.0 & 47.0 \\ \hline PaLM 2-S & - & 37.6 & 50.0 \\ Code-Cushman-001 & - & 33.5 & 45.9 \\ \hline Code-Davinci-002 & - & 47.0 & 58.1 \\ \hline GPT-3.5 & - & 73.2 & - \\ GPT-4 & - & 86.6 & - \\ \hline \multicolumn{4}{c}{_Open-source models_} \\ \hline \multirow{3}{*}{Llama 2} & 7B & 12.2 & 20.8 \\ & 13B & 20.1 & 27.6 \\ & 34B & 22.6 & 33.8 \\ & 70B & 30.5 & 45.4 \\ \hline CodeGen-Multi & 16B & 18.3 & 20.9 \\ \hline CodeGen-Mono & 16B & 29.3 & 35.3 \\ \hline CodeGeeX2 & 6B & 35.9 & - \\ \hline StarCoder-Prompted & 15B & 40.8 & 49.5 \\ \hline CodeT5+ & 16B & 30.9 & - \\ \hline InstructCodeT5+ & 16B & 35.0 & - \\ \hline \multirow{3}{*}{Code LLAMA} & 7B & 33.5 & 41.4 \\ & 13B & 36.0 & 47.0 \\ & 34B & 48.8 & 55.0 \\ \hline \multirow{3}{*}{Code LLAMA-Instruct} & 7B & 34.8 & 44.4 \\ & 13B & 42.7 & 49.4 \\ & 34B & 41.5 & 57.0 \\ \hline \multirow{3}{*}{Code LLAMA-Python} & 7B & 38.4 & 47.6 \\ & 13B & 43.3 & 49.0 \\ \cline{1-1} & 34B & 53.7 & 56.2 \\ \cline{1-1} & 34B & 62.2 & 61.2 \\ \cline{1-1} \cline{2-5} WizardCoder-Python & 13B & 64.0 & **55.6** \\ & 34B & 73.2 & 61.2 \\ \hline \multirow{3}{*}{Qwen-Chat} & 7B & 37.2 & 35.8 \\ & 14B & 43.9 & 46.4 \\ \hline \multirow{3}{*}{Code-Qwen} & 7B & 40.2 & 41.8 \\ & 14B & 45.1 & 51.4 \\ \cline{1-1} \cline{2-5} & 7B & 43.3 & 44.2 \\ \cline{1-1} & 14B & **66.4** & 52.4 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Results of pass@1 (%) on HumanEval and MBPP. Most scores are retrieved from the papers of StarCoder (Li et al., 2023d), CodeT5+ (Wang et al., 2023d), WizardCoder (Luo et al., 2023b) and Code LLAMA (Roziere et al., 2023).
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Params**} & \multicolumn{6}{c}{**Programming Language**} \\ \cline{3-10} & & Python & JavaScript & Java & Go & C++ & Rust & Avg. \\ \hline \multicolumn{10}{c}{_Proprietary models_} \\ \hline GPT-4 & - & 86.6 & 82.9 & 81.7 & 72.6 & 78.7 & 67.1 & 78.3 \\ \hline \multicolumn{10}{c}{_Open-source models_} \\ \hline InstructCodeT5+ & 16B & 37.0 & 18.9 & 17.4 & 9.5 & 19.8 & 0.3 & 17.1 \\ StarChat-\(\beta\) & 15B & 33.5 & 31.4 & 26.7 & 25.5 & 26.6 & 14.0 & 26.3 \\ StarCoder & 15B & 33.6 & 30.8 & 30.2 & 17.6 & 31.6 & 21.8 & 27.6 \\ CodeGeeX2 & 6B & 35.9 & 32.2 & 30.8 & 22.5 & 29.3 & 18.1 & 28.1 \\ OctoGeeX & 6B & 44.7 & 33.8 & 36.9 & 21.9 & 32.3 & 15.7 & 30.9 \\ OctoCoder & 15B & 46.2 & 39.2 & 38.2 & 30.4 & 35.6 & 23.4 & 35.5 \\ WizardCoder & 15B & 59.8 & 49.5 & 36.1 & 36.4 & 40.9 & 20.2 & 40.5 \\ \hline Qwen-Chat & 7B & 37.2 & 23.2 & 32.9 & 20.7 & 22.0 & 9.1 & 24.2 \\ & 14B & 43.9 & 38.4 & 42.7 & 34.1 & 24.4 & 18.9 & 33.7 \\ Code-Qwen & 7B & 40.2 & 40.4 & 40.2 & 26.2 & 20.7 & 15.8 & 30.6 \\ & 14B & 45.1 & 51.8 & 57.3 & 39.6 & 18.2 & 20.7 & 38.8 \\ Code-Qwen-Chat & 7B & 43.3 & 41.5 & 49.4 & 29.3 & 32.9 & 20.1 & 36.1 \\ & 14B & **66.4** & **58.5** & **56.1** & **47.6** & **54.2** & **28.7** & **51.9** \\ \hline \hline \end{tabular}
\end{table}
Table 11: **Zero-shot pass@1 (%) performance on the HumanEvalPack (synthesize) benchmark.** The baseline results are partly from OctoPack (Muennighoff et al., 2023).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model** & **Params** & **GSM8K** & **MATH** & **Math401** & **Math23K** \\ \hline \multicolumn{6}{c}{_Proprietary models_} \\ \hline GPT-4 & - & **92.0** & **42.5** & 83.5 & 74.0 \\ GPT-3.5 & - & 80.8 & 34.1 & 75.1 & 60.0 \\ \hline \multirow{3}{*}{Minerva} & 8B & 16.2 & 14.1 & - & - \\ & 62B & 52.4 & 27.6 & - & - \\ & 540B & 58.8 & 33.6 & - & - \\ \hline \multicolumn{6}{c}{_Open-source models_} \\ \hline LLaMA-1 RFT & 7B & 46.5 & 5.2 & - & - \\ & 13B & 52.1 & 5.1 & - & - \\ \hline \multirow{3}{*}{WizardMath} & 7B & 54.9 & 10.7 & - & - \\ & 13B & 63.9 & 14.0 & - & - \\ & 70B & 81.6 & 22.7 & - & - \\ \hline \multirow{3}{*}{GAIRMath-Abel} & 7B & 59.7 & 13.0 & - & - \\ & 13B & 66.4 & 17.3 & - & - \\ & 70B & 83.6 & 28.3 & - & - \\ \hline \multirow{3}{*}{Qwen-Chat} & 7B & 50.3 & 6.8 & 57.4 & 51.2 \\ & 14B & 60.1 & 18.4 & 70.1 & 67.0 \\ \hline \multirow{3}{*}{Math-Qwen-Chat} & 7B & 62.5 & 17.2 & 80.8 & 75.4 \\ & 14B & 69.8 & 24.2 & **85.0** & **78.4** \\ \hline \hline \end{tabular}
\end{table}
Table 12: **Results of models on mathematical reasoning.** We report the accuracy of Qwen for all benchmarks using greedy decoding. For MATH, we are reporting Qwen’s performances on the test set from Lightman et al. (2023).
format, and it is meaningless for the model to predict the input condition and numbers, which could be random. Thus, we mask the system and user inputs to avoid computing the loss on them, and we find that masking them accelerates convergence in our preliminary experiments. For optimization, we use the AdamW optimizer with the same hyperparameters as in SFT, except that we use a peak learning rate of \(2\times 10^{-5}\) and \(50\,000\) training steps.
### Evaluation
We evaluate models on the test sets of GSM8K (grade-school math) (Cobbe et al., 2021), MATH (challenging competition math problems) (Hendrycks et al., 2021), Math401 (arithmetic ability) (Yuan et al., 2023b), and Math23K (Chinese grade-school math) (Wang et al., 2017). In Table 12, we compare Math-Qwen-Chat with the proprietary models ChatGPT and Minerva (Lewkowycz et al., 2022) and the open-source math-specialized models RFT (Yuan et al., 2023a), WizardMath (Luo et al., 2023a), and GAIRMath-Abel (Chern et al., 2023a). Math-Qwen-Chat models show better math reasoning and arithmetic abilities than open-source models and Qwen-Chat models of similar sizes. Compared to proprietary models, Math-Qwen-7B-Chat outperforms Minerva-8B on MATH. Math-Qwen-14B-Chat approaches Minerva-62B and GPT-3.5 on GSM8K and MATH and delivers better performance on arithmetic ability and Chinese math problems.
## 6 Related Work
### Large Language Models
The excitement around LLMs began with the introduction of the Transformer architecture (Vaswani et al., 2017), which was then applied to pretraining on large-scale data by researchers such as Radford et al. (2018); Devlin et al. (2018); Liu et al. (2019). These efforts led to significant success in transfer learning, with model sizes growing from \(100\) million to over \(10\) billion parameters (Raffel et al., 2020; Shoeybi et al., 2019).
In 2020, the release of GPT-3, a massive language model that is \(10\) times larger than T5, demonstrated the incredible potential of few-shot and zero-shot learning through prompt engineering and in-context learning, and later chain-of-thought prompting (Wei et al., 2022c). This success has led to a number of studies exploring the possibilities of further scaling these models (Scao et al., 2022; Zhang et al., 2022; Du et al., 2021; Zeng et al., 2022; Lepikhin et al., 2020; Fedus et al., 2022; Du et al., 2022; Black et al., 2022; Rae et al., 2021; Hoffmann et al., 2022; Chowdhery et al., 2022; Thoppilan et al., 2022). As a result, the community has come to view these large language models as essential foundations for downstream models (Bommasani et al., 2021).
The birth of ChatGPT (OpenAI, 2022) and the subsequent launch of GPT-4 (OpenAI, 2023) marked two historic moments in the field of artificial intelligence, demonstrating that large language models (LLMs) can serve as effective AI assistants capable of communicating with humans. These events have sparked interests among researchers and developers in building language models that are aligned with human values and potentially even capable of achieving artificial general intelligence (AGI) (Anil et al., 2023; Anthropic, 2023a;b).
One notable development in this area is the emergence of open-source LLMs, specifically LLaMA (Touvron et al., 2023a) and Llama 2 (Touvron et al., 2023b), which have been recognized as the most powerful open-source language models ever created. This has led to a surge of activity in the open-source community (Wolf et al., 2019), with a series of large language models being developed collaboratively to build upon this progress (Mosaic ML, 2023; Almazrouei et al., 2023; ChatGLM2 Team, 2023; Yang et al., 2023; InternLM Team, 2023).
### Alignment
The community was impressed by the surprising effectiveness of alignment on LLMs. Previously, LLMs without alignment often struggled with issues such as repetitive generation, hallucination, and deviation from human preferences. Since 2021, researchers have been diligently working on developing methods to enhance the performance of LLMs in downstream tasks (Wei et al., 2022a; Sanh et al., 2021; Longpre et al., 2023; Chung et al., 2022; Muennighoff et al., 2022). Furthermore,
researchers have been actively exploring ways to align LLMs with human instructions (Ouyang et al., 2022; Askell et al., 2021; Bai et al., 2022b;c). One major challenge in alignment research is the difficulty of collecting data. While OpenAI has utilized its platform to gather human prompts or instructions, it is not feasible for others to collect such data.
However, there has been some progress in this area, such as the self-instruct approach proposed in Wang et al. (2023c). This innovative work offers a potential solution to the data collection problem in alignment research. As a result, there has been a surge in open-source chat data, including Alpaca (Taori et al., 2023), MOSS (Sun et al., 2023a), Dolly (Conover et al., 2023), Evol-Instruct (Xu et al., 2023b), and others (Sun et al., 2023b; Xu et al., 2023a;c; Chen et al., 2023c; Ding et al., 2023; Ji et al., 2023; Yang, 2023). Similarly, there has been an increase in open-source chat models, such as Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Guanaco (Dettmers et al., 2023), MOSS (Sun et al., 2023a), WizardLM (Xu et al., 2023b), and others (Xu et al., 2023c; Chen et al., 2023c; Ding et al., 2023; Wang et al., 2023b).
To train an effective chat model, available solutions are mostly based on SFT and RLHF (Ouyang et al., 2022). While SFT is similar to pretraining, it focuses on instruction following using the aforementioned data. However, for many developers, the limited memory capacity is a major obstacle to further research in SFT. As a result, parameter-efficient tuning methods, such as LoRA (Hu et al., 2021) and Q-LoRA (Dettmers et al., 2023), have gained popularity in the community. LoRA tunes only low-rank adapters, while Q-LoRA builds on LoRA and utilizes 4-bit quantized LLMs and paged attention (Dettmers et al., 2022; Frantar et al., 2022; Kwon et al., 2023). In terms of RLHF, recent methods such as PPO (Schulman et al., 2017; Touvron et al., 2023b) have been adopted, but there are also alternative techniques aimed at addressing the complexity of optimization, such as RRHF (Yuan et al., 2023c), DPO (Rafailov et al., 2023), and PRO (Song et al., 2023). Despite the ongoing debate about the effectiveness of RLHF, more evidence is needed to understand how it enhances the intelligence of LLMs and what potential drawbacks it may have.
### Tool Use and Agents
LLM's planning function allows for the invocation of tools, such as APIs or agent capabilities, through in-context learning, as demonstrated by Schick et al. (2023). Yao et al. (2022) introduced ReAct, a generation format that enables the model to generate thoughts on which tool to use, accept input from API observations, and generate a response. GPT-3.5 and GPT-4, when prompted with few shots, have shown consistent and impressive performance. In addition to tool usage, LLMs can utilize external memory sources like knowledge bases (Hu et al., 2023; Zhong et al., 2023b) or search engines (Nakano et al., 2021; Liu et al., 2023b) to generate more accurate and informative answers. This has led to the popularity of frameworks like LangChain (LangChain, Inc., 2023). The research on LLMs for tool use has also sparked interest in building agents with LLM capabilities, such as agents that can call different AI models (Shen et al., 2023; Li et al., 2023a), embodied lifelong learning or multimodal agents (Wang et al., 2023a; Driess et al., 2023), and multiple agents interacting with each other and even building a micro-society (Chen et al., 2023b; Li et al., 2023b; Xu et al., 2023d; Hong et al., 2023).
### LLM for Coding
Previous research has demonstrated that LLMs possess remarkable capabilities in code understanding and generation, particularly those with massive numbers of parameters (Chowdhery et al., 2022; Anil et al., 2023; Rae et al., 2021; Hoffmann et al., 2022). Moreover, several LLMs have been pre-trained, continued pre-trained, or fine-tuned on coding-related data, which has resulted in significantly improved performance compared to general-purpose LLMs. These models include Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), Santacoder (Allal et al., 2023), Starcoder-Base (Li et al., 2023d), InCoder (Fried et al., 2022), CodeT5 (Wang et al., 2021), CodeGeeX (Zheng et al., 2023), and Code LLaMA (Roziere et al., 2023). In addition to these models, recent studies have focused on developing specialized alignment techniques for coding, such as Code Llama-Instruct (Roziere et al., 2023) and StarCoder (Li et al., 2023d). These models can assist developers in various code-related tasks, including code generation (Chen et al., 2021; Austin et al., 2021), code completion (Zhang et al., 2023a), code translation (Szafraniec et al., 2023), bug fixing (Muennighoff et al., 2023), code refinement (Liu et al., 2023c), and code question answering (Liu & Wan, 2021). In short, LLMs
have the potential to revolutionize the field of coding by providing developers with powerful tools for code comprehension, generation, and related tasks.
### LLM for Mathematics
LLMs with a certain model scale have been found to possess the ability to perform mathematical reasoning (Wei et al., 2022b; Suzgun et al., 2022). In order to encourage LLMs to achieve better performance on math-related tasks, researchers have employed techniques such as chain-of-thought prompting (Wei et al., 2022c) and scratchpad (Nye et al., 2021), which have shown promising results. Additionally, self-consistency (Wang et al., 2022) and least-to-most prompting (Zhou et al., 2022) have further improved the performance of these models on these tasks. However, prompt engineering is a time-consuming process that requires a lot of trial and error, and it is still difficult for LLMs to consistently perform well or achieve satisfactory results in solving mathematical problems. Moreover, simply scaling the data and model size is not an efficient way to improve a model's mathematical reasoning abilities. Instead, pretraining on math-related corpora has been shown to consistently enhance these capabilities (Hendrycks et al., 2021; Lewkowycz et al., 2022; Taylor et al., 2022; Lightman et al., 2023). Additionally, fine-tuning on math-related instruction-following datasets (Si et al., 2023; Yuan et al., 2023; Luo et al., 2023; Yue et al., 2023; Chern et al., 2023; Yu et al., 2023), has also been effective and more cost-effective than math-specific pretraining. Despite their limitations in terms of accuracy, LLMs still have significant potential to assist users with practical mathematical problems. There is ample scope for further development in this area.
## 7 Conclusion
In this report, we present the Qwen series of large language models, which showcase the latest advancements in natural language processing. With 14B, 7B, and 1.8B parameters, these models have been pre-trained on massive amounts of data, including trillions of tokens, and fine-tuned using cutting-edge techniques such as SFT and RLHF. Additionally, the Qwen series includes specialized models for coding and mathematics, such as Code-Qwen, Code-Qwen-Chat, and Math-Qwen-Chat, which have been trained on domain-specific data to excel in their respective fields. Our results demonstrate that the Qwen series is competitive with existing open-source models and even matches the performance of some proprietary models on comprehensive benchmarks and human evaluation.
We believe that the open access of Qwen will foster collaboration and innovation within the community, enabling researchers and developers to build upon our work and push the boundaries of what is possible with language models. By providing these models to the public, we hope to inspire new research and applications that will further advance the field and contribute to our understanding of the variables and techniques introduced in realistic settings. In a nutshell, the Qwen series represents a major milestone in our development of large language models, and we are excited to see how it will be used to drive progress and innovation in the years to come. |
2305.19837 | EAMDrift: An interpretable self retrain model for time series | The use of machine learning for time series prediction has become
increasingly popular across various industries thanks to the availability of
time series data and advancements in machine learning algorithms. However,
traditional methods for time series forecasting rely on pre-optimized models
that are ill-equipped to handle unpredictable patterns in data. In this paper,
we present EAMDrift, a novel method that combines forecasts from multiple
individual predictors by weighting each prediction according to a performance
metric. EAMDrift is designed to automatically adapt to out-of-distribution
patterns in data and identify the most appropriate models to use at each moment
through interpretable mechanisms, which include an automatic retraining
process. Specifically, we encode different concepts with different models, each
functioning as an observer of specific behaviors. The activation of the overall
model then identifies which subset of the concept observers is identifying
concepts in the data. This activation is interpretable and based on learned
rules, allowing the study of relations between input variables. Our study on real-world
datasets shows that EAMDrift outperforms individual baseline models by 20% and
achieves comparable accuracy results to non-interpretable ensemble models.
These findings demonstrate the efficacy of EAMDrift for time-series prediction
and highlight the importance of interpretability in machine learning models. | Gonçalo Mateus, Cláudia Soares, João Leitão, António Rodrigues | 2023-05-31T13:25:26Z | http://arxiv.org/abs/2305.19837v1 | # EAMDrift: An interpretable self retrain model for time series+
###### Abstract
The use of machine learning for time series prediction has become increasingly popular across various industries thanks to the availability of time series data and advancements in machine learning algorithms. However, traditional methods for time series forecasting rely on pre-optimized models that are ill-equipped to handle unpredictable patterns in data.
In this paper, we present EAMDrift, a novel method that combines forecasts from multiple individual predictors by weighting each prediction according to a performance metric. EAMDrift is designed to automatically adapt to out-of-distribution patterns in data and identify the most appropriate models to use at each moment through interpretable mechanisms, which include an automatic retraining process. Specifically, we encode different concepts with different models, each functioning as an observer of specific behaviors. The activation of the overall model then identifies which subset of the concept observers is identifying concepts in the data. This activation is interpretable and based on learned rules, allowing the study of relations between input variables.
Our study on real-world datasets shows that EAMDrift outperforms individual baseline models by 20% and achieves comparable accuracy results to non-interpretable ensemble models. These findings demonstrate the efficacy of EAMDrift for time-series prediction and highlight the importance of interpretability in machine learning models.
Keywords: Time series forecasting · Ensemble Prediction Model · Dynamic prediction model · Interpretability · Feature Extraction
## 1 Introduction
Nowadays, vast amounts of time series data are generated and collected from various sources in a streaming setting. Novel algorithms and hardware enable extracting valuable insights from these data streams through machine learning algorithms in fields such as finance [1] and public health [2]. Furthermore, the
SARS-CoV-2 pandemic has highlighted the importance of time series prediction in forecasting, such as the spread of infectious diseases and the demand for medical supplies and services [3, 4].
However, the widespread implementation of artificial intelligence is hindered by a lack of trust in multiple industries due to the absence of clarity on the model behavior to back up decisions [5, 6, 7, 8]. Confidence and interpretability are closely connected, and as a public concern, the European ethics guidelines for trustworthy AI [9] state that "the degree of interpretability needed is highly dependent on the context," but recommend having "transparent AI systems that explain their decisions to those directly and indirectly affected."
As a general concept, time series data is a sequence of unpredictable and varying patterns that evolve. These patterns, which we call "concepts" in this work, are often characterized by high seasonalities [2, 10, 11]. Existing approaches typically rely on a single model trained on pre-defined assumptions based on past data [12, 13, 14, 15, 16, 17, 18, 19, 20]. While such approaches can produce good results in some instances, different models may yield better estimations for different concepts, as demonstrated by various ensemble modeling approaches [21, 22, 23, 24, 25, 26, 27]. Furthermore, many existing approaches do not consider external factors. For example, relying solely on established concepts to predict future stock trends can be risky in the stock market. Stocks are influenced by politics, events, and investor sentiments [28, 29].
Motivated by the need for interpretability, handling of different concepts, and external factors, this paper proposes a novel machine-learning method to forecast time series called the Ensemble Adaptive Model with a Drift detector (EAMDrift). EAMDrift combines the power of multiple individual predictors, such as Prophet, ARIMA, and LSTM, through an interpretable model.
The key idea is to use different models to encode different concepts, each observing specific behaviors. The activation of EAMDrift recognizes concepts in data and assigns weights to each observer for each prediction. These weights are calculated at run time and combined with the observer's predictions to assemble the final result. Additionally, EAMDrift accepts external covariates, allows the study of relations between input variables, and contains a self-retrain mechanism that helps the model adapt to unexpected concepts over time.

Figure 1: Proposed model overview. For each historical window, the model extracts statistics and finds the best model to create a structured table to train the ensemble model.
As shown in Figure 1, EAMDrift generates various splits from historical data and, for each split, extracts a handful of statistics and tests different models to create a structured table. This table serves as the training data for the ensemble model, with extracted statistics and external covariates as the input (\(X\)) and the best model found as the output (\(Y\)). Based on interpretable learned rules, this ensemble model assigns weights to each predictor, determining their contribution to the final prediction.
Experimental evaluations using different real-world datasets demonstrate that our model outperforms single approaches and achieves on-par results compared to non-interpretable ensemble models.
The main contributions of this paper are:
* An interpretable ensemble method that selects the best predictors at each point, identifying relevant concepts.
* A method based on statistics, which makes the model easier to interpret.
* A model that accepts past covariates1 (either numerical or categorical) and allows studying relations between them. Footnote 1: Our model can automatically add covariates related to dates.
* A retrain method that strategically finds potential points to retrain.
The rest of this paper is organized as follows: In Section 2, we presented the Related Work. Subsequently, in Section 3, we present the detailed EAMDrift model architecture. The experimental methodology and datasets used to test our model are presented in Section 4, and the respective results are presented in Section 5. Finally, in Section 6, we discuss the conclusion and future work.
## 2 Related Work
The _"one-fits-all"_ style, where a single predictive model is used, has been the most popular technique due to its simplicity and good prediction power [30]. These models rely on regression, machine learning, and time series techniques. However, although time series models such as ARIMA [12, 13] and variations of Exponential Smoothing [14, 15] are the most commonly used methods due to their ability to detect seasonality and cyclic behaviors and their ease of use, they fail when dealing with unpredictable concepts in data. As a result, some works employ machine learning models such as LSTM, CNN, and Transformers. Although training these models requires more effort, they can learn long-term and complex concepts [16, 17, 18, 19, 20].
However, research has shown that even with more complex models, a single model cannot handle all unpredictable concepts in data. Therefore, different ensemble and adaptive models have been proposed [31, 32, 33, 34, 35].
For different concepts, different models yield better estimations [36, 37]. To address this issue, Iqbal _et al._ proposed a novel adaptive method that automatically identifies the most appropriate model for specific scenarios [22]. They used classical machine learning methods like LR, SVM, and GBT to forecast data.
Jungang _et al._ proposed a combined Prophet-LSTM method to leverage time series features, such as trend and periodicity, while learning long-term concepts in data [23]. The algorithm obtains the final result through linear weighting of the two models.
Kim _et al._ proposed a method called CloudInsight, inspired by the mixture of experts problems [38], which assigns weights to each predictor to forecast data [24].
There have also been efforts to employ interpretability in time series predictions [39]. Some of the frequently used models include linear regression, logistic regression, and decision trees due to their internal transparency [40]. In another direction, drift detectors can also add interpretability by sounding an alarm when changes in the data are detected [41, 42].
## 3 EAMDrift: Proposed model architecture
Using an innovative model architecture, EAMDrift combines forecasts from multiple individual predictors by weighting each prediction according to a performance metric. EAMDrift identifies the most promising predictors for each concept and assigns them higher weights. By defining a correspondence between models and concepts, we can view each predictor in the ensemble as an observer of the data, indicating the presence of a given pattern by the strength of its prediction.
The architecture of our proposed model is depicted in Figure 2 and will be described next.
* In the first step (**Create training set**), the model starts by using historical workload and covariates data to create a new training set that will be used to train our ensemble model. This historical data will be split into different sliding windows, each with the same size. Then, for each sliding window, we will: run different models (previously selected by the user) and find the best, extract statistics about that window (like mean, number of peaks, and others), and pre-process covariates data (i.e., for categorical covariates the model chooses the most frequent value and for numerical covariates the model sums the values for each window). Ultimately, we will have a training set with the columns: statistics, processed covariates, and best model. Each row of the training set will represent a sliding window. Sliding windows encode concepts, and the best model for each concept represents an expert on a given concept.
* In the second step (**Create ensemble model**), the model will use the training set previously created and build an ensemble model. This ensemble model uses RuleFit, an interpretable machine learning model based on rules, and will have as input the columns referring to statistics and processed
covariates and as output the column with the best model. In each prediction, the RuleFit model will output probabilities for the selection of each model, and the final prediction will be a combination of the output of different models multiplied by their respective probabilities. Unlike the usual time-series models, our model does not use the time-series points directly to select the weights for each model but instead uses statistics-related data.
* The third step (**Detect drifts**) occurs during real-time usage. When new points are given to the model, the model processes these data to create an input in the necessary format for the ensemble model. At the same time this processing is done, the model tests if these new points trigger the drift detector to query the need to retrain the model. In the positive case, the model retrains before giving the respective predictions.
### Create initial training set
Before preparing the training set, the user must create the Models Database. This database contains the models the user has chosen to use during training. EAMDrift works with any model as long as it is implemented in the code. With the Models Database defined, the model receives historical data as training input. The model expects this data to be a time series DataFrame with a column for dates, another for the variable we want to predict, and the rest regarding covariates (either numerical or categorical) that the user wants to add. Next, the pre-processing step will begin, and the historical input will be divided into different splits and fixed ranges of points.

Figure 2: Proposed model architecture. The three main components of our proposed model are highlighted in blue.
The user must define the number of \(n\) training points and \(m\) prediction points. The user can also select the number of splits they want to create to train the model. Suppose the user does not choose any number of splits. In that case, the model will automatically find the maximum number of splits that can be created depending on the values of \(n\) and \(m\) variables.
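A minimal sketch of this windowing step is given below; the function name and the assumption that consecutive splits are shifted by \(m\) points are ours, since the exact stride is not specified.

```python
import pandas as pd

def make_splits(series: pd.DataFrame, n: int, m: int, n_splits=None):
    """Yield (training_window, prediction_window) pairs of n and m points.

    Assumes consecutive splits are shifted by m points; when n_splits is None,
    the maximum number of splits that fits in the series is used.
    """
    max_splits = (len(series) - n) // m
    n_splits = max_splits if n_splits is None else min(n_splits, max_splits)
    for i in range(n_splits):
        start = i * m
        yield series.iloc[start:start + n], series.iloc[start + n:start + n + m]
```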
With the split ranges defined, the next step will be to process each split, as depicted in Figure 3. For each split, we will extract statistics from the training points, test models from the Models DB using the training and prediction points to validate forecasts, and finally process the covariates.
The statistics extraction was performed using the Python library tsfresh [43], which can automatically compute numerous features of time series data. tsfresh extracts around 800 statistics, ranging from simple and well-known ones to more complex ones. However, handling such a large number of statistics would make the model less interpretable, as there would be too many statistics to process. Therefore, we used data pre-processing and feature reduction techniques to reduce the number of features.
The model first removes columns with more than 50% null values, columns with more than 95% feature similarity in values and variance, and correlated columns. On average, these steps reduced the number of statistics from 800 to 250. However, this still remained a very high number to analyze, so we performed an additional feature selection technique based on ElasticNet regression.
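The pruning steps above could be sketched as follows; the tsfresh call assumes a long-format input table with an id and time column, and the 0.95 correlation cut-off is an illustrative choice since the text does not specify one.

```python
import numpy as np
from tsfresh import extract_features

# `long_df` is assumed to be in tsfresh's long format: one row per observation,
# with a window identifier, a time index, and the observed value.
features = extract_features(long_df, column_id="window_id", column_sort="time")

# 1) Drop columns with more than 50% missing values.
features = features.loc[:, features.isna().mean() <= 0.5]

# 2) Drop near-constant columns (over 95% identical values).
mode_share = features.apply(lambda c: c.value_counts(normalize=True, dropna=False).iloc[0])
features = features.loc[:, mode_share <= 0.95]

# 3) Drop one column from each highly correlated pair (0.95 threshold assumed).
corr = features.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
features = features.drop(columns=to_drop)
```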
ElasticNet is a linear regression model that combines the properties of Lasso and Ridge Regression by adding L1 and L2 regularization terms. ElasticNet minimizes the objective function depicted in equation (1), where the first term is the MSE loss, and the second term is the L1 regularization term which encourages sparsity (more features being set to zero) by adding a penalty for non-zero weights. The third term is the L2 regularization term which encourages small weights by adding a penalty for large weights. Here, \(y\) represents the target sample, \(w\) represents the weight vector, \(n\) is the number of samples, and \(p\) is the number of features.

Figure 3: Schema of each training set line.
ElasticNet has two parameters to be defined, the \(l1\_ratio\) (\(\rho\) in the equation) and the \(\alpha\), to balance L1 and L2 regularization. A larger value for alpha means stronger regularization, and a larger value for \(l1\_ratio\) means more L1 regularization, which leads to sparsity (if \(l1\_ratio=1\), only L1 regularization is used, and if \(l1\_ratio=0\), only L2 regularization is used). Our objective in using ElasticNet is to select the most important features while simultaneously shrinking the less important, leading to a more interpretable model. We selected \(l1\_ratio\) and \(\alpha\) to be 0.7 and 0.9, respectively, leading to an average feature reduction from 250 to 45.
\[\begin{split} L_{enet}(w)&=\frac{1}{2n}||y-Xw||_{2}^{2}+\alpha\rho||w||_{1}+\frac{\alpha(1-\rho)}{2}||w||_{2}^{2}\\ &=\frac{1}{2n}||y-Xw||_{2}^{2}+\alpha\rho\sum_{j=1}^{p}|w_{j}|+\frac{\alpha(1-\rho)}{2}\sum_{j=1}^{p}w_{j}^{2}\end{split} \tag{1}\]
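A minimal scikit-learn sketch of this selection step is shown below; `alpha=0.9` and `l1_ratio=0.7` follow the values above, while the regression target `y` and the rule of keeping features with non-zero coefficients are assumptions of the sketch.

```python
from sklearn.linear_model import ElasticNet
from sklearn.preprocessing import StandardScaler

# X: the ~250 statistics per window that survived the previous pruning steps.
# y: a numeric window-level target (e.g. the mean of the prediction window);
#    the exact target is an assumption of this sketch.
X_std = StandardScaler().fit_transform(X)
enet = ElasticNet(alpha=0.9, l1_ratio=0.7, max_iter=10000)
enet.fit(X_std, y)

selected = [name for name, w in zip(feature_names, enet.coef_) if abs(w) > 1e-8]
print(f"kept {len(selected)} of {len(feature_names)} features")
```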
To test models, we use the training points for each split to train every model present in the Models DB and predict the following \(m\) points. Then, we register the model that provides the best accuracy, as demonstrated in Figure 3.
The covariate formatting step is the simplest of the three, and its objective is to aggregate the \(n\) covariate points into one line. For numerical covariates, we sum the values of all points, whereas for categorical covariates, we count the most common category in that period. After encoding with the \(OneHotEncoder\) function from scikit-learn, the most common category will also be numerical. Our model can also automatically add time covariates such as day, month, and year. If the user wants to use these covariates for each sliding window, our model adds the date from the last training point.
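An illustrative sketch of this aggregation is given below; the column names `negative_tweets` and `overall_sentiment` and the variable `window_list` are hypothetical placeholders.

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

def format_covariates(window: pd.DataFrame, numerical, categorical) -> dict:
    """Aggregate the n covariate points of one sliding window into a single row."""
    row = {col: window[col].sum() for col in numerical}                    # numerical: sum
    row.update({col: window[col].mode().iloc[0] for col in categorical})   # categorical: mode
    return row

# One row per sliding window; categorical columns are then one-hot encoded.
table = pd.DataFrame([format_covariates(w, ["negative_tweets"], ["overall_sentiment"])
                      for w in window_list])
onehot = OneHotEncoder(handle_unknown="ignore").fit_transform(table[["overall_sentiment"]])
```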
Ultimately, we will have a training set with multiple rows, where each row corresponds to a sliding window. For each line, we will have columns regarding statistics, covariates, and the best model for the respective window, as shown in Figure 3.
### Creating the ensemble model
With the training set created, the model is ready to train the ensemble part. We will train a model whose main objective is to provide probabilities for the models present in the Models DB. This model will receive columns regarding statistics and covariates from the training set and will output probabilities for each model based on the best model column. We have selected the RuleFit framework [44] and adapted the output to solve our probability task (original RuleFit solves regression tasks). RuleFit was chosen because it is an interpretable model that generates a set of human-readable rules that can be used to make predictions.
After training, the overall model will be stored to predict new points in the future. When a new point appears that needs to be predicted, this point will be
added to the data and processed to create a new line similar to the ones created for the training set, but without the _best model_ column. This new line will have the same format as the ones given to our RuleFit model for training so that our model can output a prediction. The output of our RuleFit will be a vector \(Y\) of length \(n\), as shown in (2). Here, \(n\) corresponds to the size of the Models DB, and each entry of the vector contains the probability of use for the corresponding model. The probability values will be between 0 and 1, where a value closer to 1 indicates that the corresponding model will have more impact on the final prediction.
\[Y=\begin{pmatrix}p_{1}\\ p_{2}\\ \vdots\\ p_{n}\end{pmatrix} \tag{2}\]
With the vector \(Y\) obtained, our overall model will run every model present in the Models DB with the training points of the split that is being predicted, creating a new matrix \(P\) with the same format as \(Y\), but now containing the models' predictions. The final prediction will be computed based on equation (3), where \(P\) and \(Y\) are multiplied.
\[pred(Y,P)=\sum_{i=1}^{n}Y_{i}\cdot P_{i} \tag{3}\]
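Equations (2) and (3) amount to a simple weighted sum, sketched below with illustrative numbers.

```python
import numpy as np

def combine_predictions(probabilities: np.ndarray, forecasts: np.ndarray) -> np.ndarray:
    """Weight each observer's m-step forecast by its RuleFit probability (Eq. 3).

    probabilities: shape (n,)   -- one weight per model in the Models DB (Eq. 2)
    forecasts:     shape (n, m) -- each row is one model's forecast of the next m points
    """
    return (probabilities[:, None] * forecasts).sum(axis=0)

y = np.array([0.6, 0.3, 0.1])                         # e.g. SARIMA, Prophet, LSTM weights
p = np.array([[10.0, 11.0], [9.0, 10.0], [12.0, 13.0]])
print(combine_predictions(y, p))                      # -> [ 9.9 10.9]
```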
### Drift detector
The final part of our model is the Drift detector. This component will try to find changes in the data distribution that may indicate the need for retraining. When a new point arrives in the model, the drift detector will run, and if needed, the model will be retrained before giving the prediction [45, 46, 47]. Moreover, this will also warn the user to be more careful when analyzing the workload in the following days.
For this paper, we focused on concept drifts, a type of drift occurring when the relationships between the input and output variables change. Concerning our work, a concept drift detector would trigger when the workload trend suffers a change. We tested two concept drift detectors, ADaptive WINdowing (ADWIN) and Kolmogorov-Smirnov Windowing (KSWIN). The detector with the better overall performance in our tests, and thus the one selected, was KSWIN, as it detects drifts in more critical zones (zones with abrupt changes). KSWIN is based on the Kolmogorov-Smirnov (KS) statistical test, which compares a part of the data to a reference distribution using the maximum difference between the sample empirical Cumulative Distribution Function (CDF) and the reference CDF. The CDF encodes the probability of a random variable \(X\), with a given probability distribution, being found at a value less than or equal to \(u\) (Equation (4)) [48].
\[CDF_{X}(u)=P(X\leq u) \tag{4}\]
Our method defines an interval to avoid successive retrains in zones where the workload changes significantly. If this interval is defined as 14 days, the model must wait at least 14 days between retrains. Thus, if drifts are detected on days 1, 7, and 20 of a month, the model will retrain on days 1 and 20.
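A hedged sketch of this detector using River's KSWIN implementation, together with the minimum-interval guard described above, is shown below; `stream` and `retrain_model` are placeholders, and the drift flag attribute name varies slightly across River versions.

```python
from river import drift

detector = drift.KSWIN(alpha=0.005, window_size=100, stat_size=30)
min_gap = 14           # minimum number of days between two retrains
last_retrain = -min_gap

for day, value in enumerate(stream):      # `stream` yields the new observations, one per day
    detector.update(value)
    # In older River releases the flag is `change_detected` instead of `drift_detected`.
    if detector.drift_detected and day - last_retrain >= min_gap:
        retrain_model()                   # placeholder for the EAMDrift retraining step
        last_retrain = day
```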
### Model Interpretability
As mentioned earlier, the interpretability in EAMDrift does not explain the prediction itself. Instead, it is in the ensemble and allows us to point to a concept mediated by the model's choice. We can observe which rules and features contribute more to the prediction. EAMDrift lets us know which models contribute to a specific prediction and what weight was attributed to each. This model uses a combination of different predictors for the final forecast. Furthermore, each rule has a support parameter related to the number of points that satisfy a rule. Rules with more significant values will also help us understand the relations between input variables in our data.
EAMDrift also has an implemented drift detector. Although it is meant to detect moments of potential retraining, it also warns the user to be more attentive to data in the following periods.
### Details and implementation of EAMDrift
EAMDrift is compatible with any model as long as the user implements it. If a selected model X is chosen as the best for at most one prediction during training, it will be discarded from the pool of predictors because it did not classify enough points. However, if a drift is detected later and the model needs to be retrained, this model X will have the opportunity to enter the predictor pool again.
Our model also allows for selecting a restricted number of predictors in each prediction. For example, if we have six models in the Models DB and select three predictors in each prediction, the model will only use the three predictors with the highest probability in the vector \(Y\) (see (2)). To predict subsequent periods, the user can also tune other parameters, such as the number of training points and covariates. Additionally, the user can choose between Mean Square Error (MSE), Mean Absolute Error (MAE), or Mean Absolute Percentage Error (MAPE) as the metric for selecting the best models during the creation of the training set. Finally, the user can define a length for the training set, which restricts the number of points used in each retrain to avoid excessive training set rows. EAMDrift was implemented in Python (version 3.9.12) on Anaconda using mainly the Pandas, NumPy, and scikit-learn libraries. For the machine learning models, we mainly used the River, Darts, and statsmodels libraries. The training and inference of EAMDrift were executed on a 6-core Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz. The creation of the training set was parallelized through the \(Parallel\) function from the joblib library to reduce the processing time. We used six jobs to take maximum advantage of the computer's cores.
## 4 Experimental Methodology
This section outlines the methodology and experimental procedures for validating our proposed model. To this end, we describe the datasets selected for this study and the methodology chosen for conducting the experiments, which includes the setup of EAMDrift parameters.
### Datasets
To test EAMDrift, we used five different datasets from different backgrounds. As one of the innovations of our model is the possibility to use and study relations between input variables, we added covariates to each dataset. We sliced a subset of each dataset to avoid creating predictions with excessive points, ensuring that each slice contained at most 5000 points. Moreover, all the datasets were previously processed, and the variable to predict was standardized.
The first two datasets, **NB1** and **NB2**, are private and correspond to the CPU usage of real servers that manage a company's main application. The time step in data is in days and comprises 1460 records. As this corresponds to a real application, we extracted tweets related to the company and ran a sentiment analysis to be used as covariates. The third dataset is the Google Cluster Trace (**GCT**) [49]. It corresponds to a set of workload data from different servers for May 2019 (for this work, just one server was used). The data has a 5-minute step for the entire month of May and provides CPU and Memory metrics. We grouped data in hours, giving us 744 entries, and memory was used as a covariate. The fourth dataset is the Electric Power Consumption (**EPC**) [50]. It measures the electric power usage in different houses in the zone of Paris, France, and for our test, just one house was chosen. The data has a 1-minute step for nearly four years but was aggregated in hours, giving us 35063 entries. As electric consumption can be related to weather, we used data from "AWOS" sensors available in [51] to be used as covariates. The last dataset used is the Microsoft Stock Value (**MSV**) [52], which consists of Microsoft stock close value from 2006 to 2016 with a 1-day time step. This dataset also includes a sentiment analysis of the financial news related to Microsoft for that period, which will be used as covariates.
### Experiments Setup
Our model was designed to work with any model, so to test, we selected as predictors five models with different backgrounds, including Prophet, Exponential smoothing (ES), Seasonal Autoregressive Integrated Moving Average (SARIMA), Long-Short Term Memory (LSTM), and Transformer. The rest of the EAMDrift parameters were defaulted and can be consulted in the code repository.
To compare validation results, we evaluated our proposed model against single-model approaches. In addition, we created two non-interpretable variations of our model (based on some state-of-the-art works (Section 2)), in which we replaced the interpretable ensemble part (RuleFit) of EAMDrift with the
Random Decision Forest (RDF) and Support Vector Machine (SVM) models. The purpose of these variations was to compare the effectiveness of RuleFit with other black-box models acting as our ensemble selector.
For each dataset, we used 40% of the points to train and the remaining to forecast. We elected to utilize only 40% of the available data points for training, motivated by two primary factors. Firstly, some of the datasets contained a substantial number of observations, and secondly, to test the efficacy of our retrain mechanism. To obtain the results, we used blocked cross-validation, where a default set range is defined for training. That range is used in all iterations of the cross-validation folds. Each prediction was made by training the model with data until the prediction day. The Mean Absolute Percentage Error (MAPE) (5) was calculated for each fold and then averaged to produce the final score for each model to assess predictions.
\[MAPE=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{Actual_{i}-Predicted_{i}}{Actual_{i }}\right|\times 100 \tag{5}\]
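A minimal sketch of this walk-forward evaluation is given below; the `model.fit`/`model.predict` interface is a generic placeholder rather than the exact API used in our experiments, and `series` is assumed to be a 1-D numpy array.

```python
import numpy as np

def mape(actual: np.ndarray, predicted: np.ndarray) -> float:
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

def blocked_evaluation(series, model, train_frac=0.4, step=7):
    """Walk forward over the last 60% of the series in blocks of `step` points."""
    scores, start = [], int(len(series) * train_frac)
    for t in range(start, len(series) - step + 1, step):
        model.fit(series[:t])             # retrain with data up to the prediction day
        forecast = model.predict(step)    # forecast the next `step` points
        scores.append(mape(series[t:t + step], forecast))
    return float(np.mean(scores))         # average MAPE over all folds
```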
## 5 Experimental Results
In this section, we evaluate EAMDrift, encompassing various facets of its performance and capabilities. Firstly, we present a comparative analysis of our proposed method, and next, we present a study of its results.
### EAMDrift evaluation
Table 1 shows how our proposed model performed compared to other baseline and state-of-the-art methods on five different datasets. Each dataset was tested in two different steps. For example, for the NB1 dataset, we tested steps 1 and 7, which correspond to forecasting 1 and 7 days, respectively. Three columns are presented for the EAMDrift model, two for the non-interpretable versions that employ SVM and RDF as an ensemble (details in Section 4.2), and the other for the original version of our proposed model, denoted as "Original". We did a previous grid search for the SARIMA and LSTM models to find the best parameters for each dataset.
Analyzing Table 1, we can see that the baseline models had significantly higher errors than all versions of EAMDrift. This is because, although the models made predictions by training with data up to the prediction point, the parameters were only tuned once at the beginning, which likely worsened the results over time. In contrast, EAMDrift adjusts the parameters for each concept over time, allowing the model to automatically retrain and improve results.
Among all versions of EAMDrift, the original interpretable version had lower errors than the non-interpretable versions in just one test. The SVM version had the best mean error across all tests, but the difference to the other versions was only around 1-2%. Therefore, while the non-interpretable versions of EAMDrift slightly improved outcomes, the differences in MAPE error were not significant
enough to assume they were better. This highlights the effectiveness of EAMDrift and motivates the use of interpretability in time series analysis.
### Analysis of results
To gain a better understanding of the results of EAMDrift, we selected three parts of the NB1 dataset. We analyzed the predictions made by our proposed model against the individual predictors used by our model. The results are depicted in Figure 4. Each graph represents a different concept (identified in the top right corner) and contains two thick lines -- one in red with the actual values and one in blue with the predictions made by our model. The remaining lines represent the baseline models' predictions. Our model provides better results when compared to the baseline models in all three patterns. Even in the first two patterns, where we observe a high presence of bursts, our model adapted well. Furthermore, in the third plot, where a cyclic pattern is presented, all the models provided a good forecast, except for the Prophet model, which had some parts that yielded unreasonable results. Based on these three patterns, our model adapts well to bursts without sacrificing accuracy in simple patterns.
Next, in Figure 5, we highlight a part of the workload from the NB2 dataset to understand the predictions of EAMDrift better. The first plot presents the real and predicted workload, and the second shows the weights assigned to each model to obtain the final prediction. The Transformer model does not appear because it was discarded during training, as it was not the best for more than one point. We note that the SARIMA model plays a significant role in the final predictions while the workload grows. When the workload decreases, SARIMA loses importance, and the remaining models take a different path.

| Dataset | Step | SARIMA | LSTM | EAMDrift (SVM) | EAMDrift (RDF) | EAMDrift (Original) |
| --- | --- | --- | --- | --- | --- | --- |
| NB1 | 1 | 30.45% | 33.82% | **16.92%** | 18.46% | 19.87% |
| NB1 | 7 | 31.68% | 35.34% | **18.28%** | 21.12% | 20.01% |
| NB2 | 1 | 49.11% | 40.99% | 26.87% | **25.03%** | 26.77% |
| NB2 | 7 | 56.57% | 39.46% | **21.01%** | 21.89% | 23.78% |
| GCT | 6 | 38.56% | 57.77% | 28.19% | **27.32%** | 27.8% |
| GCT | 12 | 47.99% | 61.88% | **29.89%** | 33.97% | 31.09% |
| EPC | 12 | 60.91% | 42.62% | **15.32%** | 16.31% | 17.89% |
| EPC | 24 | 48.89% | 46.58% | **16.18%** | 18.36% | 19.54% |
| MSV | 1 | 24.99% | 19.12% | 17.02% | 16.67% | **16.41%** |
| MSV | 7 | 34.06% | 30.87% | **22.15%** | 23.11% | 24.56% |
| **Mean Error** | | 42.32% | 40.85% | **21.18%** | 22.22% | 22.77% |

Table 1: Comparison of MAPE values for different datasets using different models. A step column is presented for each dataset, meaning the number of points predicted. For EAMDrift, three sub-columns are presented regarding the non-interpretable versions (“SVM” and “RDF”) and the original interpretable version (“Original”).
In addition, the vertical red dashed lines in this figure represent the zones where our drift detector signalled that the model should be retrained. In the range of days presented in this figure, the model was retrained twice, and each retraining generated a new set of interpretable rules. For example, for the highlighted retrain point, one rule created was `variance <= 7.2 AND sentiment_overall <= 1.0 AND number_negative_tweets > 14.0 AND mean > 40`, which suggests a negative sentiment in zones where the mean is greater than 40%.
## 6 Conclusions
Using machine learning prediction models in time series has shown tremendous potential in various research fields. In this work, we propose a novel model called EAMDrift, which combines the strengths of multiple models to produce more accurate and robust predictions. EAMDrift also has interpretable mechanisms that allow a better understanding of the predictions. This is particularly important for applications where precise forecasting is critical, such as financial markets and health problems.
To evaluate EAMDrift, we conducted comprehensive evaluations on different datasets over different models. The results show that our model outperforms the baseline ones by 20%. Our model achieved comparable error rates compared to the best non-interpretable ensemble models. This suggests that interpretable machine learning models can be a viable solution for time series prediction.
Figure 4: Proposed method vs. individual predictors in three different workloads of NB1.
Our results over different time series datasets were promising. We believe that the findings presented in this work can contribute to advancing the field of machine learning and inspire further research on bringing interpretability to time series forecasting. Finally, our methodology is easy to implement. A more detailed view of EAMDrift, its methods, and its usage (through a tutorial using the MSV dataset) can be seen in [https://anonymous.4open.science/r/EAMDrift-6DCO/README.md](https://anonymous.4open.science/r/EAMDrift-6DCO/README.md).
## 7 Ethical issues
In this work, most data is publicly available and does not raise privacy concerns. However, we used news and user tweets to analyze sentiment and included them as covariates for some models. We want to clarify that we only used the tweet text and publication date for sentiment analysis. We discarded all text and ignored private information, such as user names or publication locations.
It is crucial to be cautious when working with data because no model is 100% accurate. The responsibility for actions based on the model's predictions should be carefully considered. The data centers used in this study are an excellent example of this double-edged sword. While the forecasts can help allocate data center resources more effectively, saving energy and computing resources and offering better services by avoiding under-provisioning resources, predictions may also help hackers determine the best times to launch different attacks.
Furthermore, following the European ethics guidelines for trustworthy AI, our model provides interpretability for its predictions. This allows for transparency, confidence, and understanding of the predictions.
Figure 5: Detailed analysis of our proposed model in a part of the NB1 dataset. |
2309.07704 | NutritionVerse: Empirical Study of Various Dietary Intake Estimation
Approaches | Accurate dietary intake estimation is critical for informing policies and
programs to support healthy eating, as malnutrition has been directly linked to
decreased quality of life. However self-reporting methods such as food diaries
suffer from substantial bias. Other conventional dietary assessment techniques
and emerging alternative approaches such as mobile applications incur high time
costs and may necessitate trained personnel. Recent work has focused on using
computer vision and machine learning to automatically estimate dietary intake
from food images, but the lack of comprehensive datasets with diverse
viewpoints, modalities and food annotations hinders the accuracy and realism of
such methods. To address this limitation, we introduce NutritionVerse-Synth,
the first large-scale dataset of 84,984 photorealistic synthetic 2D food images
with associated dietary information and multimodal annotations (including depth
images, instance masks, and semantic masks). Additionally, we collect a real
image dataset, NutritionVerse-Real, containing 889 images of 251 dishes to
evaluate realism. Leveraging these novel datasets, we develop and benchmark
NutritionVerse, an empirical study of various dietary intake estimation
approaches, including indirect segmentation-based and direct prediction
networks. We further fine-tune models pretrained on synthetic data with real
images to provide insights into the fusion of synthetic and real data. Finally,
we release both datasets (NutritionVerse-Synth, NutritionVerse-Real) on
https://www.kaggle.com/nutritionverse/datasets as part of an open initiative to
accelerate machine learning for dietary sensing. | Chi-en Amy Tai, Matthew Keller, Saeejith Nair, Yuhao Chen, Yifan Wu, Olivia Markham, Krish Parmar, Pengcheng Xi, Heather Keller, Sharon Kirkpatrick, Alexander Wong | 2023-09-14T13:29:41Z | http://arxiv.org/abs/2309.07704v2 | # NutritionVerse: Empirical Study of Various Dietary Intake Estimation Approaches
###### Abstract
Accurate dietary intake estimation is critical for informing policies and programs to support healthy eating, as malnutrition has been directly linked to decreased quality of life. However self-reporting methods such as food diaries suffer from substantial bias. Other conventional dietary assessment techniques and emerging alternative approaches such as mobile applications incur high time costs and may necessitate trained personnel. Recent work has focused on using computer vision and machine learning to automatically estimate dietary intake from food images, but the lack of comprehensive datasets with diverse viewpoints, modalities and food annotations hinders the accuracy and realism of such methods. To address this limitation, we introduce NutritionVerse-Synth, the first large-scale dataset of 84,984 photorealistic synthetic 2D food images with associated dietary information and multimodal annotations (including depth images, instance masks, and semantic masks). Additionally, we collect a real image dataset, NutritionVerse-Real, containing 889 images of 251 dishes to evaluate realism. Leveraging these novel datasets, we develop and benchmark NutritionVerse, an empirical study of various dietary intake estimation approaches, including indirect segmentation-based and direct prediction networks. We further fine-tune models pretrained on synthetic data with real images to provide insights into the fusion of synthetic and real data. Finally, we release both datasets (NutritionVerse-Synth, NutritionVerse-Real) on [https://www.kaggle.com/nutritionverse/datasets](https://www.kaggle.com/nutritionverse/datasets) as part of an open initiative to accelerate machine learning for dietary sensing.
dietary assessment, datasets, image segmentation, deep learning, synthetic dataset
## I Introduction
Accurate dietary intake estimation is critical for informing policies and programs to support healthy eating, as malnutrition has been directly linked to decreased quality of life [8]. However, conventional diet assessment techniques such as food frequency questionnaires, food diaries, and 24-hour recall [9] are subject to substantial bias [10, 11, 12]. Emerging alternative approaches for diet assessment, including mobile applications [13, 14], digital photography [15], and personal assistants [16] incur high time costs and may necessitate trained personnel. Fortunately, recent promising methods combine these alternative methods with computer vision and machine learning algorithms to automatically estimate nutritional information from food images [2, 17].
Existing literature [2, 3, 4, 5, 6] collects images of real scenes to train models that achieve high accuracy. However, these techniques operate on fixed modalities and viewpoints, hindering systematic comparison due to data limitations. For example, [3] is only trained and evaluated on the RGB image of the top view of a food scene. Furthermore, current food recognition and intake estimation methods face several key limitations: restricted output variables (e.g. only calories or mass), lack of diverse viewpoints or incomplete food annotations in datasets, and biases from predefined camera angles during data capture.
Subsequently, the lack of a comprehensive high-quality image dataset hinders the accuracy and realism of systems based on machine learning and computer vision. For such dietary intake estimation systems to be effective, diverse high-quality training data capturing multiple angles and modalities are required. However, manual creation of large-scale datasets with such diversity is time-consuming and hard to scale. On the other hand, synthesized 3D food models enable view augmentation to generate countless photorealistic 2D renderings from any viewpoint, reducing imbalance across camera angles. As shown in Figure 1, leveraging 3D assets facilitates creation of rich multi-modal datasets (e.g., RGB, depth) with photorealistic images, perfect annotations, and dietary metadata through algorithmic scene composition. Compared to existing datasets that are focused solely on quantity, our contributions also address the gap in the quality of the data by procedurally generating scenes that span a huge diversity of food items, placements, and camera angles.
In this paper, we present a process to collect a large image dataset of food scenes that span diverse viewpoints. We first leverage high-quality photorealistic 3D food models and introduce NutritionVerse-Synth (NV-Synth), a dataset of 84,984 high-resolution 2D food images algorithmically rendered from 7,081 unique scenes, along with associated diet information derived from the 3D models. To evaluate realism, we also collect the NutritionVerse-Real (NV-Real) dataset of 889 manually captured images across 251 distinct dishes. We benchmark various intake estimation approaches on these datasets and present NutritionVerse, a collection of models that estimate intake from 2D food images. We release both the synthetic and real-world datasets at [https://www.kaggle.com/nutritionverse/datasets](https://www.kaggle.com/nutritionverse/datasets) to accelerate machine learning research on dietary sensing.
This paper presents several contributions as follows:
1. Introduction of two novel food image datasets, namely NutritionVerse-Synth (NV-Synth) and NutritionVerse-Real (NV-Real), enriched with both diet information and segmentation masks.
2. Evaluation of two approaches (indirect and direct prediction) for food estimation on the same dataset, aiming to identify the most effective approach.
3. Exploration of the benefits of incorporating depth information in food estimation tasks, accompanied by comprehensive experimental results.
4. Valuable insights into the synergistic utilization of synthetic and real data to enhance the accuracy of diet estimation methods.
## II Related Work
A number of prior works have explored computer vision techniques for food recognition and dietary intake estimation, though significant limitations persist in terms of scope, data, and methodology. Recently released quality food image datasets such as UECF Food 100 [18], FoodX-251 [19], and Food2K [20] contain a significant number of food images with diverse food items. Unfortunately, the dietary information linked to these 2D images is not made available, posing a challenge in utilizing these datasets to estimate energy, macronutrient and micronutrient intake. In addition, existing datasets comprise 2D images with fixed or randomly selected camera views that are discretely sampled [17, 18, 19, 20, 21, 22]. These set views introduce bias in terms of how individuals take images with their camera, which would affect the training and accuracy of the model.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Work**} & \multirow{2}{*}{**Public**} & \multicolumn{6}{c}{**Data**} & \multicolumn{6}{c}{**Dietary Info**} \\ \cline{3-13} & & **\# Img** & **\# Items** & **Real** & **Mixed** & **\# Angles** & **Depth** & **Annotation Masks** & **CL** & **M** & **P** & **F** & **CB** \\ \hline
[2] & ✓ & 18 & 3 & Y & N & 1 & & & ✓ & & & & \\
[3] & ✓ & 646 & 41 & Y & Y & 1 & & & ✓ & & & \\
[4] & ✓ & 50,374 & 201 & Y & Y & 1 & & & ✓ & & & \\
[5] & ✓ & 2,978 & 160 & Y & N & 2 & & & ✓ & ✓ & & \\
[6] & ✓ & 5,006 & 555 & Y & Y & 4 & ✓ & & ✓ & ✓ & ✓ & ✓ & ✓ \\
[7] & & 3000 & 8 & Y & Y & 2 & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ NV-Real & ✓ & 889 & 45 & Y & Y & 4 & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ NV-Synth & ✓ & 84,984 & 45 & N & Y & 12 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table} TABLE I: Overview of existing dietary intake estimation datasets compared to ours where Mixed refers to whether multiple food item types are present in an image, and CL refers to calories, M to mass, P to protein, F to fat, and CB to carbohydrate.
Fig. 1: Sample scene from NV-Synth dataset with the associated multi-modal image data (e.g., RGB and depth data) and annotation metadata (e.g., instance and semantic segmentation masks) derived using objects from the NutritionVerse-3D dataset [1]. There are 2 meatloaves, 1 chicken leg, 1 chicken wing, 1 pork rib, and 2 sushi rolls in this scene.
Recipe-related datasets, like Recipe1M [23, 24], are extensively utilized in food recognition and recipe generation studies. However, these datasets lack crucial components such as food segmentation and ingredient labels, which makes it very difficult to estimate nutritional information. Chen and Ngo investigated a deep learning-based ingredient recognition system for cooking recipe retrieval, coupling the problem of food categorization with ingredient recognition by simultaneously training the model on both tasks [25]. Notably, their model does not examine the accuracy of food intake estimation, and the images in their dataset only had an average of three recognizable ingredients, which is unrealistic of real-world scenarios [25].
Bolanos and Radeva [26] proposed a method using the modified GoogLeNet architecture to simultaneously recognize and localize foods in images but did not estimate food volume or dietary information. DepthCalorieCam [2] utilized visual-inertial odometry on a smartphone to estimate food volume and derive caloric content by multiplying the calorie density of the food's category with the estimated size of the food. However, their contribution was only demonstrated on three food types. Menu-Match [3] provides an automated computer vision system for food logging and tracking of calories, but focuses only on the restaurant scenario and has only 646 images in its dataset. Comparable studies [4, 7] focus on recognizing meal contents and estimating calories from individual meals. However, the methodologies in [4] are also primarily tailored to restaurant scenarios, and there is limited testing conducted in settings outside of restaurants. On the other hand, the dataset and methodologies in [7] are not publicly available and are limited to only 8 food categories. Furthermore, all these works [2, 3, 4, 7] are constrained to calories, and do not predict other dietary components.
Nutrition5k [6] presents a promising development in image-based recognition systems. However, a major limitation of the dataset is that the models are trained on images captured from only four specific viewpoints [6]. This narrow range of viewpoints does not accurately reflect the diverse angles from which individuals typically capture meal images, limiting the model's ability to generalize to various real-life scenarios. Liang and Li [5] also present a promising computer vision-based food calorie estimation dataset and method, but their dataset is limited to only calories and includes only 2978 images [5]. Furthermore, they require that images are taken with a specific calibration reference to ensure accurate calorie estimation, which is infeasible for real-world usage [5]. Table I provides a general overview of existing dietary intake estimation datasets and methods. As seen, the NV-Synth and NV-Real datasets are the only ones that are publicly available and have annotation data (e.g., segmentation masks) and dietary information.
## III Data Collection
### _NutritionVerse-Synth (NV-Synth)_
Using the 3D meshes from the open access NutritionVerse-3D dataset [1], Nvidia's Omniverse IsaacSim simulation framework [27] was used to generate synthetic scenes of meals. For each scene, up to 7 ingredients were sampled and then procedurally dropped onto a plate to simulate realistic food scenes. Using more than 7 often leads to items falling off the plate due to simulation physics. To maximize the realism and capture diverse plating conditions (including scenarios where the ingredients are highly disordered), the internal physics engine was leveraged to simulate physics-based interactions between ingredients of different masses and densities. Furthermore, realistic images were captured (e.g., some parts of the dish are out of focus or occluded by other items) by using a variety of diverse and realistic camera perspectives and lighting conditions. The RGB image, corresponding depth image, associated object detection bounding boxes and segmentation masks were then generated using Omniverse for each scene for 12 random camera angles. An example of two random camera angles for a food scene is shown in Figure 2. The nutritional metadata for the synthetic scenes was then calculated based on the metadata available in the NutritionVerse-3D dataset [1] and the outputted annotation metadata from Omniverse.
NV-Synth is a collection of 84,984 2D images of 7,082 distinct dishes with associated dietary metadata including mass, calories, carbohydrates, fats, and protein contents and ingredient labels for which food items are in each dish. 105 individual food items are represented in the dataset (with 45 unique food types), and the mean number of times each food item appeared in a food scene is 369.59. An average of 5.62 food items are present in each dish, and the mean dietary content of each food scene is 602.1 kcal, 315.9 g, 55.1 g, 34.2 g, and 30.0 g for calories, mass, protein, carbohydrate, and fat content, respectively. A subset of this dataset (28,328) was used for model development and was created by randomly selecting 4 different viewpoints (from 12 different angles) for each food scene. We use a 60%/20%/20% training/validation/testing split of the scenes for the experiments and ensured all images from the same scene are kept in the same split.
Fig. 2: An example food scene from NV-Synth with two different camera angles.
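One way to realize the scene-level split described above is with grouped shuffling, sketched below; `image_paths` and `scene_ids` are assumed inputs, and this is not necessarily the exact script used to build the dataset.

```python
from sklearn.model_selection import GroupShuffleSplit

# `image_paths` and `scene_ids` are assumed to be parallel lists, with one scene id
# shared by the 4 selected viewpoints of each dish, so a scene never straddles splits.
outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
trainval_idx, test_idx = next(outer.split(image_paths, groups=scene_ids))

inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)  # 0.25 * 0.8 = 0.2
train_idx, val_idx = next(inner.split(
    [image_paths[i] for i in trainval_idx],
    groups=[scene_ids[i] for i in trainval_idx]))
```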
### _NutritionVerse-Real (NV-Real)_
The NV-Real dataset was created by manually collecting images of food scenes in real life. The food items in the dishes were limited to those available in NutritionVerse-3D [1] to ensure appropriate verification of the approach. We used an iPhone 13 Pro Max [28] to collect 10 images at random camera angles for each food dish. An example of two random camera angles for a food scene is shown in Figure 3. To determine the dietary content of the dish, we measured the weight of every ingredient using a food scale. We then gathered the food composition information either from the packaging of the ingredients or from the Canada Nutrient File available on the Government of Canada website [29] in cases where packaging did not contain the dietary data. The segmentation masks were then obtained through human labelling of the images. For feasibility, four randomly selected images per dish were included in the annotation set to be labelled. Any images found with labelling inconsistencies were subsequently removed. We spent a total of 60 hours collecting images and 40 hours annotating the images.
NV-Real includes 889 2D images of 251 distinct dishes comprised of the real food items used to generate synthetic images. The metadata associated with the real-world dataset includes the type of food for each item on the plate with 45 unique food types present in the dataset. Each food item appears at least once in a dish an average of 18.29 times. The mean values represented in the scenes comprising the real-world dataset for calories, mass, protein, carbohydrate, and fat content are 830.0 kcal, 406.3 g, 59.9 g, 38.2 g, and 64.0 g, respectively. We use a 70%/30% training/testing split for the experiments and ensured all images from the same scene are kept in the same split. No validation data was required for the experiments as we used the same model hyperparameters from the synthetic experimental model runs for comparison parity between the synthetic and real training results.
## IV Examined Approaches
As seen in Table II, there are two main approaches for dietary assessment: indirect and direct prediction. Unlike direct prediction, indirect prediction correlates dietary intake with the pixel counts of food items in an image. To determine the pixel count, segmentation models are employed to identify the image pixels corresponding to food items or classes. The obtained pixel count is then used to establish the association with the dietary intake.
There are three prominent types of segmentation models in the literature: semantic, instance, and amodal instance segmentation. Semantic segmentation aims to classify each pixel in the image into predefined categories [30]. For dietary intake prediction, portion size is reflected in the number of pixels for each category. Instance segmentation extends beyond semantic segmentation and also distinguishes individual instances of objects, assigning unique labels to pixels corresponding to different objects of the same category [31]. This is particularly useful when one instance occludes another, e.g., if there are two apples where one is partially occluded by the other, the model can still identify that two apples exist in the dish. Amodal instance segmentation further builds on instance segmentation by accounting for occluded or partially obscured objects [32], as seen in Figure 4. By predicting a complete object mask, amodal segmentation helps under conditions where the object is heavily occluded, such as a burger buried under fries.
For direct prediction, various model architectures have been extensively studied in the literature [2, 3, 4, 5, 6], with the latest state-of-the-art being the Nutrition5k architecture [6], which estimates all five dietary intake components.
### _Model Hyperparameters_
#### Iv-A1 Direct Prediction
Motivated by Nutrition5k [6] which comprises an InceptionV2 backbone encoder [33] and a head module with four fully connected layers, we examine two deep learning architecture weight initializations to estimate the dietary information directly from the raw RGB image. For preprocessing, the RGB channels for the images were normalized based on their mean and standard deviation. We implemented the model architecture and hyperparameters used in the experimental setup for Nutrition5k [6] and fine-tuned this architecture using two sets of pre-trained weights for the InceptionV2 backbone encoder: (1) weights trained on
\begin{table}
\begin{tabular}{l c} \hline \hline
**Approach** & **Work** \\ \hline Direct & [2, 3, 4, 5, 6] \\ Indirect & [7] \\ \end{tabular}
\end{table} TABLE II: Overview of approaches studied in literature.
Fig. 4: The blue segmentation annotates a chicken wing that is partially occluded by a chicken leg in amodal instance segmentation, compared to instance segmentation.
Fig. 3: An example food scene from NV-Real with two different camera angles.
the ImageNet dataset [33] and (2) weights trained on the Nutrition5k dataset. These models were trained for 50 epochs without early stopping.
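A minimal PyTorch sketch of this direct-prediction setup is given below. Since torchvision does not ship InceptionV2, Inception v3 is used here as a stand-in backbone, and the hidden-layer widths of the four-layer head are illustrative assumptions rather than the published Nutrition5k configuration.

```python
import torch.nn as nn
from torchvision import models

class DirectPredictionModel(nn.Module):
    """Backbone encoder plus a four-layer head regressing the five dietary values."""
    def __init__(self):
        super().__init__()
        backbone = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()  # expose the 2048-d pooled features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2048, 1024), nn.ReLU(),
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 5),  # calories, mass, protein, fat, carbohydrate
        )

    def forward(self, x):
        feats = self.backbone(x)
        if isinstance(feats, tuple):  # training mode also returns auxiliary logits
            feats = feats[0]
        return self.head(feats)
```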
#### Iv-A2 Indirect Prediction
Mask2Former [30], Mask R-CNN [31], and UOAIS-Net [32] were used for semantic segmentation, instance segmentation, and amodal instance segmentation, respectively. The original UOAIS-Net was targeted at category-agnostic prediction of objects. Nevertheless, we found it to be effective for multi-category prediction when trained with multi-category ground truth labels.
For comparison parity, the same model hyperparameters were used for all experiments with the exception of the base learning rate. A base learning rate of 0.02 was used for Mask R-CNN and UOAIS-Net, which have similar architectures designed for instance segmentation. However, Mask2Former requires a lower base learning rate of 0.0001 to be stable because of its different architecture designed for semantic segmentation. We used an SGD optimizer with a momentum of 0.9 and a weight decay of 0.0001 for training. The ResNet-50 [34] backbone initialized with weights pretrained on ImageNet [35] was used for the indirect approach in all three segmentation methods. The models were trained for 12 epochs using an input size of 512x512 pixels and a batch size of 16.
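The shared optimizer settings above can be summarized in a short PyTorch sketch; the segmentation frameworks are configured through their own training configs, so this is only an illustration of the hyperparameters listed.

```python
import torch

def build_optimizer(model, is_mask2former=False):
    """SGD with momentum 0.9 and weight decay 1e-4; Mask2Former uses a smaller base LR."""
    base_lr = 0.0001 if is_mask2former else 0.02
    return torch.optim.SGD(model.parameters(), lr=base_lr,
                           momentum=0.9, weight_decay=0.0001)
```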
#### Iv-A3 Depth Input
Two model variations of each method were trained using 3-channel RGB input and 4-channel RGB-depth input respectively. The RGB channels were normalized based on their mean and standard deviation, and the depth channel was min-max normalized.
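A sketch of this preprocessing is shown below; the mean/std values are the common ImageNet statistics and are an assumption standing in for the dataset-specific channel statistics.

```python
import numpy as np

def preprocess(rgb, depth=None,
               mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    """Normalize RGB by per-channel mean/std; min-max normalize the optional depth."""
    rgb = rgb.astype(np.float32) / 255.0
    rgb = (rgb - np.array(mean)) / np.array(std)
    if depth is None:
        return rgb  # 3-channel RGB input
    depth = depth.astype(np.float32)
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return np.concatenate([rgb, depth[..., None]], axis=-1)  # 4-channel RGB-D input
```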
### _Implementation Details_
#### Iv-B1 Direct Prediction
Two weight initializations were considered for the backbone in the Nutrition5k direct prediction model architecture: weights from training on ImageNet [33] and weights from training on the Nutrition5k dataset [6]. The ImageNet weights were selected due to their widespread usage, while the Nutrition5k weights were used because Nutrition5k is the state of the art in food intake estimation. We report the performance for these two weight approaches as Direct Prediction (ImageNet) and Direct Prediction (Nutrition5k).
#### Iv-B2 Indirect Prediction
Indirect prediction relies on assuming a linear relationship between the pixel count and the nutritional content specific to each food type. To establish this relationship, we leverage the data collected in the training set. For each nutrient, we estimate the average nutrient amount per pixel for each food type from the training set, using the ground truth data.
To obtain the pixel count, we follow a two-step process. First, we employ segmentation models to effectively segment the intake scene image, generating a segmentation mask for each food item. Second, we use these masks to count the number of pixels associated with each item.
By multiplying the pixel count with the average nutrient amount per pixel, we can effectively determine the dietary information for each nutrient and for each individual item in the scene. The comprehensive dietary intake information can then be derived by summing up all the nutrient values across all items within the scene.
For example, Figure 5 displays the example segmentation mask, depicting 273,529 pixels of the half bread loaf (left) and 512,985 pixels of lasagna (right). Given that the average calories per pixel for the half bread loaf is 9.08e-4 and the average calories for the lasagna is 6.36e-4, the total calories would equal:
\[273{,}529\times 9.08\times 10^{-4}+512{,}985\times 6.36\times 10^{-4}=574.3.\]
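A compact sketch of this calibration-and-lookup procedure for a single nutrient is given below; the dictionary-based data layout and function names are illustrative assumptions.

```python
from collections import defaultdict

def calibrate_per_pixel(train_masks, train_nutrients):
    """Average nutrient amount per pixel for each food type, from ground-truth data.

    train_masks[i]     : dict food_type -> binary mask (H x W numpy array)
    train_nutrients[i] : dict food_type -> nutrient amount of that item in scene i
    """
    totals, pixels = defaultdict(float), defaultdict(float)
    for masks, nutrients in zip(train_masks, train_nutrients):
        for food, mask in masks.items():
            totals[food] += nutrients[food]
            pixels[food] += float(mask.sum())
    return {food: totals[food] / max(pixels[food], 1.0) for food in totals}

def predict_scene(pred_masks, per_pixel):
    """Scene estimate: sum of (pixel count x average nutrient per pixel) over items."""
    return sum(float(mask.sum()) * per_pixel.get(food, 0.0)
               for food, mask in pred_masks.items())
```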
Notably, the prediction results of semantic segmentation and instance segmentation are in different formats and require different processing when calculating the pixel area. The semantic segmentation prediction result for each image is a mask in which each pixel is assigned a label, so the pixel area of each food ingredient can be counted without any preprocessing. On the other hand, the instance segmentation prediction result for each image is a set of binary masks, each with an assigned label and a confidence score between 0 and 1 representing the likelihood of a correct prediction. Therefore, a threshold value needs to be chosen to filter out predictions with low confidence scores. A parameter sweep for the threshold value in the range of 0 to 1 is conducted by applying the threshold filtering to all prediction results on the validation set and comparing the mean absolute error (MAE) for the five diet components. The threshold value that achieves the lowest MAE is then used when processing the prediction results on the test set. Hence, a different threshold value may be chosen for the instance and amodal instance models using this method.
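The confidence-threshold sweep can be sketched as follows for a single diet component (the paper compares MAE over all five), reusing the per-pixel calibration sketched above; the data structures are again illustrative assumptions.

```python
import numpy as np

def best_threshold(val_preds, val_targets, per_pixel, thresholds=np.linspace(0.0, 0.95, 20)):
    """Pick the confidence threshold minimizing MAE of one diet component.

    val_preds[i]   : list of (food_type, confidence, binary mask) instance predictions
    val_targets[i] : ground-truth nutrient value for scene i
    per_pixel      : dict food_type -> average nutrient per pixel (from the training set)
    """
    best_t, best_mae = None, float("inf")
    for t in thresholds:
        errors = []
        for preds, truth in zip(val_preds, val_targets):
            est = sum(float(mask.sum()) * per_pixel.get(food, 0.0)
                      for food, conf, mask in preds if conf >= t)
            errors.append(abs(est - truth))
        mae = float(np.mean(errors))
        if mae < best_mae:
            best_t, best_mae = t, mae
    return best_t
```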
## V Experiments
The comprehensive datasets NV-Synth and NV-Real enable us to conduct novel experiments that are helpful in dietary
Fig. 5: Example segmentation mask for a food dish with a half bread loaf (left) and lasagna (right) for nutrition calculation demonstration. The half bread loaf has a mask with 273,529 pixels, and the lasagna has a mask with 512,985 pixels.
assessment. Specifically, given the perfect labels in NV-Synth, we can evaluate different vision-based dietary assessment approaches to determine the most effective one. We can also examine the merit of using depth information in dietary assessment. Depth directly relates to object volume and portion size, which was previously shown to improve model performance [7, 36]. Hence, we can compare the performance of models trained with and without depth information. Finally, as the first work to provide paired datasets comprising synthetic and real images, we can investigate the growing concern regarding the potential impact of synthetic data utilization on model performance in real-world scenarios. Notably, we can assess the synergistic utilization of synthetic and real data through three scenarios: (A) models trained solely on synthetic data, (B) models trained on synthetic data and fine-tuned on real data, and (C) models trained exclusively on real data, with the evaluation conducted on the NV-Real test set.
Notably, these three core questions are studied:
1. What is the best approach for dietary assessment?
2. Does depth information improve model performance?
3. What is the impact of using synthetic data?
### _What is the best approach for dietary assessment?_
To answer this question, we compare the performance of the models trained using RGB images on the NV-Synth test set. As previously mentioned, for the indirect approach using the instance and amodal instance model, thresholding using the validation set was conducted. As seen in Figure 6 and Figure 7, the best threshold for both the instance and amodal instance model is 0.9 as it resulted in the best MAE values for the five diet components on the validation set.
Table III shows the NV-Synth test set results for the model architectures trained on the NV-Synth train set with the lowest MAE for each nutrient bolded and indicated with an *. As seen in Table III, semantic segmentation outperformed both instance and amodal instance methods for all dietary tasks with instance performing better than amodal instance. An example of the predicted segmentation masks and associated prediction results for the indirect approach is shown in Figure 8. Although the direct prediction models have the lowest MAE for at least one of the dietary components, Direct Prediction (Nutrition5k) performs the best holistically followed by Direct Prediction (ImageNet) as they generally have the lowest MAE across the five diet components. As such, the best approach for dietary assessment is using the direct prediction approach with initialization using Nutrition5k weights performing better than initialization using ImageNet weights.
### _Does depth information improve model performance?_
To answer this question, we compare the performance of the models with and without depth information using the NV-Synth test set. Table IV shows the NV-Synth test set results for the model architectures trained on the RGBD images in the NV-Synth train set with the lowest MAE for each nutrient bolded and indicated with an *. As seen in Table III and Table IV, using depth for the direct prediction models leads to generally worse MAE values than using the pure RGB images, but using depth appears to improve the indirect approach with segmentation models. Hence, it appears that depth information does not improve model performance for direct prediction but may slightly help for the indirect approach. This finding is congruent with [6] who observed a decline in their direct model performance when using depth images and [36] who observed an improvement with their indirect approach using segmentation models.
### _What is the impact of using synthetic data?_
We investigate this question by comparing the performance on the NV-Real test set for three scenarios: (A) Using models trained only on NV-Synth, (B) Fine-tuning models trained on NV-Synth using NV-Real, and (C) Training models only on NV-Real. Notably, inference with the RGBD trained models and fine-tuning of the instance and amodal instance segmentation models is omitted due to the absence of depth and instance masks in the NV-Real dataset.
When looking at the model performance for models trained solely on the synthetic data (Scenario A), the direct prediction models have the lowest MAE for at least one of the dietary components and outperform the segmentation models as seen in Table V. Significantly higher MAE values were observed with the indirect approach employing segmentation models. This discrepancy can be attributed to the utilization of average pixel counts from the synthetic dataset, which do
Fig. 6: Validation MAE performance for the instance model for various confidence score thresholds.
Fig. 7: Validation MAE performance for the amodal instance model for various confidence score thresholds.
not align with the individual food items' average pixel counts in the real dataset. These variations stem from differences in camera setups during data collection and highlight an area of improvement for the synthetic dataset. The best model based on the lowest MAE across the five diet components was Direct Prediction (Nutrition5k) model.
On the other hand, for the fine-tuned models (Scenario B), the Direct Prediction (ImageNet) model had generally better MAE performance than the other models except for carbohydrate, where the semantic model achieved lower MAE values (Table VI). Fine-tuning the model generally resulted in better (lower) MAE values for the semantic and direct prediction using ImageNet weights models, but adversely affected the direct prediction using Nutrition5k weights.
For models trained exclusively on real data, the results in Table VII shows that the semantic model trained on the NV-Real train set has the lowest MAE for all of the dietary components compared to the other models trained on the NV-Real train set. In fact, the MAE values on the real dataset for the direct prediction models trained on the real dataset were significantly worse compared to the direct prediction models trained on the synthetic dataset. This decline in performance can be attributed to the limited amount of data available in the real image dataset when compared to the more extensive synthetic dataset.
Through these comparisons, the best model (determined by the lowest MAE on the NV-Real test set) across the five diet components is generally the Direct Prediction (ImageNet) model
\begin{table}
\begin{tabular}{l l r r r r r} \hline
**Model (Scenario A)** & **Eval Dataset** & **Calories MAE** & **Mass MAE** & **Protein MAE** & **Fat MAE** & **Carb MAE** \\ \hline Semantic & NV-Real & 40830.5 & 17342.0 & 2086.4 & 1630.4 & 4432.3 \\ Instance & NV-Real & 50190.0 & 33774.6 & 2950.5 & 2009.5 & 5108.0 \\ Amodal Instance & NV-Real & 72999.6 & 38379.2 & 4460.2 & 3225.3 & 6580.1 \\ \hline Direct Prediction (ImageNet) & NV-Real & 530.6 & **182.9*** & 62.6 & 27.7 & **54.4*** \\ Direct Prediction (Nutrition5k) & NV-Real & **525.9*** & 188.4 & **39.1*** & **27.4*** & 54.6 \\ \end{tabular}
\end{table} TABLE V: Scenario A: Models trained only on NV-Synth, with the lowest MAE value for each column bolded with an * next to it.
\begin{table}
\begin{tabular}{l l r r r r r} \hline
**Model (RGB)** & **Eval Dataset** & **Calories MAE** & **Mass MAE** & **Protein MAE** & **Fat MAE** & **Carb MAE** \\ \hline Semantic & NV-Synth & 418.1 & 185.4 & 39.0 & 23.5 & 32.3 \\ Instance & NV-Synth & 430.9 & 191.4 & 39.3 & 24.1 & 34.4 \\ Amodal Instance & NV-Synth & 451.3 & 202.8 & 39.6 & 24.8 & 38.5 \\ \hline Direct Prediction (ImageNet) & NV-Synth & 229.2 & 102.6 & 56.0 & 12.0 & **19.4*** \\ Direct Prediction (Nutrition5k) & NV-Synth & **128.7*** & **77.2*** & **18.5*** & **9.1*** & 21.5 \\ \end{tabular}
\end{table} TABLE III: Evaluation of model architectures using NV-Synth (RGB images) with the lowest MAE value for each column bolded with an * next to it.
Fig. 8: Segmentation and prediction results of models trained with RGB input where CL refers to calories, M to mass, P to protein, F to fat, and CB to carbohydrate.
trained on the NV-Synth train set and fine-tuned on the NV-Real train set (as seen in Table VIII). Notably, the semantic model trained on NV-Real achieves better performance for carbohydrate but the semantic model has higher MAE scores for the other four dietary components compared to the fine-tuned Direct Prediction (ImageNet) model.
## VI Conclusion
In this paper, we investigate various intake estimation approaches and introduce two new food datasets with associated food composition information: NV-Synth (created using the open access NV-3D dataset) and NV-Real (manually collected). Unlike other datasets, NV-Synth contains a comprehensive set of labels that no other dataset has, including depth images, instance masks, and semantic masks. With these comprehensive labels, we compared various approaches side-by-side to determine the best approach for dietary estimation. We then attempted to verify our findings using the NV-Real dataset and found that the Direct Prediction (ImageNet) model trained on the NV-Synth dataset and fine-tuned on the NV-Real dataset achieves the best performance. Interestingly, it was more advantageous to leverage the weights trained on the ImageNet dataset rather than the weights trained on the Nutrition5k dataset. Hence, our results indicate that it is beneficial to leverage synthetic images for model training in real-image applications. Future work involves iterating on the synthetic dataset to more closely mirror images collected in real life by increasing the diversity of images and viewpoints per scene, and applying these models to an external food dataset to validate their generalization to different situations.
## Acknowledgements
This work was supported by the National Research Council Canada (NRC) through the Aging in Place (AiP) Challenge Program, project number AiP-006. The authors also thank the graduate student partner in the Kinesiology and Health Sciences department Meagan Jackson and undergraduate research assistants Tanisha Nigam, Komal Vachhani, and Cosmo Zhao.
|
2309.10278 | Parameter-Varying Koopman Operator for Nonlinear System Modeling and
Control | This paper proposes a novel approach for modeling and controlling nonlinear
systems with varying parameters. The approach introduces the use of a
parameter-varying Koopman operator (PVKO) in a lifted space, which provides an
efficient way to understand system behavior and design control algorithms that
account for underlying dynamics and changing parameters. The PVKO builds on a
conventional Koopman model by incorporating local time-invariant linear systems
through interpolation within the lifted space. This paper outlines a procedure
for identifying the PVKO and designing a model predictive control using the
identified PVKO model. Simulation results demonstrate that the proposed
approach improves model accuracy and enables predictions based on future
parameter information. The feasibility and stability of the proposed control
approach are analyzed, and their effectiveness is demonstrated through
simulation. | Changyu Lee, Kiyong Park, Jinwhan Kim | 2023-09-19T03:07:21Z | http://arxiv.org/abs/2309.10278v1 | # Parameter-Varying Koopman Operator for Nonlinear System Modeling and Control
###### Abstract
This paper proposes a novel approach for modeling and controlling nonlinear systems with varying parameters. The approach introduces the use of a parameter-varying Koopman operator (PVKO) in a lifted space, which provides an efficient way to understand system behavior and design control algorithms that account for underlying dynamics and changing parameters. The PVKO builds on a conventional Koopman model by incorporating local time-invariant linear systems through interpolation within the lifted space. This paper outlines a procedure for identifying the PVKO and designing a model predictive control using the identified PVKO model. Simulation results demonstrate that the proposed approach improves model accuracy and enables predictions based on future parameter information. The feasibility and stability of the proposed control approach are analyzed, and their effectiveness is demonstrated through simulation.
Parameter-varying system, Koopman operator, Model predictive control
## I Introduction
Model predictive control (MPC) is a powerful algorithm that has proven to be effective for controlling nonlinear systems in various applications, including robotics and transportation [1, 2, 3]. MPC offers several advantages, such as the ability to handle state and input constraints and the capacity to tackle multi-input multi-output nonlinear systems. However, nonlinear systems pose challenges in optimizing control due to their non-convex nature, resulting in computational complexity and difficulties in ensuring stability and robustness. Additionally, unreliable models can lead to performance degradation and system failure due to constraint violations [4]. Therefore, obtaining accurate system models and addressing non-convex problems are essential for effective MPC, but these tasks can be challenging in practical applications.
Recently, data-driven Koopman operator (KO)-based system identification has gained popularity in research. The KO provides a linear representation of nonlinear autonomous systems in infinite dimensions [5], which can further be approximated in a finite number of dimensions through data-driven approaches [6]. In this approach, user-defined lifting functions and extended dynamic mode decomposition (EDMD) methods are often utilized [7, 8]. Deep neural networks also offer the capability to simultaneously identify lifting functions as well as the KO [9]. By applying a linear MPC algorithm to the linear system in the lifted space, nonlinear MPC can effectively be carried out [10]. Furthermore, robust MPC has been developed to address model uncertainty resulting from the identification process based on KOs [11, 12]. These findings suggest the potential of the KO-based approach to address non-convex problems. However, the success of data-driven identification methods heavily depends on the quantity and quality of data, and challenges still remain in this area.
In previous research, linear time-invariant models have often been used to represent nonlinear systems in the lifted space. However, in many real-world systems, the dynamics are dependent on the operating point. For instance, the lateral dynamics of vehicles are influenced by speed, and chemical process models are highly affected by temperature [13, 14]. To address this issue, linear parameter varying (LPV) or quasi-LPV models have been proposed for modeling and designing control systems [15, 16, 17, 18, 19]. These models account for the influence of exogenous parameters on the system dynamics and provide a more accurate representation of the system behavior. By considering the dependence of the system dynamics on the operating point, LPV models enable the design of controllers that are more robust and effective.
Motivated by recent advances in LPV systems and identification approaches [20, 21], this paper proposes a parameter-varying KO (PVKO) for modeling and controlling nonlinear systems with varying parameters in the lifted space. The proposed approach is based on collecting data at each operating point, identifying a KO for each point, and interpolating locally between the KOs, following the approach in [20]. The resulting PVKO provides an accurate and predictable model that accounts for the underlying dynamics and varying parameters. To synthesize the control system, the LPV-MPC approach [19] is used with the PVKO, assuming
Fig. 1: An illustration of the proposed parameter-varying Koopman operator.
the predictability of future parameters. The proposed control system addresses identification uncertainties, and recursive feasibility and stability analysis are provided. Finally, numerical simulations are conducted to verify the effectiveness of the proposed modeling and control approaches.
The rest of this paper is structured as follows. The following section presents the proposed PVKO approach, and Section 3 describes the control system design. The results of the simulations are discussed in Section 4, and the study is concluded in Section 5.
_Notation_: The notation \(I_{n\times m}\) and \(\mathbf{0}_{n\times m}\) denote that \(n\times m\) identity and zero matrices, respectively. The Minkowski sum and Pontryagin set difference of two sets \(\mathbb{X},\mathbb{Y}\subset\mathbb{R}^{n}\) are denoted as \(\mathbb{X}\oplus\mathbb{Y}\) and \(\mathbb{X}\ominus\mathbb{Y}\), respectively. Additionally, \(\text{Conv}\{\cdot\}\) represents the convex hull formed by the vertices within \(\{\cdot\}\).
## II Parameter-varying Koopman Operator
Consider the discrete-time nonlinear system defined by:
\[\mathbf{x}_{k+1}=f(\mathbf{x}_{k},\mathbf{u}_{k}), \tag{1}\]
where \(\mathbf{x}_{k}\in\mathbb{X}\subset\mathbb{R}^{n}\) and \(\mathbf{u}_{k}\in\mathbb{U}\subset\mathbb{R}^{m}\) denote the state and input vectors, respectively, and the subscript \(k\) indicates the time index. Let \(\Psi(\mathbf{x}_{k},\mathbf{u}_{k})\in\mathbb{G}:\mathbb{R}^{n+m}\to\mathbb{R }^{q+m}\) be an observation function that maps the state and input vectors to the lifted space. The observation function can be defined as follows:
\[\Psi(\mathbf{x}_{k},\mathbf{u}_{k})=\left[\psi_{1}(\mathbf{x}_{k}),\psi_{2}( \mathbf{x}_{k}),\cdots,\psi_{q}(\mathbf{x}_{k}),\mathbf{u}_{k}^{\top}\right]^ {\top}, \tag{2}\]
where \(\psi_{i}:\mathbb{R}^{n}\to\mathbb{R}\) is the \(i\)-th component of the observation function. Then, the lifted state vector can be expressed as follows:
\[\mathbf{y}_{k}=\Psi_{\mathbf{x}}(\mathbf{x}_{k})=\left[\psi_{1}(\mathbf{x}_{ k}),\psi_{2}(\mathbf{x}_{k}),\cdots,\psi_{q}(\mathbf{x}_{k})\right]^{\top}, \tag{3}\]
where \(\mathbf{y}_{k}\in\mathbb{R}^{q}\) is the lifted state vector. The KO \(\mathcal{K}:\mathbb{G}\to\mathbb{G}\) can represent the lifted system in the linear form:
\[\mathcal{K}(\Psi(\mathbf{x}_{k},\mathbf{u}_{k}))=\Psi(\mathbf{x}_{k+1}, \mathbf{u}_{k+1}), \tag{4}\]
which can be approximated in a finite-dimensional space higher than \(n\) (typically \(q\gg n\)) using data. Since this approximation is data-driven, a large amount of data is required, and it is necessary to reduce the dimensions \(q\) to a manageable level from a control perspective.
In this paper, we focus on a nonlinear system with exogenous parameters defined as follows:
\[\mathbf{x}_{k+1}=f(\mathbf{x}_{k},\mathbf{u}_{k},p_{k}), \tag{5}\]
where \(p_{k}\in\mathbb{P}\subset\mathbb{R}\) is a bounded parameter that introduces uncertainty into the system. To address this, we propose a new approach for modeling the system as a LPV model in a lifted space. Our approach involves using a PVKO \(\mathcal{K}_{p_{k}}:\mathbb{G}\to\mathbb{G}\) defined as:
\[\mathcal{K}_{p_{k}}(\Psi(\mathbf{x}_{k},\mathbf{u}_{k})) =\Psi(f(\mathbf{x}_{k},\mathbf{u}_{k},p_{k}),\mathbf{u}_{k+1}) \tag{6}\] \[=\Psi(\mathbf{x}_{k+1},\mathbf{u}_{k+1}), \tag{7}\]
where \(\mathcal{K}_{p_{k}}\) depends on the parameter.
To identify the PVKO, we use an EDMD-based approach, which involves collecting data from the state and input variables of the system at each working point and using the data to identify the KO for each point. We then use an interpolation-based modeling method to find the PVKO. Let
\[\mathbf{X}(i) =[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{M-1}]\in \mathbb{R}^{n\times(M-1)}, \tag{8}\] \[\mathbf{X}^{+}(i) =[\mathbf{x}_{2},\mathbf{x}_{3},\ldots,\mathbf{x}_{M}]\in \mathbb{R}^{n\times(M-1)},\] \[\mathbf{U}(i) =[\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{M-1}]\in \mathbb{R}^{m\times(M-1)},\]
denote the collected state and input data at the \(i\)-th working point, where \(M\) is the number of data points. We then lift the collected data using a lifting function to obtain:
\[\mathbf{Y}(i) =[\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{M-1}]\in \mathbb{R}^{q\times(M-1)}, \tag{9}\] \[\mathbf{Y}^{+}(i) =[\mathbf{y}_{2},\mathbf{y}_{3},\ldots,\mathbf{y}_{M}]\in \mathbb{R}^{q\times(M-1)}.\]
Using the collected data, we can establish the following relations:
\[\mathbf{Y}^{+}(i) =A(p^{i})\mathbf{Y}(i)+B(p^{i})\mathbf{U}(i), \tag{10}\] \[\mathbf{X}(i) =C\mathbf{Y}(i),\]
where \(p^{i}\) represents the parameter at the \(i\)-th working point and \(C\) is the output matrix. We can then find the state matrix by minimizing the following problems:
\[\min_{A(p^{i}),B(p^{i})}\lVert\mathbf{Y}^{+}(i)-(A(p^{i})\mathbf{ Y}(i)+B(p^{i})\mathbf{U}(i))\rVert_{F}, \tag{11}\] \[\min_{C}\lVert\mathbf{X}(i)-C\mathbf{Y}(i)\rVert_{F},\]
where \(\lVert\cdot\rVert_{F}\) represents the Frobenius norm. We can solve these problems analytically using the pseudo-inverse of the matrix \(\left[\mathbf{Y}(i)\ \mathbf{U}(i)\right]^{\top}\) as follows:
\[\left[A(p^{i})\quad B(p^{i})\right] =\mathbf{Y}^{+}(i)\left[\begin{matrix}\mathbf{Y}(i)\\ \mathbf{U}(i)\end{matrix}\right]^{\dagger}, \tag{12}\] \[C =\mathbf{X}(i)\mathbf{Y}(i)^{\dagger},\]
where \(\dagger\) indicates the Moore-Penrose inverse. To find the pseudo-inverse matrix, we can use the singular value decomposition to decompose \(\left[\mathbf{Y}(i)\ \mathbf{U}(i)\right]^{\top}\) as follows:
\[\left[\begin{matrix}\mathbf{Y}(i)\\ \mathbf{U}(i)\end{matrix}\right]=U\Sigma V^{\top}. \tag{13}\]
Then, we can approximate \(A(p^{i})\) and \(B(p^{i})\) as follows:
\[\left[A(p^{i})\quad B(p^{i})\right] \approx\mathbf{Y}^{+}(i)V\Sigma^{-1}U^{\top} \tag{14}\] \[=\mathbf{Y}^{+}(i)V\Sigma^{-1}\left[U_{A}\ U_{B}\right],\]
now we can obtain \(A(p^{i})\approx\mathbf{Y}^{+}(i)V\Sigma^{-1}U_{A}\) and \(B(p^{i})\approx\mathbf{Y}^{+}(i)V\Sigma^{-1}U_{B}\).
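The per-working-point identification above can be sketched in a few lines of NumPy (using a pseudo-inverse in place of the explicit SVD factors); the paper's implementation is not published, so this is only an illustration.

```python
import numpy as np

def identify_local_koopman(X, X_plus, U, lift):
    """EDMD identification of (A(p^i), B(p^i), C) at one working point.

    X, X_plus : n x (M-1) state snapshots and their one-step successors
    U         : m x (M-1) inputs
    lift      : function mapping an n-vector to the q-vector of observables
    """
    Y = np.column_stack([lift(x) for x in X.T])       # q x (M-1)
    Y_plus = np.column_stack([lift(x) for x in X_plus.T])
    Z = np.vstack([Y, U])                             # (q+m) x (M-1)
    AB = Y_plus @ np.linalg.pinv(Z)                   # least-squares solution of (11)
    q = Y.shape[0]
    A, B = AB[:, :q], AB[:, q:]
    C = X @ np.linalg.pinv(Y)
    return A, B, C
```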
For a system with \(l\in\mathbb{N}\) working points, we can obtain \(l\) different \((A(p^{i}),B(p^{i}))\) matrices. The PVKO can then be obtained by interpolating these matrices as follows:
\[A(p_{k}) =\alpha_{1}(p_{k})A(p^{1})+\alpha_{2}(p_{k})A(p^{2})+\cdots+\alpha_{l}(p _{k})A(p^{l}), \tag{15}\] \[B(p_{k}) =\alpha_{1}(p_{k})B(p^{1})+\alpha_{2}(p_{k})B(p^{2})+\cdots+\alpha_ {l}(p_{k})B(p^{l}),\]
where \(\alpha_{1}(p_{k}),\alpha_{2}(p_{k}),\ldots,\alpha_{l}(p_{k})\) are weighting coefficients that depend on the parameter \(p_{k}\). Once we have future parameter information, we can predict the future system matrix
using the identified PVKO \((A(p_{k}),B(p_{k}))\). This approach allows us to use LPV-MPC [19].
**Remark 1**: _A subsequent identification procedure is required to determine the functional form of the weighting coefficients. In this paper, we use the simplest interpolation technique, linear interpolation, which is cost-effective and can provide adequate results for many applications._
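A sketch of this linear interpolation between the local operators is shown below, with the parameter clipped to the identified range (an assumption; the paper does not state how values outside \([p^{1},p^{l}]\) are handled).

```python
import numpy as np

def interpolate_operator(p, p_points, A_list, B_list):
    """Return (A(p), B(p)) by linear interpolation between neighbouring working points."""
    p = float(np.clip(p, p_points[0], p_points[-1]))
    i = int(np.searchsorted(p_points, p))
    if i == 0:
        return A_list[0], B_list[0]
    alpha = (p - p_points[i - 1]) / (p_points[i] - p_points[i - 1])
    A = (1.0 - alpha) * A_list[i - 1] + alpha * A_list[i]
    B = (1.0 - alpha) * B_list[i - 1] + alpha * B_list[i]
    return A, B
```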
## III PVKO-based Model Predictive Control
**Assumption 1**: _We assumed that the uncertainty of the model approximation, \(\mathbf{w}_{k}\), is unknown and bounded, i.e., \(\mathbf{w}_{k}=\mathbf{y}_{k+1}-(A(p_{k})\mathbf{y}_{k}+B(p_{k})\mathbf{u}_{k} )\in\mathbb{W}\subset\mathbb{R}^{q}\)._
We propose a method for synthesizing the LPV-MPC algorithm on the lifted space, named PVKO-MPC, based on the identified PVKO. The LPV system with bounded uncertainty \(\mathbf{w}_{k}\) (as stated in Assumption 1) can be represented in the lifted space as follows:
\[\mathbf{y}_{k+1}=A(p_{k})\mathbf{y}_{k}+B(p_{k})\mathbf{u}_{k}+ \mathbf{w}_{k}, \tag{16}\] \[\text{s.t.}\ \mathbf{y}_{k}\in\mathbb{Y},\] \[\mathbf{u}_{k}\in\mathbb{U},\] \[\mathbf{w}_{k}\in\mathbb{W}.\]
Let the nominal system of (16) be represented as:
\[\bar{\mathbf{y}}_{k+1}=A(p_{k})\bar{\mathbf{y}}_{k}+B(p_{k})\bar{\mathbf{u}}_ {k}, \tag{17}\]
where \(\bar{\mathbf{u}}_{k}\) and \(\bar{\mathbf{y}}_{k}\) are the nominal input and state vectors that correspond to the system without uncertainty. The control input of the system (16) is then designed as follows:
\[\mathbf{u}_{k}=\bar{\mathbf{u}}_{k}+K(\mathbf{y}_{k}-\bar{\mathbf{y}}_{k}), \tag{18}\]
where the second term in (18) is the auxiliary state feedback control that compensates for the error.
**Definition 1** (Robust positively invariant set): _A set \(\Omega\) is a robust positively invariant (RPI) set of the system \(\mathbf{e}_{k+1}=(A(p_{k})+B(p_{k})K)\mathbf{e}_{k}+\mathbf{w}_{k}\), if \((A(p_{k})+B(p_{k})K)\mathbf{e}_{k}+\mathbf{w}_{k}\in\Omega\) for all \(\mathbf{e}_{k}\in\Omega\), \(p_{k}\in\mathbb{P}\), and \(\mathbf{w}_{k}\in\mathbb{W}\)._
**Definition 2** (Quadratic stability): _The system \(\mathbf{y}_{k+1}=A^{c}(p_{k})\mathbf{y}_{k}\) is quadratically stable if there exists \(P>0\) such that \(A^{c}(p_{k})^{\top}PA^{c}(p_{k})-P\leq-Q-K^{\top}RK\) for all \(p_{k}\in\mathbb{P}\), where \(A^{c}(p_{k})=A(p_{k})+B(p_{k})K\)._
### _Uncertainty compensation and RPI set calculation_
Let the error vector be described by \(\mathbf{e}_{k}=\mathbf{y}_{k}-\bar{\mathbf{y}}_{k}\). The error system can be represented using (16)-(18) as follows:
\[\mathbf{e}_{k+1} =A(p_{k})(\mathbf{y}_{k}-\bar{\mathbf{y}}_{k})+B(p_{k})(\mathbf{u }_{k}-\bar{\mathbf{u}}_{k})+\mathbf{w}_{k} \tag{19}\] \[=(A(p_{k})+B(p_{k})K)\mathbf{e}_{k}+\mathbf{w}_{k}\] \[=A^{c}(p_{k})\mathbf{e}_{k}+\mathbf{w}_{k}.\]
**Assumption 2**: _The system (19) is quadratically stable._
Under the Assumption 2, the state feedback controller that minimizes the worst-case cost can be obtained by solving the following semidefinite programming problem:
\[\min_{P,K}\text{tr}(P) \tag{20}\] \[\text{s.t.}\ A^{c}(p^{i})^{\top}PA^{c}(p^{i})-P\leq-Q-K^{\top}RK,\] \[\text{for}\ i=1,2,\ldots,l,\]
where \(Q,R\) are weight matrices. We can transform the optimization problem (20) into the following problem using the Schur complement as follows:
\[\begin{bmatrix}P-Q-K^{\top}RK&A^{c}(p^{i})^{\top}\\ A^{c}(p^{i})&P^{-1}\end{bmatrix}\geq 0,\ \text{for}\ i=1,2,\ldots,l. \tag{21}\]
Then, by performing a congruence transformation with \(S=P^{-1}\) and introducing \(Y=KS\)[22], we can transform the problem into the following form:
\[\max_{S,Y}\text{tr}(S) \tag{22}\] \[\text{s.t.}\] \[\begin{bmatrix}S&SA(p^{i})^{\top}+Y^{\top}B^{\top}&SQ^{1/2}&Y^{ \top}R^{1/2}\\ A(p^{i})S+BY&S&\mathbf{0}_{q\times q}&\mathbf{0}_{q\times m}\\ Q^{1/2}S&\mathbf{0}_{q\times q}&I_{q\times q}&\mathbf{0}_{q\times m}\\ R^{1/2}Y&\mathbf{0}_{m\times q}&\mathbf{0}_{m\times q}&I_{m\times m}\end{bmatrix}\] \[\geq 0,\ \text{for}\ i=1,2,\ldots,l.\]
The problem (22) can be solved by convex optimization software, YALMIP [23]. Once the state feedback gain \(K\) is obtained, the Assumption 2 is satisfied, and then the RPI set \(\mathbb{S}\) of the error system (19) can be calculated as follows:
\[\mathbb{S}=\mathbb{W} \oplus\text{Conv}\{A^{c}(p^{i})\mathbb{W},\forall i\in\{1,2, \ldots,l\}\} \tag{23}\] \[\oplus\text{Conv}\{A^{c}(p^{i})A^{c}(p^{j})\mathbb{W},\forall i,j \in\{1,2,\ldots,l\}\}\] \[\oplus\cdots.\]
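The paper solves (22) with YALMIP in MATLAB; a rough CVXPY equivalent is sketched below, with the input matrix taken as parameter-dependent at each vertex (matching \(B(p^{i})\) elsewhere in the paper) and the RPI set computation omitted.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

def robust_feedback_gain(A_list, B_list, Q, R):
    """Solve the LMI (22) for S = P^{-1} and Y = K S, then recover K = Y S^{-1}."""
    q, m = A_list[0].shape[0], B_list[0].shape[1]
    Qs, Rs = np.real(sqrtm(Q)), np.real(sqrtm(R))
    S = cp.Variable((q, q), symmetric=True)
    Y = cp.Variable((m, q))
    constraints = [S >> 1e-6 * np.eye(q)]
    for A, B in zip(A_list, B_list):
        M = cp.bmat([
            [S,             S @ A.T + Y.T @ B.T, S @ Qs,           Y.T @ Rs],
            [A @ S + B @ Y, S,                   np.zeros((q, q)), np.zeros((q, m))],
            [Qs @ S,        np.zeros((q, q)),    np.eye(q),        np.zeros((q, m))],
            [Rs @ Y,        np.zeros((m, q)),    np.zeros((m, q)), np.eye(m)],
        ])
        constraints.append(M >> 0)  # one LMI per vertex of the parameter set
    cp.Problem(cp.Maximize(cp.trace(S)), constraints).solve(solver=cp.SCS)
    return Y.value @ np.linalg.inv(S.value)
```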
### _Robust MPC strategy_
The nominal control input can be computed using the following MPC problem with the RPI set:
\[\min_{\bar{\mathbf{y}}_{(\cdot)},\bar{\mathbf{u}}_{(\cdot)}} \sum_{k=0}^{N-1}(||\bar{\mathbf{y}}_{k|t}||_{\bar{Q}}^{2}+||\bar{ \mathbf{u}}_{k|t}||_{R}^{2})+||\bar{\mathbf{y}}_{N|t}||_{\mathbb{P}}^{2}, \tag{24}\] \[\text{s.t.}\ \bar{\mathbf{y}}_{0|t}=\Psi_{\mathbf{x}}(\mathbf{x}_{0|t}),\] (25) \[\bar{\mathbf{y}}_{k+1|t}=A(p_{k|t})\bar{\mathbf{y}}_{k|t}+B(p_{k|t })\bar{\mathbf{u}}_{k|t},\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad k=0, \cdots,N-1,\] \[C\bar{\mathbf{y}}_{k|t}\in\mathbb{X}\ominus C\mathbb{S},\ k=0, \cdots,N-1,\] \[\bar{\mathbf{u}}_{k|t}\in\mathbb{U}\ominus CK\mathbb{S},\ k=0, \cdots,N-1,\] \[\bar{\mathbf{y}}_{N|t}\in\mathbb{Y}_{f}\ominus\mathbb{S},\]
where \(N\) is the prediction horizon, \(Q\), \(R\), and \(P\) penalize the state, input, and terminal state, respectively, the subscript \((\cdot)_{k|t}\) represents the value at time \(t+k\) predicted at time \(t\), and \(\mathbb{Y}_{f}\) is the terminal set.
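A simplified CVXPY sketch of one step of the nominal problem (24) is given below; the RPI-based constraint tightening and the terminal set are replaced by plain input bounds for brevity, the interpolation helper sketched in Section II is assumed, and the paper itself uses the qpSWIFT solver instead.

```python
import numpy as np
import cvxpy as cp

def pvko_mpc_step(y0, p_seq, p_points, A_list, B_list, Q, R, P,
                  u_min, u_max, N=50):
    """One step of the nominal PVKO-MPC (24) with simple input bounds only."""
    q, m = y0.shape[0], B_list[0].shape[1]
    y = cp.Variable((q, N + 1))
    u = cp.Variable((m, N))
    cost, constraints = 0, [y[:, 0] == y0]
    for k in range(N):
        A_k, B_k = interpolate_operator(p_seq[k], p_points, A_list, B_list)
        constraints += [y[:, k + 1] == A_k @ y[:, k] + B_k @ u[:, k],
                        u[:, k] >= u_min, u[:, k] <= u_max]
        cost += cp.quad_form(y[:, k], Q) + cp.quad_form(u[:, k], R)
    cost += cp.quad_form(y[:, N], P)
    cp.Problem(cp.Minimize(cost), constraints).solve(solver=cp.OSQP)
    return u.value[:, 0]  # first nominal input; (18) adds the feedback correction
```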
**Definition 3** (Maximal positively invariant set): _A set \(\Omega_{\infty}\subset\mathbb{Y}\) is a maximal positively invariant set (MPI) set of the system \(\mathbf{y}_{k+1}=A^{c}(p_{k})\mathbf{y}_{k}+\mathbf{w}_{k}\) if \(\Omega_{\infty}\) is invariant and all RPI sets are contained._
In MPC design, the state feedback gain \(K\), obtained from (22) and its corresponding \(P\) matrix, can be used to establish recursive feasibility and stability through a terminal set and cost [24]. The terminal set is obtained by implementing the
terminal control input strategy \(\mathbf{\bar{u}}_{N|t}=K\mathbf{\bar{y}}_{N|t}\). The set is designed to ensure the satisfaction of the following condition:
\[\mathbf{y}_{N|t}\in\mathbb{Y}_{f}\ \Rightarrow\ \mathbf{y}_{N+1|t}\in\mathbb{Y}_{f},\ \forall t\in\mathbb{N}^{+},C\mathbb{Y}_{f}\subset\mathbb{X}. \tag{26}\]
The MPI set is often chosen as the terminal set, but in practice, the RPI set can be used if the nominal system (17) is stable.
### _Recursive feasibility and stability analysis_
**Assumption 3**: _At the initial time, a feasible solution exists for the nominal PVKO-MPC problem._
**Assumption 4**: _The model parameter \(p_{k}\) is known over the prediction horizon._
**Assumption 5**: _The stage cost and terminal cost are positive definite functions, i.e., they are strictly positive and only equal to zero at the origin._
**Theorem 1**: _Assume that Assumptions 3 and 4 hold. Then, for any time \(t\), a feasible solution to the PVKO-MPC problem (24) always exists._
Let the initial time be \(t\), and let the feasible optimal control sequence and the corresponding state sequence be as follows:
\[\bar{U}_{t}^{*}=[\mathbf{\bar{u}}_{0|t}^{*},\mathbf{\bar{u}}_{1|t }^{*},\ldots,\mathbf{\bar{u}}_{N-1|t}^{*}], \tag{27}\] \[\bar{Y}_{t}^{*}=[\mathbf{\bar{y}}_{0|t}^{*},\mathbf{\bar{y}}_{1| t}^{*},\ldots,\mathbf{\bar{y}}_{N|t}^{*}].\]
At the next time \(t+1\), we can obtain the predicted state sequence with the control law \(\bar{U}_{t+1}=[\mathbf{\bar{u}}_{1|t}^{*},\mathbf{\bar{u}}_{2|t}^{*},\ldots,\mathbf{\bar{u}}_{N-1|t}^{*},K\mathbf{\bar{y}}_{N|t}^{*}]\) as \(\bar{Y}_{t+1}=[\mathbf{\bar{y}}_{1|t}^{*},\mathbf{\bar{y}}_{2|t}^{*},\ldots,\mathbf{\bar{y}}_{N|t}^{*},A^{c}(p_{N-1|t+1})\mathbf{\bar{y}}_{N|t}^{*}]\). Under Assumption 3, the terminal state \(\mathbf{\bar{y}}_{N|t}\) at time \(t\) satisfies the terminal constraints. Then, under the condition on the terminal set (26), \(A^{c}(p_{N-1|t+1})\mathbf{\bar{y}}_{N|t}^{*}\) also satisfies the terminal constraints. As a result, the MPC problem (24) is recursively feasible by the above recursion.
**Theorem 2**: _Suppose that Assumptions 3 to 5 hold, the system (17) is asymptotically stable under the solution to the MPC problem (24)._
Let \(J_{t}\) be a Lyapunov function defined as follows:
\[J_{t}=\sum_{k=0}^{N-1}(||\mathbf{\bar{y}}_{k|t}||_{Q}^{2}+||\mathbf{\bar{u}}_{ k|t}||_{R}^{2})+||\mathbf{\bar{y}}_{N|t}||_{P}^{2}. \tag{28}\]
Let \(J_{t}^{*}\) be the optimal cost at time \(t\), which can be computed by (27), and also let \(\hat{J}_{t+1}\) be the cost at time \(t+1\), which can be computed by \(\bar{U}_{t+1}\) and \(\bar{Y}_{t+1}\) as follows:
\[\hat{J}_{t+1} =\underbrace{\sum_{k=0}^{N-1}(||\mathbf{\bar{y}}_{k|t}^{*}||_{Q}^{2}+||\mathbf{\bar{u}}_{k|t}^{*}||_{R}^{2})}_{=J_{t}^{*}-||\mathbf{\bar{y}}_{N|t}^{*}||_{P}^{2}}-\underbrace{(||\mathbf{\bar{y}}_{0|t}^{*}||_{Q}^{2}+||\mathbf{\bar{u}}_{0|t}^{*}||_{R}^{2})}_{\geq 0\ (Assumption\ 5)}\] \[\quad+(||\mathbf{\bar{y}}_{N|t}^{*}||_{Q}^{2}+||K\mathbf{\bar{y}}_{N|t}^{*}||_{R}^{2})+||\mathbf{\bar{y}}_{N|t+1}||_{P}^{2}\] \[\leq J_{t}^{*}-||\mathbf{\bar{y}}_{N|t}^{*}||_{P}^{2}+||\mathbf{\bar{y}}_{N|t}^{*}||_{Q}^{2}+||K\mathbf{\bar{y}}_{N|t}^{*}||_{R}^{2}+||A^{c}(p_{N-1|t+1})\mathbf{\bar{y}}_{N|t}^{*}||_{P}^{2}\] \[\leq J_{t}^{*},\]
where the last inequality follows from the quadratic stability condition of Definition 2, \(A^{c}(p)^{\top}PA^{c}(p)-P\leq-Q-K^{\top}RK\). Since the optimal cost at time \(t+1\) satisfies \(J_{t+1}^{*}\leq\hat{J}_{t+1}\leq J_{t}^{*}\), the optimal cost is non-increasing along the closed-loop trajectory, and by Assumption 5 it decreases strictly whenever the state is not at the origin. Hence \(J_{t}^{*}\) is a Lyapunov function and the nominal system (17) is asymptotically stable under the MPC law.
simulation data were used for conventional (time-invariant) KO modeling. The prediction is performed for 2 s, and the resulting trajectory and parameter over time are shown in Figs. 2 and 3.
To evaluate the quantitative performance and the effects of the order of lifting function, a Monte-Carlo simulation was conducted. For each order, 500 prediction simulations with a 200-step prediction (2 seconds) were conducted, and the prediction accuracy was computed using the root mean square error (RMSE) as follows:
\[\text{RMSE}=100\frac{\sqrt{\sum_{k}||\mathbf{\hat{x}}_{k}-\mathbf{x}_{k}||_{2 }^{2}}}{\sqrt{\sum_{k}||\mathbf{x}_{k}||_{2}^{2}}}, \tag{31}\]
where \(\mathbf{\hat{x}}_{k}\) is a predicted state vector. As shown in Fig. 4, the proposed PVKO approach outperforms the time-invariant KO for the parameter-varying Lorenz model simulation.
### _Control Performance_
The performance of the PVKO-MPC is evaluated using the Van der Pol oscillator model with a time-varying parameter, given by:
\[\begin{split}\dot{x}&=2y,\\ \dot{y}&=-0.8x+p(y-2x^{2}y)+u,\end{split} \tag{32}\]
where the control input \(u\) and the time-varying parameter \(p\) are subject to a random walk model and are constrained to specific value ranges. The proposed PVKO model is identified by using polynomial functions as lifting functions, given by \(\Psi=\left[x,y,xy,x^{2},y^{2},x^{2}y,xy^{2},x^{3},y^{3}\right]^{\top}\), resulting in a dimension of 9. For the PVKO modeling, five working points with \(p=1,2,3,4,5\) are used. For each working point's KO model, 1000 s of simulation data with a 0.01 s sampling time were used, while 5000 s of simulation data were used for the conventional KO model. Linear interpolation is used to construct the complete PVKO model. We compare the performance of the PVKO-MPC algorithm with the KO-MPC [10] and nonlinear MPC (NMPC) algorithms; note that only the NMPC algorithm utilizes full knowledge of the model. The NMPC problem is formulated as follows:
\[\min_{\mathbf{x}_{(\cdot)},\mathbf{u}_{(\cdot)}}\sum_{k=0}^{N-1}(|| \mathbf{x}_{k|t}||_{CQC^{\top}}^{2}+||\mathbf{u}_{k|t}||_{R}^{2})+||\mathbf{x} _{N|t}||_{CPC^{\top}}^{2} \tag{33}\] \[\text{s.t. }\mathbf{x}_{k+1|t}=f_{d}(\mathbf{x}_{k|t},\mathbf{u}_{k|t},p _{k|t}),\ k=0,\cdots,N-1,\] \[\mathbf{x}_{k|t}\in\mathbb{X},\ k=0,\cdots,N,\] (34) \[\mathbf{u}_{k|t}\in\mathbb{U},\ k=0,\cdots,N-1,\]
where \(N\) is the prediction horizon, the weight matrices \(Q\), \(R\), and \(P\) are defined as in (24), and matrix \(C\) is identified in (12). The function \(f_{d}\) is obtained by discretizing the nonlinear function (5) using the Euler method with a sampling time of \(T_{s}=0.01\) s. The controller's parameters are provided in Table I.
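The Euler-discretized simulation model for (32) can be written as a one-step update (a direct transcription of the stated dynamics and sampling time):

```python
import numpy as np

def f_d(x, u, p, Ts=0.01):
    """One Euler step of the parameter-varying Van der Pol model (32)."""
    x1, x2 = x
    dx1 = 2.0 * x2
    dx2 = -0.8 * x1 + p * (x2 - 2.0 * x1 ** 2 * x2) + u
    return np.array([x1 + Ts * dx1, x2 + Ts * dx2])
```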
To compare the performance of two controllers, simulations were conducted using (32) with an initial state of \([x,y]=[3,0.5]\) and a time-varying parameter is shown in Fig. 5b. The PVKO-MPC problem (24) was solved using the light-weight sparse quadratic programming solver, qpSWIFT [25], while the interior point optimizer, IPOPT [26], with CasADi software [27] in MATLAB was used for NMPC.
Figure 5 shows the results of the three controllers and the optimal trajectory obtained by (33) with \(N=\infty\). The cumulative cost is calculated as \(J_{c}(k)=\sum_{i=0}^{k}(||\mathbf{x}_{i}||_{CQC^{\top}}^{2}+||\mathbf{u}_{i}||_{R}^{2})\), and the resulting costs are shown in Fig. 5d. As can be seen, the PVKO-MPC incurred a lower cost than the KO-MPC in this simulation, and a cost almost identical to that of the NMPC, which uses full knowledge of the model. The average computation time and the cumulative cost are summarized in Table II.
## V Conclusion
In this paper, we proposed the data-driven PVKO approach for modeling and controlling parametrically uncertain nonlinear systems. Our method involves identifying local Koopman operators at each working point and interpolating them to form a complete PVKO. Furthermore, we designed a PVKO-MPC approach with a robust error-compensating controller, derived through linear matrix inequalities, and provided recursive feasibility and stability analyses. The efficacy of the proposed approach was demonstrated through simulations, which showed improved modeling accuracy and control performance for uncertain nonlinear systems.
\begin{table}
\begin{tabular}{c|c|c|c} \hline & **NMPC** & **KO-MPC** & **PVKO-MPC** \\ \hline \hline
**Avg. computation time** & 0.0056 & 0.0032 & 0.0033 \\ \hline
**Cumulative cost \(J_{c}\)** & 5136.7 & 5585.5 & 5246.8 \\ \hline \(100\cdot(J_{c}-J_{c}^{*})/J_{c}^{*}\) & 0.71\% & 9.51\% & 2.87\% \\ \hline \end{tabular}
\end{table} TABLE II: Average computation time, the cumulative cost, and the cost ratio of three controllers (where \(J_{c}^{*}\) is the cumulative cost of global optimal trajectory)
Fig. 4: Comparison of performance according to the order of lifting function (The shaded region represents one standard deviation from the mean).
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline
**Symbol** & **Value** & **Symbol** & **Value** & **Symbol** & **Value** \\ \hline \hline \(N\) & 50 & \(T_{s}\) & 0.01 & \([\underline{p},\overline{p}]\) & \([1,5]\) \\ \(Q\) & \(\text{diag}([1,1])\) & \(R\) & 0.1 & \([\underline{u},\overline{u}]\) & \([-3,3]\) \\ \hline \(K\) & \([-0.2036,-0.3152,0.0117,5.3363\cdot 10^{-5},-0.0062,\) & \(0.0489,-0.0147,-4.3624\cdot 10^{-5},0.0035]\) & \\ \hline \end{tabular}
\end{table} TABLE I: Parameters of controllers |
2309.08437 | Quasi-BPS categories for K3 surfaces | We introduce and begin the study of quasi-BPS categories for K3 surfaces,
which are a categorical version of the BPS cohomologies for K3 surfaces. We
construct semiorthogonal decompositions of derived categories of coherent
sheaves on moduli stacks of semistable objects on K3 surfaces, where each
summand is a categorical Hall product of quasi-BPS categories. We also prove
the wall-crossing equivalence of quasi-BPS categories, which generalizes
Halpern-Leistner's wall-crossing equivalence of moduli spaces of stable objects
for primitive Mukai vectors on K3 surfaces.
We also introduce and study a reduced quasi-BPS category. When the weight is
coprime to the Mukai vector, the reduced quasi-BPS category is proper, smooth,
and its Serre functor is trivial \'{e}tale locally on the good moduli space.
Moreover we prove that its topological K-theory recovers the BPS invariants of
K3 surfaces, which are known to be equal to the Euler characteristics of
Hilbert schemes of points on K3 surfaces. We regard reduced quasi-BPS
categories as noncommutative hyperk\"ahler varieties which are categorical
versions of crepant resolutions of singular symplectic moduli spaces of
semistable objects on K3 surfaces. | Tudor Pădurariu, Yukinobu Toda | 2023-09-15T14:38:06Z | http://arxiv.org/abs/2309.08437v1 | # Quasi-BPS categories for K3 surfaces
###### Abstract.
We introduce and begin the study of quasi-BPS categories for K3 surfaces, which are a categorical version of the BPS cohomologies for K3 surfaces.
We construct semiorthogonal decompositions of derived categories of coherent sheaves on moduli stacks of semistable objects on K3 surfaces, where each summand is a categorical Hall product of quasi-BPS categories. We also prove the wall-crossing equivalence of quasi-BPS categories, which generalizes Halpern-Leistner's wall-crossing equivalence of moduli spaces of stable objects for primitive Mukai vectors on K3 surfaces.
We also introduce and study a reduced quasi-BPS category. When the weight is coprime to the Mukai vector, the reduced quasi-BPS category is proper, smooth, and its Serre functor is trivial etale locally on the good moduli space. Moreover we prove that its topological K-theory recovers the BPS invariants of K3 surfaces, which are known to be equal to the Euler characteristics of Hilbert schemes of points on K3 surfaces. We regard reduced quasi-BPS categories as noncommutative hyperkahler varieties which are categorical versions of crepant resolutions of singular symplectic moduli spaces of semistable objects on K3 surfaces.
## 1. Introduction
Let \(S\) be a K3 surface, \(v\) a Mukai vector, and \(w\) an integer. The purpose of this paper is to introduce and study a category
\[\mathbb{T}=\mathbb{T}_{S}(v)_{w}^{\text{red}} \tag{1.1}\]
called _(reduced) quasi-BPS category_. When \(v\) is primitive, (1.1) is equivalent to the derived category of twisted sheaves over the moduli space \(M\) of stable objects on \(S\) with Mukai vector \(v\), which is a holomorphic symplectic manifold. When \(v\) is not necessarily primitive, but \(w\) is coprime to \(v\), we show that \(\mathbb{T}\) is proper, smooth, and has trivial Serre functor etale locally on the good moduli space \(M\) of semistable objects with Mukai vector \(v\), which is a singular symplectic variety. So we obtain a category \(\mathbb{T}\) which we regard as a (twisted) categorical (etale locally) crepant resolution of singularities of \(M\).
The construction of the category (1.1) is motivated by enumerative geometry: quasi-BPS categories are a categorical replacement of BPS cohomologies [10, 11], constructed from semiorthogonal decompositions of derived categories of moduli stacks of semistable sheaves which approximate the PBW theorem in cohomological Donaldson-Thomas (DT) theory studied in loc. cit. Below, we first mention our main results, and then explain how the construction of the category (1.1) is motivated by DT theory and the study of singular symplectic varieties.
### Semiorthogonal decompositions into quasi-BPS categories
For a K3 surface \(S\), let
\[\operatorname{Stab}(S)\]
be the main connected component of the space of Bridgeland stability conditions [1] on \(D^{b}(S)\). Let \(\Gamma=\mathbb{Z}\oplus\operatorname{NS}(S)\oplus\mathbb{Z}\) be the Mukai lattice. For \(\sigma\in\operatorname{Stab}(S)\) and \(v\in\Gamma\), consider the moduli stacks
\[\mathfrak{M}_{S}^{\sigma}(v)\hookleftarrow\mathcal{M}_{S}^{\sigma}(v)\to M_{S }^{\sigma}(v),\]
where \(\mathfrak{M}_{S}^{\sigma}(v)\) is the derived moduli stack of \(\sigma\)-semistable objects in \(D^{b}(S)\) with Mukai vector \(v\), \(\mathcal{M}_{S}^{\sigma}(v)\) is its classical truncation, and \(M_{S}^{\sigma}(v)\) is its good moduli space. Below we write \(v=dv_{0}\) for \(d\in\mathbb{Z}_{\geqslant 1}\) and \(v_{0}\) a primitive Mukai vector with \(\langle v_{0},v_{0}\rangle=2g-2\). We say \(w\in\mathbb{Z}\)_is coprime with \(v\)_ if \(\gcd(w,d)=1\). We use the following structures on the derived category of \(\mathfrak{M}_{S}^{\sigma}(v)\):
**(The weight decomposition)**: every point in \(\mathfrak{M}_{S}^{\sigma}(v)\) admits scalar automorphisms \(\mathbb{C}^{*}\), and thus there is an orthogonal decomposition of \(D^{b}(\mathfrak{M}_{S}^{\sigma}(v))\) into \(\mathbb{C}^{*}\)-weight categories
\[D^{b}(\mathfrak{M}_{S}^{\sigma}(v))=\bigoplus_{w\in\mathbb{Z}}D^{b}(\mathfrak{ M}_{S}^{\sigma}(v))_{w}.\]
**(The categorical Hall product)**: for a decomposition \(d=d_{1}+\cdots+d_{k}\), the stack of filtrations of \(\sigma\)-semistable objects induces _the categorical Hall product_ defined by Porta-Sala [10]:
\[\boxtimes_{i=1}^{k}D^{b}(\mathfrak{M}_{S}^{\sigma}(d_{i}v_{0}))\to D^{b}( \mathfrak{M}_{S}^{\sigma}(v)). \tag{1.2}\]
Davison-Hennecart-Schlegel Mejia [15, Theorem 1.5] proved that the Hall algebra of a K3 surface is generated by its BPS cohomology. The categorical analogue of their result is the following result, which can also be regarded as a partial categorification of a BBDG-type decomposition theorem, see Subsection 1.3:
**Theorem 1.1**.: (Theorem 5.1) _Let \(\sigma\in\operatorname{Stab}(S)\) be a generic stability condition. Then there exists a subcategory (called quasi-BPS category)_
\[\mathbb{T}_{S}^{\sigma}(v)_{w}\subset D^{b}\left(\mathfrak{M}_{S}^{\sigma}(v) \right)_{w} \tag{1.3}\]
_such that there is a semiorthogonal decomposition_
\[D^{b}\left(\mathfrak{M}_{S}^{\sigma}(v)\right)=\left\langle\boxtimes_{i=1}^{k}\mathbb{T}_{S}^{\sigma}(d_{i}v_{0})_{w_{i}+(g-1)d_{i}(\sum_{i>j}d_{j}-\sum_{i<j}d_{j})}\right\rangle. \tag{1.4}\]
_The right hand side is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and all weights \((w_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) such that_
\[\frac{w_{1}}{d_{1}}<\cdots<\frac{w_{k}}{d_{k}},\]
_and each fully-faithful functor in (1.4) is given by the categorical Hall product (1.2)._
The order of the summands in the semiorthogonal decomposition (1.4) is not immediate to state and we do not make it explicit in this paper, see Remark 3.5. If \(v\) is primitive, then
\[\mathbb{T}_{S}^{\sigma}(v)_{w}=D^{b}\left(\mathfrak{M}_{S}^{\sigma}(v) \right)_{w}.\]
In general, the category \(\mathbb{T}_{S}^{\sigma}(v)_{w}\) is uniquely determined by the semiorthogonal decomposition (1.4). Locally on \(M_{S}^{\sigma}(v)\), the category \(\mathbb{T}_{S}^{\sigma}(v)_{w}\) is defined to be the subcategory of objects which are Koszul dual to matrix factorizations with some weight conditions for the maximal torus of the stabilizer groups. Such a subcategory was first considered by Spenko-Van den Bergh [11] to construct noncommutative crepant resolutions of quotients of quasi-symmetric representations by reductive groups. It was later used in [12] to prove the "magic window theorem" for GIT
quotient stacks, and in [12] to give PBW type decompositions for categorical (and K-theoretic) Hall algebras of symmetric quivers with potential.
We regard the subcategory (1.3) as a global version of these categories in the case of moduli stacks of semistable objects on K3 surfaces. The main tool in investigating the category (1.3) is its local description via categories of matrix factorizations on the moduli stacks of representations of Ext-quivers of \(\sigma\)-polystable objects. We study quasi-BPS categories in this local context in [13].
### Quasi-BPS categories for reduced stacks
The derived stack \(\mathfrak{M}^{\sigma}_{S}(v)\) is never classical because of the existence of the trace map \(\operatorname{Ext}^{2}(E,E)\twoheadrightarrow\mathbb{C}\) for any object \(E\in D^{b}(S)\). Let
\[\mathfrak{M}^{\sigma}_{S}(v)^{\operatorname{red}}\hookrightarrow\mathfrak{M}^{ \sigma}_{S}(v)\]
be the reduced derived stack, which roughly speaking is obtained by taking the traceless part of its obstruction theory. By [10], it is known that the reduced derived stack is classical when \(g\geqslant 2\). We also have a reduced version of the quasi-BPS category
\[\mathbb{T}^{\sigma}_{S}(v)_{w}^{\operatorname{red}}\subset D^{b}\left( \mathfrak{M}^{\sigma}_{S}(v)^{\operatorname{red}}\right)_{w} \tag{1.5}\]
and a reduced version of the semiorthogonal decomposition (1.4), see Theorem 5.2. When \(v\) is primitive, we have
\[\mathbb{T}^{\sigma}_{S}(v)_{w}^{\operatorname{red}}=D^{b}\left(M^{\sigma}_{S} (v),\alpha^{w}\right), \tag{1.6}\]
where \(\alpha\) is the Brauer class which represents the obstruction to the existence of a universal object, and \(M^{\sigma}_{S}(v)\) is a projective holomorphic symplectic manifold [14, 15]. From the above description, we have the following properties of the category (1.5) when \(v\) is primitive: (i) the category \(\mathbb{T}^{\sigma}_{S}(v)_{w}^{\operatorname{red}}\) is smooth and proper; (ii) the Serre functor \(S_{\mathbb{T}}\) of \(\mathbb{T}^{\sigma}_{S}(v)_{w}^{\operatorname{red}}\) is isomorphic to the shift functor \([\dim M^{\sigma}_{S}(v)]\); (iii) by Halpern-Leistner [10], the category \(\mathbb{T}^{\sigma}_{S}(v)_{w}^{\operatorname{red}}\) is independent of \(\sigma\) up to equivalence. The proof of the above properties relies on the description (1.6) for primitive \(v\), and a priori there is no reason that these properties hold for non-primitive \(v\). Nevertheless, we have the following:
**Theorem 1.2**.: (Corollary 6.8, Theorem 7.4, Theorem 4.8) _Suppose that \(g\geqslant 2\), \(\sigma,\sigma^{\prime}\in\operatorname{Stab}(S)\) are generic stability conditions, and \(w\) is coprime to \(v\). Then:_
_(i) **(smooth and properness):** the category \(\mathbb{T}^{\sigma}_{S}(v)_{w}^{\operatorname{red}}\) is smooth and proper;_
_(ii) **(etale locally trivial Serre functor):** the Serre functor \(S_{\mathbb{T}}\) of \(\mathbb{T}^{\sigma}_{S}(v)_{w}^{\operatorname{red}}\) is trivial etale locally on \(M^{\sigma}_{S}(v)\);_
_(iii) **(wall-crossing equivalence):** there is an equivalence \(\mathbb{T}^{\sigma}_{S}(v)_{w}^{\operatorname{red}}\simeq\mathbb{T}^{\sigma^ {\prime}}_{S}(v)_{w}^{\operatorname{red}}\). Hence we may write the quasi-BPS category as \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\)._
The key point in the proof of (i) above is Lemma 6.6 (the categorical support lemma), which says that any object in \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\) has nilpotent singular support if \(w\) is coprime to \(v\). Combining this with strong generation, we conclude that \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\) is smooth and proper if \(w\) is coprime to \(v\); in particular, it admits a Serre functor \(S_{\mathbb{T}}\). We expect that \(S_{\mathbb{T}}\) is globally isomorphic to \([\dim M^{\sigma}_{S}(v)]\). However, there is currently a technical subtlety in proving this, and in (ii) we only prove that it is trivial etale locally. Globally, we prove an isomorphism \(S_{\mathbb{T}}\cong[\dim M^{\sigma}_{S}(v)]\) on the level of cohomology, see Corollary 7.12, and also on perfect complexes, see Corollary 7.13. In view of parts (i) and (ii) of Theorem 1.2, we regard \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\) as a categorical version of a crepant resolution of \(M^{\sigma}_{S}(v)\).
It is an interesting question to see the relation between (reduced) quasi-BPS categories and categorical crepant resolutions in the sense of Kuznetsov [10] or noncommutative crepant resolutions in the sense of Van den Bergh [12]. We plan to investigate this relation in future work.
The main tool in proving Theorem 1.2 is its local version for stacks of representations of preprojective algebras constructed from Ext-quivers of \(\sigma\)-polystable objects, see [13]. Along the way, we obtain generation statements for singular support quotient categories of more general quasi-smooth stacks that may be of independent interest, see Theorem 6.11.
### Topological K-theory of quasi-BPS categories
We finally relate the topological K-theory of quasi-BPS categories to the cohomology of the BPS sheaf \(\mathcal{BPS}_{v}\) on \(M^{\sigma}_{S}(v)\) studied in [10] (i.e. to BPS cohomology). Note that \(\mathcal{BPS}_{v}=\operatorname{IC}_{M^{\sigma}_{S}(v)}=\mathbb{Q}_{M^{ \sigma}_{S}(v)}[\dim M^{\sigma}_{S}(v)]\) if \(v\) is a primitive Mukai vector and \(\sigma\) is generic, and in general it is a semisimple perverse sheaf which contains \(\operatorname{IC}_{M^{\sigma}_{S}(v)}\) as a proper direct summand.
For a dg-category \(\mathcal{D}\) and \(i\in\mathbb{Z}\), we denote by \(K^{\operatorname{top}}_{i}(\mathcal{D})\) the topological K-theory of \(\mathcal{D}\) defined by Blanc [1]. We prove the following:
**Theorem 1.3**.: (Theorem 8.1) _Suppose that \(\sigma\) is a generic Gieseker stability condition, \(g\geqslant 2\), and \(w\) is coprime to \(v\). For \(i\in\mathbb{Z}\), we have the identity:_
\[\dim K^{\operatorname{top}}_{i}(\mathbb{T}^{\sigma}_{S}(v)_{w})=\sum_{j\in \mathbb{Z}}\dim H^{i+2j}(M^{\sigma}_{S}(v),\mathcal{BPS}_{v}).\]
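To orient the reader: when \(v\) is primitive and \(\sigma\) is generic, \(\mathcal{BPS}_{v}=\mathbb{Q}_{M^{\sigma}_{S}(v)}[\dim M^{\sigma}_{S}(v)]\) as recalled above, and the right hand side of the identity unpacks to

\[\sum_{j\in\mathbb{Z}}\dim H^{i+2j+\dim M^{\sigma}_{S}(v)}(M^{\sigma}_{S}(v),\mathbb{Q}),\]

that is, to the total even (for \(i\) even) or odd (for \(i\) odd) Betti number of \(M^{\sigma}_{S}(v)\), since \(\dim M^{\sigma}_{S}(v)\) is even. This unpacking uses nothing beyond the description of \(\mathcal{BPS}_{v}\) recalled above.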
The above result is motivated by categorification of BPS invariants in Donaldson-Thomas theory, which will be explained in the next subsection. We regard Theorem 1.3 as a weight-independence phenomenon reminiscent of the (numerical and cohomological) \(\chi\)-independence phenomenon [14, 15, 16, 17]. It is an interesting problem to define a primitive part \(\operatorname{P}K^{\operatorname{top}}_{i}(\mathbb{T}^{\sigma}_{S}(v)_{w}) \subset K^{\operatorname{top}}_{i}(\mathbb{T}^{\sigma}_{S}(v)_{w})\) whose dimension is independent of _all_\(w\in\mathbb{Z}\).
Theorem 1.3 can be seen as part of the more general problem of categorifying perverse sheaves of interest [13], [15]. Such a problem is the first step in categorifying instances of the BBDG decomposition theorem [1]. In the context of good moduli space maps for objects in certain Calabi-Yau 2-categories, a BBDG-type decomposition was proved by Davison [16]. Theorem 1.1 can be seen as a partial categorification of the decomposition theorem for the morphism \(\mathcal{M}^{\sigma}_{S}(v)\to M^{\sigma}_{S}(v)\).
### Motivation from Donaldson-Thomas theory
We now explain how the study of quasi-BPS categories is motivated by DT theory. Let \(X\) be a smooth Calabi-Yau 3-fold. For a given numerical class \(v\) and a stability condition \(\sigma\) on \(D^{b}(X)\), the DT invariant is defined to be a rational number
\[\operatorname{DT}^{\sigma}(v)\in\mathbb{Q} \tag{1.7}\]
which virtually counts \(\sigma\)-semistable (compactly supported) objects with numerical class \(v\), see [15, 16, 17]. It is defined via the moduli stack \(\mathcal{M}^{\sigma}_{X}(v)\) of \(\sigma\)-semistable objects with numerical class \(v\) or its good moduli space
\[\mathcal{M}^{\sigma}_{X}(v)\to M^{\sigma}_{X}(v).\]
When \(\sigma\)-semistable objects coincide with \(\sigma\)-stable objects (e.g. \(v\) is primitive and \(\sigma\) is generic), then the DT invariant is an integer and can be also computed as
\[\mathrm{DT}^{\sigma}(v)=\int_{[M^{\sigma}_{X}(v)]^{\mathrm{vir}}}1=\int_{M^{ \sigma}_{X}(v)}\chi_{B}\,d\chi\in\mathbb{Z},\]
where \(\chi_{B}\) is the Behrend constructible function [1]. Otherwise, (1.7) is defined as the weighted Euler characteristic with respect to the Behrend function of the 'log' of \(\mathcal{M}^{\sigma}_{X}(v)\) in the motivic Hall algebra, see [10].
For a generic \(\sigma\), the BPS invariant \(\Omega^{\sigma}(v)\) is inductively defined by the multiple cover formula
\[\mathrm{DT}^{\sigma}(v)=\sum_{k\geqslant 1,k|v}\frac{1}{k^{2}}\Omega^{\sigma}(v/ k).\]
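For instance, unravelling the formula for \(v=2v_{0}\) with \(v_{0}\) primitive, the only contributions are \(k=1,2\), so

\[\mathrm{DT}^{\sigma}(2v_{0})=\Omega^{\sigma}(2v_{0})+\frac{1}{4}\Omega^{\sigma}(v_{0}),\quad\text{i.e.}\quad\Omega^{\sigma}(2v_{0})=\mathrm{DT}^{\sigma}(2v_{0})-\frac{1}{4}\mathrm{DT}^{\sigma}(v_{0}),\]

using that \(\Omega^{\sigma}(v_{0})=\mathrm{DT}^{\sigma}(v_{0})\) for primitive \(v_{0}\).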
Although (1.7) is a rational number in general, the BPS number \(\Omega^{\sigma}(v)\) is an integer. The integrality of \(\Omega^{\sigma}(v)\) is conjectured in [10, Conjecture 6], [10, Conjecture 6.12] and proved in [11] combined with [14]. We address the following categorification problem of BPS invariants:
**Problem 1.1**.: Is there a dg-category \(\mathbb{T}^{\sigma}(v)\) which recovers \(\Omega^{\sigma}(v)\) by taking the Euler characteristic of an additive invariant, e.g.
\[\chi(K^{\mathrm{top}}(\mathbb{T}^{\sigma}(v))):=\dim_{\mathbb{Q}}K^{\mathrm{ top}}_{0}(\mathbb{T}^{\sigma}(v))_{\mathbb{Q}}-\dim_{\mathbb{Q}}K^{\mathrm{ top}}_{1}(\mathbb{T}^{\sigma}(v))_{\mathbb{Q}}=-\Omega^{\sigma}(v)?\]
The above problem is open even if \(v\) is primitive, and in this case it is related to the gluing problem of matrix factorizations, see [13] for the case of local surfaces and [12] for work in progress addressing the general case.
Now, for a K3 surface \(S\), we consider the local K3 surface
\[X=\mathrm{Tot}_{S}(K_{S})=S\times\mathbb{A}^{1}_{\mathbb{C}}. \tag{1.8}\]
The (\(\mathbb{C}^{*}\)-equivariant) DT category for the moduli stack \(\mathcal{M}^{\sigma}_{X}(v)\) is defined in [13] via categorical dimensional reduction
\[\mathcal{DT}(\mathcal{M}^{\sigma}_{X}(v)):=D^{b}(\mathfrak{M}^{\sigma}_{S}(v)).\]
We regard the subcategory \(\mathbb{T}^{\sigma}_{S}(v)_{w}\subset\mathcal{DT}(\mathcal{M}^{\sigma}_{X}(v))\) as a categorification of the BPS invariant for the local K3 surface when \(w\) is coprime to \(v\). Indeed, Theorem 1.3 implies that
\[\chi(K^{\mathrm{top}}(\mathbb{T}^{\sigma}_{S}(v)_{w}))=-\Omega^{\sigma}(v), \tag{1.9}\]
where the right hand side is explicitly computed in terms of Hilbert schemes of points, see the next subsection. Thus the category \(\mathbb{T}^{\sigma}_{S}(v)_{w}\) gives a solution to Problem 1.1 for the local K3 surface (1.8).
### Motivation from hyperkahler geometry
Let \(S\) be a K3 surface, and consider the local K3 surface (1.8). The BPS invariant in this case is completely known:
\[\Omega^{\sigma}(v)=-\chi(S^{[\langle v,v\rangle/2+1]}), \tag{1.10}\]
where, for a positive integer \(n\), we denote by \(S^{[n]}\) the Hilbert scheme of \(n\) points on \(S\). The above identity was conjectured by the second named author [14] and proved by Maulik-Thomas [15, Corollary 6.10]. The identity (1.10) is an instance of _the \(\chi\)-independence phenomenon_ (e.g. when \(v=(0,\beta,\chi)\), the right hand side of (1.10) is independent of \(\chi\)), see [14, Conjecture 6.3], [14, Conjecture 2.15] and [10, KK] for recent developments on \(\chi\)-independence phenomena.
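For concreteness (this uses only the standard facts that \(\chi(S)=24\) for a K3 surface and Gottsche's formula, and nothing specific to this paper), the right hand side of (1.10) is computed by

\[\sum_{n\geqslant 0}\chi(S^{[n]})q^{n}=\prod_{m\geqslant 1}(1-q^{m})^{-24}=1+24q+324q^{2}+3200q^{3}+\cdots,\]

so for example \(\Omega^{\sigma}(v)=-324\) when \(\langle v,v\rangle=2\), and \(\Omega^{\sigma}(v)=-3200\) when \(\langle v,v\rangle=4\).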
When \(v\) is primitive, the identity (1.10) holds since \(M^{\sigma}_{S}(v)\) is a holomorphic symplectic manifold [10] deformation equivalent to \(S^{[\langle v,v\rangle/2+1]}\). However, the identity is much less obvious, and more mysterious, when \(v\) is not primitive. For non-primitive \(v\), the good moduli space \(M=M^{\sigma}_{S}(v)\) is a singular symplectic variety. O'Grady [10] constructed a symplectic resolution of singularities
\[\widetilde{M}\to M \tag{1.11}\]
when \(v=2v_{0}\) for a primitive \(v_{0}\) with \(\langle v_{0},v_{0}\rangle=2\). But this turned out to be the only exceptional case: Kaledin-Lehn-Sorger [11] proved that \(M\) does not admit a symplectic resolution in all other cases with \(\langle v,v\rangle\geqslant 2\). By [14, Proposition 1.1], the existence of a symplectic resolution (1.11) is equivalent to the existence of a crepant resolution of \(M\), so \(M\) does not admit a crepant resolution except in the example studied by O'Grady. Instead of a usual (geometric) crepant resolution, it is interesting to investigate if \(M\) admits a crepant resolution of singularities in a categorical sense:
**Problem 1.2**.: Is there a categorical version of a crepant resolution of \(M^{\sigma}_{S}(v)\)?
Inspired by Theorem 1.2, we regard the category \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) as a categorical version of a (twisted, etale local) crepant resolution of \(M^{\sigma}_{S}(v)\). Note that, even in the situation of the O'Grady resolution (1.11) (that is, if \(d=2\) and \(\langle v_{0},v_{0}\rangle=2\)), the category \(\mathbb{T}_{S}(2v_{0})_{1}^{\mathrm{red}}\) is different from \(D^{b}(\widetilde{M})\) because its topological K-theory is a proper direct summand of the topological K-theory of \(\widetilde{M}\), see Theorem 1.3 and [11].
In view of (1.9) and (1.10), we further expect \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) to be a "non-commutative hyperkahler variety" deformation equivalent to \(S^{[\langle v,v\rangle/2+1]}\). In particular, it is natural to investigate how \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) is analogous to the derived category of a smooth projective hyperkahler variety of \(K3^{[\langle v,v\rangle/2+1]}\)-type. More precisely, we may expect the following, which we regard as a categorical \(\chi\)-independence phenomenon:
**Conjecture 1.4**.: (Conjecture 4.13) _For any \(g\geqslant 0\) and any \(w\in\mathbb{Z}\) coprime with \(v\), the category \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) is deformation equivalent to \(D^{b}\left(S^{[\langle v,v\rangle/2+1]}\right)\)._
Recall that \(\langle v_{0},v_{0}\rangle=2g-2\). The above conjecture is easy to check for \(g=0\), see Proposition 4.17. For \(g=1\), we conjecture that the category \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) is equivalent to the derived category of a K3 surface (possibly twisted and not necessarily isomorphic to \(S\)), and we show this follows from an explicit computation of the quasi-BPS categories of \(\mathbb{C}^{3}\) studied in [PTa, PTb], see Conjectures 4.18, 4.19 and Proposition 4.20. In the forthcoming paper [PTe], we prove Conjecture 4.19 for \(d=2\), which implies Conjecture 1.4 for \((d,w)=(2,1)\). More precisely, there is an equivalence \(D^{b}(S)\stackrel{{\sim}}{{\rightarrow}}\mathbb{T}_{S}(2v_{0})_{1} ^{\mathrm{red}}\) in this case.
### Acknowledgements
We thank Tasuki Kinjo, Davesh Maulik, Yalong Cao, Junliang Shen, Georg Oberdieck, and Jorgen Rennemo for discussions related to this work. T. P. is grateful to Columbia University in New York and to Max Planck Institute for Mathematics in Bonn for their hospitality and financial support during the writing of this paper. The project of this paper started when Y. T. was visiting Columbia University in April 2023. Y. T. thanks Columbia University for their hospitality. Y. T. is supported by World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and Grant-in-Aid for Scientific Research grant (No. 19H01779) from MEXT, Japan.
## 2. Preliminaries
In this section, we introduce notations and review definitions related to stacks, matrix factorizations, and window categories. We also include a table with the most important notation we use later in the paper.
### Notations for (derived) stacks
All the spaces \(\mathcal{X}\) considered are quasi-smooth (derived) stacks over \(\mathbb{C}\), see [Toda, Subsection 3.1] for references. The classical truncation of \(\mathcal{X}\) is denoted by \(\mathcal{X}^{\mathrm{cl}}\). We denote by \(\mathbb{L}_{\mathcal{X}}\) the cotangent complex of \(\mathcal{X}\).
Figure 1. Notation used in the paper

For \(G\) an algebraic group and \(X\) a dg-scheme with an action of \(G\), denote by \(X/G\) the corresponding quotient stack. When \(X\) is affine, we denote by \(X/\!\!/G\) the quotient dg-scheme with dg-ring of regular functions \(\mathcal{O}_{X}^{G}\). For a morphism \(f\colon X\to Y\) and for a closed point \(y\in Y\), we denote by \(\widehat{X}_{y}\) the following base change
\[\widehat{X}_{y}:=X\times_{Y}\operatorname{Spec}\left(\widehat{\mathcal{O}}_{Y,y }\right).\]
We call \(\widehat{X}_{y}\) the _formal fiber_, though it is a scheme over a complete local ring rather than a formal scheme. When \(X\) is a \(G\)-representation, \(f\colon X\to Y:=X/\!\!/G\), and \(y=0\), we omit the subscript \(y\) from the above notation.
We use the terminology of _good moduli spaces_ of Alper, see [1, Section 8] for examples of stacks with good moduli spaces.
### DG-categories
For \(\mathcal{X}\) a quasi-smooth stack, we denote by \(D^{b}(\mathcal{X})\) the bounded derived category of coherent sheaves and by \(\operatorname{Perf}(\mathcal{X})\) the subcategory of perfect complexes on \(\mathcal{X}\), see Subsection 2.6 for more details and for more categories of (quasi)coherent sheaves.
#### 2.2.1. Generation of dg-categories
Any dg-category considered is a \(\mathbb{C}\)-linear pre-triangulated dg-category, in particular its homotopy category is a triangulated category. For a pre-triangulated dg-category \(\mathcal{D}\) and a full subcategory \(\mathcal{C}\subset\mathcal{D}\), we say that \(\mathcal{C}\) _classically generates_ \(\mathcal{D}\) if \(\mathcal{D}\) coincides with the smallest thick pre-triangulated subcategory of \(\mathcal{D}\) which contains \(\mathcal{C}\). If \(\mathcal{D}\) is furthermore cocomplete, then we say that \(\mathcal{C}\) _generates_ \(\mathcal{D}\) if \(\mathcal{D}\) coincides with the smallest thick pre-triangulated subcategory of \(\mathcal{D}\) which contains \(\mathcal{C}\) and is closed under taking colimits.
We also recall some terminology related to strong generation. For a set of objects \(\mathcal{S}\subset\mathcal{D}\), we denote by \(\langle\mathcal{S}\rangle\) the smallest subcategory which contains \(\mathcal{S}\) and is closed under shifts, finite direct sums, and direct summands. If \(\mathcal{D}\) is cocomplete, we denote by \(\langle\!\langle\mathcal{S}\rangle\!\rangle\) the smallest subcategory which contains \(\mathcal{S}\) and is closed under shifts, arbitrary direct sums, and direct summands. For subcategories \(\mathcal{C}_{1},\mathcal{C}_{2}\subset\mathcal{D}\), we denote by \(\mathcal{C}_{1}\star\mathcal{C}_{2}\subset\mathcal{D}\) the smallest subcategory which contains all objects \(E\) which fit into distinguished triangles \(A_{1}\to E\to A_{2}\to A_{1}[1]\) with \(A_{i}\in\mathcal{C}_{i}\) for \(i\in\{1,2\}\), and is closed under shifts, finite direct sums, and direct summands. We say that \(\mathcal{D}\) is _strongly generated_ by \(C\in\mathcal{D}\) if \(\mathcal{D}=\langle C\rangle^{\star n}\) for some \(n\geqslant 1\). This is equivalent to \(\operatorname{Ind}\mathcal{D}=\langle\!\langle C\rangle\!\rangle^{\star n}\) for some \(n\geqslant 1\), see [11, Proposition 1.9]. A dg-category \(\mathcal{D}\) is called _regular_ if it has a strong generator. A dg-category \(\mathcal{D}\) is called _smooth_ if the diagonal dg-module of \(\mathcal{D}\) is perfect. It is proved in [12, Lemma 3.5, 3.6] that if \(\mathcal{D}\) is smooth, then \(\mathcal{D}\) is regular.
#### 2.2.2. Semiorthogonal decompositions
Let \(R\) be a set. Consider a set \(O\subset R\times R\) such that for any \(i,j\in R\) we have \((i,j)\in O\), or \((j,i)\in O\), or both \((i,j)\in O\) and \((j,i)\in O\). Let \(\mathbb{T}\) be a pre-triangulated dg-category. We will construct semiorthogonal decompositions
\[\mathbb{T}=\langle\mathbb{A}_{i}\mid i\in R\rangle \tag{2.1}\]
with summands pre-triangulated subcategories \(\mathbb{A}_{i}\) indexed by \(i\in R\) such that for any \(i,j\in R\) with \((i,j)\in O\) and for any objects \(\mathcal{A}_{i}\in\mathbb{A}_{i}\), \(\mathcal{A}_{j}\in\mathbb{A}_{j}\), we have \(\operatorname{Hom}_{\mathbb{T}}(\mathcal{A}_{i},\mathcal{A}_{j})=0\).
Let \(\pi\colon\mathcal{X}\to S\) be a morphism from a quasi-smooth stack to a scheme \(S\) and assume \(\mathbb{T}\) is a subcategory of \(D^{b}(\mathcal{X})\). We say the decomposition (2.1) is _\(S\)-linear_ if \(\mathbb{A}_{i}\otimes\pi^{*}\mathrm{Perf}(S)\subset\mathbb{A}_{i}\).
### Graded matrix factorizations
References for this subsection are [Todb, Section 2.2], [Toda, Section 2.2], [BFK19, Section 2.3], [PV11, Section 1]. Let \(G\) be an algebraic group and let \(Y\) be a smooth affine scheme with an action of \(G\). Let \(\mathcal{Y}=Y/G\) be the corresponding quotient stack and let \(f\) be a regular function
\[f\colon\mathcal{Y}\to\mathbb{C}.\]
Assume that there exists an extra action of \(\mathbb{C}^{*}\) on \(Y\) which commutes with the action of \(G\) on \(Y\) and is trivial on \(\mathbb{Z}/2\subset\mathbb{C}^{*}\), and such that \(f\) has weight two with respect to this \(\mathbb{C}^{*}\)-action.
Consider the category of graded matrix factorizations
\[\operatorname{MF}^{\operatorname{gr}}(\mathcal{Y},f).\]
Its objects are pairs \((P,d_{P})\) with \(P\) a \(G\times\mathbb{C}^{*}\)-equivariant coherent sheaf on \(Y\) and \(d_{P}\colon P\to P(1)\) a \(G\times\mathbb{C}^{*}\)-equivariant morphism satisfying \(d_{P}^{2}=f\). Here (1) is the twist by the character \(\operatorname{pr}_{2}\colon G\times\mathbb{C}^{*}\to\mathbb{C}^{*}\). Note that as the \(\mathbb{C}^{*}\)-action is trivial on \(\mathbb{Z}/2\), we have the induced action of \(\mathbb{C}^{*}=\mathbb{C}^{*}/(\mathbb{Z}/2)\) on \(Y\) and \(f\) is weight one with respect to the above \(\mathbb{C}^{*}\)-action. The objects of \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{Y},f)\) can be alternatively described as tuples
\[(E,F,\alpha\colon E\to F(1)^{\prime},\beta\colon F\to E), \tag{2.2}\]
where \(E\) and \(F\) are \(G\times\mathbb{C}^{*}\)-equivariant coherent sheaves on \(Y\), \((1)^{\prime}\) is the twist by the character \(G\times\mathbb{C}^{*}\to\mathbb{C}^{*}\), and \(\alpha\) and \(\beta\) are \(\mathbb{C}^{*}\)-equivariant morphisms such that \(\alpha\circ\beta\) and \(\beta\circ\alpha\) are multiplication by \(f\).
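As a minimal example (standard, and not taken from the references above), let \(G\) be trivial, let \(Y=\mathbb{A}^{2}=\operatorname{Spec}\mathbb{C}[x,v]\) with the \(\mathbb{C}^{*}\)-action of weight \(0\) on \(x\) and weight \(2\) on \(v\) (so the action is trivial on \(\mathbb{Z}/2\)), and let \(f=xv\), which has weight two. In the description (2.2), the tuple

\[\left(E=\mathcal{O}_{Y},\ F=\mathcal{O}_{Y},\ \alpha=v,\ \beta=x\right)\]

defines an object of \(\operatorname{MF}^{\operatorname{gr}}(Y,f)\), since \(\alpha\circ\beta\) and \(\beta\circ\alpha\) are both multiplication by \(xv=f\).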
For a pre-triangulated subcategory \(\mathbb{M}\) of \(D^{b}(\mathcal{Y})\), define \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{M},f)\) as the full subcategory of \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{Y},f)\) with objects totalizations of pairs \((P,d_{P})\) with \(P\in\mathbb{M}\) equipped with \(\mathbb{C}^{*}\)-equivariant structure, see [PTa, Subsection 2.6.2]. If \(\mathbb{M}\) is generated by a set of vector bundles \(\{\mathcal{V}_{i}\}_{i\in I}\) on \(\mathcal{Y}\), then \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{M},f)\) is generated by matrix factorizations whose factors are direct sums of vector bundles from \(\{\mathcal{V}_{i}\}_{i\in I}\), see [PTa, Lemma 2.3].
Functoriality of categories of graded matrix factorizations for pullback and proper pushforward is discussed in [PV11]. In Subsection 7.2, we will also consider the category \(D^{\operatorname{gr}}(Y)\) for a possibly singular affine variety \(Y\) with a \(\mathbb{C}^{*}\)-action as above. It consists of objects (2.2) with \(f=0\), so its definition is the same as \(\operatorname{MF}^{\operatorname{gr}}(Y,0)\), but when \(Y\) is singular an object (2.2) may not be isomorphic to one for which \(E,F\) are locally free of finite rank. See [EP15] for factorization categories over possibly singular varieties. Note that if the \(\mathbb{C}^{*}\)-action on \(Y\) is trivial, then \(D^{\operatorname{gr}}(Y)=D^{b}(Y)\).
### The Koszul equivalence
Let \(Y\) be a smooth affine scheme with an action of an algebraic group \(G\), let \(\mathcal{Y}=Y/G\), and let \(V\) be a \(G\)-equivariant vector bundle on \(Y\). We always assume that \(Y\) is either of finite type over \(\mathbb{C}\), or is a formal fiber of a map \(X\to X/\!\!/H\) for a finite type scheme \(X\) and an algebraic group \(H\) as in Subsection 2.1. Let \(\mathbb{C}^{*}\) act on the fibers of \(V\) with weight \(2\) and consider a section \(s\in\Gamma(Y,V)\). It induces a map \(\partial\colon V^{\vee}\to\mathcal{O}_{Y}\). Let \(s^{-1}(0)\) be the derived zero locus of \(s\) with dg-algebra of regular functions
\[\mathcal{O}_{s^{-1}(0)}:=\mathcal{O}_{Y}\left[V^{\vee}[1];\partial\right]. \tag{2.3}\]
Consider the quotient (quasi-smooth) stack
\[\mathcal{P}:=s^{-1}(0)/G. \tag{2.4}\]
We call \(\mathcal{P}\) the _Koszul stack_ associated with \((Y,V,s,G)\). There is a natural inclusion
\[j\colon\mathcal{P}\hookrightarrow\mathcal{Y}.\]
The section \(s\) also induces the regular function
\[f\colon\mathcal{V}^{\vee}:=\operatorname{Tot}_{Y}\left(V^{\vee}\right)/G\to \mathbb{C} \tag{2.5}\]
defined by \(f(y,v)=\langle s(y),v\rangle\) for \(y\in Y(\mathbb{C})\) and \(v\in V^{\vee}|_{y}\). Consider the category of graded matrix factorizations \(\operatorname{MF}^{\operatorname{gr}}\left(\mathcal{V}^{\vee},f\right)\) with respect to the \(\mathbb{C}^{*}\)-action mentioned above. The Koszul equivalence, also called dimensional reduction in the literature, says the following:
**Theorem 2.1**.: ([13, Hir17, Toda]) _There is an equivalence_
\[\Theta\colon D^{b}(\mathcal{P})\xrightarrow{\sim}\operatorname{MF}^{ \operatorname{gr}}(\mathcal{V}^{\vee},f) \tag{2.6}\]
_given by \(\Theta(-)=\mathcal{K}\otimes_{\mathcal{O}_{\mathcal{P}}}(-)\), where \(\mathcal{K}\) is the Koszul factorization, see [11, Theorem 2.3.3]._
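In the simplest example (included only for orientation), take \(Y=\mathbb{A}^{1}=\operatorname{Spec}\mathbb{C}[x]\), \(G\) trivial, \(V=\mathcal{O}_{Y}\) and \(s=x\). Then the derived zero locus \(s^{-1}(0)\) is quasi-isomorphic to the classical point \(\{0\}\), the stack \(\mathcal{V}^{\vee}\) is \(\mathbb{A}^{2}\) with coordinates \((x,v)\), where \(v\) has \(\mathbb{C}^{*}\)-weight \(2\), and \(f(x,v)=xv\). Theorem 2.1 gives an equivalence

\[D^{b}(\operatorname{Spec}\mathbb{C})\stackrel{\sim}{\to}\operatorname{MF}^{\operatorname{gr}}(\mathbb{A}^{2},xv),\]

a graded version of Knorrer periodicity.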
We will use the following lemma:
**Lemma 2.2**.: ([2, Lemma 2.6]) _Let \(\{V_{a}\}_{a\in A}\) be a set of \(G\)-representations and let \(\mathbb{S}\subset\operatorname{MF}^{\operatorname{gr}}(\mathcal{V}^{\vee},f)\) be the subcategory generated by matrix factorizations whose factors are direct sums of vector bundles \(\mathcal{O}_{\mathcal{V}^{\vee}}\otimes V_{a}\). Then an object \(\mathcal{E}\in D^{b}(\mathcal{P})\) satisfies \(\Theta(\mathcal{E})\in\mathbb{S}\) if and only if \(j_{*}\mathcal{E}\in D^{b}(\mathcal{Y})\) is generated by \(\mathcal{O}_{\mathcal{Y}}\otimes V_{a}\) for \(a\in A\)._
### Window categories
#### 2.5.1. Attracting stacks
Let \(Y\) be an affine variety with an action of a reductive group \(G\). Let \(\lambda\) be a cocharacter of \(G\). Let \(G^{\lambda}\) and \(G^{\lambda\geqslant 0}\) be the Levi and parabolic groups associated to \(\lambda\). Let \(Y^{\lambda}\subset Y\) be the closed subvariety of \(\lambda\)-fixed points. Consider the attracting variety
\[Y^{\lambda\geqslant 0}:=\{y\in Y|\,\lim_{t\to 0}\lambda(t)\cdot y\in Y^{ \lambda}\}\subset Y.\]
Consider the attracting and fixed stacks
\[\mathcal{Z}:=Y^{\lambda}/G^{\lambda}\stackrel{{ q}}{{ \leftarrow}}\mathcal{S}:=Y^{\lambda\geqslant 0}/G^{\lambda\geqslant 0} \xrightarrow{p}\mathcal{Y}. \tag{2.7}\]
The map \(p\) is proper. Kempf-Ness strata are connected components of certain attracting stacks \(\mathcal{S}\), and the map \(p\) restricted to a Kempf-Ness stratum is a closed immersion, see [12, Section 2.1]. The attracting stacks also appear in the definition of Hall algebras [10] (for \(Y\) an affine space), where the Hall product is induced by the functor
\[*:=p_{*}q^{*}\colon D^{b}(\mathcal{Z})\to D^{b}(\mathcal{Y}). \tag{2.8}\]
In this case, the map \(p\) may not be a closed immersion.
Let \(T\subset G\) be a maximal torus and let \(\lambda\) be a cocharacter \(\lambda\colon\mathbb{C}^{*}\to T\). For a \(G\)-representation \(Y\), the attracting variety \(Y^{\lambda\geqslant 0}\subset Y\) coincides with the sub \(T\)-representation generated by weights which pair non-negatively with \(\lambda\). We may abuse notation and denote by \(\langle\lambda,Y^{\lambda\geqslant 0}\rangle:=\langle\lambda,\det Y^{ \lambda\geqslant 0}\rangle\), where \(\det Y^{\lambda\geqslant 0}\) is the sum of \(T\)-weights of \(Y^{\lambda\geqslant 0}\).
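For example (a standard illustration of the definitions), let \(G=GL(d)\) act on \(Y=\mathfrak{gl}(d)\) by conjugation and let \(\lambda(t)=\operatorname{diag}(t^{a_{1}},\ldots,t^{a_{d}})\) with \(a_{1}\geqslant\cdots\geqslant a_{d}\). Since \(\lambda(t)\) scales the matrix entry in position \((i,j)\) by \(t^{a_{i}-a_{j}}\), the fixed locus \(Y^{\lambda}\) consists of the block diagonal matrices and the attracting variety \(Y^{\lambda\geqslant 0}\) of the block upper triangular matrices, with blocks determined by the equalities among the \(a_{i}\); the groups \(G^{\lambda}\) and \(G^{\lambda\geqslant 0}\) are the corresponding block Levi and parabolic subgroups.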
#### 2.5.2. The definition of window categories
Let \(Y\) be an affine variety with an action of a reductive group \(G\) and a linearization \(\ell\). Consider the stacks
\[j\colon\mathcal{Y}^{\ell\text{-}\operatorname{ss}}:=Y^{\ell\text{-} \operatorname{ss}}/G\hookrightarrow\mathcal{Y}:=Y/G.\]
We review the construction of window categories of \(D^{b}(\mathcal{Y})\) which are equivalent to \(D^{b}(\mathcal{Y}^{\ell\text{-}\operatorname{ss}})\) via the restriction map, due to Segal [10], Halpern-Leistner [12], and Ballard-Favero-Katzarkov [1]. We follow the presentation from [12].
After also fixing a Weyl-invariant norm on the cocharacter lattice, the unstable locus \(\mathcal{Y}\setminus\mathcal{Y}^{\ell\text{-ss}}\) admits a stratification into Kempf-Ness strata \(\mathcal{S}_{i}\) for \(i\in I\) a finite ordered set:
\[\mathcal{Y}\setminus\mathcal{Y}^{\ell\text{-ss}}=\bigsqcup_{i\in I}\mathcal{S}_ {i}.\]
A Kempf-Ness stratum \(\mathcal{S}_{i}\) is the attracting stack in \(\mathcal{Y}\setminus\bigsqcup_{j<i}\mathcal{S}_{j}\) for a cocharacter \(\lambda_{i}\), with the fixed stack \(\mathcal{Z}_{i}:=\mathcal{S}_{i}^{\lambda_{i}}\). Let \(N_{\mathcal{S}_{i}/\mathcal{Y}}\) be the normal bundle of \(\mathcal{S}_{i}\) in \(\mathcal{Y}\). Define the width of the window categories
\[\eta_{i}:=\left\langle\lambda_{i},\det\left(N_{\mathcal{S}_{i}/\mathcal{Y}}^{ \vee}|_{\mathcal{Z}_{i}}\right)\right\rangle.\]
For a choice of real numbers \(m_{\bullet}=(m_{i})_{i\in I}\in\mathbb{R}^{I}\), define the category
\[\mathbb{G}_{m_{\bullet}}^{\ell}:=\{\mathcal{F}\in D^{b}(\mathcal{Y})\text{ such that }\operatorname{wt}_{\lambda_{i}}(\mathcal{F}|_{\mathcal{Z}_{i}})\subset[m_{i},m_{i}+ \eta_{i})\text{ for all }i\in I\}. \tag{2.9}\]
In the above, \(\operatorname{wt}_{\lambda_{i}}(\mathcal{F}|_{\mathcal{Z}_{i}})\) is the set of \(\lambda_{i}\)-weights on \(\mathcal{F}|_{\mathcal{Z}_{i}}\). Then [15, Theorem 2.10] says that the restriction functor \(j^{*}\) induces an equivalence of categories:
\[j^{*}\colon\mathbb{G}_{m_{\bullet}}^{\ell}\xrightarrow{\sim}D^{b}\big{(} \mathcal{Y}^{\ell\text{-ss}}\big{)} \tag{2.10}\]
for any choice of real numbers \(m_{\bullet}=(m_{i})_{i\in I}\in\mathbb{R}^{I}\).
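To illustrate the statement in the simplest case (a standard toy example; we do not spell out the sign conventions for \(\lambda_{1}\) here), take \(Y=\mathbb{A}^{1}\) with \(G=\mathbb{C}^{*}\) acting by scaling and \(\ell\) a character of \(\mathbb{C}^{*}\) for which \(Y^{\ell\text{-ss}}=\mathbb{A}^{1}\setminus\{0\}\), so that \(\mathcal{Y}^{\ell\text{-ss}}\simeq\operatorname{Spec}\mathbb{C}\). There is a single Kempf-Ness stratum \(\mathcal{S}_{1}=\mathcal{Z}_{1}=\{0\}/\mathbb{C}^{*}\), with \(\eta_{1}=1\). For any \(m_{1}\in\mathbb{R}\), the interval \([m_{1},m_{1}+1)\) contains exactly one integer, so \(\mathbb{G}^{\ell}_{m_{\bullet}}\) consists of complexes whose derived restriction to \(\mathcal{Z}_{1}=B\mathbb{C}^{*}\) is concentrated in that single \(\lambda_{1}\)-weight; it is generated by the corresponding \(\mathbb{C}^{*}\)-equivariant twist of \(\mathcal{O}_{\mathcal{Y}}\), and the equivalence (2.10) identifies it with \(D^{b}(\operatorname{Spec}\mathbb{C})\).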
### Quasi-smooth derived stacks
#### 2.6.1. Derived categories of (quasi-)coherent sheaves
Let \(\mathfrak{M}\) be a derived stack over \(\mathbb{C}\) and let \(\mathcal{M}\) be its classical truncation. Let \(\mathbb{L}_{\mathfrak{M}}\) be the cotangent complex of \(\mathfrak{M}\). The stack \(\mathfrak{M}\) is called _quasi-smooth_ if for all closed points \(x\to\mathcal{M}\), the restriction \(\mathbb{L}_{\mathfrak{M}}|_{x}\) has cohomological amplitude in \([-1,1]\). By [1, Theorem 2.8], a stack \(\mathfrak{M}\) is quasi-smooth if and only if it is a \(1\)-stack and any point of \(\mathfrak{M}\) lies in the image of a \(0\)-representable smooth morphism
\[\alpha\colon\mathcal{U}\to\mathfrak{M} \tag{2.11}\]
for a Koszul scheme \(\mathcal{U}\) as in (2.3). Let \(D_{\mathrm{qc}}(\mathcal{U})\) be the derived category of dg-modules over \(\mathcal{O}_{\mathcal{U}}\) and let \(D^{b}(\mathcal{U})\subset D_{\mathrm{qc}}(\mathcal{U})\) be the subcategory of objects with bounded coherent cohomologies. Further, let \(\operatorname{Ind}D^{b}(\mathcal{U})\) be the ind-completion of \(D^{b}(\mathcal{U})\)[10]. For a quasi-smooth stack \(\mathfrak{M}\), the dg-categories \(D_{\mathrm{qc}}(\mathfrak{M})\), \(D^{b}(\mathfrak{M})\), and \(\operatorname{Ind}D^{b}(\mathfrak{M})\) are defined to be limits in the \(\infty\)-category of smooth morphisms (2.11), see [11], [10]:
\[D_{\mathrm{qc}}(\mathfrak{M})=\lim_{\mathcal{U}\to\mathfrak{M}}D_{\mathrm{qc} }(\mathcal{U}),\ D^{b}(\mathfrak{M})=\lim_{\mathcal{U}\to\mathfrak{M}}D^{b}( \mathcal{U}),\ \operatorname{Ind}D^{b}(\mathfrak{M})=\lim_{\mathcal{U}\to\mathfrak{M}} \operatorname{Ind}D^{b}(\mathcal{U}).\]
The category \(\operatorname{Ind}D^{b}(\mathfrak{M})\) is a module over \(D_{\mathrm{qc}}(\mathfrak{M})\) via the tensor product. For \(\mathcal{E}_{1},\mathcal{E}_{2}\in\operatorname{Ind}D^{b}(\mathfrak{M})\), there exists an internal Hom object, see [1, Remark 3.4.5]
\[\mathcal{H}om(\mathcal{E}_{1},\mathcal{E}_{2})\in D_{\mathrm{qc}}(\mathfrak{M}),\]
such that for any \(\mathcal{A}\in D_{\mathrm{qc}}(\mathfrak{M})\) we have
\[\operatorname{Hom}_{D_{\mathrm{qc}}(\mathfrak{M})}(\mathcal{A},\mathcal{H}om( \mathcal{E}_{1},\mathcal{E}_{2}))\cong\operatorname{Hom}_{\operatorname{ Ind}D^{b}(\mathfrak{M})}(\mathcal{A}\otimes\mathcal{E}_{1},\mathcal{E}_{2}).\]
If \(\mathfrak{M}\) is QCA (quasi-compact and with affine automorphism groups) [1, Definition 1.1.8], then \(\operatorname{Ind}D^{b}(\mathfrak{M})\) is compactly generated with compact objects \(D^{b}(\mathfrak{M})\), see [1, Theorem 3.3.5].
#### 2.6.2. Etale and formal local structures along good moduli spaces
Let \(\mathfrak{M}\) be a quasi-smooth stack over \(\mathbb{C}\) and let \(\mathcal{M}\) be its classical truncation. Suppose that \(\mathcal{M}\) admits a good moduli space map
\[\pi\colon\mathcal{M}\to M,\]
see [1] for the notion of a good moduli space. In particular, \(M\) is an algebraic space and \(\pi\) is a quasi-compact morphism. For each point in \(M\), there exist an etale neighborhood \(U\to M\) and Cartesian squares:
(2.12)
where each vertical arrow is etale and \(\mathfrak{M}_{U}\) is equivalent to a Koszul stack \(\mathcal{P}=s^{-1}(0)/G\) as in (2.4), see [11], [12, Theorem 4.2.3], [1]. Similarly, for each closed point \(y\in M\), there exist Cartesian squares, see [11, Subsection 3.1.4]:
(2.13)
#### 2.6.3. \((-1)\)-shifted cotangent stacks
Let \(\mathfrak{M}\) be a quasi-smooth stack. Let \(\mathbb{T}_{\mathfrak{M}}\) be the tangent complex of \(\mathfrak{M}\), which is the dual complex to the cotangent complex \(\mathbb{L}_{\mathfrak{M}}\). We denote by \(\Omega_{\mathfrak{M}}[-1]\) the \((-1)\)_-shifted cotangent stack_ of \(\mathfrak{M}\):
\[\Omega_{\mathfrak{M}}[-1]:=\operatorname{Spec}_{\mathfrak{M}}\left( \operatorname{Sym}(\mathbb{T}_{\mathfrak{M}}[1])\right).\]
Consider the projection map
\[p_{0}\colon\mathcal{N}:=\Omega_{\mathfrak{M}}[-1]^{\mathrm{cl}}\to\mathfrak{M}. \tag{2.14}\]
For a Koszul stack \(\mathcal{P}\) as in (2.4), recall the function \(f\) from (2.5) and consider the critical locus \(\operatorname{Crit}(f)\subset\mathcal{V}^{\vee}\). In this case, the map \(p_{0}\) is the natural projection
\[p_{0}\colon\operatorname{Crit}(f)=\Omega_{\mathcal{P}}[-1]^{\mathrm{cl}}\to \mathcal{P}. \tag{2.15}\]
For an object \(\mathcal{F}\in D^{b}(\mathfrak{M})\), Arinkin-Gaitsgory [1] defined the notion of singular support denoted by
\[\operatorname{Supp}^{\mathrm{sg}}(\mathcal{F})\subset\mathcal{N}.\]
The definition is compatible with maps \(\alpha\) as in (2.11), see [1, Section 7]. Consider the group \(\mathbb{C}^{*}\) scaling the fibers of the map \(p_{0}\). A closed substack \(\mathcal{Z}\) of \(\mathcal{N}\) is called _conical_ if it is closed under the action of \(\mathbb{C}^{*}\). The singular support \(\operatorname{Supp}^{\mathrm{sg}}(\mathcal{F})\) of \(\mathcal{F}\) is a conical closed substack of \(\mathcal{N}\). For a given conical closed substack \(\mathcal{Z}\subset\mathcal{N}\), we denote by \(\mathcal{C}_{\mathcal{Z}}\subset D^{b}(\mathfrak{M})\) the subcategory of objects whose singular supports are contained in \(\mathcal{Z}\).
Consider a Koszul stack \(\mathcal{P}\) as in (2.4) and recall the Koszul equivalence \(\Theta\) from (2.6). Under \(\Theta\), the singular support of \(\mathcal{F}\in D^{b}(\mathcal{P})\) corresponds to the support \(\mathcal{Z}\) of the matrix factorization \(\Theta(\mathcal{F})\), namely the minimal closed substack \(\mathcal{Z}\subset\operatorname{Crit}(f)\) such that \(\Theta(\mathcal{F})|_{\mathcal{V}^{\vee}\setminus\mathcal{Z}}=0\) in \(\operatorname{MF}^{\mathrm{gr}}(\mathcal{V}^{\vee}\setminus\mathcal{Z},f)\), see [11, Subsection 2.3.9].
### The window theorem for quasi-smooth stacks
We review the theory of window categories for singular support quotients of quasi-smooth stacks [Toda, Chapter 5], which itself is inspired by Halpern-Leistner's theory of window categories for \(0\)-shifted symplectic derived stacks [HLa]. We continue with the notation from the previous subsection.
Let \(\mathfrak{M}\) be a quasi-smooth stack and assume throughout this subsection that its classical truncation \(\mathcal{M}\) admits a good moduli space \(\mathcal{M}\to M.\) Let \(\ell\) be a line bundle on \(\mathcal{M}\) and let \(b\in H^{4}(\mathcal{M},\mathbb{Q})\) be a positive definite class, see [HLb, Definition 3.7.6]. We also use the same symbols \((\ell,b)\) for \(p_{0}^{*}\ell\in\operatorname{Pic}(\mathcal{N})\) and \(p_{0}^{*}b\in H^{4}(\mathcal{N},\mathbb{Q})\). Then there is a \(\Theta\)-stratification with respect to \((\ell,b)\):
\[\mathcal{N}=\mathcal{S}_{1}\sqcup\cdots\sqcup\mathcal{S}_{N}\sqcup\mathcal{N }^{\ell\text{-ss}} \tag{2.16}\]
with centers \(\mathcal{Z}_{i}\subset\mathcal{S}_{i}\), see [HLb, Theorem 5.2.3, Proposition 5.3.3]. In the above situation, an analogue of the window theorem is proved in [Todc, Theorem 1.1], [Toda, Theorem 5.3.13] (which generalizes [HLa, Theorem 3.3.1] in the case that \(\mathfrak{M}\) is \(0\)-shifted symplectic):
**Theorem 2.3**.: ([Toda, Todc]) _In addition to the above, suppose that \(\mathcal{M}\to M\) satisfies the formal neighborhood theorem, see below. Then for each \(m_{\bullet}=(m_{i})_{i=1}^{N}\in\mathbb{R}^{N}\), there is a subcategory \(\mathbb{W}(\mathfrak{M})_{m_{\bullet}}^{\ell}\subset D^{b}(\mathfrak{M})\) such that the composition_
\[\mathbb{W}(\mathfrak{M})_{m_{\bullet}}^{\ell}\subset D^{b}(\mathfrak{M}) \twoheadrightarrow D^{b}(\mathfrak{M})/\mathcal{C}_{\mathcal{Z}}\]
_is an equivalence, where \(\mathcal{Z}:=\mathcal{N}\setminus\mathcal{N}^{\ell\text{-ss}}\)._
**Remark 2.4**.: When \(\mathcal{N}\) is a (global) quotient stack \(\mathcal{N}=Y/G\) for a reductive algebraic group \(G\), a \(\Theta\)-stratification (2.16) is the same as a Kempf-Ness stratification [HLb, Example 0.0.5]. The class \(b\) is then constructed as the pull-back of the class in \(H^{4}(BG,\mathbb{Q})\) corresponding to the chosen positive definite form [HLb, Example 5.3.4].
**Remark 2.5**.: Suppose that \(\mathbb{L}_{\mathfrak{M}}\) is self-dual, e.g. \(\mathfrak{M}\) is \(0\)-shifted symplectic. In this case, we have \(\mathcal{N}^{\ell\text{-ss}}=\Omega_{\mathfrak{M}^{\ell\text{-ss}}}[-1]^{ \mathrm{cl}}\), which easily follows from [HLa, Lemma 4.3.22]. Then we have the equivalence, see [Toda, Lemma 3.2.9]:
\[D^{b}(\mathfrak{M})/\mathcal{C}_{\mathcal{Z}}\stackrel{{\sim}}{{ \rightarrow}}D^{b}(\mathfrak{M}^{\ell\text{-ss}}).\]
We now explain the meaning of "the formal neighborhood theorem" in the statement of Theorem 2.3, see [Toda, Definition 5.2.3]. For a closed point \(y\in M\), denote also by \(y\in\mathcal{M}\) the unique closed point in the fiber of \(\mathcal{M}\to M\) at \(y\). Set \(G_{y}:=\operatorname{Aut}(y)\), which is a reductive algebraic group. Let \(\widehat{\mathcal{M}}_{y}\) be the formal fiber of \(\mathcal{M}\to M\) at \(y\). Let \(\widehat{\mathcal{H}}^{0}(\mathbb{T}_{\mathcal{M}}|_{y})\) be the formal fiber at the origin of \(\mathcal{H}^{0}(\mathbb{T}_{\mathcal{M}}|_{y})\rightarrow\mathcal{H}^{0}( \mathbb{T}_{\mathcal{M}}|_{y})\mathbin{/\!\!\!/}\,G_{y}\), and define \(\widehat{\mathcal{H}}^{0}(\mathbb{T}_{\mathfrak{M}}|_{y})\) similarly, see also the convention from Subsection 2.1. Then the formal neighborhood theorem says that there is a \(G_{y}\)-equivariant morphism
\[\kappa_{y}\colon\widehat{\mathcal{H}}^{0}(\mathbb{T}_{\mathcal{M}}|_{y}) \rightarrow\mathcal{H}^{1}(\mathbb{T}_{\mathcal{M}}|_{y})\]
such that, by setting \(\mathcal{U}_{y}\) to be the classical zero locus of \(\kappa_{y}\), there is an isomorphism \(\widehat{\mathcal{M}}_{y}\cong\mathcal{U}_{y}/G_{y}\). Let \(\mathfrak{U}_{y}\) be the derived zero locus of \(\kappa_{y}\). Then, by replacing \(\kappa_{y}\) if necessary, \(\widehat{\mathfrak{M}}_{y}\) is equivalent to \(\mathfrak{U}_{y}/G_{y}\), see [Toda, Lemma 5.2.5].
Below we give a formal local description of \(\mathbb{W}(\mathfrak{M})^{\ell}_{m_{\bullet}}\). Consider the pair of a smooth stack and a regular function \((\mathscr{X}_{y},f_{y})\):
\[\mathscr{X}_{y}:=\left(\widehat{\mathcal{H}}^{0}(\mathbb{T}_{\mathfrak{M}}|_{y })\times\mathcal{H}^{1}(\mathbb{T}_{\mathfrak{M}}|_{y})^{\vee}\right)/G_{y} \stackrel{{ f_{y}}}{{\to}}\mathbb{C},\]
where \(f_{y}(u,v)=\left\langle\kappa_{y}(u),v\right\rangle\). From (2.15), the critical locus of \(f_{y}\) is isomorphic to the classical truncation of the \((-1)\)-shifted cotangent stack over \(\widehat{\mathfrak{M}}_{y}\), so it is isomorphic to the formal fiber \(\widehat{\mathcal{N}}_{y}\) of \(\mathcal{N}\to\mathcal{M}\to M\) at \(y\). The pull-back of the \(\Theta\)-stratification (2.16) to \(\widehat{\mathcal{N}}_{y}\) gives a Kempf-Ness stratification
\[\widehat{\mathcal{N}}_{y}=\widehat{\mathscr{S}}_{1,y}\sqcup\cdots\sqcup \widehat{\mathscr{S}}_{N,y}\sqcup\widehat{\mathcal{N}}_{y}^{\ell\text{-ss}}\]
with centers \(\widehat{\mathscr{Z}}_{i,y}\subset\widehat{\mathscr{S}}_{i,y}\) and one parameter subgroups \(\lambda_{i}\colon\mathbb{C}^{*}\to G_{y}\). By the Koszul equivalence, see Theorem 2.1, there is an equivalence:
\[\Theta_{y}\colon D^{b}(\widehat{\mathfrak{M}}_{y})\stackrel{{ \sim}}{{\to}}\operatorname{MF}^{\operatorname{gr}}(\mathscr{X}_{y},f_{y}).\]
Then the subcategory \(\mathbb{W}(\mathfrak{M})^{\ell}_{m_{\bullet}}\) in Theorem 2.3 is characterized as follows: an object \(\mathcal{E}\in D^{b}(\mathfrak{M})\) is an object of \(\mathbb{W}(\mathfrak{M})^{\ell}_{m_{\bullet}}\) if and only if, for any closed point \(y\in M\), we have
\[\Theta_{y}(\mathcal{E}|_{\widehat{\mathfrak{M}}_{y}})\in\operatorname{MF}^{ \operatorname{gr}}(\mathbb{G}^{\ell}_{m^{\prime}_{\bullet}},f_{y}),\ m^{\prime}_{i}=m_{i}- \left\langle\lambda_{i},\det\left(\mathcal{H}^{1}(\mathbb{T}_{\mathfrak{M}}|_ {y})^{\lambda_{i}>0}\right)\right\rangle. \tag{2.17}\]
The category \(\mathbb{G}^{\ell}_{m^{\prime}_{\bullet}}\) is the window category (2.9) for the weights \((m^{\prime}_{i})_{i=1}^{N}\) and the line bundle \(\ell\). The difference between \(m_{i}\) and \(m^{\prime}_{i}\) is due to the discrepancy of categorical Hall products on \(\mathfrak{M}_{y}\) and \(\mathscr{X}_{y}\), see [10, Proposition 3.1].
### Intrinsic window subcategory
We continue to consider a quasi-smooth derived stack \(\mathfrak{M}\) whose classical truncation \(\mathcal{M}\) admits a good moduli space \(\mathcal{M}\to M\). We say that \(\mathfrak{M}\) is _symmetric_ if for any closed point \(y\in\mathfrak{M}\), the \(G_{y}:=\operatorname{Aut}(y)\)-representation
\[\mathcal{H}^{0}(\mathbb{T}_{\mathfrak{M}}|_{y})\oplus\mathcal{H}^{1}( \mathbb{T}_{\mathfrak{M}}|_{y})^{\vee}\]
is a self-dual \(G_{y}\)-representation. In this subsection, we assume that \(\mathfrak{M}\) is symmetric. Let \(\delta\in\operatorname{Pic}(\mathfrak{M})_{\mathbb{R}}\). We now define a different kind of window category, called the _intrinsic window subcategory_ \(\mathbb{W}(\mathfrak{M})^{\operatorname{int}}_{\delta}\subset D^{b}( \mathfrak{M})\), see [21, Definition 5.2.12, 5.3.12]. These categories are the quasi-smooth version of the "magic window categories" from [11, 12].
First, assume that \(\mathfrak{M}\) is a Koszul stack associated with \((Y,V,s,G)\) as in (2.4)
\[\mathfrak{M}=s^{-1}(0)/G. \tag{2.18}\]
Consider the quotient stack \(\mathscr{Y}=Y/G\), the closed immersion \(j\colon\mathfrak{M}\hookrightarrow\mathscr{Y}\), and let \(\mathscr{V}\to\mathscr{Y}\) be the total space of \(V/G\to Y/G\). In this case, we define \(\mathbb{W}(\mathfrak{M})^{\operatorname{int}}_{\delta}\subset D^{b}( \mathfrak{M})\) to consist of those \(\mathcal{E}\in D^{b}(\mathfrak{M})\) such that for any map \(\nu\colon B\mathbb{C}^{*}\to\mathfrak{M}\) we have
\[\operatorname{wt}(\nu^{*}j^{*}j_{*}\mathcal{E})\subset\left[\frac{1}{2} \operatorname{wt}\left(\det\nu^{*}(\mathbb{L}_{\mathscr{V}}|_{y})^{\nu<0} \right),\frac{1}{2}\operatorname{wt}\left(\det\nu^{*}(\mathbb{L}_{\mathscr{V}}|_ {y})^{\nu>0}\right)\right]+\operatorname{wt}(\nu^{*}\delta).\]
The above subcategory \(\mathbb{W}(\mathfrak{M})^{\operatorname{int}}_{\delta}\) is intrinsic to \(\mathfrak{M}\), that is, independent of the choice of a presentation of \(\mathfrak{M}\) as in (2.18) for \((Y,V,s,G)\), see [21, Lemma 5.3.14].
In general, the intrinsic window subcategory is defined as follows (which generalizes the magic window category in [12, Definition 4.3.5] considered when \(\mathbb{L}_{\mathfrak{M}}\) is self-dual):
**Definition 2.6**.: ([Toda, Definition 5.3.12]) We define the subcategory
\[\mathbb{W}(\mathfrak{M})^{\mathrm{int}}_{\delta}\subset D^{b}(\mathfrak{M})\]
to consist of objects \(\mathcal{E}\) such that, for any etale morphism \(\iota_{U}\colon U\to M\) such that \(\mathfrak{M}_{U}\) is of the form \(s^{-1}(0)/G\) as in (2.18) and \(\iota_{U}\) induces an etale morphism \(\iota_{U}\colon\mathfrak{M}_{U}\to\mathfrak{M}\), we have \(\iota_{U}^{*}\mathcal{E}\in\mathbb{W}(\mathfrak{M}_{U})^{\mathrm{int}}_{\iota _{U}^{*}\delta}\subset D^{b}(\mathfrak{M}_{U})\).
## 3. Quasi-BPS categories for doubled quivers
In this section, we review the results in [PTd] about quasi-BPS categories of doubled quivers, focusing on the example of doubled quivers of \(g\)-loop quivers for \(g\geqslant 1\). These results are the local analogues of Theorems 1.1 and 1.2. We also discuss similar results for formal fibers along good moduli space morphisms.
### Moduli stacks of representations of quivers
#### 3.1.1. Moduli stacks
Let \(Q=(I,E)\) be a quiver. For a dimension vector \(\boldsymbol{d}=(d^{(a)})_{a\in I}\in\mathbb{N}^{I}\subset\mathbb{Z}^{I}\), we denote by
\[\mathfrak{X}(\boldsymbol{d})=R_{Q}(\boldsymbol{d})/G(\boldsymbol{d}) \tag{3.1}\]
the moduli stack of \(Q\)-representations of dimension \(\boldsymbol{d}\). Here, the affine space \(R_{Q}(\boldsymbol{d})\) and the reductive group \(G(\boldsymbol{d})\) are defined by
\[R_{Q}(\boldsymbol{d})=\bigoplus_{(a\to b)\in E}\mathrm{Hom}(V^{(a)},V^{(b)}), \ G(\boldsymbol{d})=\prod_{a\in I}GL(V^{(a)}),\]
where \(V^{(a)}\) is a \(\mathbb{C}\)-vector space of dimension \(d^{(a)}\). We denote by \(\mathfrak{g}(\boldsymbol{d})\) the Lie algebra of \(G(\boldsymbol{d})\).
#### 3.1.2. Doubled quivers
Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver. Let \(E^{\circ*}\) be the set of edges \(e^{*}=(b\to a)\) for each \(e=(a\to b)\) in \(E^{\circ}\). Consider the _doubled quiver_ of \(Q^{\circ}\):
\[Q^{\circ,d}=(I,E^{\circ,d}),\ E^{\circ,d}=E^{\circ}\sqcup E^{\circ*}.\]
Let \(\mathcal{I}\) be the quadratic relation \(\sum_{e\in E^{\circ}}[e,e^{*}]\in\mathbb{C}[Q^{\circ,d}]\). For a dimension vector \(\boldsymbol{d}=(d^{(a)})_{a\in I}\), the relation \(\mathcal{I}\) induces a moment map:
\[\mu\colon R_{Q^{\circ,d}}(\boldsymbol{d})=T^{*}R_{Q^{\circ}}(\boldsymbol{d}) \to\mathfrak{g}(\boldsymbol{d}). \tag{3.2}\]
The derived zero locus
\[\mathcal{P}(\boldsymbol{d}):=\mu^{-1}(0)/G(\boldsymbol{d})\xrightarrow{j} \mathcal{Y}(\boldsymbol{d}):=R_{Q^{\circ,d}}(\boldsymbol{d})/G(\boldsymbol{ d}) \tag{3.3}\]
is the derived moduli stack of \((Q^{\circ,d},\mathcal{I})\)-representations of dimension vector \(\boldsymbol{d}\). Note that a \((Q^{\circ,d},\mathcal{I})\)-representation is the same as a representation of the preprojective algebra \(\pi_{Q^{\circ}}:=\mathbb{C}[Q^{\circ,d}]/(\mathcal{I})\) of \(Q^{\circ}\), and we will use these two names interchangeably.
#### 3.1.3. Tripled quivers
Consider a quiver \(Q^{\circ}=(I,E^{\circ})\). For \(a\in I\), let \(\omega_{a}\) be a loop at \(a\). _The tripled quiver_ of \(Q^{\circ}\) is:
\[Q=(I,E),\ E=E^{\circ,d}\sqcup\{\omega_{a}\}_{a\in I}.\]
The tripled potential \(W\) of \(Q\) is:
\[W=\left(\sum_{a\in I}\omega_{a}\right)\left(\sum_{e\in E^{\circ}}[e,e^{*}] \right).\]
Consider the stack (3.1) of representations of dimension \(\boldsymbol{d}\) for the tripled quiver \(Q\):

\[\mathscr{X}(\boldsymbol{d}):=R_{Q}(\boldsymbol{d})/G(\boldsymbol{d})=\left(R_{Q^{ \circ,d}}(\boldsymbol{d})\oplus\mathfrak{g}(\boldsymbol{d})\right)/G(\boldsymbol{d}).\]
The potential \(W\) induces the regular function:
\[\operatorname{Tr}W\colon\mathscr{X}(\boldsymbol{d})=R_{Q}(\boldsymbol{d})/G( \boldsymbol{d})\to\mathbb{C}. \tag{3.4}\]
We have the Koszul duality equivalence, see Theorem 2.1:
\[\Theta\colon D^{b}(\mathcal{P}(\boldsymbol{d}))\stackrel{\sim}{\to}\operatorname{MF}^{\operatorname{gr}}(\mathscr{X}(\boldsymbol{d}),\operatorname{Tr}W). \tag{3.5}\]
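For example, when \(Q^{\circ}=Q_{1}\) is the quiver with one vertex and one loop, the tripled quiver is the quiver with one vertex and three loops \(x\), \(y=x^{*}\), \(\omega\), with potential \(W=\omega[x,y]\), and (3.4) is the function

\[\operatorname{Tr}W(x,y,\omega)=\operatorname{Tr}\left(\omega[x,y]\right)\ \text{on}\ \mathscr{X}(d)=\mathfrak{gl}(d)^{\oplus 3}/GL(d),\]

the quiver with potential commonly used to describe coherent sheaves on \(\mathbb{C}^{3}\) (compare the quasi-BPS categories of \(\mathbb{C}^{3}\) studied in [PTa, PTb]).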
### The weight lattice
Let \(Q=(I,E)\) be a quiver. For a dimension vector \(\boldsymbol{d}\in\mathbb{N}^{I}\), let \(T(\boldsymbol{d})\subset G(\boldsymbol{d})\) be the maximal torus and let \(M(\boldsymbol{d})\) be the character lattice for \(T(\boldsymbol{d})\):
\[M(\boldsymbol{d})=\bigoplus_{a\in I}\bigoplus_{1\leqslant i\leqslant d^{(a)} }\mathbb{Z}\beta_{i}^{(a)}.\]
Here \(\beta_{1}^{(a)},\ldots,\beta_{d^{(a)}}^{(a)}\) are the weights of the standard representation of \(GL(V^{(a)})\) for \(a\in I\). In the case that \(I\) consists of one element, we omit the superscript \((a)\). We denote by \(\rho\in M(\boldsymbol{d})_{\mathbb{Q}}\) half of the sum of the positive roots of \(\mathfrak{g}(\boldsymbol{d})\). Let \(W\) be the Weyl group of \(G(\boldsymbol{d})\) and let \(M(\boldsymbol{d})_{\mathbb{R}}^{W}\subset M(\boldsymbol{d})_{\mathbb{R}}\) be the Weyl invariant subspace. There is a decomposition:
\[M(\boldsymbol{d})_{\mathbb{R}}^{W}=\bigoplus_{a\in I}\mathbb{R}\sigma^{(a)},\]
where \(\sigma^{(a)}:=\sum_{i=1}^{d^{(a)}}\beta_{i}^{(a)}\). There is a natural pairing:
\[\langle-,-\rangle\colon M(\boldsymbol{d})_{\mathbb{R}}^{W}\times\mathbb{R}^{ I}\to\mathbb{R},\ \langle\sigma^{(a)},e^{(b)}\rangle=\delta^{ab}.\]
We denote by \(\iota\colon M(\boldsymbol{d})_{\mathbb{R}}\to\mathbb{R}\) the linear map sending \(\beta_{i}^{(a)}\) to \(1\), and its kernel by \(M(\boldsymbol{d})_{0,\mathbb{R}}\). An element \(\ell\in M(\boldsymbol{d})_{0,\mathbb{R}}^{W}\) is written as
\[\ell=\sum_{a\in I}\ell^{(a)}\sigma^{(a)},\ \langle\ell,\boldsymbol{d} \rangle=\sum_{a}\ell^{(a)}d^{(a)}=0\]
i.e. \(\ell\) is an \(\mathbb{R}\)-character of \(G(\boldsymbol{d})\) which is trivial on the diagonal torus \(\mathbb{C}^{*}\subset G(\boldsymbol{d})\). Denote by \(\underline{\boldsymbol{d}}:=\sum_{a\in I}d^{(a)}\) the total dimension. Define the following Weyl invariant weight:
\[\tau_{\boldsymbol{d}}:=\frac{1}{\underline{\boldsymbol{d}}}\cdot\sum_{a\in I,1\leqslant i\leqslant d^{(a)}}\beta_{i}^{(a)}.\]
Define the polytope:

\[\mathbf{W}(\boldsymbol{d}):=\frac{1}{2}\sum[0,\beta]\subset M(\boldsymbol{d})_{\mathbb{R}}, \tag{3.6}\]

where the Minkowski sum is over all \(T(\boldsymbol{d})\)-weights \(\beta\) of \(R(\boldsymbol{d})\).
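For instance (a direct computation from the definition), for the tripled quiver \(Q\) of the \(g\)-loop quiver with one vertex, considered later in this section, \(R(d)=\mathfrak{gl}(d)^{\oplus(2g+1)}\) has \(T(d)\)-weights \(\beta_{i}-\beta_{j}\) for \(1\leqslant i,j\leqslant d\), each occurring \(2g+1\) times; the zero weights contribute trivially, so

\[\mathbf{W}(d)=\frac{2g+1}{2}\sum_{i\neq j}[0,\beta_{i}-\beta_{j}]\subset M(d)_{\mathbb{R}}.\]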
**Definition 3.1**.: A weight \(\ell\in M(\boldsymbol{d})_{0,\mathbb{R}}^{W}\) is _generic_ if the following conditions hold:
* if \(H\subset M(\boldsymbol{d})_{0,\mathbb{R}}\) is a hyperplane parallel to a face in \(\mathbf{W}(\boldsymbol{d})\) which contains \(\ell\), then \(M(\boldsymbol{d})_{0,\mathbb{R}}^{W}\subset H\),
* for any decomposition \(\boldsymbol{d}=\boldsymbol{d}_{1}+\boldsymbol{d}_{2}\) such that \(\boldsymbol{d}_{1},\boldsymbol{d}_{2}\in\mathbb{N}^{I}\) are not proportional to \(\boldsymbol{d}\), we have that \(\langle\ell,\boldsymbol{d}_{i}\rangle\neq 0\) for \(i\in\{1,2\}\).
Note that the set of generic weights is a dense open subset in \(M(\boldsymbol{d})_{0,\mathbb{R}}^{W}\).
### Quasi-BPS categories for stacks of representations of preprojective algebras
Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver and let \(\mathcal{P}(d)\) be the derived moduli stack of representations of its preprojective algebra (3.3). For \(\delta\in M(\boldsymbol{d})_{\mathbb{R}}^{W}\), define the quasi-BPS category to be the intrinsic window subcategory in Definition 2.6:
\[\mathbb{T}(\boldsymbol{d})_{\delta}:=\mathbb{W}(\mathcal{P}(\boldsymbol{d}))_ {\delta}^{\mathrm{int}}\subset D^{b}(\mathcal{P}(\boldsymbol{d})),\ \mathbb{T}(\boldsymbol{d})_{w}:=\mathbb{T}(\boldsymbol{d})_{w\tau_{ \boldsymbol{d}}}. \tag{3.7}\]
An alternative description is as follows, where we recall the map \(j\colon\mathcal{P}(\boldsymbol{d})\hookrightarrow\mathcal{Y}(\boldsymbol{d})\) and choose a dominant chamber \(M(\boldsymbol{d})^{+}\subset M(\boldsymbol{d})\), for example the one in [PTd, Subsection 2.2.2]:
**Lemma 3.2**.: ([PTd, Corollary 3.20]) _The subcategory (3.7) consists of objects \(\mathcal{E}\in D^{b}(\mathcal{P}(\boldsymbol{d}))\) such that \(j_{*}\mathcal{E}\) is classically generated by the vector bundles \(\mathcal{O}_{\mathcal{Y}(\boldsymbol{d})}\otimes\Gamma_{G(\boldsymbol{d})}(\chi)\) for dominant weights \(\chi\) such that_
\[\chi+\rho-\delta\in\mathbf{W}(\boldsymbol{d}). \tag{3.8}\]
_Here, \(\Gamma_{G(\boldsymbol{d})}(\chi)\) is the irreducible representation of \(G(\boldsymbol{d})\) with highest weight \(\chi\), and \(\mathbf{W}(\boldsymbol{d})\) is the polytope (3.6) for the tripled quiver \(Q\) of \(Q^{\circ}\)._
For \(\ell\in M(\boldsymbol{d})_{0,\mathbb{R}}^{W}\), let \(\mathcal{P}(\boldsymbol{d})^{\ell\text{-ss}}\subset\mathcal{P}(\boldsymbol{d})\) be the open substack of \(\ell\)-semistable representations. The quasi-BPS category for the \(\ell\)-semistable locus is defined to be
\[\mathbb{T}^{\ell}(\boldsymbol{d})_{\delta}:=\mathbb{W}(\mathcal{P}(\boldsymbol {d})^{\ell\text{-ss}})_{\delta}^{\mathrm{int}}\subset D^{b}(\mathcal{P}( \boldsymbol{d})^{\ell\text{-ss}}),\ \mathbb{T}^{\ell}(\boldsymbol{d})_{w}:=\mathbb{T}^{\ell}( \boldsymbol{d})_{w\tau_{\boldsymbol{d}}}.\]
Consider the restriction functor
\[\text{res}\colon D^{b}(\mathcal{P}(\boldsymbol{d}))\twoheadrightarrow D^{b}( \mathcal{P}(\boldsymbol{d})^{\ell\text{-ss}}). \tag{3.9}\]
We recall a wall-crossing equivalence proved in [PTd]:
**Theorem 3.3**.: ([PTd, Corollary 3.19, Remark 3.12]) _For generic \(\ell_{+},\ell_{-}\in M(\boldsymbol{d})_{0,\mathbb{R}}^{W}\), let \(\delta^{\prime}=\varepsilon_{+}\cdot\ell_{+}+\varepsilon_{-}\cdot\ell_{-}\) for general \(0<\varepsilon_{\pm}\ll 1\). Let \(\delta\in M(\boldsymbol{d})_{\mathbb{R}}^{W}\) and let \(\delta^{\prime\prime}=\delta+\delta^{\prime}\). Then the restriction functor (3.9) induces equivalences:_
\[\mathbb{T}(\boldsymbol{d})_{\delta^{\prime\prime}}\stackrel{{ \sim}}{{\to}}\mathbb{T}^{\ell_{\pm}}(\boldsymbol{d})_{\delta^{\prime \prime}}.\]
_In particular, there is an equivalence \(\mathbb{T}^{\ell_{+}}(\boldsymbol{d})_{\delta^{\prime\prime}}\simeq\mathbb{T} ^{\ell_{-}}(\boldsymbol{d})_{\delta^{\prime\prime}}\)._
### Semiorthogonal decompositions for preprojective algebras of quivers with one vertex
In the remainder of this section, we focus on the case of the \(g\)-loop quiver \(Q^{\circ}=Q_{g}\) with loops \(x_{1},\ldots,x_{g}\). In this case, we write the dimension vector as \(\boldsymbol{d}=d\in\mathbb{N}\). The doubled quiver is \(Q^{\circ,d}=Q_{2g}\) with loops \(x_{1},\ldots,x_{g},y_{1},\ldots,y_{g}\), and the relation \(\mathcal{I}\) is given by \(\sum_{i=1}^{g}[x_{i},y_{i}]\in\mathbb{C}[Q_{2g}]\). The map (3.2) in this case is
\[\mu\colon\mathfrak{gl}(d)^{\oplus 2g}\to\mathfrak{gl}(d),\ (x_{1},\ldots,x_{g},y_{1}, \ldots,y_{g})\mapsto\sum_{i=1}^{g}[x_{i},y_{i}].\]
Then the derived stack in (3.3) is
\[\mathcal{P}(d)=\mu^{-1}(0)/GL(d)\hookrightarrow\mathcal{Y}(d)=\mathfrak{gl}(d )^{\oplus 2g}/GL(d).\]
For a partition \(d=d_{1}+\cdots+d_{k}\), let \(\mathcal{P}(d_{1},\ldots,d_{k})\) be the derived moduli stack of filtrations
\[0=R_{0}\subset R_{1}\subset\cdots\subset R_{k}\]
of \((Q^{\circ,d},\mathcal{I})\)-representations such that \(R_{i}/R_{i-1}\) has dimension \(d_{i}\). Explicitly, let \(\lambda\colon\mathbb{C}^{*}\to T(d)\) be an antidominant cocharacter corresponding to the decomposition \(d=d_{1}+\cdots+d_{k}\) and set
\[\mu^{\geqslant 0}\colon\left(\mathfrak{gl}(d)^{\oplus 2g}\right)^{\lambda\geqslant 0 }\to\mathfrak{gl}(d)^{\lambda\geqslant 0}\]
to be the restriction of \(\mu\). Then
\[\mathcal{P}(d_{1},\dots,d_{k})=\left(\mu^{\geqslant 0}\right)^{-1}(0)/GL(d)^{ \lambda\geqslant 0}.\]
Consider the evaluation morphisms
\[\times_{i=1}^{k}\mathcal{P}(d_{i})\overset{q}{\leftarrow}\mathcal{P}(d_{1}, \dots,d_{k})\overset{p}{\rightarrow}\mathcal{P}(d).\]
The map \(q\) is quasi-smooth and the map \(p\) is proper. Consider the categorical Hall product for the preprojective algebra of \(Q^{\circ}\)[13, 14]:
\[p_{*}q^{*}\colon\boxtimes_{i=1}^{k}D^{b}(\mathcal{P}(d_{i}))\to D^{b}( \mathcal{P}(d)). \tag{3.10}\]
We recall a result from [21].
**Theorem 3.4**.: ([21, Theorem 4.20, Example 4.21]) _There is a semiorthogonal decomposition_
\[D^{b}(\mathcal{P}(d))=\left\langle\boxtimes_{i=1}^{k}\mathbb{T}(d_{i})_{w_{i} +(g-1)d_{i}(\sum_{i>j}d_{j}-\sum_{i<j}d_{j})}\right\rangle \tag{3.11}\]
_where the right hand side is over all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and weights \((w_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) such that_
\[\frac{w_{1}}{d_{1}}<\cdots<\frac{w_{k}}{d_{k}}. \tag{3.12}\]
_The fully-faithful functor_
\[\boxtimes_{i=1}^{k}\mathbb{T}(d_{i})_{w_{i}+(g-1)d_{i}(\sum_{i>j}d_{j}-\sum_{ i<j}d_{j})}\to D^{b}(\mathcal{P}(d))\]
_is given by the categorical Hall product (3.10). The order is as in [21, Subsection 3.4], [21, Subsection 4.6]._
**Remark 3.5**.: The semiorthogonal decomposition (3.11) is obtained from that of the \((2g+1)\)-loop quiver by applying the Koszul equivalence. The order of the summands in (3.11) is not immediate to state, and is the same as the one for the \((2g+1)\)-loop quiver explained in [21, Subsection 4.6], see also [21, Subsection 3.4] for the case \(g=1\).
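For example, for \(d=2\), unravelling the indexing in (3.11) gives

\[D^{b}(\mathcal{P}(2))=\left\langle\mathbb{T}(1)_{w_{1}-(g-1)}\boxtimes\mathbb{T}(1)_{w_{2}+(g-1)},\ \mathbb{T}(2)_{w}\right\rangle,\]

where the summands of the first kind are indexed by integers \(w_{1}<w_{2}\) and are embedded via the categorical Hall product (3.10), and the summands of the second kind are indexed by \(w\in\mathbb{Z}\).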
### Semiorthogonal decompositions on formal fibers
We have the following diagram:
(3.13)
Here, the vertical arrows are good moduli space morphisms and horizontal arrows are closed immersions. Consider a closed point \(p\in P(d)\) which corresponds to a semisimple \((Q^{\circ,d},\mathcal{I})\)-representation
\[R_{p}=\bigoplus_{i=1}^{m}W^{(i)}\otimes R^{(i)}, \tag{3.14}\]
where \(R^{(i)}\) is a simple \((Q^{\circ,d},\mathcal{I})\)-representation of dimension \(r^{(i)}\) and \(W^{(i)}\) is a finite dimensional \(\mathbb{C}\)-vector space. We denote by \(\widehat{\mathcal{Y}}(d)_{p}\) the formal fiber of the right vertical arrow in (3.13) at \(p\). By the etale slice theorem, we have
\[\widehat{\mathcal{Y}}(d)_{p}=\widehat{\operatorname{Ext}}_{Q^{\circ,d}}(R_{p},R_{p})/G_{p},\]
where \(G_{p}=\operatorname{Aut}(R_{p})=\prod_{i=1}^{m}GL(W^{(i)})\), and see Subsection 2.1 for the notation. We denote by
\[j_{p}\colon\widehat{\mathcal{P}}(d)_{p}\hookrightarrow\widehat{\mathcal{Y}}(d )_{p} \tag{3.15}\]
the natural inclusion of the derived zero locus of \(\mu\) restricted to \(\widehat{\mathcal{Y}}(d)_{p}\).
**Remark 3.6**.: Let \(\kappa\) be the morphism
\[\kappa\colon\operatorname{Ext}^{1}_{(Q^{\circ,d},\mathcal{I})}(R_{p},R_{p}) \to\operatorname{Ext}^{2}_{(Q^{\circ,d},\mathcal{I})}(R_{p},R_{p})\]
given by \(x\mapsto[x,x]\). By the formality of polystable objects in CY2 categories, see [Davc, Corollary 4.9], the derived stack \(\widehat{\mathcal{P}}(d)_{p}\) is equivalent to the formal fiber of \(\kappa^{-1}(0)/G_{p}\) at \(0\in\kappa^{-1}(0)^{\operatorname{cl}}/\!\!/G_{p}\).
We define
\[\mathbb{T}_{p}(d)_{w}:=\mathbb{W}(\widehat{\mathcal{P}}(d)_{p})_{w\tau_{d}}^{ \operatorname{int}}\subset D^{b}(\widehat{\mathcal{P}}(d)_{p}). \tag{3.16}\]
There is a description of \(\mathbb{T}_{p}(d)_{w}\) similar to Lemma 3.2, see Subsection 5.4. Consider a partition \(d=d_{1}+\cdots+d_{k}\). We have the commutative diagram:
(3.17)
where \(\times_{i=1}^{k}\pi_{P,d_{i}}\) and \(\pi_{P,d}\) are good moduli space maps. The base change of the categorical Hall product gives the functor
\[\bigoplus_{p_{1}+\cdots+p_{k}=p}\boxtimes_{i=1}^{k}D^{b}(\widehat{\mathcal{P}}(d_{i})_{p_{i}})\to D^{b}(\widehat{\mathcal{P}}(d)_{p}), \tag{3.18}\]
where the sum on the left hand side is over the fiber over \(p\) of the bottom horizontal arrow \(\oplus\) in (3.17), which is a finite map. See also [Todc, (6.12), Lemma 6.4] for the existence of the base change diagram of (3.17) extended to derived stacks.
The following proposition is a formal fiber version of Theorem 3.4. The proof is technical and will be postponed to Subsection 5.4.
**Proposition 3.7**.: _There is a semiorthogonal decomposition_
\[D^{b}(\widehat{\mathcal{P}}(d)_{p})=\left\langle\bigoplus_{p_{1}+\cdots+p_{k}=p}\boxtimes_{i=1}^{k}\mathbb{T}_{p_{i}}(d_{i})_{w_{i}+(g-1)d_{i}(\sum_{i>j}d_{j}-\sum_{i<j}d_{j})}\right\rangle.\]
_The right hand side is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\), all points \((p_{1},\ldots,p_{k})\) in the fiber over \(p\) of the addition map \(\oplus\colon\times_{i=1}^{k}P(d_{i})\to P(d)\), and all weights \((w_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) such that_
\[\frac{w_{1}}{d_{1}}<\cdots<\frac{w_{k}}{d_{k}}.\]
_The order of the semiorthogonal decomposition is the same as the order of (3.11). The fully-faithful functor_
\[\bigoplus_{p_{1}+\dots+p_{k}=p}\boxtimes_{i=1}^{k}\mathbb{T}_{p_{i}}(d_{i})_{w_{i }+(g-1)d_{i}(\sum_{i>j}d_{j}-\sum_{i<j}d_{j})}\to D^{b}(\widehat{\mathcal{P}}(d )_{p})\]
_is given by the base change of the categorical Hall product (3.18)._
During the proof of Proposition 3.7, we will also obtain the following:
**Corollary 3.8**.: _The map \(\iota_{p}\colon\widehat{\mathcal{P}}(d)_{p}\to\mathcal{P}(d)\) induces the functor_
\[\iota_{p}^{*}\colon\mathbb{T}(d)_{w}\to\mathbb{T}_{p}(d)_{w}\]
_and its image classically generates \(\mathbb{T}_{p}(d)_{w}\)._
### Reduced quasi-BPS categories
We continue the discussion from the previous subsection. Let \(\mathfrak{gl}(d)_{0}\subset\mathfrak{gl}(d)\) be the traceless Lie subalgebra, and let \(\mu_{0}\) be the map
\[\mu_{0}\colon\mathfrak{gl}(d)^{\oplus 2g}\to\mathfrak{gl}(d)_{0},\ (x_{1},\dots,x_{g},y_{1},\dots,y_{g})\mapsto\sum_{i=1}^{g}[x_{i},y_{i}]. \tag{3.19}\]
Define the reduced stack:
\[\mathcal{P}(d)^{\mathrm{red}}:=\mu_{0}^{-1}(0)/GL(d).\]
We define the reduced quasi-BPS category to be
\[\mathbb{T}(d)^{\mathrm{red}}_{w}:=\mathbb{W}(\mathcal{P}(d)^{\mathrm{red}})^{ \mathrm{int}}_{w\tau_{d}}\subset D^{b}(\mathcal{P}(d)^{\mathrm{red}}). \tag{3.20}\]
There is a description similar to Lemma 3.2 using the embedding \(\mathcal{P}(d)^{\mathrm{red}}\hookrightarrow\mathcal{Y}(d)\). Denote by \(\mathfrak{gl}(d)_{\mathrm{nil}}\subset\mathfrak{gl}(d)_{0}\) the subset of nilpotent elements. The categorical support lemma in [PTd] is the following:
**Lemma 3.9**.: ([PTd, Corollary 5.5]) _For coprime \((d,w)\in\mathbb{N}\times\mathbb{Z}\), any object \(\mathcal{E}\in\mathbb{T}(d)^{\mathrm{red}}_{w}\) satisfies:_
\[\mathrm{Supp}^{\mathrm{sg}}(\mathcal{E})\subset(\mathfrak{gl}(d)^{\oplus 2g} \oplus\mathfrak{gl}(d)_{\mathrm{nil}})/GL(d).\]
For \(g\geqslant 2\), the derived stack \(\mathcal{P}(d)^{\mathrm{red}}\) is classical by [10, Proposition 3.6]; in particular, there is a good moduli space morphism
\[\pi_{P}\colon\mathcal{P}(d)^{\mathrm{red}}\to P(d)=\mu^{-1}(0)^{\mathrm{red}}/\!\!/GL(d).\]
It follows that the Hom-space between any two objects in \(D^{b}(\mathcal{P}(d)^{\mathrm{red}})\) is a module over \(\mathcal{O}_{P(d)}\). The categorical support lemma is the main ingredient in the proof of the following:
**Proposition 3.10**.: ([PTd, Proposition 5.9]) _For coprime \((d,w)\in\mathbb{N}\times\mathbb{Z}\) and objects \(\mathcal{E}_{i}\in\mathbb{T}(d)^{\mathrm{red}}_{w}\) for \(i=1,2\), the \(\mathcal{O}_{P(d)}\)-module_
\[\bigoplus_{i\in\mathbb{Z}}\mathrm{Hom}^{i}_{\mathcal{P}(d)^{\mathrm{red}}}( \mathcal{E}_{1},\mathcal{E}_{2})\]
_is finitely generated. In particular, we have \(\mathrm{Hom}^{i}_{\mathcal{P}(d)^{\mathrm{red}}}(\mathcal{E}_{1},\mathcal{E}_ {2})=0\) for \(|i|\gg 0\)._
### Relative Serre functor on reduced quasi-BPS categories
We continue the discussion from the previous subsection. We have that \(\mathbb{T}:=\mathbb{T}(d)_{w}^{\mathrm{red}}\) is a subcategory of \(D^{b}(\mathcal{P}(d)^{\mathrm{red}})\), which is a module over \(\operatorname{Perf}(\mathcal{P}(d)^{\mathrm{red}})\). Thus there is an associated internal homomorphism, see Subsection 2.6:
\[\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2})\in D_{\mathrm{qc}}(\mathcal{P}(d)^{\mathrm{red}})\]
for \(\mathcal{E}_{1},\mathcal{E}_{2}\in\mathbb{T}\). Proposition 3.10 implies that \(\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2})\) is an object of \(D^{b}(P(d))\).
**Theorem 3.11**.: ([PTd, Theorem 5.10]) _For coprime \((d,w)\in\mathbb{N}\times\mathbb{Z}\) and \(\mathcal{E}_{1},\mathcal{E}_{2}\in\mathbb{T}\), there is an isomorphism:_
\[\mathrm{Hom}_{P(d)}(\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1}, \mathcal{E}_{2}),\mathcal{O}_{P(d)})\cong\mathrm{Hom}_{\mathbb{T}}(\mathcal{ E}_{2},\mathcal{E}_{1}). \tag{3.21}\]
For \(\mathcal{E}_{1}=\mathcal{E}_{2}=\mathcal{E}\), the identity \(\mathrm{id}\colon\mathcal{E}\to\mathcal{E}\) corresponds, under (3.21), to the morphism
\[\mathrm{tr}_{\mathcal{E}}\colon\pi_{*}\mathcal{H}om(\mathcal{E},\mathcal{E}) \to\mathcal{O}_{P(d)}.\]
From the construction in [PTd], the above morphism coincides with the trace map determined by \((GL(d),\mathfrak{gl}(d)^{\oplus 2g},\mathfrak{gl}(d)_{0},\mu_{0})\), see Subsection 7.2 for the construction of the trace map, especially (7.7).
## 4. Quasi-BPS categories for K3 surfaces
In this section, we introduce (non-reduced and reduced) quasi-BPS categories for K3 surfaces. In Theorem 4.8, we prove the wall-crossing equivalence for quasi-BPS categories. We state a categorical version of the \(\chi\)-independence phenomenon, see Conjecture 4.13, which we prove for \(g=0\) and for \(g=1\) and \((d,w)=(2,1)\).
### Generalities on K3 surfaces
Let \(S\) be a smooth projective K3 surface, i.e. \(K_{S}\) is trivial and \(H^{1}(\mathcal{O}_{S})=0\). Let \(K(S)\) be the Grothendieck group of \(S\). Denote by \(\chi(-,-)\) the Euler pairing
\[\chi(E,F)=\sum_{j}(-1)^{j}\mathrm{ext}^{j}(E,F).\]
Let \(N(S)\) be the numerical Grothendieck group:
\[N(S):=K(S)/\equiv,\]
where \(E_{1}\equiv E_{2}\) in \(K(S)\) if \(\chi(E_{1},F)=\chi(E_{2},F)\) for any \(F\in K(S)\). There is an isomorphism by taking the Mukai vector:
\[v(-)=\mathrm{ch}(-)\sqrt{\mathrm{td}}_{S}\colon N(S)\stackrel{{ \cong}}{{\to}}\mathbb{Z}\oplus\mathrm{NS}(S)\oplus\mathbb{Z}. \tag{4.1}\]
Write a vector \(v\in N(S)\) as \(v=(r,\beta,\chi)\in\mathbb{Z}\oplus\mathrm{NS}(S)\oplus\mathbb{Z}\) via the above isomorphism. There is a symmetric bilinear pairing on \(N(S)\) defined by \(\langle E_{1},E_{2}\rangle=-\chi(E_{1},E_{2})\). Under the isomorphism (4.1), we have
\[\langle E_{1},E_{2}\rangle=\beta_{1}\beta_{2}-r_{1}\chi_{2}-r_{2}\chi_{1},\]
where \(v(E_{i})=(r_{i},\beta_{i},\chi_{i})\).
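As a direct specialization of the displayed formula (recorded here for later convenience), taking \(E_{1}=E_{2}=E\) with \(v(E)=(r,\beta,\chi)\) gives
\[\langle E,E\rangle=\beta^{2}-2r\chi,\]
which is the expression used whenever we impose \(\langle v_{0},v_{0}\rangle=2g-2\) on a Mukai vector \(v_{0}\) below.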
We say \(v\in N(S)\) is _primitive_ if it cannot be written as \(v=dv_{0}\) for an integer \(d\geqslant 2\) and \(v_{0}\in N(S)\). Let \(v\in N(S)\) and \(w\in\mathbb{Z}\). Write \(v=dv_{0}\) for \(d\in\mathbb{Z}\) and \(v_{0}\) primitive. We define \(\gcd(v,w):=\gcd(d,w)\). Below we identify \(N(S)\) with \(\mathbb{Z}\oplus\mathrm{NS}(S)\oplus\mathbb{Z}\) via the isomorphism (4.1), and write an element \(v\in N(S)\) as \(v=(r,\beta,\chi)\).
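For example, if \(v=2v_{0}\) with \(v_{0}\) primitive, then \(\gcd(v,w)=\gcd(2,w)\), which equals \(1\) exactly when \(w\) is odd; this is the coprimality assumption appearing, for instance, in the case \((d,w)=(2,1)\) treated in Remark 4.21.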
### Bridgeland stability conditions on K3 surfaces
For a K3 surface \(S\), we denote by
\[\operatorname{Stab}(S)\]
the (main connected component of the) space of Bridgeland stability conditions [1, 1] on \(D^{b}(S)\). A point \(\sigma\in\operatorname{Stab}(S)\) consists of a pair
\[\sigma=(Z,\mathcal{A}),\ Z\colon N(S)\to\mathbb{C},\ \mathcal{A}\subset D^{b}(S),\]
where \(Z\) is a group homomorphism (called _central charge_) and \(\mathcal{A}\) is the heart of a bounded t-structure satisfying some axioms, see [1]. One of the axioms is the following positivity property
\[Z(E)\in\{z\in\mathbb{C}:\operatorname{Im}(z)>0\ \text{or}\ z\in\mathbb{R}_{<0}\}\]
for any \(0\neq E\in\mathcal{A}\). An object \(E\in\mathcal{A}\) is called _\(Z\)-(semi)stable_ if for any subobject \(0\neq F\subsetneq E\) we have \(\arg Z(F)<(\leqslant)\arg Z(E)\) in \((0,\pi]\). An object \(E\in D^{b}(S)\) is called _\(\sigma\)-(semi)stable_ if \(E[a]\in\mathcal{A}\) is \(Z\)-(semi)stable for some \(a\in\mathbb{Z}\).
For each \(B+iH\in\operatorname{NS}(S)_{\mathbb{C}}\) such that \(H\) is ample with \(H^{2}>2\), there is an associated stability condition
\[\sigma_{B,H}=(Z_{B,H},\mathcal{A}_{B,H})\in\operatorname{Stab}(S),\]
where \(\mathcal{A}_{B,H}\subset D^{b}(S)\) is the heart of a bounded t-structure obtained by a tilting of \(\operatorname{Coh}(S)\) and \(Z_{B,H}\) is given by
\[Z_{B,H}(E)=-\int_{S}e^{-B-iH}v(E)\in\mathbb{C}.\]
We refer to [1, Section 6] for the construction of the above stability conditions. A stability condition \(\sigma_{B,mH}\) for \(m\gg 0\) is said to be in a _neighborhood of the large volume limit_. Recall the following proposition about semistable objects at the large volume limit:
**Proposition 4.1**.: ([1, Proposition 14.2], [1, Proposition 6.4, Lemma 6.5]) _If \(v=(r,\beta,\chi)\) is such that either \(r\geqslant 0\) and \(H\cdot\beta>0\), or \(r=H\cdot\beta=0\) and \(\chi>0\), then an object \(E\in D^{b}(S)\) of Mukai vector \(v\) is \(\sigma_{0,mH}\)-semistable for \(m\gg 0\) if and only if \(E[2a]\) is an \(H\)-Gieseker semistable sheaf for some \(a\in\mathbb{Z}\)._
### Moduli stacks of semistable objects on K3 surfaces
For each \(\sigma\in\operatorname{Stab}(S)\) and \(v\in N(S)\), we denote by
\[\mathfrak{M}_{S}^{\sigma}(v)\]
the derived moduli stack of \(\sigma\)-semistable objects \(E\in\mathcal{A}\cup\mathcal{A}[1]\) with numerical class \(v\). We denote by \(\mathbb{F}\) the universal object
\[\mathbb{F}\in D^{b}(S\times\mathfrak{M}_{S}^{\sigma}(v)).\]
We also consider the reduced version of the stack \(\mathfrak{M}_{S}^{\sigma}(v)\). Let \(v=(r,\beta,\chi)\). Let \(\mathcal{P}ic^{\beta}(S)\) be the derived moduli stack of line bundles on \(S\) with first Chern class \(\beta\). Then \(\mathcal{P}ic^{\beta}(S)=\operatorname{Spec}\mathbb{C}[\varepsilon]/\mathbb{ C}^{*}\), where \(\varepsilon\) is of degree \(-1\). We consider the determinant morphism
\[\det\colon\mathfrak{M}_{S}^{\sigma}(v)\to\mathcal{P}ic^{\beta}(S)= \operatorname{Spec}\mathbb{C}[\varepsilon]/\mathbb{C}^{*}.\]
Define the reduced stack:
\[\mathfrak{M}_{S}^{\sigma}(v)^{\operatorname{red}}:=\mathfrak{M}_{S}^{\sigma} (v)\times_{\mathcal{P}ic^{\beta}(S)}B\mathbb{C}^{*}. \tag{4.2}\]
The obstruction space of the reduced stack \(\mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}}\) at \(F\) is the kernel of the trace map:
\[\operatorname{Ext}^{2}_{S}(F,F)_{0}:=\operatorname{Ker}\left(\operatorname{Ext}^ {2}_{S}(F,F)\stackrel{{\mathrm{tr}}}{{\twoheadrightarrow}}H^{2}( \mathcal{O}_{S})=\mathbb{C}\right).\]
Note that \(\mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}}\) may still not be a classical stack. There are decompositions:
\[D^{b}(\mathfrak{M}^{\sigma}_{S}(v))=\bigoplus_{w\in\mathbb{Z}}D^{b}( \mathfrak{M}^{\sigma}_{S}(v))_{w},\ D^{b}(\mathfrak{M}^{\sigma}_{S}(v)^{ \mathrm{red}})=\bigoplus_{w\in\mathbb{Z}}D^{b}(\mathfrak{M}^{\sigma}_{S}(v)^{ \mathrm{red}})_{w},\]
where each summand contains complexes \(F\) of weight \(w\) with respect to the scaling automorphisms \(\mathbb{C}^{*}\subset\operatorname{Aut}(F)\), see [Toda, Subsection 3.2.4].
We denote by \(\mathcal{M}^{\sigma}_{S}(v)\) the classical truncation of \(\mathfrak{M}^{\sigma}_{S}(v)\). It admits a good moduli space (cf. [1], [1]):
\[\pi\colon\mathcal{M}^{\sigma}_{S}(v)\to M^{\sigma}_{S}(v), \tag{4.3}\]
where \(M^{\sigma}_{S}(v)\) is a proper algebraic space. A closed point \(y\in M^{\sigma}_{S}(v)\) corresponds to a \(\sigma\)-polystable object
\[F=\bigoplus_{i=1}^{m}V^{(i)}\otimes F^{(i)}, \tag{4.4}\]
where \(F^{(1)},\dots,F^{(m)}\) are mutually non-isomorphic \(\sigma\)-stable objects such that \(\arg Z(F^{(i)})=\arg Z(F)\), and \(V^{(i)}\) is a finite dimensional vector space with dimension \(d^{(i)}\) for \(1\leqslant i\leqslant m\).
Let \(G_{y}:=\operatorname{Aut}(F)=\prod_{i=1}^{m}GL(V^{(i)})\) and let \(\widehat{\operatorname{Ext}}^{1}_{S}(F,F)\) be the formal fiber at the origin of the morphism
\[\operatorname{Ext}^{1}_{S}(F,F)\to\operatorname{Ext}^{1}_{S}(F,F)/\!\!/G_{y}.\]
By the formality of the dg-algebra \(\operatorname{RHom}(F,F)\), see [Davc, Corollary 4.9], there are equivalences
\[\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\simeq\widehat{\kappa}^{-1}(0)/G_{ y},\ \widehat{\mathfrak{M}}^{\sigma}_{S}(v)^{\mathrm{red}}_{y}\simeq\widehat{ \kappa}^{-1}_{0}(0)/G_{y}, \tag{4.5}\]
where \(\kappa\), \(\kappa_{0}\) are the maps
\[\kappa\colon\operatorname{Ext}^{1}_{S}(F,F)\to\operatorname{Ext}^{2}_{S}(F,F ),\ \kappa_{0}\colon\operatorname{Ext}^{1}_{S}(F,F)\to\operatorname{Ext}^{2}_{S}(F, F)_{0}\]
given by \(x\mapsto[x,x]\), and \(\widehat{\kappa}\), \(\widehat{\kappa}_{0}\) are their restrictions to \(\widehat{\operatorname{Ext}}^{1}_{S}(F,F)\).
**Remark 4.2**.: The stack \(\kappa^{-1}(0)/G_{y}\) is described in terms of the \(\operatorname{Ext}\)-quiver of \(F\) as follows. Let \(Q^{\circ,d}_{y}\) be the quiver with vertex set \(\{1,\dots,m\}\) whose number of edges from \(i\) to \(j\) is \(\dim\operatorname{Ext}^{1}_{S}(F^{(i)},F^{(j)})\) for all \(1\leqslant i,j\leqslant m\). By Serre duality, \(Q^{\circ,d}_{y}\) is symmetric. Moreover, the number of loops at each vertex is even, so \(Q^{\circ,d}_{y}\) is the doubled quiver of some quiver \(Q^{\circ}_{y}\). The derived stack \(\kappa^{-1}(0)/G_{y}\) is identified with the derived moduli stack of representations of the preprojective algebra of \(Q^{\circ}_{y}\) (alternatively, of \(Q^{\circ,d}_{y}\)-representations with quadratic relation \(\mathcal{I}_{y}\)) as in Subsection 3.3, with dimension vector \((d^{(i)})_{i=1}^{m}\), where \(d^{(i)}=\dim V^{(i)}\).
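As a sample computation with this quiver (included for orientation; it anticipates the setting of Lemma 4.3 below), suppose \(m=1\) and \(F=V^{(1)}\otimes F^{(1)}\) for a single \(\sigma\)-stable object \(F^{(1)}\). Stability and Serre duality give \(\hom(F^{(1)},F^{(1)})=\operatorname{ext}^{2}(F^{(1)},F^{(1)})=1\), hence
\[\operatorname{ext}^{1}_{S}(F^{(1)},F^{(1)})=2-\chi(F^{(1)},F^{(1)})=2+\langle[F^{(1)}],[F^{(1)}]\rangle.\]
If \([F^{(1)}]=v_{0}\) with \(\langle v_{0},v_{0}\rangle=2g-2\), this equals \(2g\), so \(Q^{\circ,d}_{y}\) is the quiver with one vertex and \(2g\) loops, i.e. the quiver \(Q_{2g}\) used below.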
There is a wall-chamber structure on \(\operatorname{Stab}(S)\) such that \(\mathcal{M}^{\sigma}_{S}(v)\) is constant if \(\sigma\) lies in a chamber, but may change when \(\sigma\) crosses a wall. Locally, a wall is defined by the equation
\[\frac{Z(v_{1})}{Z(v_{2})}\in\mathbb{R}_{>0},\ v=v_{1}+v_{2}\]
such that \(v_{1}\) and \(v_{2}\) are not proportional, see [1, Proposition 9.3].
A stability condition \(\sigma\in\operatorname{Stab}(S)\) is _generic_ if \(\sigma\) is not on a wall. If \(\sigma\) is generic, then for a polystable object (4.4) the numerical class of each \(F^{(i)}\) is proportional to \(v\). Let \(v=dv_{0}\) for a primitive \(v_{0}\). Then we have
\[[F^{(i)}]=r^{(i)}v_{0},\ d^{(1)}r^{(1)}+\cdots+d^{(m)}r^{(m)}=d.\]
The good moduli space \(M^{\sigma}_{S}(v)\) has a stratification indexed by data \((d^{(i)},r^{(i)})_{i=1}^{m}\), and the deepest stratum corresponds to \(m=1\), \(d^{(1)}=\dim V^{(1)}=d\) and \(r^{(1)}=1\).
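For instance (an unwinding of the indexing, added for illustration), when \(d=2\) the possible types are
\[(d^{(1)},r^{(1)})=(1,2),\qquad\big((d^{(1)},r^{(1)}),(d^{(2)},r^{(2)})\big)=\big((1,1),(1,1)\big),\qquad(d^{(1)},r^{(1)})=(2,1),\]
corresponding respectively to a \(\sigma\)-stable object of class \(2v_{0}\), to \(F^{(1)}\oplus F^{(2)}\) for two non-isomorphic \(\sigma\)-stable objects of class \(v_{0}\), and to \(V^{(1)}\otimes F^{(1)}\) with \(\dim V^{(1)}=2\) and \(F^{(1)}\) a \(\sigma\)-stable object of class \(v_{0}\); the last type is the deepest stratum.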
Let \(v=dv_{0}\) for a primitive \(v_{0}\) with \(\langle v_{0},v_{0}\rangle=2g-2\), and \(Q^{\circ,d}=Q_{2g}\) be the quiver with one vertex and \(2g\)-loops with relation \(\mathcal{I}\) as in Subsection 3.3. Recall the stacks:
\[\mathcal{P}(d)=\mu^{-1}(0)/GL(d),\ \mathcal{P}(d)^{\operatorname{red}}=\mu_{0}^ {-1}(0)/GL(d)\]
where \(\mu\colon\mathfrak{gl}(d)^{\oplus 2g}\to\mathfrak{gl}(d)\) and \(\mu_{0}\colon\mathfrak{gl}(d)^{\oplus 2g}\to\mathfrak{gl}(d)_{0}\) are moment maps.
**Lemma 4.3**.: _For any closed point \(y\in M^{\sigma}_{S}(v)\), there is a point \(p\in P(d)\) which is sufficiently close to \(0\in P(d)\) such that we have equivalences_
\[\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\simeq\widehat{\mathcal{P}}(d)_{p},\ \widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}^{\operatorname{red}}\simeq\widehat{ \mathcal{P}}(d)_{p}^{\operatorname{red}}. \tag{4.6}\]
_If \(y\) lies in the deepest stratum, we can take \(p=0\)._
Proof.: Let \(y\) correspond to a direct sum (4.4) such that \([F^{(i)}]=r^{(i)}v_{0}\), and let \(R^{(i)}\) be a simple \((Q^{\circ,d},\mathcal{I})\)-representation of dimension \(r^{(i)}\). Such \(R^{(i)}\) exists by a straightforward dimension count argument, for example see the proof of [12, Lemma 5.7 (i)]. Let
\[R=\bigoplus_{i=1}^{m}V^{(i)}\otimes R^{(i)}\]
be a semisimple \((Q^{\circ,d},\mathcal{I})\)-representation and let \(p\in P(d)\) be the corresponding point. Note that
\[\chi(R^{(i)},R^{(j)})=\chi(F^{(i)},F^{(j)})=r^{(i)}r^{(j)}(2-2g).\]
By the CY2 property of \((Q^{\circ,d},\mathcal{I})\)-representations (cf. [10], [11]), and the fact that \(\hom(R^{(i)},R^{(j)})=\hom(F^{(i)},F^{(j)})=\delta_{ij}\), we have an isomorphism
\[\operatorname{Ext}^{*}(R^{(i)},R^{(j)})\cong\operatorname{Ext}^{*}(F^{(i)},F^ {(j)}).\]
Then by the formality of polystable objects in CY2 categories [11, Corollary 4.9], there is an isomorphism of dg-algebras \(\operatorname{RHom}(R,R)\cong\operatorname{RHom}(F,F)\). Therefore we have equivalences (4.6), see Remark 3.6.
There is an action of \(\mathbb{C}^{*}\) on the moduli of \(Q^{\circ,d}\)-representations which scales the linear maps corresponding to each edge of \(Q^{\circ,d}\), which induces an action on \(P(d)\). The above \(\mathbb{C}^{*}\)-action preserves the type of the semi-simplification, and any point \(p\in P(d)\) satisfies \(\lim_{t\to 0}(t\cdot p)=0\). Therefore we can take \(p\) to be sufficiently close to \(0\). By the above construction, we can take \(p=0\) if \(y\) lies in the deepest stratum.
Combining Lemma 4.3 with [10], we have the following:
**Lemma 4.4**.: _Suppose that \(g\geqslant 2\). Then for a generic \(\sigma\), the derived stack \(\mathfrak{M}^{\sigma}_{S}(v)^{\operatorname{red}}\) is classical, i.e. the natural morphism \(\mathcal{M}^{\sigma}_{S}(v)\to\mathfrak{M}^{\sigma}_{S}(v)^{\operatorname{red}}\) is an equivalence._
Proof.: For \(g\geqslant 2\), the derived stack \(\mathcal{P}(d)^{\operatorname{red}}\) is classical by [10, Proposition 3.6]. Therefore the conclusion holds by Lemma 4.3.
### Quasi-BPS categories for K3 surfaces
Let \(v\in N(S)\) and \(w\in\mathbb{Z}\). Take \(a\in K(S)_{\mathbb{R}}\) such that \(\chi(a\otimes v)=w\in\mathbb{Z}\). We define the \(\mathbb{R}\)-line bundle \(\delta\) on \(\mathfrak{M}^{\sigma}_{S}(v)\) to be
\[\delta=\det p_{\mathfrak{M}_{*}}(a\boxtimes\mathbb{F}), \tag{4.7}\]
where \(p_{\mathfrak{M}}\colon S\times\mathfrak{M}^{\sigma}_{S}(v)\to\mathfrak{M}^{ \sigma}_{S}(v)\) is the projection. Note that the object \(p_{\mathfrak{M}*}(A\boxtimes\mathbb{F})\) is a perfect complex on \(\mathfrak{M}^{\sigma}_{S}(v)\) for any \(A\in D^{b}(S)\), so the \(\mathbb{R}\)-line bundle (4.7) is well-defined. The pull-back of \(\delta\) to \(\mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}}\) is also denoted by \(\delta\). We define the (non-reduced or reduced) quasi-BPS categories to be the following intrinsic window categories from Definition 2.6:
\[\mathbb{T}^{\sigma}_{S}(v)_{\delta}:=\mathbb{W}(\mathfrak{M}^{ \sigma}_{S}(v))_{\delta}^{\mathrm{int}}\subset D^{b}(\mathfrak{M}^{\sigma}_{ S}(v))_{w},\] \[\mathbb{T}^{\sigma}_{S}(v)_{\delta}^{\mathrm{red}}:=\mathbb{W}( \mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}})_{\delta}^{\mathrm{int}}\subset D ^{b}(\mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}})_{w}. \tag{4.8}\]
**Remark 4.5**.: For each \(y\in M^{\sigma}_{S}(v)\), let \(\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\) be the formal fiber at \(y\) and \(\delta_{y}\) the pull-back of \(\delta\) to it. The quasi-BPS category for the formal fiber is defined in a similar way:
\[\mathbb{T}^{\sigma}_{S,y}(v)_{\delta_{y}}:=\mathbb{W}(\widehat{\mathfrak{M}}^ {\sigma}_{S}(v)_{y})_{\delta_{y}}^{\mathrm{int}}\subset D^{b}(\widehat{ \mathfrak{M}}^{\sigma}_{S}(v)_{y}).\]
By the definition of \(\mathbb{T}^{\sigma}_{S}(v)_{\delta}\), an object \(\mathcal{E}\in D^{b}(\mathfrak{M}^{\sigma}_{S}(v))\) lies in \(\mathbb{T}^{\sigma}_{S}(v)_{\delta}\) if and only if its restriction to any formal fiber lies in \(\mathbb{T}^{\sigma}_{S,y}(v)_{\delta_{y}}\). There is an analogous statement for the reduced version.
**Lemma 4.6**.: _If \(\sigma\in\mathrm{Stab}(S)\) is generic, then \(\mathbb{T}^{\sigma}_{S}(v)_{\delta}\) and \(\mathbb{T}^{\sigma}_{S}(v)_{\delta}^{\mathrm{red}}\) are independent of \(a\in K(S)_{\mathbb{R}}\) satisfying \(\chi(a\otimes v)=w\)._
Proof.: Let \(b\in K(S)_{\mathbb{R}}\) such that \(\chi(b\otimes v)=0\) and set \(a^{\prime}=a+b\). Let \(\delta^{\prime}\) be the \(\mathbb{R}\)-line bundle defined as in (4.7) for \(a^{\prime}\). By Remark 4.5, it is enough to show that \(\delta_{y}=\delta^{\prime}_{y}\) for any closed point \(y\in M^{\sigma}_{S}(v)\). Let \(y\) be a point which corresponds to the polystable object \(F\) as in (4.4). By the decomposition (4.4), we have \(\mathrm{Aut}(F)=\prod_{i=1}^{m}GL(V^{(i)})\) and \(\delta_{y}\) is the character of \(\mathrm{Aut}(F)\) given by
\[\delta_{y}=\det\left(\sum_{i=1}^{m}V^{(i)}\otimes\chi(a\otimes F^{(i)})\right) =\bigotimes_{i=1}^{m}(\det V^{(i)})^{\chi(a\otimes F^{(i)})}. \tag{4.9}\]
As \(\sigma\) is generic, the numerical class of \(F^{(i)}\) is proportional to \(v\). Therefore \(\chi(b\otimes v)=0\) implies \(\chi(b\otimes F^{(i)})=0\) for \(1\leqslant i\leqslant m\), hence \(\delta_{y}=\delta^{\prime}_{y}\).
By the above lemma, the following definition makes sense.
**Definition 4.7**.: For \(v\in N(S)\), let \(\sigma\in\mathrm{Stab}(S)\) be generic. For \(w\in\mathbb{Z}\), define
\[\mathbb{T}^{\sigma}_{S}(v)_{w}:=\mathbb{T}^{\sigma}_{S}(v)_{\delta},\ \mathbb{T}^{\sigma}_{S}(v)_{w}^{\mathrm{red}}:=\mathbb{T}^{\sigma}_{S}(v)_{ \delta}^{\mathrm{red}}.\]
Here \(\delta\) is defined as in (4.7) for any \(a\in K(S)_{\mathbb{R}}\) such that \(\chi(a\otimes v)=w\).
The first main result of this section is the following wall-crossing equivalence of quasi-BPS categories.
**Theorem 4.8**.: _Let \(\sigma_{1},\sigma_{2}\in\mathrm{Stab}(S)\) be generic stability conditions. Then there exist equivalences_
\[\mathbb{T}^{\sigma_{1}}_{S}(v)_{w}\stackrel{{\sim}}{{\to}} \mathbb{T}^{\sigma_{2}}_{S}(v)_{w},\ \mathbb{T}^{\sigma_{1}}_{S}(v)_{w}^{\mathrm{red}}\stackrel{{ \sim}}{{\to}}\mathbb{T}^{\sigma_{2}}_{S}(v)_{w}^{\mathrm{red}}. \tag{4.10}\]
Proof.: We only prove the first equivalence; the second one follows by the same argument. We reduce the proof of the equivalence to a local statement as in Theorem 3.3.
Consider a stability condition \(\sigma=(Z,\mathcal{A})\in\operatorname{Stab}(S)\) lying on a wall and consider stability conditions \(\sigma_{\pm}=(Z_{\pm},\mathcal{A}_{\pm})\in\operatorname{Stab}(S)\) lying in adjacent chambers. Let \(b\in K(S)_{\mathbb{R}}\) be an element satisfying \(\chi(b\otimes v)=0\) and let \(\delta^{\prime}\in\operatorname{Pic}(\mathfrak{M}_{S}^{\sigma}(v))_{\mathbb{R}}\) be defined as in (4.7) using \(b\). Let \(\delta\in\operatorname{Pic}(\mathfrak{M}_{S}^{\sigma}(v))_{\mathbb{R}}\) be as in Definition 4.7, and set \(\delta^{\prime\prime}=\delta+\delta^{\prime}\). It is enough to show that there exists \(b\) as above such that the restriction functors for the open immersions \(\mathfrak{M}_{S}^{\sigma_{\pm}}(v)\subset\mathfrak{M}_{S}^{\sigma}(v)\) restrict to the equivalence
\[\mathbb{T}_{S}^{\sigma}(v)_{\delta^{\prime\prime}}\overset{\sim}{\to}\mathbb{ T}_{S}^{\sigma_{\pm}}(v)_{\delta^{\prime\prime}}. \tag{4.11}\]
The open substacks \(\mathfrak{M}_{S}^{\sigma_{\pm}}(v)\subset\mathfrak{M}_{S}^{\sigma}(v)\) are semistable loci with respect to line bundles \(\ell_{\pm}\) on \(\mathfrak{M}_{S}^{\sigma}(v)\) and they are parts of \(\Theta\)-stratifications, see [HLa, Proposition 4.4.5]. The line bundles \(\ell_{\pm}\) are constructed as follows. We may assume that \(Z(v)=Z_{\pm}(v)=\sqrt{-1}\), and write \(Z_{\pm}(-)=\chi(\omega_{\pm}\otimes-)\) for \(\omega_{\pm}\in K(S)_{\mathbb{C}}\). Then set \(b_{\pm}\in K(S)_{\mathbb{R}}\) to be the real parts of \(\omega_{\pm}\), which satisfy \(\chi(b_{\pm}\otimes v)=0\). The line bundles \(\ell_{\pm}\) are defined by \(\ell_{\pm}=\det p_{\mathfrak{M}*}(b_{\pm}\boxtimes\mathbb{F})\), see [HLb, Theorem 6.4.11]. Then we set \(b=\varepsilon_{+}b_{+}+\varepsilon_{-}b_{-}\) for general elements \(0<\varepsilon_{\pm}\ll 1\).
Since \(\mathfrak{M}_{S}^{\sigma}(v)\) is \(0\)-shifted symplectic, from Theorem 2.3 and Remark 2.5 (see also [HLa, Theorem 3.3.1]), there exist subcategories \(\mathbb{W}(\mathfrak{M}_{S}^{\sigma}(v))_{m_{\bullet\pm}}^{\ell_{\pm}}\subset D ^{b}(\mathfrak{M}_{S}^{\sigma}(v))\) which induce equivalences:
\[\mathbb{W}(\mathfrak{M}_{S}^{\sigma}(v))_{m_{\bullet\pm}}^{\ell_{\pm}} \overset{\sim}{\to}D^{b}(\mathfrak{M}_{S}^{\sigma_{\pm}}(v)). \tag{4.12}\]
Moreover, there exist choices of \(m_{\bullet\pm}\) such that \(\mathbb{T}_{S}^{\sigma}(v)_{\delta^{\prime\prime}}\subset\mathbb{W}( \mathfrak{M}_{S}^{\sigma}(v))_{m_{\bullet\pm}}^{\ell_{\pm}}\), see [HLa, Lemma 4.3.10] or [Todc, Proposition 6.15] for a choice of \(m_{\bullet}\). Therefore, by Remark 4.5, it is enough to show that, for each closed point \(y\in M_{S}^{\sigma}(v)\), we have the equivalences
\[\mathbb{T}_{S,y}^{\sigma}(v)_{\delta^{\prime\prime}_{y}}\overset{\sim}{\to} \mathbb{T}_{S,y}^{\sigma_{\pm}}(v)_{\delta^{\prime\prime}_{y}}. \tag{4.13}\]
Here, on the right hand side we consider the intrinsic window subcategories for the formal fibers of the morphisms \(\mathcal{M}_{S}^{\sigma_{\pm}}(v)\subset\mathcal{M}_{S}^{\sigma}(v)\to M_{S}^ {\sigma}(v)\) at \(y\).
Let \(y\) correspond to the polystable object (4.4) and set \(\boldsymbol{d}=(\dim V^{(i)})_{i=1}^{m}\). Let \((Q_{y}^{\circ,d},\mathcal{I}_{y})\) be the Ext-quiver at \(y\) with relation \(\mathcal{I}_{y}\), see Remark 4.2. The quiver with relation \((Q_{y}^{\circ,d},\mathcal{I}_{y})\) is the double of some quiver \(Q_{y}^{\circ}\). Let \(\mathcal{P}_{y}(\boldsymbol{d})\) be the derived stack of \((Q_{y}^{\circ,d},\mathcal{I}_{y})\)-representations with dimension vector \(\boldsymbol{d}\), see (3.3), and \(P_{y}(\boldsymbol{d})\) the good moduli space of its classical truncation. By the equivalence (4.5), there is an equivalence
\[\widehat{\mathfrak{M}}_{S}^{\sigma}(v)_{y}\simeq\widehat{\mathcal{P}}_{y}( \boldsymbol{d}). \tag{4.14}\]
Here the right hand side is the formal fiber of \(\mathcal{P}_{y}(\boldsymbol{d})\) at \(0\in P_{y}(\boldsymbol{d})\). The line bundles \(\ell_{\pm}\) restricted to \(\widehat{\mathfrak{M}}_{S}^{\sigma}(v)_{y}\) correspond to generic elements \(\ell_{\pm}\in M(\boldsymbol{d})_{\mathbb{R}}^{W}\), where \(M(\boldsymbol{d})_{\mathbb{R}}\) is the character lattice of the maximal torus of \(G_{y}:=\operatorname{Aut}(y)=\prod_{i=1}^{m}GL(V^{(i)})\). Moreover the \(\sigma_{\pm}\)-semistable loci on the left hand side of (4.14) correspond to \(\ell_{\pm}\)-semistable \((Q_{y}^{\circ,d},\mathcal{I}_{y})\)-representations. Therefore the equivalences (4.13) follow from the formal fiber version of the equivalences
\[\mathbb{T}(\boldsymbol{d})_{\delta^{\prime\prime}_{y}}\overset{\sim}{\to} \mathbb{T}^{\pm}(\boldsymbol{d})_{\delta^{\prime\prime}_{y}}\]
in Theorem 3.3, whose proof is identical to loc. cit.
By Lemma 4.6 and Theorem 4.8, the following definition makes sense:
**Definition 4.9**.: For \(v\in N(S)\) and \(w\in\mathbb{Z}\), define
\[\mathbb{T}_{S}(v)_{w}:=\mathbb{T}_{S}^{\sigma}(v)_{w},\ \mathbb{T}_{S}(v)_{w}^{ \operatorname{red}}:=\mathbb{T}_{S}^{\sigma}(v)_{w}^{\operatorname{red}},\]
where \(\sigma\in\operatorname{Stab}(S)\) is a generic stability condition.
**Remark 4.10**.: The category \(\mathbb{T}_{S}(v)_{w}\) is defined as an abstract pre-triangulated dg-category. If we take a generic \(\sigma\in\operatorname{Stab}(S)\), it is realized as a subcategory of \(D^{b}(\mathfrak{M}_{S}^{\sigma}(v))\) by the identification \(\mathbb{T}_{S}(v)_{w}=\mathbb{T}_{S}^{\sigma}(v)_{w}\subset D^{b}(\mathfrak{ M}_{S}^{\sigma}(v))\).
**Remark 4.11**.: Suppose that \(g\geqslant 2\) and take a generic \(\sigma\in\operatorname{Stab}(S)\). Then we have \(\mathfrak{M}_{S}^{\sigma}(v)^{\operatorname{red}}=\mathcal{M}_{S}^{\sigma}(v)\). Let \(\mathcal{M}_{S}^{\sigma\text{-st}}(v)\subset\mathcal{M}_{S}^{\sigma}(v)\) be the open substack of \(\sigma\)-stable objects. Then the good moduli space morphism \(\mathcal{M}_{S}^{\sigma\text{-st}}(v)\to M_{S}^{\sigma\text{-st}}(v)\) is a \(\mathbb{C}^{*}\)-gerbe classified by \(\alpha\in\operatorname{Br}(M_{S}^{\sigma\text{-st}}(v))\) which gives the obstruction to the existence of a universal object in \(S\times M_{S}^{\sigma\text{-st}}(v)\). We then have that
\[\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}|_{M_{S}^{\sigma\text{-st}}(v)}=D^{b}(M_{S}^{\sigma\text{-st}}(v),\alpha^{w}), \tag{4.15}\]
where the right hand side is the derived category of \(\alpha^{w}\)-twisted coherent sheaves on \(M_{S}^{\sigma\text{-st}}(v)\), see [10, 11], and the left hand side is the subcategory of \(D^{b}(\mathcal{M}_{S}^{\sigma\text{-st}}(v))\) classically generated by the restriction of objects in \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\).
If \(v\) is primitive, then \(M_{S}^{\sigma\text{-st}}(v)=M_{S}^{\sigma}(v)\) and it is a non-singular holomorphic symplectic variety deformation equivalent to the Hilbert scheme of points \(S^{[n]}\), where \(n=\langle v,v\rangle/2+1\). By (4.15), we have
\[\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}=D^{b}(M_{S}^{\sigma}(v),\alpha^{w}). \tag{4.16}\]
**Remark 4.12**.: We can also define quasi-BPS categories for other Calabi-Yau surfaces, i.e. abelian surfaces, similarly to Definition 4.9. When \(S\) is an abelian surface, the derived Picard stack is \(\mathcal{P}ic^{\beta}(S)=\widehat{S}\times\operatorname{Spec}\mathbb{C}[ \varepsilon]/\mathbb{C}^{*}\), where \(\widehat{S}\) is the dual abelian surface, and we define the reduced stack (4.2) by \(\mathfrak{M}_{S}^{\sigma}(v)\times_{\mathcal{P}ic^{\beta}(S)}\mathcal{P}ic^{ \beta}(S)^{\operatorname{cl}}\). The results in this paper also hold for abelian surfaces.
Let \(v=dv_{0}\) for a primitive \(v_{0}\). We expect that, if \(\gcd(d,w)=1\), the category \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\) is a "non-commutative hyperkahler manifold", so that it shares several properties with \(D^{b}(M)\) for a smooth projective hyperkahler variety of \(K3^{[n]}\)-type for \(n=\langle v,v\rangle/2+1\). More precisely, we may expect the following, which we view as a categorical \(\chi\)-independence phenomenon.
**Conjecture 4.13**.: _Let \(v=dv_{0}\) for \(d\geqslant 1\) and \(v_{0}\) a primitive vector with \(\langle v_{0},v_{0}\rangle=2g-2\). Suppose that \(g\geqslant 0\). For \(\gcd(d,w)=1\), the category \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\) is deformation equivalent to \(D^{b}(S^{[n]})\) for \(n=\langle v,v\rangle/2+1\)._
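For orientation (an elementary unwinding of the numerology in the conjecture), note that \(\langle v,v\rangle=d^{2}\langle v_{0},v_{0}\rangle=d^{2}(2g-2)\), so
\[n=\frac{\langle v,v\rangle}{2}+1=d^{2}(g-1)+1.\]
For \(d=1\) this recovers \(n=g\); for example \((d,g)=(2,2)\) gives \(n=5\), so the conjecture predicts a category deformation equivalent to \(D^{b}(S^{[5]})\).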
### Quasi-BPS categories for Gieseker semistable sheaves
In Definition 4.9, we defined quasi-BPS categories for Bridgeland semistable objects. By applying the categorical wall-crossing equivalence in Theorem 4.8, we can relate the categories in Definition 4.9 to each other under Hodge isometries, and to those for moduli stacks of Gieseker semistable sheaves.
Let \(G\) be the group of Hodge isometries of the Mukai lattice \(H^{*}(S,\mathbb{Z})\), preserving the orientation of the positive definite four-dimensional plane of \(H^{*}(S,\mathbb{R})\). Note that it acts on the algebraic part \(\mathbb{Z}\oplus\operatorname{NS}(S)\oplus\mathbb{Z}\). The following is a categorical analogue of the derived invariance property of counting invariants for K3 surfaces [13, 14].
**Corollary 4.14**.: _For any \(\gamma\in G\), there is an equivalence_
\[\mathbb{T}_{S}(v)_{w}\simeq\mathbb{T}_{S}(\gamma v)_{w}.\]
Proof.: Let \(\operatorname{Aut}_{\circ}(D^{b}(S))\) be the group of autoequivalences \(\Phi\) of \(D^{b}(S)\) whose action \(\Phi_{*}\) on the space of stability conditions preserves the component \(\operatorname{Stab}(S)\). It also acts on \(H^{*}(S,\mathbb{Z})\), and we denote the action by \(\Phi_{*}\colon H^{*}(S,\mathbb{Z})\to H^{*}(S,\mathbb{Z})\). Then we have the surjective group homomorphism, see [1, Proposition 7.9], [10, Corollary 4.10]:
\[\operatorname{Aut}_{\circ}(D^{b}(S))\to G,\ \Phi\mapsto\Phi_{*} \tag{4.17}\]
For \(\Phi\in\operatorname{Aut}_{\circ}(D^{b}(S))\), there is an equivalence of derived stacks \(\phi\colon\mathfrak{M}_{S}^{\sigma}(v)\simeq\mathfrak{M}_{S}^{\Phi_{*}\sigma }(\Phi_{*}v)\) given by \(F\mapsto\Phi(F)\). The above equivalence induces an equivalence
\[\phi_{*}\colon\mathbb{T}_{S}^{\sigma}(v)_{\delta}\simeq\mathbb{T}_{S}^{\Phi_ {*}\sigma}(\Phi_{*}v)_{\delta^{\prime}}\]
where \(\delta^{\prime}\) is determined by \(a^{\prime}=\Phi_{*}^{-1}a\in K(S)_{\mathbb{R}}\) which satisfies \(\chi(a^{\prime}\otimes v^{\prime})=w\) for \(v^{\prime}=\Phi_{*}v\). By Theorem 4.8 and the surjectivity of (4.17), we obtain the corollary.
Let \(H\) be an ample divisor on \(S\). We denote by \(\mathfrak{M}_{S}^{H}(v)\) the derived moduli stack of \(H\)-Gieseker semistable sheaves on \(S\) with Mukai vector \(v\), and by \(\mathfrak{M}_{S}^{H}(v)^{\operatorname{red}}\) its reduced stack. For \(a\in K(S)_{\mathbb{R}}\), we define the \(\mathbb{R}\)-line bundle \(\delta\) on \(\mathfrak{M}_{S}^{H}(v)\), \(\mathfrak{M}_{S}^{H}(v)^{\operatorname{red}}\) similarly to (4.7). Then we define
\[\mathbb{T}_{S}^{H}(v)_{\delta}:=\mathbb{W}(\mathfrak{M}_{S}^{H}(v))_{\delta}^{ \operatorname{int}}\subset D^{b}(\mathfrak{M}_{S}^{H}(v)),\]
\[\mathbb{T}_{S}^{H}(v)_{\delta}^{\operatorname{red}}:=\mathbb{W}(\mathfrak{M} _{S}^{H}(v)^{\operatorname{red}})_{\delta}^{\operatorname{int}}\subset D^{b} (\mathfrak{M}_{S}^{H}(v)^{\operatorname{red}}).\]
Below we consider \(H\) generic with respect to \(v\), so that all Jordan-Holder factors of objects in \(\mathfrak{M}_{S}^{H}(v)\) have numerical class proportional to \(v\). The following is a corollary of the wall-crossing equivalence in Theorem 4.8.
**Corollary 4.15**.: _For \(v\in N(S)\) and generic \(\sigma\in\operatorname{Stab}(S)\), there is \(\varepsilon\in\{0,1\}\) and \(m\gg 0\) such that, by setting \(v^{\prime}=(-1)^{\varepsilon}v(mH)\), we have equivalences_
\[\mathbb{T}_{S}(v)_{\delta}\simeq\mathbb{T}_{S}^{H}(v^{\prime})_{\delta^{ \prime}},\ \mathbb{T}_{S}(v)_{\delta}^{\operatorname{red}}\simeq\mathbb{T}_{S}^{H}(v^{ \prime})_{\delta^{\prime}}^{\operatorname{red}}.\]
_Here, \(\delta^{\prime}\) is a line bundle on \(\mathfrak{M}_{S}^{H}(v^{\prime})\) determined by \(a^{\prime}=(-1)^{\varepsilon}a(-mH)\in K(S)_{\mathbb{R}}\) with \(\chi(a\otimes v)=\chi(a^{\prime}\otimes v^{\prime})=w\). Then:_
\[\mathbb{T}_{S}(v)_{w}\simeq\mathbb{T}_{S}^{H}(v^{\prime})_{w},\ \mathbb{T}_{S}(v)_{w}^{ \operatorname{red}}\simeq\mathbb{T}_{S}^{H}(v^{\prime})_{w}^{\operatorname{ red}}.\]
Proof.: We take the autoequivalence \(\Phi\) of \(D^{b}(S)\) to be either \(\Phi=\otimes\mathcal{O}(mH)\) or \(\otimes\mathcal{O}(mH)[1]\) for \(m\gg 0\), such that the vector \(\Phi_{*}v=(r,\beta,\chi)\) either has \(r\geqslant 0\) and \(H\cdot\beta>0\), or \(r=H\cdot\beta=0\), \(\chi>0\). Then applying Corollary 4.14, Theorem 4.8, and Proposition 4.1 we obtain the conclusion.
We next mention the natural periodicity and symmetry of quasi-BPS categories:
**Lemma 4.16**.: _Let \(m:=\gcd\{\chi(a\otimes v):a\in K(S)\}\). We have equivalences_
\[\mathbb{T}_{S}(v)_{w}\simeq\mathbb{T}_{S}(v)_{w+m},\ \mathbb{T}_{S}(v)_{w} \simeq\mathbb{T}_{S}(v)_{-w}^{\operatorname{op}}.\]
_Similar equivalences also hold for \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\)._
Proof.: For \(a\in K(S)\), let \(\delta\) be the line bundle (4.7). Then tensoring by \(\delta\) induces equivalences:
\[\mathbb{T}_{S}^{\sigma}(v)_{w}\simeq\mathbb{T}_{S}^{\sigma}(v)_{w+\chi(a \otimes v)},\ \mathbb{T}_{S}^{\sigma}(v)_{w}^{\operatorname{red}}\simeq\mathbb{T}_{S}^{ \sigma}(v)_{w+\chi(a\otimes v)}^{\operatorname{red}}.\]
Therefore we obtain the first equivalence. The second equivalence is given by the restriction of \(\mathcal{H}om(-,\mathcal{O}_{\mathfrak{M}^{\sigma}_{S}(v)})\) to \(\mathbb{T}^{\sigma}_{S}(v)_{w}\).
### Conjecture 4.13 for \(g=0,1\)
Write the Mukai vector as \(v=dv_{0}\), where \(d\in\mathbb{Z}_{\geqslant 1}\) and \(v_{0}\) is a primitive Mukai vector with \(\langle v_{0},v_{0}\rangle=2g-2\) and \(g\geqslant 0\).
The following proposition proves Conjecture 4.13 when \(g=0\).
**Proposition 4.17**.: _Suppose that \(g=0\) and \(\gcd(d,w)=1\). Then we have_
\[\mathbb{T}_{S}(dv_{0})^{\mathrm{red}}_{w}=\begin{cases}D^{b}(\operatorname{ Spec}\mathbb{C}),&d=1,\\ 0,&d>1.\end{cases}\]
Proof.: By Corollary 4.15, we can assume that \(\mathbb{T}_{S}(v)_{w}=\mathbb{T}^{H}_{S}(v)_{\delta}\) in \(D^{b}(\mathfrak{M}^{H}_{S}(v))\) for \(H\) an ample divisor on \(S\). It is well-known that \(\mathfrak{M}^{H}_{S}(v)\) consists of a single point \(F^{\oplus d}\) for a spherical stable sheaf \(F\), so we have
\[\mathfrak{M}^{H}_{S}(v)^{\mathrm{red}}=\operatorname{Spec}\mathbb{C}[\mathfrak{ gl}(d)_{0}^{\vee}[1]]/GL(d).\]
By the definition of \(\mathbb{T}_{S}(v)^{\mathrm{red}}_{w}\), it consists of objects such that for the inclusion
\[j\colon\mathfrak{M}^{H}_{S}(v)^{\mathrm{red}}\hookrightarrow BGL(d),\]
the object \(j_{*}\mathcal{E}\) is generated by \(\Gamma_{GL(d)}(\chi)\) for a dominant weight \(\chi\) such that
\[\chi+\rho\in\frac{1}{2}\sum[0,\beta_{i}-\beta_{j}]+\frac{w}{d}\sum_{i=1}^{d}\beta_{i},\]
where the Minkowski sum is after all \(1\leqslant i,j\leqslant d\). By [13, Lemma 3.2], such a weight exists if and only if \(d|w\), and thus only if \(d=1\) because \(\gcd(d,w)=1\). Therefore, together with (4.16) in the primitive case, the proposition follows.
We next discuss the case of \(g=1\). Then \(v=dv_{0}\), where \(v_{0}\) is primitive with \(\langle v_{0},v_{0}\rangle=0\). For a generic \(\sigma\), set
\[S^{\prime}:=M^{\sigma}_{S}(v_{0})\]
which is well-known to be a K3 surface [14, 15]. We have the good moduli space morphism \(\mathcal{M}^{\sigma}_{S}(v_{0})\to S^{\prime}\) which is a \(\mathbb{C}^{*}\)-gerbe classified by some \(\alpha\in\operatorname{Br}(S^{\prime})\). There is an equivalence
\[D^{b}(S^{\prime},\alpha)\overset{\sim}{\to}D^{b}(S) \tag{4.18}\]
given by the Fourier-Mukai transform with kernel the universal \((1\boxtimes\alpha)\)-twisted sheaf on \(S\times S^{\prime}\), see [10]. There is also an isomorphism given by the direct sum map
\[\operatorname{Sym}^{d}(S^{\prime})\overset{\cong}{\to}M^{\sigma}_{S}(dv_{0}). \tag{4.19}\]
Let \(\mathcal{M}^{\sigma}_{S}(v_{0},\dots,v_{0})\) be the classical moduli stack of filtrations of semistable objects on \(S\):
\[0=F_{0}\subset F_{1}\subset\dots\subset F_{d}\]
such that \(F_{i}/F_{i-1}\) is a \(\sigma\)-semistable object with numerical class \(v_{0}\). We define \(\mathcal{Z}_{S}\) and \(\widetilde{\mathcal{M}}^{\sigma}_{S}(v_{0})\) by the following diagram, where the two squares are Cartesian in the
classical sense:
(4.20)
Let \(T(d)=(\mathbb{C}^{*})^{\times d}\). The map \(\widetilde{\mathcal{M}}^{\sigma}_{S}(v_{0})\to S^{\prime}\) is a \(T(d)\)-gerbe, so we have the decomposition into \(T(d)\)-weights
\[D^{b}(\widetilde{\mathcal{M}}^{\sigma}_{S}(v_{0}))=\bigoplus_{(w_{1},\dots,w_{ d})\in\mathbb{Z}^{d}}D^{b}(\widetilde{\mathcal{M}}^{\sigma}_{S}(v_{0}))_{(w_{1}, \dots,w_{d})} \tag{4.21}\]
where the summand corresponding to \((w_{1},\dots,w_{d})\) is equivalent to \(D^{b}(S^{\prime},\alpha^{w_{1}+\dots+w_{d}})\). For \(1\leqslant i\leqslant d\), define \(m_{i}\) by the formula
\[m_{i}:=\left\lceil\frac{wi}{d}\right\rceil-\left\lceil\frac{w(i-1)}{d}\right \rceil+\delta_{id}-\delta_{i1}\in\mathbb{Z}. \tag{4.22}\]
Define the functor
\[\Phi_{d,w}\colon D^{b}(S^{\prime},\alpha^{w})\to D^{b}(\mathfrak{M}^{\sigma}_{ S}(v)^{\text{red}})_{w},\ \mathcal{F}\mapsto p_{S*}\left(q^{*}_{S}\circ i_{(m_{1},\dots,m_{d})}\mathcal{F }\right),\]
where \(i_{(m_{1},\dots,m_{d})}\) is the inclusion of \(D^{b}(S^{\prime},\alpha^{w})\) into the weight \((m_{1},\dots,m_{d})\)-part of (4.21). When \(v_{0}=[\mathcal{O}_{x}]\) for a point \(x\in S\), it is proved in [PTc, Proposition 4.7] that the image of the functor \(\Phi_{d,w}\) lies in \(\mathbb{T}^{\sigma}_{S}(v)_{w}\). We now state a stronger form of Conjecture 4.13 for \(g=1\).
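To illustrate (4.22) (a sample computation, not part of the cited statements), take \((d,w)=(3,2)\): then
\[m_{1}=\left\lceil\tfrac{2}{3}\right\rceil-0+0-1=0,\quad m_{2}=\left\lceil\tfrac{4}{3}\right\rceil-\left\lceil\tfrac{2}{3}\right\rceil=1,\quad m_{3}=\left\lceil 2\right\rceil-\left\lceil\tfrac{4}{3}\right\rceil+1=1.\]
In general the sum \(m_{1}+\cdots+m_{d}\) telescopes to \(w\), so the weight \((m_{1},\dots,m_{d})\)-summand of (4.21) is equivalent to \(D^{b}(S^{\prime},\alpha^{w})\), matching the source of the functor \(\Phi_{d,w}\).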
**Conjecture 4.18**.: _Let \(v=dv_{0}\) such that \(d\in\mathbb{Z}_{\geqslant 1}\) and \(v_{0}\) is primitive with \(\langle v_{0},v_{0}\rangle=0\). If \(\gcd(d,w)=1\), then the functor \(\Phi_{d,w}\) restricts to the equivalence_
\[\Phi_{d,w}\colon D^{b}(S^{\prime},\alpha^{w})\stackrel{{\sim}}{{ \rightarrow}}\mathbb{T}^{\sigma}_{S}(dv_{0})_{w}^{\text{red}}. \tag{4.23}\]
_In particular for \(w=1\), the category \(\mathbb{T}^{\sigma}_{S}(dv_{0})_{1}\) is equivalent to \(D^{b}(S)\)._
In [PTa, PTb], we addressed a similar conjecture for \(\mathbb{C}^{2}\) which we recall here. Let \(\mathcal{C}(d)^{\text{red}}\) be the reduced derived moduli stack of zero-dimensional sheaves on \(\mathbb{C}^{2}\) with length \(d\). It is the quotient stack
\[\mathcal{C}(d)^{\text{red}}=\mu_{0}^{-1}(0)/GL(d),\]
where \(\mu_{0}\colon\mathfrak{gl}(d)^{\oplus 2}\rightarrow\mathfrak{gl}(d)_{0}\) is the commuting map, i.e. the map (3.19) for \(g=1\). Let \(\mathcal{C}(1,\dots,1)\) be the classical moduli stack of filtrations of zero-dimensional sheaves on \(\mathbb{C}^{2}\):
\[0=Q_{0}\subset Q_{1}\subset\dots\subset Q_{d}\]
such that \(Q_{i}/Q_{i-1}\) is isomorphic to \(\mathcal{O}_{x_{i}}\) for some \(x_{i}\in\mathbb{C}^{2}\). Similarly to (4.20), we have the following diagram
(4.24)
The functor
\[\Phi_{d,w}^{\mathbb{C}^{2}}\colon D^{b}(\mathbb{C}^{2})\to D^{b}(\mathcal{C}(d) ^{\mathrm{red}}) \tag{4.25}\]
is defined similarly to (4.23).
**Conjecture 4.19**.: ([PTa, PTb, PTe]) _If \(\gcd(d,w)=1\), the functor (4.25) restricts to the equivalence_
\[\Phi_{d,w}^{\mathbb{C}^{2}}\colon D^{b}(\mathbb{C}^{2})\stackrel{{ \sim}}{{\to}}\mathbb{T}(d)_{w}^{\mathrm{red}}. \tag{4.26}\]
_Here the right hand side is defined in (3.20) for \(g=1\)._
We have the following proposition:
**Proposition 4.20**.: _Conjecture 4.19 implies Conjecture 4.18._
Proof.: Consider the composition
\[D^{b}(\mathfrak{M}_{S}^{\sigma}(dv_{0})^{\mathrm{red}})\stackrel{{ p_{S}^{!}}}{{\to}}\operatorname{Ind}D^{b}(\mathcal{Z}_{S}) \stackrel{{ q_{S*}}}{{\to}}\operatorname{Ind}D^{b}( \widetilde{\mathcal{M}}_{S}^{\sigma}(v_{0}))\stackrel{{\mathrm{ pr}}}{{\to}}\operatorname{Ind}D^{b}(S^{\prime},\alpha^{w}), \tag{4.27}\]
where \(\mathrm{pr}\) is the projection onto the weight \((m_{1},\dots,m_{d})\)-component. We claim that, assuming Conjecture 4.19, the above functor restricts to the functor
\[\Phi_{d,w}^{R}\colon\mathbb{T}_{S}^{\sigma}(dv_{0})_{w}^{\mathrm{red}}\to D^{ b}(S^{\prime},\alpha^{w}), \tag{4.28}\]
which is a right adjoint of \(\Phi_{d,w}\). Let
\[\mathcal{M}_{S}^{\sigma}(dv_{0})\to M_{S}^{\sigma}(dv_{0})\stackrel{{ \cong}}{{\leftarrow}}\operatorname{Sym}^{d}(S^{\prime})\]
be the good moduli space morphism, see (4.19).
For a point \(p\in S^{\prime}\), the diagram (4.20) pulled back over the formal completion \(\operatorname{Spec}\widehat{\mathcal{O}}_{\operatorname{Sym}^{d}(S^{\prime}),d[p]}\to\operatorname{Sym}^{d}(S^{\prime})\) is isomorphic to the diagram (4.24) pulled back via \(\operatorname{Spec}\widehat{\mathcal{O}}_{\operatorname{Sym}^{d}(\mathbb{C}^ {2}),d[0]}\to\operatorname{Sym}^{d}(\mathbb{C}^{2})\). The ind-completion of the equivalence (4.26) gives an equivalence
\[\operatorname{Ind}D^{b}(\mathbb{C}^{2})\stackrel{{\sim}}{{\to}} \operatorname{Ind}\mathbb{T}(d)_{w}^{\mathrm{red}},\]
whose inverse is
\[\mathrm{pr}\circ q_{\mathbb{C}^{2}*}p_{\mathbb{C}^{2}}^{\dagger}\colon \operatorname{Ind}\mathbb{T}(d)_{w}^{\mathrm{red}}\to\operatorname{Ind}D^{b}( \mathbb{C}^{2}). \tag{4.29}\]
In the above, \(\mathrm{pr}\) is again the projection functor onto the weight \((m_{1},\dots,m_{d})\)-component. By the equivalence (4.26), the functor (4.29) restricts to the functor \(\mathbb{T}(d)_{w}^{\mathrm{red}}\to D^{b}(\mathbb{C}^{2})\). Therefore the functor (4.27) restricts to the functor (4.28), giving a right adjoint of \(\Phi_{d,w}\).
We have the natural transformations \(\mathrm{id}\to\Phi_{d,w}^{R}\circ\Phi_{d,w}\), \(\Phi_{d,w}\circ\Phi_{d,w}^{R}\to\mathrm{id}\) by adjunction, which are isomorphisms formally locally on \(\operatorname{Sym}^{d}(S^{\prime})\). Hence they are isomorphisms and thus \(\Phi_{d,w}\) is an equivalence.
**Remark 4.21**.: In [PTe], we prove Conjecture 4.19 for \((d,w)=(2,1)\). By Proposition 4.20, it implies that Conjecture 4.18 is true for \((d,w)=(2,1)\).
## 5. Semiorthogonal decompositions into quasi-BPS categories
In this section, we prove a categorical version of the PBW theorem for cohomological Hall algebras of K3 surfaces [DHSMb], see Theorem 5.1. We first prove Theorem 5.1 assuming Proposition 3.7, which states that there is a semiorthogonal decomposition formally locally on the good moduli space. We then prove Proposition 3.7.
### Semiorthogonal decomposition
Let \(S\) be a K3 surface. We take \(v\in N(S)\) and write \(v=dv_{0}\) for \(d\in\mathbb{Z}_{\geqslant 1}\) and primitive \(v_{0}\). For a partition \(d=d_{1}+\cdots+d_{k}\), let \(\mathfrak{M}^{\sigma}_{S}(d_{1}v_{0},\ldots,d_{k}v_{0})\) be the derived moduli stack of filtrations
\[0=F_{0}\subset F_{1}\subset\cdots\subset F_{k}\]
such that \(F_{i}/F_{i-1}\) is \(\sigma\)-semistable with numerical class \(d_{i}v_{0}\). Consider the natural morphisms
\[\times_{i=1}^{k}\mathfrak{M}^{\sigma}_{S}(d_{i}v_{0})\stackrel{{ q}}{{\leftarrow}}\mathfrak{M}^{\sigma}_{S}(d_{1}v_{0},\ldots,d_{k}v_{0}) \stackrel{{ p}}{{\rightarrow}}\mathfrak{M}^{\sigma}_{S}(dv_{0}), \tag{5.1}\]
where \(q\) is quasi-smooth and \(p\) is proper. The above morphisms induce the categorical Hall product, see [10]:
\[p_{*}q^{*}\colon\boxtimes_{i=1}^{k}D^{b}(\mathfrak{M}^{\sigma}_{S}(d_{i}v_{0} ))\to D^{b}(\mathfrak{M}^{\sigma}_{S}(dv_{0})). \tag{5.2}\]
We next discuss a semiorthogonal decomposition of \(D^{b}(\mathfrak{M}^{\sigma}_{S}(v))\) using categorical Hall products of quasi-BPS categories, which we view as a categorical version of the PBW theorem for cohomological Hall algebras [13, Theorem C], [5]. When \(v_{0}\) is the class of a point, the statement was proved in [11].
**Theorem 5.1**.: _Assume \(v=dv_{0}\) for \(d\in\mathbb{Z}_{\geqslant 1}\) and for a primitive Mukai vector \(v_{0}\). For a generic stability condition \(\sigma\), there is a semiorthogonal decomposition_
\[D^{b}(\mathfrak{M}^{\sigma}_{S}(v))=\left\langle\boxtimes_{i=1}^{k}\mathbb{T} ^{\sigma}_{S}(d_{i}v_{0})_{w_{i}+(g-1)d_{i}(\sum_{i>j}d_{j}-\sum_{i<j}d_{j}) }\right\rangle. \tag{5.3}\]
_The right hand side is after all partitions \((d_{i})_{i=1}^{k}\) of \(d\) and all weights \((w_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) such that_
\[\frac{w_{1}}{d_{1}}<\cdots<\frac{w_{k}}{d_{k}}.\]
_Each semiorthogonal summand is given by the restriction of the categorical Hall product (5.2), and the order of the semiorthogonal decomposition is the same as that of (3.11)._
Proof.: We first explain that the semiorthogonal decomposition (5.3) holds formally locally over the good moduli space \(M^{\sigma}_{S}(v)\). For each \(y\in M^{\sigma}_{S}(v)\), recall the equivalence
\[\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\simeq\widehat{\mathcal{P}}(d)_{p} \tag{5.4}\]
from Lemma 4.3. For a \(\mathbb{R}\)-line bundle \(\delta\) on \(\mathfrak{M}^{\sigma}_{S}(v)\) as in (4.7), its restriction \(\delta_{y}\) to \(\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\) corresponds to \(w\tau_{d}\) under the above equivalence by the computation (4.9). Therefore, the category \(\mathbb{T}^{\sigma}_{S,y}(v)_{\delta_{y}}\) from Remark 4.5 is equivalent to the category \(\mathbb{T}_{p}(d)_{w}\) from (3.16) under the equivalence (5.4), as both of them are intrinsic window
subcategories of equivalent derived stacks. Therefore the statement holds formally locally at any point \(y\in M^{\sigma}_{S}(v)\) by Proposition 3.7. We set
\[A=(d_{i},w^{\prime}_{i})_{i=1}^{k},\ w^{\prime}_{i}:=w_{i}+(g-1)d_{i}\left(\sum_{ i>j}d_{j}-\sum_{i<j}d_{j}\right). \tag{5.5}\]
Every functor
\[\Upsilon_{A}\colon\,\boxtimes_{i=1}^{k}\mathbb{T}^{\sigma}_{S}(d_{i}v_{0})_{w^ {\prime}_{i}}\to D^{b}(\mathfrak{M}^{\sigma}_{S}(v)) \tag{5.6}\]
is globally defined via the categorical Hall product, hence a standard argument reduces the existence of the desired semiorthogonal decomposition to the formal local statement, as in [10, Section 4.2]; see also [21, Toda, Todc, Todb] for similar arguments on reduction to formal fibers.
We give more details on the proof. We prove the semiorthogonal decomposition (5.3) by induction on \(d\). The case of \(d=1\) is obvious, so we assume that \(d\geqslant 2\). We first show that, for \(w_{1}/d_{1}<\cdots<w_{k}/d_{k}\), the functor (5.6) is fully-faithful. By the induction hypothesis, the inclusion
\[\boxtimes_{i=1}^{k}\mathbb{T}^{\sigma}_{S}(d_{i}v_{0})_{w^{\prime}_{i}} \hookrightarrow\boxtimes_{i=1}^{k}D^{b}(\mathfrak{M}^{\sigma}_{S}(d_{i}v_{0}) )_{w^{\prime}_{i}}\]
admits a right adjoint. The categorical Hall product restricted to the fixed \((\mathbb{C}^{*})^{k}\)-weights \((w^{\prime}_{i})_{i=1}^{k}\):
\[\boxtimes_{i=1}^{k}D^{b}(\mathfrak{M}^{\sigma}_{S}(d_{i}v_{0}))_{w^{\prime}_{ i}}\to D^{b}(\mathfrak{M}^{\sigma}_{S}(v))\]
also admits a right adjoint, see the proof of [20, Lemma 6.7] or [10, Theorem 1.1]. Therefore the functor (5.6) admits a right adjoint \(\Upsilon^{R}_{A}\). To show that (5.6) is fully-faithful, it is enough to show that the natural transformation
\[\operatorname{id}\to\Upsilon^{R}_{A}\circ\Upsilon_{A} \tag{5.7}\]
is an isomorphism. This is a local question for \(M^{\sigma}_{S}(v)\), i.e. it is enough to show that (5.7) is an isomorphism after restricting to \(\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\) for any \(y\in M^{\sigma}_{S}(v)\). Since \(\Upsilon_{A}\) and \(\Upsilon^{R}_{A}\) are compatible with pull-backs to \(\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\), the isomorphism (5.7) on \(\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\) follows from Lemma 4.3 and Proposition 3.7.
We next show that there is a semiorthogonal decomposition of the form
\[D^{b}(\mathfrak{M}^{\sigma}_{S}(v))_{w}=\langle\{\operatorname{Im}\Upsilon_ {A}\}_{A\in\Gamma},\mathbb{W}\rangle, \tag{5.8}\]
where \(\Gamma\) is the set of partitions \(A=(d_{i},w^{\prime}_{i})_{i=1}^{k}\) of \((d,w)\) as in (5.5) such that \(k\geqslant 2\) and \(w_{1}/d_{1}<\cdots<w_{k}/d_{k}\). For \(A>B\), we have \(\operatorname{Hom}(\operatorname{Im}\Upsilon_{A},\operatorname{Im}\Upsilon_ {B})=0\). Indeed it is enough to show that \(\Upsilon^{R}_{A}\circ\Upsilon_{B}=0\), which is a property local on \(M^{\sigma}_{S}(v)\). Hence similarly to showing (5.7) is an isomorphism, the desired vanishing follows from Proposition 3.7. We next show that the functor (5.6) admits a left adjoint \(\Upsilon^{L}_{A}\). Let \(\mathbb{D}_{\mathfrak{M}}\) be the dualizing functor
\[\mathbb{D}_{\mathfrak{M}}\colon D^{b}(\mathfrak{M}^{\sigma}_{S}(v)) \xrightarrow{\sim}D^{b}(\mathfrak{M}^{\sigma}_{S}(v))^{\operatorname{op}}.\]
The above functor restricts to the equivalence \(\mathbb{D}_{\mathbb{T}(d)}\colon\mathbb{T}^{\sigma}_{S}(v)_{\delta} \xrightarrow{\sim}\mathbb{T}^{\sigma}_{S}(v)_{-\delta}^{\operatorname{op}}\). For a partition \(A\) in (5.5), we set \(A^{\vee}=(d_{i},-w^{\prime}_{i})_{i=1}^{k}\). Then the functor
\[\Upsilon^{L}_{A}:=\left(\boxtimes_{i=1}^{k}\mathbb{D}_{\mathbb{T}(d_{i})} \right)\circ(\Upsilon^{R}_{A^{\vee}})^{\operatorname{op}}\circ\mathbb{D}_{ \mathfrak{M}}\colon D^{b}(\mathfrak{M}^{\sigma}_{S}(v))\to\boxtimes_{i=1}^{k} \mathbb{T}^{\sigma}_{S}(d_{i}v_{0})_{w^{\prime}_{i}}\]
gives a left adjoint of \(\Upsilon_{A}\). Therefore we obtain the semiorthogonal decomposition of the form (5.8).
It is enough to show that \(\mathbb{W}=\mathbb{T}^{\sigma}_{S}(v)_{w}\) in the semiorthogonal decomposition (5.8). The inclusion \(\mathbb{T}^{\sigma}_{S}(v)_{w}\subset\mathbb{W}\) follows from a formal local argument as above. It thus suffices to show that \(\mathbb{W}\subset\mathbb{T}^{\sigma}_{S}(v)_{w}\). The subcategory \(\mathbb{W}\) consists of \(\mathcal{E}\in D^{b}(\mathfrak{M}^{\sigma}_{S}(v))_{w}\) such that \(\Upsilon^{L}_{A}(\mathcal{E})=0\) for all \(A\in\Gamma\). This is a local property on \(M^{\sigma}_{S}(v)\). The functor \(\Upsilon^{L}_{A}\) is compatible with pull-back to \(\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}\). Thus, for any \(\mathcal{E}\in\mathbb{W}\), we have \(\mathcal{E}|_{\widehat{\mathfrak{M}}^{\sigma}_{S}(v)_{y}}\in\mathbb{T}^{ \sigma}_{S,y}(v)_{\delta_{y}}\) by Lemma 4.3 and Proposition 3.7. Therefore, from Remark 4.5, we conclude that \(\mathcal{E}\in\mathbb{T}^{\sigma}_{S}(v)_{w}\).
The reduced version of the semiorthogonal decomposition is as follows:
**Theorem 5.2**.: _Assume \(v=dv_{0}\) for \(d\in\mathbb{Z}_{\geqslant 1}\) and for a primitive Mukai vector \(v_{0}\). For a generic stability condition \(\sigma\), there is a semiorthogonal decomposition_
\[D^{b}(\mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}})=\] \[\left\langle\boxtimes_{i=1}^{k-1}\mathbb{T}^{\sigma}_{S}(d_{i}v_{ 0})_{w_{i}+(g-1)d_{i}(\sum_{i>j}d_{j}-\sum_{i<j}d_{j})}\boxtimes\mathbb{T}^{ \sigma}_{S}(d_{k}v_{0})^{\mathrm{red}}_{w_{k}+(g-1)d_{k}(\sum_{k>j}d_{j})} \right\rangle.\]
_The right hand side is after all partitions \(d_{1}+\cdots+d_{k}=d\) and weights \((w_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\) such that \(w_{1}/d_{1}<\cdots<w_{k}/d_{k}\)._
Proof.: Let \(v_{0}=(r,\beta,\chi)\). We have the commutative diagram
where the middle horizontal arrow is \((L_{1},\ldots,L_{k})\mapsto L_{1}\otimes\cdots\otimes L_{k}\). By base change, the categorical Hall product induces the functor
\[\boxtimes_{i=1}^{k-1}D^{b}(\mathfrak{M}^{\sigma}_{S}(d_{i}v_{0}))\boxtimes D ^{b}(\mathfrak{M}^{\sigma}_{S}(d_{k}v_{0})^{\mathrm{red}})\to D^{b}( \mathfrak{M}^{\sigma}_{S}(dv_{0})^{\mathrm{red}}).\]
The rest of the argument is the same as in Theorem 5.1.
### Generation from ambient spaces
The rest of this section is devoted to the proof of Proposition 3.7. In this subsection, we prove technical preliminary results about generation of dg-categories from ambient spaces and the restriction of semiorthogonal decompositions to formal fibers.
Let \(\mathcal{U}\) be a reduced \(\mathbb{C}\)-scheme of finite type with an action of a reductive algebraic group \(G\). Let \(\mathcal{U}/G\to T\) be a morphism to an affine scheme \(T\) of finite type. For a closed point \(y\in T\), we denote by \(\widehat{\mathcal{U}}_{y}/G\) the formal fiber at \(y\). We denote by \(\iota_{y}\) the induced map \(\iota_{y}\colon\widehat{\mathcal{U}}_{y}/G\to\mathcal{U}/G\). Recall the definition of classical generation from Subsection 2.2.1.
**Lemma 5.3**.: _The image of the pull-back functor_
\[\iota_{y}^{*}\colon D^{b}(\mathcal{U}/G)\to D^{b}(\widehat{\mathcal{U}}_{y}/G) \tag{5.9}\]
_classically generates \(D^{b}(\widehat{\mathcal{U}}_{y}/G)\)._
Proof.: It is enough to show that \(\operatorname{Ind}D^{b}(\widehat{\mathcal{U}}_{y}/G)\) is generated by the image of
\[\iota_{y}^{*}\colon\operatorname{Ind}D^{b}(\mathcal{U}/G)\to\operatorname{ Ind}D^{b}(\widehat{\mathcal{U}}_{y}/G). \tag{5.10}\]
Indeed, suppose that \(\operatorname{Ind}D^{b}(\widehat{\mathcal{U}}_{y}/G)\) is generated by the image of (5.10). Let \(\mathcal{C}_{y}\subset D^{b}(\widehat{\mathcal{U}}_{y}/G)\) be the subcategory classically generated by the image of (5.9). Then we have \(\operatorname{Ind}\mathcal{C}_{y}\xrightarrow{\sim}\operatorname{Ind}D^{b}( \widehat{\mathcal{U}}_{y}/G)\), hence \(\mathcal{C}_{y}=D^{b}(\widehat{\mathcal{U}}_{y}/G)\) as both of them are the subcategories of compact objects in \(\operatorname{Ind}\mathcal{C}_{y}\) and \(\operatorname{Ind}D^{b}(\widehat{\mathcal{U}}_{y}/G)\), respectively.
Let \(\mathcal{Z}\subset\mathcal{U}\) be a \(G\)-invariant closed subset, and define \(\mathcal{U}^{\circ}=\mathcal{U}\setminus\mathcal{Z}\). Let \(i\colon\mathcal{Z}\hookrightarrow\mathcal{U}\) be the closed immersion and \(j\colon\mathcal{U}^{\circ}\hookrightarrow\mathcal{U}\) be the open immersion. For any \(\mathcal{E}\in\operatorname{Ind}D^{b}(\mathcal{U}/G)\), we have the distinguished triangle
\[R\Gamma_{\mathcal{Z}}(\mathcal{E})\to\mathcal{E}\to j_{*}j^{*}\mathcal{E}\to R \Gamma_{\mathcal{Z}}(\mathcal{E})[1],\]
where \(R\Gamma_{\mathcal{Z}}(\mathcal{E})\) is an object in
\[\operatorname{Ind}D^{b}_{\mathcal{Z}}(\mathcal{U}/G)=\operatorname{Ker}\left( j^{*}\colon\operatorname{Ind}D^{b}(\mathcal{U}/G)\to\operatorname{Ind}D^{b}( \mathcal{U}^{\circ}/G)\right)\]
and \(j_{*}j^{*}\mathcal{E}\) is an object in \(j_{*}\operatorname{Ind}D^{b}(\mathcal{U}^{\circ}/G)\). Note that by [1, Proposition 6.1.3], the category \(\operatorname{Ind}D^{b}_{\mathcal{Z}}(\mathcal{U}/G)\) is generated by the image of
\[i_{*}\colon\operatorname{Ind}D^{b}(\mathcal{Z}/G)\to\operatorname{Ind}D^{b}_{ \mathcal{Z}}(\mathcal{U}/G).\]
We have the Cartesian diagrams
There are base change isomorphisms, see [1, Corollary 3.7.14]:
\[\iota_{y}^{*}j_{*}\cong\widehat{j}_{*}\iota_{y}^{\circ*}\colon \operatorname{Ind}D^{b}(\mathcal{U}^{\circ}/G)\to\operatorname{Ind}D^{b}( \widehat{\mathcal{U}}_{y}/G),\] \[\iota_{y}^{*}i_{*}\cong\widehat{i}_{*}\iota_{y}^{*}\colon \operatorname{Ind}D^{b}(\mathcal{Z}/G)\to\operatorname{Ind}D^{b}(\widehat{ \mathcal{U}}_{y}/G),\]
hence we can replace \(\mathcal{U}\) with \(\mathcal{U}^{\circ}\sqcup\mathcal{Z}\). Then, by taking a stratification of \(\mathcal{U}\) and repeating the above argument, we may assume that \(\mathcal{U}\) is smooth. In this case
\[\operatorname{Ind}D^{b}(\mathcal{U}/G)=D_{\operatorname{qc}}(\mathcal{U}/G)= \operatorname{Ind}\operatorname{Perf}(\mathcal{U}/G)\]
and it is a standard fact that the image of \(\operatorname{Perf}(\mathcal{U}/G)\to\operatorname{Perf}(\widehat{ \mathcal{U}}_{y}/G)\) classically generates \(\operatorname{Perf}(\widehat{\mathcal{U}}_{y}/G)\) (see the argument of [12, Lemma 5.2]).
Let \(Y\) be a smooth affine variety with an action of a reductive algebraic group \(G\). Let \(V\to Y\) be a \(G\)-equivariant vector bundle with a \(G\)-invariant section \(s\). We set \(\mathfrak{U}\) to be the derived zero locus of \(s\), and \(\mathcal{U}\hookrightarrow\mathfrak{U}\) its classical truncation. We have the following diagram
For \(y\in\mathcal{U}/\!/G\), we denote by \(\widehat{Y}_{y}\) the formal fiber of \(Y\to Y/\!/G\) at \(y\), and by \(\widehat{\mathfrak{U}}_{y}\hookrightarrow\widehat{Y}_{y}\) the derived zero locus of \(s\) restricted to \(\widehat{Y}_{y}\). Let \(\iota_{y}\colon\widehat{\mathfrak{U}}_{y}/G\to\mathfrak{U}/G\) be the induced map.
**Lemma 5.4**.: _The image of the pull-back functor_
\[\iota_{y}^{*}\colon D^{b}(\mathfrak{U}/G)\to D^{b}(\widehat{\mathfrak{U}}_{y}/G)\]
_classically generates \(D^{b}(\widehat{\mathfrak{U}}_{y}/G)\)._
Proof.: Since \(D^{b}(\widehat{\mathfrak{U}}_{y}/G)\) is classically generated by the image of the pushforward functor
\[D^{b}(\widehat{\mathcal{U}}_{y}/G)\to D^{b}(\widehat{\mathfrak{U}}_{y}/G),\]
the claim follows from Lemma 5.3.
**Lemma 5.5**.: _Let \(D^{b}(\mathfrak{U}/G)=\langle\mathbb{T}_{i}\mid i\in I\rangle\) be a \((Y/\!\!/G)\)-linear semiorthogonal decomposition. Let \(\widehat{\mathbb{T}}_{i,y}\subset D^{b}(\widehat{\mathfrak{U}}_{y}/G)\) be the subcategory classically generated by the image of \(\iota_{y}^{*}\colon\mathbb{T}_{i}\to D^{b}(\widehat{\mathfrak{U}}_{y}/G)\). Then there is a semiorthogonal decomposition_
\[D^{b}(\widehat{\mathfrak{U}}_{y}/G)=\langle\widehat{\mathbb{T}}_{i,y}\mid i \in I\rangle.\]
Proof.: The subcategories \(\widehat{\mathbb{T}}_{i,y}\) classically generate \(D^{b}(\widehat{\mathfrak{U}}_{y}/G)\) by Lemma 5.4. As for the semiorthogonality, take \(i,j\in I\) such that \(\operatorname{Hom}(\mathbb{T}_{i},\mathbb{T}_{j})=0\). Then for \(A\in\mathbb{T}_{i}\) and \(B\in\mathbb{T}_{j}\), we have
\[\operatorname{Hom}(\iota_{y}^{*}A,\iota_{y}^{*}B)=\operatorname{Hom}(A,B \otimes\iota_{y*}\mathcal{O}_{\widehat{\mathfrak{U}}_{y}/G})=\operatorname{ Hom}(A,B\otimes f^{*}\widehat{\mathcal{O}}_{Y/\!\!/G,y}). \tag{5.11}\]
The sheaf \(\widehat{\mathcal{O}}_{Y/\!\!/G,y}\) is an object of \(D_{\operatorname{qc}}(Y/\!\!/G)=\operatorname{Ind}\operatorname{Perf}(Y/\!\!/G)\), hence \(f^{*}\widehat{\mathcal{O}}_{Y/\!\!/G,y}\in D_{\operatorname{qc}}(\mathfrak{U}/G)\), and \(\otimes\) denotes the action of \(D_{\operatorname{qc}}(\mathfrak{U}/G)\) on \(\operatorname{Ind}D^{b}(\mathfrak{U}/G)\), which is continuous (i.e. it preserves small coproducts). Then \(B\otimes f^{*}\widehat{\mathcal{O}}_{Y/\!\!/G,y}\) is an object of \(\operatorname{Ind}\mathbb{T}_{j}\), and by writing it as \(\operatorname{colim}_{k\in K}B_{k}\) for \(B_{k}\in\mathbb{T}_{j}\), we have
\[\operatorname{Hom}(A,\operatorname{colim}_{k\in K}B_{k})=\operatorname{colim} _{k\in K}\operatorname{Hom}(A,B_{k})=0,\]
where the first identity holds since \(A\) is compact, and the vanishing follows from \(\operatorname{Hom}(\mathbb{T}_{i},\mathbb{T}_{j})=0\).
### Descriptions of quasi-BPS categories for doubled quivers
In this subsection, we give an alternative description of quasi-BPS categories for doubled quivers, which will be used in the proof of Proposition 3.7. Below we keep the notation in Subsection 3.5.
Let \(Q^{\circ}\) be the \(g\)-loop quiver. For \(d\in\mathbb{N}\), let \(\mathcal{X}(d)\) be the moduli stack of \(d\)-dimensional representations of the tripled quiver of \(Q^{\circ}\):
\[\mathcal{X}(d)=\mathfrak{gl}(d)^{\oplus 2g+1}/GL(d).\]
Consider the regular function induced by the tripled potential:
\[\operatorname{Tr}W(x_{1},\dots,x_{g},y_{1},\dots,y_{g},z)=\operatorname{Tr} \sum_{i=1}^{g}z[x_{i},y_{i}]\colon\mathcal{X}(d)\to\mathbb{C}. \tag{5.12}\]
Let \(\mathbb{C}^{*}\) act on \(z\) with weight two. We define the subcategory
\[\mathbb{S}^{\operatorname{gr}}(d)_{w}\subset\operatorname{MF}^{\operatorname {gr}}(\mathcal{X}(d),\operatorname{Tr}W) \tag{5.13}\]
to be classically generated by matrix factorizations whose factors are direct sums of \(\mathcal{O}_{\mathcal{X}(d)}\otimes\Gamma_{GL(d)}(\chi)\) such that
\[\chi+\rho\in\mathbf{W}(d)_{w}=\frac{1}{2}\text{sum}[0,\beta]+w\tau_{d}=\left( \frac{2g+1}{2}\text{sum}_{1\leqslant i,j\leqslant d}[0,\beta_{i}-\beta_{j}] \right)+w\tau_{d}\]
where the Minkowski sums above are taken over all the \(T(d)\)-weights of \(R_{Q}(d)=\mathfrak{gl}(d)^{\oplus 2g+1}\). Alternatively, by [10, Lemma 2.9], the subcategory (5.13) consists of matrix factorizations with factors \(\mathcal{O}_{\mathcal{X}(d)}\otimes\Gamma\) for a \(GL(d)\)-representation \(\Gamma\) whose \(T(d)\)-weights are contained in
\[\nabla(d)_{w}=\left\{\chi\in M(d)_{\mathbb{R}}:-\frac{1}{2}n_{\lambda}\leqslant \langle\lambda,\chi\rangle\leqslant\frac{1}{2}n_{\lambda}\text{ for all }\lambda\colon\mathbb{C}^{*}\to T(d) \right\}+w\tau_{d}.\]
Here, the width \(n_{\lambda}\) is defined by
\[n_{\lambda}:=\left\langle\lambda,\det\left((R_{Q}(d)^{\vee})^{\lambda>0} \right)-\det\left((\mathfrak{gl}(d)^{\vee})^{\lambda>0}\right)\right\rangle =2g\left\langle\lambda,\det\left(\mathfrak{gl}(d)^{\lambda>0}\right)\right\rangle.\]
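To illustrate the definitions above (this example is not used in the sequel), take \(d=2\) and write \(\tau_{2}=\frac{1}{2}(\beta_{1}+\beta_{2})\), matching the coefficient of \(w\) in (5.18). The only nonzero \(T(2)\)-weights of \(R_{Q}(2)=\mathfrak{gl}(2)^{\oplus 2g+1}\) are \(\pm(\beta_{1}-\beta_{2})\), each with multiplicity \(2g+1\), so
\[\mathbf{W}(2)_{w}=\left\{c(\beta_{1}-\beta_{2})+\tfrac{w}{2}(\beta_{1}+\beta_{2})\,:\,|c|\leqslant\tfrac{2g+1}{2}\right\},\]
and for the cocharacter \(\lambda(t)=\mathrm{diag}(t,t^{-1})\) one has \(n_{\lambda}=2g\langle\lambda,\beta_{1}-\beta_{2}\rangle=4g\).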
The equivalence (3.5) restricts to the equivalence, see Lemma 2.2:
\[\Theta\colon\mathbb{T}(d)_{w}\stackrel{{\sim}}{{\to}}\mathbb{S} ^{\mathrm{gr}}(d)_{w}. \tag{5.14}\]
We next give another description of the subcategory (3.16), based on Lemma 3.2. As in Subsections 3.4, 3.5, let \(\mathcal{P}(d)\) be the derived moduli stack of \(d\)-dimensional representations of the quiver \(Q^{\circ,d}\) with relation \(\mathcal{I}\). There is a good moduli space map
\[\pi_{P,d}\colon\mathcal{P}(d)^{\mathrm{cl}}\to P(d).\]
Let \(p\in P(d)\) be a closed point corresponding to the semisimple \((Q^{\circ,d},\mathcal{I})\)-representation \(R_{p}\) as in (3.14):
\[R_{p}=\bigoplus_{i=1}^{m}W^{(i)}\otimes R^{(i)}, \tag{5.15}\]
where \(R^{(i)}\) is a simple representation of dimension \(r^{(i)}\) and \(W^{(i)}\) is a finite dimensional \(\mathbb{C}\)-vector space. Recall that \(G_{p}=\prod_{i=1}^{m}GL(W^{(i)})\) and let \(T_{p}\subset G_{p}\) be a maximal torus. Note that we have an isomorphism of \(G_{p}\)-representations:
\[\mathrm{Ext}^{1}_{Q^{\circ,d}}(R_{p},R_{p})\oplus\mathfrak{gl}(d)^{\vee}= \bigoplus_{i,j}\mathrm{Hom}(W^{(i)},W^{(j)})^{\oplus(\delta_{ij}+2gr^{(i)}r^{( j)})}. \tag{5.16}\]
Let \(M_{p}\) be the character lattice of \(T_{p}\) and let \(\tau_{d,p}\in(M_{p})_{\mathbb{R}}\) be the restriction of \(\tau_{d}\) to \(G_{p}\subset GL(d)\). For \(w\in\mathbb{Z}\), we set
\[\mathbf{W}_{p}(d)_{w}=\frac{1}{2}\mathrm{sum}[0,\beta]+w\tau_{d,p}\subset(M_{p })_{\mathbb{R}}, \tag{5.17}\]
where the Minkowski sum is taken over all \(T_{p}\)-weights \(\beta\) in the representation (5.16). Let \(\beta_{i}^{(j)}\) for \(1\leqslant i\leqslant\dim W^{(j)}\) be the weights of the standard representation of \(GL(W^{(j)})\). Then a weight \(\chi\) in \(\mathbf{W}_{p}(d)_{w}\) is written as
\[\chi=\sum_{i,j,a,b}c_{ij}^{(ab)}(\beta_{i}^{(a)}-\beta_{j}^{(b)})+\frac{w}{d} \sum_{i,a}r^{(a)}\beta_{i}^{(a)}, \tag{5.18}\]
where the sum above ranges over all \(1\leqslant a,b\leqslant m\), \(1\leqslant i\leqslant\dim W^{(a)}\), \(1\leqslant j\leqslant\dim W^{(b)}\), and where \(|c_{ij}^{(ab)}|\leqslant\delta_{ab}/2+gr^{(a)}r^{(b)}\) for all such \(a,b,i,j\).
**Lemma 5.6**.: _Recall the map \(j_{p}\colon\widehat{\mathcal{P}}(d)_{p}\hookrightarrow\widehat{\mathcal{Y}}(d)_{p}\) from (3.15). The subcategory introduced in (3.16):_
\[\mathbb{T}_{p}(d)_{w}\subset D^{b}(\widehat{\mathcal{P}}(d)_{p}) \tag{5.19}\]
_coincides with the subcategory of objects \(\mathcal{E}\) such that \(j_{p*}\mathcal{E}\) is generated by the vector bundles \(\Gamma_{G_{p}}(\chi)\otimes\mathcal{O}_{\widehat{\mathcal{Y}}(d)_{p}}\), where \(\chi\) is a dominant \(T_{p}\)-weight satisfying_
\[\chi+\rho_{p}\in\mathbf{W}_{p}(d)_{w},\]
_where \(\rho_{p}\) is half the sum of positive roots of \(G_{p}\)._
Proof.: The lemma follows similarly to Lemma 3.2, using the Koszul equivalence and [22, Corollary 3.14].
**Remark 5.7**.: Alternatively, by Lemma 5.6 and [13, Lemma 2.9], the subcategory (5.19) consists of objects \(\mathcal{E}\) such that \(j_{p*}\mathcal{E}\) is generated by vector bundles \(W\otimes\mathcal{O}_{\widehat{\mathcal{Y}}(d)_{p}}\) for \(W\) a \(G_{p}\)-representation whose \(T_{p}\)-weights are contained in the set
\[\left\{\chi\in(M_{p})_{\mathbb{R}}:-\frac{1}{2}n_{\lambda,p}\leqslant\langle \lambda,\chi\rangle\leqslant\frac{1}{2}n_{\lambda,p}\text{ for all }\lambda\colon \mathbb{C}^{*}\to T_{p}\right\}+w\tau_{d,p}. \tag{5.20}\]
Here, the width \(n_{\lambda,p}\) is defined by
\[n_{\lambda,p}=\Big{\langle}\lambda,\det\left(\big{(}\operatorname{Ext}^{1}_{Q^{\circ,d}}(R_{p},R_{p})^{\vee}\oplus\mathfrak{gl}(d)\big{)}^{\lambda>0}\right)\Big{\rangle}-\Big{\langle}\lambda,\det\left((\mathfrak{g}_{p}^{\vee})^{\lambda>0}\right)\Big{\rangle},\]
where \(\mathfrak{g}_{p}\) is the Lie algebra of \(G_{p}\). From (5.16), one can easily check that
\[n_{\lambda,p}=2g\Big{\langle}\lambda,\det\left(\mathfrak{gl}(d)^{\lambda>0} \right)\Big{\rangle}=n_{\lambda} \tag{5.21}\]
for any cocharacter \(\lambda\colon\mathbb{C}^{*}\to T_{p}\subset T(d)\).
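For the reader's convenience, we sketch the computation behind (5.21); it only uses (5.16) and the decomposition \(\mathbb{C}^{d}\cong\bigoplus_{i}W^{(i)}\otimes R^{(i)}\) from (5.15). Since the right hand side of (5.16) is self-dual as a \(G_{p}\)-representation, we also have
\[\operatorname{Ext}^{1}_{Q^{\circ,d}}(R_{p},R_{p})^{\vee}\oplus\mathfrak{gl}(d)\cong\mathfrak{g}_{p}\oplus\bigoplus_{i,j}\operatorname{Hom}(W^{(i)},W^{(j)})^{\oplus 2gr^{(i)}r^{(j)}}.\]
The \(\mathfrak{g}_{p}\)-summand contributes \(\langle\lambda,\det(\mathfrak{g}_{p}^{\lambda>0})\rangle=\langle\lambda,\det((\mathfrak{g}_{p}^{\vee})^{\lambda>0})\rangle\), which cancels against the second term in the definition of \(n_{\lambda,p}\), while \(\mathfrak{gl}(d)|_{G_{p}}\cong\bigoplus_{i,j}\operatorname{Hom}(W^{(i)},W^{(j)})^{\oplus r^{(i)}r^{(j)}}\) gives
\[n_{\lambda,p}=2g\sum_{i,j}r^{(i)}r^{(j)}\Big{\langle}\lambda,\det\left(\operatorname{Hom}(W^{(i)},W^{(j)})^{\lambda>0}\right)\Big{\rangle}=2g\Big{\langle}\lambda,\det\left(\mathfrak{gl}(d)^{\lambda>0}\right)\Big{\rangle}=n_{\lambda}.\]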
### Proof of Proposition 3.7
In this subsection, we prove Proposition 3.7 and Corollary 3.8, and thus finish the proof of Theorem 5.1.
Proof of Proposition 3.7 and Corollary 3.8.: Let \(\iota_{p}\colon\widehat{\mathcal{P}}(d)_{p}\to\mathcal{P}(d)\) be the natural induced map and define \(\widehat{\mathbb{T}}_{p}(d)_{w}\) to be the subcategory of \(D^{b}(\widehat{\mathcal{P}}(d)_{p})\) classically generated by the image of
\[\iota_{p}^{*}\colon\mathbb{T}(d)_{w}\to D^{b}(\widehat{\mathcal{P}}(d)_{p}).\]
By Theorem 3.4 and Lemma 5.5, we have the semiorthogonal decomposition
\[D^{b}(\widehat{\mathcal{P}}(d)_{p})=\left\langle\bigoplus_{p_{1}+\cdots+p_{k}= p}\boxtimes_{i=1}^{k}\widehat{\mathbb{T}}_{p_{i}}(d_{i})_{w_{i}+(g-1)d_{i}( \sum_{i>j}d_{j}-\sum_{i<j}d_{j})}\right\rangle. \tag{5.22}\]
Therefore it is enough to show that
\[\widehat{\mathbb{T}}_{p}(d)_{w}=\mathbb{T}_{p}(d)_{w}, \tag{5.23}\]
which is the claim of Corollary 3.8.
Let \(\widehat{\mathcal{X}}(d)_{p}\) be the formal fiber at \(p\) of the composition
\[\mathcal{X}(d)\to\mathcal{Y}(d)\to Y(d)=\mathfrak{gl}(d)^{\oplus 2g}/\!\!/GL(d),\]
where the first morphism is the natural projection. It is given by
\[\widehat{\mathcal{X}}(d)_{p}=(\widehat{\operatorname{Ext}}^{1}_{Q^{\circ,d}}(R_{p},R_{p})\times\mathfrak{gl}(d)^{\vee})/G_{p}.\]
We have the Koszul duality equivalence, see Theorem 2.1
\[\Theta_{p}\colon D^{b}(\widehat{\mathcal{P}}(d)_{p})\stackrel{{ \sim}}{{\to}}\operatorname{MF}^{\operatorname{gr}}(\widehat{\mathcal{X}}(d)_ {p},\operatorname{Tr}W). \tag{5.24}\]
We next define categories Koszul equivalent to the two categories in (5.23):
\[\widehat{\mathbb{S}}_{p}^{\operatorname{gr}}(d)_{w}\subset\operatorname{MF}^ {\operatorname{gr}}(\widehat{\mathcal{X}}(d)_{p},\operatorname{Tr}W),\ \mathbb{S}_{p}^{\operatorname{gr}}(d)_{w}\subset\operatorname{MF}^{ \operatorname{gr}}(\widehat{\mathcal{X}}(d)_{p},\operatorname{Tr}W).\]
We define the subcategory \(\widehat{\mathbb{S}}_{p}^{\rm gr}(d)_{w}\) to be classically generated by the image of
\[\mathbb{S}^{\rm gr}(d)_{w}\subset{\rm MF}^{\rm gr}(\mathscr{X}(d),{\rm Tr}\,W) \to{\rm MF}^{\rm gr}(\widehat{\mathscr{X}}(d)_{p},{\rm Tr}\,W).\]
We define the subcategory \(\mathbb{S}_{p}^{\rm gr}(d)_{w}\) to consist of matrix factorizations whose factors are of the form \(W\otimes\mathcal{O}_{\widehat{\mathscr{X}}(d)_{p}}\), where \(W\) is a \(G_{p}\)-representation whose \(T_{p}\)-weights are contained in (5.20). By the equivalence (5.14) and using Lemma 2.2 and Remark 5.7, the equivalence \(\Theta_{p}\) restricts to equivalences
\[\Theta_{p}\colon\widehat{\mathbb{T}}_{p}(d)_{w}\stackrel{{ \sim}}{{\to}}\widehat{\mathbb{S}}_{p}^{\rm gr}(d)_{w},\ \mathbb{T}_{p}(d)_{w}\stackrel{{ \sim}}{{\to}}\mathbb{S}_{p}^{\rm gr}(d)_{w}.\]
It is enough to show that \(\widehat{\mathbb{S}}_{p}^{\rm gr}(d)_{w}=\mathbb{S}_{p}^{\rm gr}(d)_{w}\). By Remark 5.7, it is obvious that \(\widehat{\mathbb{T}}_{p}(d)_{w}\subset\mathbb{T}_{p}(d)_{w}\), hence \(\widehat{\mathbb{S}}_{p}^{\rm gr}(d)_{w}\subset\mathbb{S}_{p}^{\rm gr}(d)_{w}\).
By the semiorthogonal decomposition (5.22) together with the equivalence (5.24), we have the semiorthogonal decomposition
\[{\rm MF}^{\rm gr}(\widehat{\mathscr{X}}(d)_{p},{\rm Tr}\,W)=\left\langle \bigoplus_{p_{1}+\cdots+p_{k}=p}\boxtimes_{i=1}^{k}\widehat{\mathbb{S}}_{p_{ i}}^{\rm gr}(d_{i})_{w_{i}+gd_{i}(\sum_{i>j}d_{j}-\sum_{i<j}d_{j})}\right\rangle \tag{5.25}\]
for \(w_{1}/d_{1}<\cdots<w_{k}/d_{k}\), and each summand is given by the categorical Hall product, see [23, Proposition 3.1] or [24, Lemma 2.4.4, 2.4.7] for the compatibility of the categorical Hall products under Koszul duality. In Lemma 5.8 below, we show that the semiorthogonal summands in (5.25) except \(\widehat{\mathbb{S}}_{p}^{\rm gr}(d)_{w}\) are right orthogonal to \(\mathbb{S}_{p}^{\rm gr}(d)_{w}\). Then by (5.25) we have \(\mathbb{S}_{p}^{\rm gr}(d)_{w}\subset\widehat{\mathbb{S}}_{p}^{\rm gr}(d)_{w}\), hence \(\widehat{\mathbb{S}}_{p}^{\rm gr}(d)_{w}=\mathbb{S}_{p}^{\rm gr}(d)_{w}\).
**Lemma 5.8**.: _The semiorthogonal summands in (5.25) with \(k\geqslant 2\) are right orthogonal to \(\mathbb{S}_{p}^{\rm gr}(d)_{w}\)._
Proof.: The proof is analogous to that of [23, Lemma 3.6]. The inclusion \(T_{p}\subset T(d)\) induces a surjection \(M(d)\twoheadrightarrow M_{p}\). We will regard \(T(d)\)-weights as \(T_{p}\)-weights via this surjection. Let \(\widehat{\mathbf{W}}_{p}(d)_{w}\) be the image of \(\mathbf{W}(d)_{w}\subset M(d)_{\mathbb{R}}\) under the surjection \(M(d)_{\mathbb{R}}\twoheadrightarrow(M_{p})_{\mathbb{R}}\). Recall the decomposition (5.15) and the weights \(\beta_{i}^{(a)}\) for \(1\leqslant a\leqslant m\) and \(1\leqslant i\leqslant\dim W^{(a)}\). Then a weight \(\chi\) in \(\widehat{\mathbf{W}}_{p}(d)_{w}\) is written as
\[\chi=\sum_{i,j,a,b}\alpha_{ij}^{(ab)}(\beta_{i}^{(a)}-\beta_{j}^{(b)})+\frac{ w}{d}\sum_{i,a}r^{(a)}\beta_{i}^{(a)}, \tag{5.26}\]
where the sum above ranges over all \(1\leqslant a,b\leqslant m\), \(1\leqslant i\leqslant\dim W^{(a)}\), \(1\leqslant j\leqslant\dim W^{(b)}\), and where \(|\alpha_{ij}^{(ab)}|\leqslant r^{(a)}r^{(b)}(g+1/2)\). We also note that a choice of \((p_{1},\dots,p_{k})\) corresponds to decompositions for all \(1\leqslant j\leqslant m\):
\[W^{(j)}=W_{1}^{(j)}\oplus\cdots\oplus W_{k}^{(j)}\]
such that \(d_{i}^{(j)}=\dim W_{i}^{(j)}\) satisfies \(d_{i}=d_{i}^{(1)}+\cdots+d_{i}^{(m)}\).
Let \(\lambda\) be the antidominant cocharacter of \(T_{p}\) which acts on the space \(W_{i}^{(j)}\) by weight \((k+1-i)\) for \(1\leqslant j\leqslant m\) and \(1\leqslant i\leqslant k\), and write it as \(\lambda=(\lambda^{(j)})_{1\leqslant j\leqslant m}\), where \(\lambda^{(j)}\) is a cocharacter of the maximal torus of \(GL(W^{(j)})\). We set \(\mathfrak{g}^{(j)}={\rm End}(W^{(j)})\). Consider the diagram of attracting loci
\[\widehat{\mathscr{X}}(d)_{p}^{\lambda}=\times_{i=1}^{k}\widehat{\mathscr{X}}(d _{i})_{p_{i}}\stackrel{{ q}}{{\leftarrow}}\widehat{\mathscr{X}}(d)_{p}^{ \lambda\geqslant 0}\stackrel{{ p}}{{\to}}\widehat{\mathscr{X}}(d)_{p}.\]
Let \(A=\Gamma_{GL(d)}(\chi)\otimes\mathcal{O}_{\widehat{\mathcal{X}}(d)_{p}}\) and \(B=\Gamma_{GL(d)^{\lambda}}(\chi^{\prime})\otimes\mathcal{O}_{\widehat{\mathcal{X}}(d)_{p}^{\lambda}}\) such that
\[\chi+\rho_{p}\in\mathbf{W}_{p}(d)_{w},\ \chi^{\prime}+\sum_{i=1}^{k}\rho_{ pi}\in\bigoplus_{i=1}^{k}\widehat{\mathbf{W}}_{p_{i}}(d_{i})_{w_{i}^{\prime}} \subset\bigoplus_{i=1}^{k}M(d_{i})_{\mathbb{R}}=M(d)_{\mathbb{R}}, \tag{5.27}\]
where \(w=w_{1}+\cdots+w_{k}\), \(w_{1}/d_{1}<\cdots<w_{k}/d_{k}\) and \(w_{i}^{\prime}=w_{i}+gd_{i}(\sum_{i>j}d_{j}-\sum_{j>i}d_{j})\). We write
\[\chi^{\prime}=\sum_{i=1}^{k}(\psi_{i}+w_{i}^{\prime}\tau_{d_{i}}),\,\psi_{i} \in\widehat{\mathbf{W}}_{p_{i}}(d_{i})_{0}. \tag{5.28}\]
By the adjunction, we have
\[\mathrm{Hom}(A,p_{*}q^{*}B)=\mathrm{Hom}(p^{*}A,q^{*}B). \tag{5.29}\]
Let \(\chi^{\prime\prime}\) be a weight of \(\Gamma_{GL(d)}(\chi)\). Below we show that
\[\langle\lambda,\chi^{\prime\prime}\rangle>\langle\lambda,\chi^{\prime}\rangle. \tag{5.30}\]
Then (5.29) vanishes by [Pada, Proposition 4.2] and thus the lemma holds. Let \(\mu\) be the weight:
\[\mu=-\frac{1}{2}\mathfrak{gl}(d)^{\lambda>0}+\frac{1}{2}\sum_{a=1}^{m}( \mathfrak{g}^{(a)})^{\lambda^{(a)}>0}=\sum_{i,j,a<b}\gamma_{ij}^{(ab)}(\beta_{ i}^{(a)}-\beta_{j}^{(b)}) \tag{5.31}\]
where \(1\leqslant a<b\leqslant m\), \(1\leqslant i\leqslant\dim W^{(a)}\), \(1\leqslant j\leqslant\dim W^{(b)}\), and such that \(|\gamma_{ij}^{(ab)}|=r^{(a)}r^{(b)}/2\). To show (5.30), it is enough to show that
\[\langle\lambda,\chi^{\prime\prime}+\rho_{p}+\mu-w\tau_{d}\rangle>\langle \lambda,\chi^{\prime}+\rho_{p}+\mu-w\tau_{d}\rangle. \tag{5.32}\]
By (5.28), we write
\[\chi^{\prime}+\rho_{p}+\mu-w\tau_{d}=\sum_{i=1}^{k}\psi_{i}+\sum_{i=1}^{k}w_{ i}\tau_{d_{i}}-\frac{2g+1}{2}\mathfrak{gl}(d)^{\lambda>0}-w\tau_{d},\]
where \(\psi_{i}\in\widehat{\mathbf{W}}_{p_{i}}(d_{i})_{0}\) for \(1\leqslant i\leqslant k\). In what follows, we write \(\mathfrak{gl}(d)^{\lambda>0}\) instead of \(\det\left(\mathfrak{gl}(d)^{\lambda>0}\right)\) to simplify notation. We compute
\[\left\langle\lambda,\chi^{\prime}+\rho_{p}+\mu-w\tau_{d}\right\rangle =\left\langle\lambda,\sum_{i=1}^{k}\psi_{i}+\sum_{i=1}^{k}w_{i} \tau_{d_{i}}-\frac{2g+1}{2}\mathfrak{gl}(d)^{\lambda>0}-w\tau_{d}\right\rangle\] \[=\sum_{i=1}^{k}(k+1-i)d_{i}\left(\frac{w_{i}}{d_{i}}-\frac{w}{d} \right)-\left\langle\lambda,\frac{2g+1}{2}\mathfrak{gl}(d)^{\lambda>0}\right\rangle.\]
For \(1\leqslant i\leqslant k\), define
\[\tilde{w}_{i}:=d_{i}\left(\frac{w_{i}}{d_{i}}-\frac{w}{d}\right).\]
Then \(\tilde{w}_{1}+\cdots+\tilde{w}_{k}=0\) and \(\tilde{w}_{1}+\cdots+\tilde{w}_{l}<0\) for \(1\leqslant l<k\). Therefore
\[\sum_{i=1}^{k}(k+1-i)d_{i}\left(\frac{w_{i}}{d_{i}}-\frac{w}{d}\right)=\sum_{l =1}^{k}\left(\sum_{i=1}^{l}\tilde{w}_{i}\right)<0.\]
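For completeness, we record why the partial sums above are negative. For \(1\leqslant l<k\) we have
\[\sum_{i=1}^{l}\tilde{w}_{i}=\Big{(}\sum_{i=1}^{l}d_{i}\Big{)}\left(\frac{w_{1}+\cdots+w_{l}}{d_{1}+\cdots+d_{l}}-\frac{w}{d}\right)<0,\]
since \(w_{1}/d_{1}<\cdots<w_{k}/d_{k}\) forces the average slope of the first \(l\) summands to be strictly smaller than the total slope \(w/d\). The identity in the display above is obtained by exchanging the order of summation: \(\sum_{l=1}^{k}\sum_{i=1}^{l}\tilde{w}_{i}=\sum_{i=1}^{k}(k+1-i)\tilde{w}_{i}\).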
It follows that
\[\left\langle\lambda,\chi^{\prime}+\rho_{p}+\mu-w\tau_{d}\right\rangle<-\left \langle\lambda,\frac{2g+1}{2}\mathfrak{gl}(d)^{\lambda>0}\right\rangle. \tag{5.33}\]
On the other hand, by (5.27) and [12, Lemma 2.9], we have
\[\langle\lambda,\chi^{\prime\prime}-w\tau_{d}\rangle\geqslant-\frac{1}{2}n_{ \lambda,p}=-g\langle\lambda,\mathfrak{gl}(d)^{\lambda>0}\rangle.\]
Then
\[\langle\lambda,\chi^{\prime\prime}+\rho_{p}+\mu-w\tau_{d}\rangle\geqslant-g \langle\lambda,\mathfrak{gl}(d)^{\lambda>0}\rangle+\langle\lambda,\rho_{p}+ \mu\rangle=-\left\langle\lambda,\frac{2g+1}{2}\mathfrak{gl}(d)^{\lambda>0} \right\rangle.\]
Therefore we have the inequality (5.32).
## 6. Smoothness and properness of reduced quasi-BPS categories
In this section, we show that the reduced version of the quasi-BPS category is smooth and proper, which gives evidence towards Conjecture 4.13. We first prove the strong generation of quasi-BPS categories. It relies on the strong generation of singular support quotients, which is of independent interest and is proved in Subsection 6.3.
### Strong generation of quasi-BPS categories
In this subsection, we prove the strong generation of the quasi-BPS category \(\mathbb{T}_{S}(v)_{w}\), see Subsection 2.1 for the terminology of strong generation. The strategy is to show that \(\mathbb{T}_{S}(v)_{w}\) is admissible in a singular support quotient category constructed from Joyce-Song pairs on the local Calabi-Yau threefold \(X:=S\times\mathbb{C}\); this quotient category has a strong generator by Theorem 6.11.
Let \(S\) be a smooth projective K3 surface, let \(H\) be an ample divisor on \(S\), and set \(\mathcal{O}(n)=\mathcal{O}_{S}(nH)\). For \(v\in N(S)\), let \(\mathfrak{M}=\mathfrak{M}_{S}^{H}(v)\) be the derived moduli stack of \(H\)-Gieseker semistable sheaves \(F\) on \(S\) with numerical class \(v\). We take \(H\) generic with respect to \(v\). Let \(n\gg 0\) be such that \(H^{i}(F(n))=0\) for all \(i>0\) and all \(H\)-Gieseker semistable sheaves \(F\) with numerical class \(v\). Let \(\mathbb{F}\in D^{b}(S\times\mathfrak{M})\) be the universal sheaf, and consider the following derived stack
\[\mathfrak{M}^{\dagger}:=\operatorname{Spec}_{\mathfrak{M}}\operatorname{Sym} (p_{\mathfrak{M}*}(\mathbb{F}\boxtimes\mathcal{O}(n))^{\vee}),\]
where \(p_{\mathfrak{M}}\colon S\times\mathfrak{M}\to\mathfrak{M}\) is the projection. The stack \(\mathfrak{M}^{\dagger}\) is the derived moduli stack of pairs \((F,s)\), where \(F\) is an \(H\)-Gieseker semistable sheaf on \(S\) with numerical class \(v\) and \(s\in H^{0}(F(n))\).
We consider its \((-1)\)-shifted cotangent space
\[\Omega_{\mathfrak{M}^{\dagger}}[-1]=\operatorname{Spec}_{\mathfrak{M}^{\dagger}} \operatorname{Sym}(\mathbb{T}_{\mathfrak{M}^{\dagger}}[1]).\]
Since the projection \(\mathfrak{M}^{\dagger}\to\mathfrak{M}\) is smooth, we have the isomorphism, see [11, Lemma 3.1.2]:
\[(\Omega_{\mathfrak{M}}[-1]\times_{\mathfrak{M}}\mathfrak{M}^{\dagger})^{ \operatorname{cl}}\stackrel{{\cong}}{{\to}}\Omega_{\mathfrak{M} ^{\dagger}}[-1]^{\operatorname{cl}}.\]
Therefore, \(\Omega_{\mathfrak{M}^{\dagger}}[-1]\) is the derived moduli stack of pairs \((E,s)\), where \(E\) is a compactly supported coherent sheaf on the local K3 surface
\[X:=\operatorname{Tot}(\omega_{S})=S\times\mathbb{C}\stackrel{{ r}}{{\to}}S\]
such that \(r_{*}E\) has numerical class \(v\), and \(s\in H^{0}(E(n))\). Here the pull-back of \(\mathcal{O}(n)\) on \(S\) to \(X\) is also denoted by \(\mathcal{O}(n)\). We recall the definition of Joyce-Song (JS) stable pairs on \(X\):
**Definition 6.1**.: ([12, Definition 5.20]) A pair \((E,s)\) on \(X=S\times\mathbb{C}\) is JS-stable if \(E\) is a compactly supported \(H\)-Gieseker semistable sheaf on \(X\) and \(s\in H^{0}(E(n))\) is a section such that there is no non-trivial exact sequence of framed sheaves
\[0\to(\mathcal{O}_{X}\to E^{\prime}(n))\to(\mathcal{O}_{X}\overset{s}{\to}E(n)) \to(0\to E^{\prime\prime}(n))\to 0, \tag{6.1}\]
where \(E^{\prime}\), \(E^{\prime\prime}\) are \(H\)-Gieseker semistable sheaves with the same reduced Hilbert polynomials.
We denote by
\[\Omega^{\mathrm{JS}}_{\mathfrak{M}^{\dagger}}[-1]\subset\Omega_{\mathfrak{M}^{\dagger}}[-1]\]
the open substack consisting of JS-stable pairs, and we denote by \(\mathcal{Z}^{\mathrm{JS}}\) its complement. It is well-known that \(\Omega^{\mathrm{JS}}_{\mathfrak{M}^{\dagger}}[-1]^{\mathrm{cl}}\) is a quasi-projective scheme, which easily follows from [12, Theorem 5.22] by taking a compactification of \(X\). We set
\[\ell:=\det p_{\mathfrak{M}*}(\mathbb{F}\boxtimes\mathcal{O}(n))\in \operatorname{Pic}(\mathfrak{M}).\]
Its pull-back to \(\Omega_{\mathfrak{M}^{\dagger}}[-1]\) is also denoted by \(\ell\). We denote by \(\Omega^{\ell\text{-ss}}_{\mathfrak{M}^{\dagger}}[-1]\) the stack of \(\ell\)-semistable points in \(\Omega_{\mathfrak{M}^{\dagger}}[-1]^{\mathrm{cl}}\).
**Lemma 6.2**.: _We have \(\Omega^{\mathrm{JS}}_{\mathfrak{M}^{\dagger}}[-1]=\Omega^{\ell\text{-ss}}_{ \mathfrak{M}^{\dagger}}[-1]\)._
Proof.: Let \(\mathfrak{M}^{\mathrm{cl}}\to M\) be a good moduli space. It is enough to prove the identity on each fiber at a closed point \(y\in M\) for the composition of the projections
\[\gamma\colon\Omega_{\mathfrak{M}^{\dagger}}[-1]^{\mathrm{cl}}\to\mathfrak{M}^{ \dagger,\mathrm{cl}}\to\mathfrak{M}^{\mathrm{cl}}\to M. \tag{6.2}\]
A point \(y\) corresponds to a polystable sheaf \(\bigoplus_{i=1}^{m}V^{(i)}\otimes F^{(i)}\). Let \((Q_{y}^{\circ,d},\mathcal{I}_{y})\) be the Ext-quiver of \((F^{(1)},\dots,F^{(m)})\) with relation \(\mathcal{I}_{y}\). The quiver \(Q_{y}^{\circ,d}\) is the double of some quiver \(Q_{y}^{\circ}\), see Remark 4.2. Let \((Q_{y},W)\) be the tripled quiver with potential of \(Q_{y}^{\circ}\), see Subsection 3.1.3. Let \(c^{(i)}:=h^{0}(F^{(i)}(n))>0\) and let \(Q_{y}^{\dagger}\) be the quiver obtained by adding a vertex \(\{0\}\) to \(Q_{y}\) and \(c^{(i)}\)-arrows from \(0\) to \(i\) for \(1\leqslant i\leqslant m\). Then a fiber of (6.2) at \(y\) corresponds to nilpotent \(Q_{y}^{\dagger}\)-representations with dimension vector \((1,\boldsymbol{d})\) where \(\boldsymbol{d}=(\dim V^{(i)})_{i=1}^{m}\) and \(1\) is the dimension at the vertex \(\{0\}\):
\[\gamma^{-1}(y)\cong R^{\mathrm{nil}}_{Q_{y}^{\dagger}}(1,\boldsymbol{d})/G( \boldsymbol{d}). \tag{6.3}\]
Also the line bundle \(\ell\) restricted to \(\gamma^{-1}(y)\) corresponds to the character
\[\ell_{y}\colon G(\boldsymbol{d})=\prod_{i=1}^{m}GL(V^{(i)})\to\mathbb{C}^{*}, \ (g_{i})_{i=1}^{m}\mapsto\prod_{i=1}^{m}(\det g_{i})^{c^{(i)}}.\]
By [11, Lemma 5.1.9, 5.1.19], the \(\ell_{y}\)-semistable \(Q_{y}^{\dagger}\)-representations are those generated by the images of the arrows \(0\to i\) with \(1\leqslant i\leqslant m\). The above \(\ell_{y}\)-semistable locus in the right hand side of (6.3) corresponds to pairs \((E,s)\) on \(X\) in \(\gamma^{-1}(y)\) such that \(r_{*}E\) is S-equivalent to \(\bigoplus_{i=1}^{m}V^{(i)}\otimes F^{(i)}\) and there is no exact sequence of the form (6.1), i.e. it is a JS pair. Therefore we obtain the desired identity on \(\gamma^{-1}(y)\).
We set
\[b:=\operatorname{ch}_{2}(p_{\mathfrak{M}*}(\mathbb{F}\boxtimes\mathcal{O}(n) ))\in H^{4}(\mathfrak{M},\mathbb{Q}).\]
Its pull-back to \(\Omega_{\mathfrak{M}^{\dagger}}[-1]\) is also denoted by \(b\). Consider the \(\Theta\)-stratification with respect to \((\ell,b)\), see [11, Theorem 4.1.3]:
\[\Omega_{\mathfrak{M}^{\dagger}}[-1]=\mathcal{S}_{1}\sqcup\dots\sqcup\mathcal{S }_{N}\sqcup\Omega_{\mathfrak{M}^{\dagger}}^{\ell\text{-ss}}[-1].\]
By Theorem 2.3, for each choice of \(m_{\bullet}=(m_{i})_{i=1}^{N}\in\mathbb{R}^{N}\), there is a subcategory \(\mathbb{W}(\mathfrak{M}^{\dagger})_{m_{\bullet}}^{\ell}\subset D^{b}( \mathfrak{M}^{\dagger})\) such that the composition
\[\Phi\colon\mathbb{W}(\mathfrak{M}^{\dagger})_{m_{\bullet}}^{\ell}\subset D^{b }(\mathfrak{M}^{\dagger})\twoheadrightarrow D^{b}(\mathfrak{M}^{\dagger})/ \mathcal{C}_{\mathcal{Z}^{\mathrm{JS}}} \tag{6.4}\]
is an equivalence. Let \(\eta\colon\mathfrak{M}^{\dagger}\to\mathfrak{M}\) be the projection. We have the following lemma:
**Lemma 6.3**.: _Let \(\delta\in\operatorname{Pic}(\mathfrak{M}_{S}^{H}(v))_{\mathbb{R}}\). There exists a choice \(m_{\bullet}\) such that the functor \(\eta^{*}\colon D^{b}(\mathfrak{M})\to D^{b}(\mathfrak{M}^{\dagger})\) restricts to a functor \(\eta^{*}\colon\mathbb{T}_{S}^{H}(v)_{\delta}\to\mathbb{W}(\mathfrak{M}^{ \dagger})_{m_{\bullet}}^{\ell}\)._
Proof.: We use the notation in the proof of Lemma 6.2. For \(y\in M\), let \(\mathcal{X}_{y}(\boldsymbol{d})\) be the moduli stack of \(Q_{y}\)-representations with dimension vector \(\boldsymbol{d}\) and let \(\mathcal{X}_{y}^{\dagger}(\boldsymbol{d})\) be the moduli stack of \(Q_{y}^{\dagger}\)-representations with dimension vector \((1,\boldsymbol{d})\). Let \(\widehat{\mathcal{X}}_{y}^{\dagger}(\boldsymbol{d})\) be the formal fiber of the composition
\[\mathcal{X}_{y}^{\dagger}(\boldsymbol{d})\to\mathcal{X}_{y}(\boldsymbol{d}) \to X_{y}(\boldsymbol{d})\]
at the origin, where the last map is the good moduli space morphism. Let
\[\widehat{\mathcal{X}}_{y}^{\dagger}(\boldsymbol{d})=\widehat{\mathcal{S}}_{1 }\sqcup\dots\sqcup\widehat{\mathcal{S}}_{N}\sqcup\widehat{\mathcal{X}}_{y}^{ \dagger}(\boldsymbol{d})^{\ell_{y}\text{-ss}}\]
be the Kempf-Ness stratification with respect to \((\ell_{y},b_{y})\). For \(1\leqslant i\leqslant N\), consider the center \(\widehat{\mathcal{Z}}_{i}\) of \(\widehat{\mathcal{S}}_{i}\) and its corresponding one parameter subgroup \(\lambda_{i}\) in the maximal torus of \(G(\boldsymbol{d})\). Let \(\widehat{\mathfrak{M}}_{y}^{\dagger}\), \(\widehat{\mathfrak{M}}_{y}\) be the formal fibers along \(\mathfrak{M}^{\dagger}\to M\), \(\mathfrak{M}\to M\) at \(y\) respectively. We have the commutative diagram
(6.5)
Here the horizontal arrows are Koszul duality equivalences in Theorem 2.1, and the vertical arrows are pull-backs along the natural projections.
By [13, Proposition 6.1], there exists a choice \(m_{\bullet}=(m_{i})_{i=1}^{N}\in\mathbb{R}^{N}\) such that an object \(\mathcal{E}\in D^{b}(\mathfrak{M}^{\dagger})\) lies in \(\mathbb{W}(\mathfrak{M}^{\dagger})_{m_{\bullet}}^{\ell}\) if and only if, for any \(y\) as above, we have
\[\operatorname{wt}_{\lambda_{i}}\Theta_{y}^{\dagger}(\mathcal{E}|_{\widehat{ \mathfrak{M}}_{y}^{\dagger}})|_{\widehat{\mathcal{Z}}_{i}}\subset\left[-\frac{1}{2}n_{i} ^{\dagger},\frac{1}{2}n_{i}^{\dagger}\right)+\langle\lambda_{i},\delta_{y}\rangle.\]
Here, the width \(n_{i}^{\dagger}\) is defined by
\[n_{i}^{\dagger}:=\left\langle\lambda_{i},\det\left(\mathbb{L}_{\mathcal{X}_ {y}^{\dagger}(\boldsymbol{d})}^{\lambda_{i}>0}\Big{|}_{0}\right)\right\rangle =n_{i}+\sum_{j=1}^{m}c^{(j)}\left\langle\lambda_{i},\det\left(((V^{(j)})^{\vee})^ {\lambda_{i}>0}\right)\right\rangle\]
and \(n_{i}:=\left\langle\lambda_{i},\det\left(\mathbb{L}_{\mathfrak{X}_{y}( \boldsymbol{d})}^{\lambda_{i}>0}|_{0}\right)\right\rangle\). On the other hand, by the definition of \(\mathbb{T}_{S}^{H}(v)_{\delta}\), for an object \(A\in\mathbb{T}_{S}^{H}(v)_{\delta}\), the \(\lambda_{i}\)-weights of \(\Theta_{y}(A|_{\widehat{\mathfrak{M}}_{y}})|_{\widehat{\mathcal{X}}_{y}( \boldsymbol{d})^{\lambda_{i}}}\) lie in \([-n_{i}/2,n_{i}/2]+\langle\lambda_{i},\delta_{y}\rangle\) for all \(1\leqslant i\leqslant N\). As in [13, Lemma 5.1.9], each \(\lambda_{i}\) has only non-positive weights in each \(V^{(j)}\) for \(1\leqslant j\leqslant m\), hence we have \(n_{i}^{\dagger}>n_{i}\). From the diagram (6.5), we have
\[\Theta_{y}^{\dagger}((\eta^{*}A)|_{\widehat{\mathfrak{M}}_{y}^{\dagger}})\cong \eta^{\prime*}\Theta_{y}(A|_{\mathfrak{M}_{y}}),\]
hence its restriction to \(\widehat{\mathcal{Z}}_{i}\) has \(\lambda_{i}\)-weights in \([-n_{i}^{\dagger}/2,n_{i}^{\dagger}/2)+\langle\lambda_{i},\delta_{y}\rangle\). Therefore we have \(\eta^{*}A\in\mathbb{W}(\mathfrak{M}^{\dagger})_{m_{\bullet}}^{\ell}\).
We prove the following theorem, using the strong generation of singular support quotients in Theorem 6.11 which will be proved in Subsection 6.3:
**Theorem 6.4**.: _The quasi-BPS category \(\mathbb{T}_{S}(v)_{w}\) is regular._
Proof.: By Corollary 4.15, it is enough to show that \(\mathbb{T}_{S}^{H}(v)_{w}\subset D^{b}(\mathfrak{M}_{S}^{H}(v))\) is regular. We consider the following composition
\[F\colon\mathbb{T}_{S}^{H}(v)_{w}\overset{i}{\hookrightarrow}D^{b}( \mathfrak{M})_{w}\overset{\eta^{*}}{\rightarrow}D^{b}(\mathfrak{M}^{\dagger}) \overset{p}{\twoheadrightarrow}D^{b}(\mathfrak{M}^{\dagger})/\mathcal{C}_{ \mathscr{Z}^{\mathscr{JS}}}.\]
Let \(\Phi\) be the window equivalence (6.4) as in Lemma 6.3, and let \(\Phi^{-1}\) be its inverse. Let \(\Psi\colon D^{b}(\mathfrak{M})_{w}\twoheadrightarrow\mathbb{T}_{S}^{H}(v)_{w}\) be the projection with respect to the semiorthogonal decomposition in Theorem 5.1. We also define the following functor
\[G\colon D^{b}(\mathfrak{M}^{\dagger})/\mathcal{C}_{\mathscr{Z}^{\mathscr{JS} }}\overset{\Phi^{-1}}{\rightarrow}\mathbb{W}(\mathfrak{M}^{\dagger})_{m_{ \bullet}}^{\ell}\overset{j}{\hookrightarrow}D^{b}(\mathfrak{M}^{\dagger}) \overset{(\eta_{*})_{w}}{\twoheadrightarrow}D^{b}(\mathfrak{M})_{w} \overset{\Psi}{\twoheadrightarrow}\mathbb{T}_{S}^{H}(v)_{w}.\]
Here, \((\eta_{*})_{w}(-)\) is the weight \(w\)-part of \(\eta_{*}(-)\), which is the projection onto \(D^{b}(\mathfrak{M})_{w}\) with respect to the semiorthogonal decomposition
\[D^{b}(\mathfrak{M}^{\dagger})=\langle\ldots,D^{b}(\mathfrak{M})_{-1},D^{b}( \mathfrak{M})_{0},D^{b}(\mathfrak{M})_{1},\ldots\rangle. \tag{6.6}\]
Every fully-faithful functor in (6.6) is given by the restriction of \(\eta^{*}\) to \(D^{b}(\mathfrak{M})_{w}\). The above semiorthogonal decomposition exists since \(\eta\colon\mathfrak{M}^{\dagger}\rightarrow\mathfrak{M}\) is an affine space bundle such that the cone of \(\mathcal{O}_{\mathfrak{M}}\rightarrow\eta_{*}\mathcal{O}_{\mathfrak{M}^{ \dagger}}\) has strictly negative \(\mathbb{C}^{*}\)-weights, see [11, Amplification 3.18].
Then \(G\circ F\cong\operatorname{id}\). Indeed, we have
\[\Psi\circ(\eta_{*})_{w}\circ\Phi^{-1}\circ p\circ\eta^{*}\circ i\cong\Psi \circ(\eta_{*})_{w}\circ\eta^{*}\circ i\cong\Psi\circ i\cong\operatorname{id }.\]
For the first isomorphism, the image of \(\eta^{*}\circ i\) lies in \(\mathbb{W}(\mathfrak{M}^{\dagger})_{m_{\bullet}}^{\ell}\) by Lemma 6.3, and then \(\Phi^{-1}\circ p\) is the identity on \(\mathbb{W}(\mathfrak{M}^{\dagger})_{m_{\bullet}}^{\ell}\) by the definition of \(\Phi\). The second isomorphism follows since \((\eta_{*})_{w}\circ\eta^{*}\cong\operatorname{id}\). The last isomorphism also holds by the definition of \(\Psi\). By Theorem 6.11 together with the fact that \(\Omega_{\mathfrak{M}^{\dagger}}^{\operatorname{JS}}[-1]\) is a quasi-projective scheme, the category \(D^{b}(\mathfrak{M}^{\dagger})/\mathcal{C}_{\mathscr{Z}^{\mathscr{JS}}}\) is regular, so it is \(\langle\mathcal{E}\rangle^{\star n}\) for some \(\mathcal{E}\in D^{b}(\mathfrak{M}^{\dagger})/\mathcal{C}_{\mathscr{Z}^{ \mathscr{JS}}}\) and \(n\geqslant 1\). Then as \(\operatorname{Im}(F)\subset\langle\mathcal{E}\rangle^{\star n}\) and \(G\circ F\cong\operatorname{id}\), we conclude that \(\mathbb{T}_{S}^{H}(v)_{w}=\langle G(\mathcal{E})\rangle^{\star n}\), hence \(\mathbb{T}_{S}^{H}(v)_{w}\) is regular.
By an analogous argument using window categories of the reduced stack \(\mathfrak{M}^{\dagger,\operatorname{red}}\), we obtain:
**Theorem 6.5**.: _The reduced quasi-BPS category \(\mathbb{T}_{S}(v)_{w}^{\operatorname{red}}\) is regular._
### Properness of reduced quasi-BPS categories
Recall that we write \(v=dv_{0}\) for \(d\in\mathbb{Z}_{\geqslant 1}\) and for \(v_{0}\) primitive with \(\langle v_{0},v_{0}\rangle=2g-2\). Let \(\mathfrak{M}^{\operatorname{red}}=\mathfrak{M}^{\sigma}_{S}(v)^{\operatorname{ red}}\) for a generic \(\sigma\in\operatorname{Stab}(S)\). We consider its \((-1)\)-shifted cotangent space:
\[\Omega_{\mathfrak{M}^{\operatorname{red}}}[-1]\rightarrow\mathfrak{M}^{ \operatorname{red}}.\]
Its classical truncation is identified with the moduli stack of pairs
\[(F,\theta),\ F\in\mathcal{M}^{\sigma}_{S}(v),\ \theta\colon F\to F \tag{6.7}\]
such that \(\operatorname{tr}(\theta)=0\); see [Toda, Lemma 3.4.1] for the non-reduced case, and the proof in the reduced case is similar. Let
\[\mathcal{N}_{\operatorname{nil}}\subset\Omega_{\mathfrak{M}^{\operatorname{red} }}[-1]\]
be the closed substack consisting of pairs (6.7) such that \(\theta\) is nilpotent. The following is the global version of the categorical support lemma.
**Theorem 6.6**.: _Let \(w\in\mathbb{Z}\) be coprime with \(d\) and let \(\mathcal{E}\in\mathbb{T}^{\sigma}_{S}(v)_{w}^{\mathrm{red}}\subset D^{b}( \mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}})\). Then \(\mathrm{Supp}^{\mathrm{sg}}(\mathcal{E})\subset\mathcal{N}_{\mathrm{nil}}\)._
Proof.: It is enough to prove the inclusion \(\mathrm{Supp}^{\mathrm{sg}}(\mathcal{E})\subset\mathcal{N}_{\mathrm{nil}}\) over any point \(y\in M^{\sigma}_{S}(v)\). For simplicity, we write \(\widehat{\mathfrak{M}}^{\mathrm{red}}_{y}=\widehat{\mathfrak{M}}^{\sigma}_{S} (v)_{y}^{\mathrm{red}}\). The equivalence in Lemma 4.3 induces the isomorphism of classical truncations of \((-1)\)-shifted cotangents,
\[\Omega_{\widehat{\mathfrak{M}}^{\mathrm{red}}_{y}}[-1]^{\mathrm{cl}} \overset{\cong}{\to}\Omega_{\widehat{\mathcal{P}}(d)_{p}^{\mathrm{red}}}[-1]^{ \mathrm{cl}}. \tag{6.8}\]
The right hand side is the critical locus of the function
\[\mathrm{Tr}\,W\colon\widehat{\mathfrak{gl}}(d)_{p}^{\oplus 2g}\times\mathfrak{gl}(d)_ {0}\to\mathbb{C}\]
where \(\mathrm{Tr}\,W\) is the function (5.12) associated with the tripled quiver of the \(g\)-loop quiver, see Subsection 2.6.3. Then the isomorphism (6.8) restricts to the isomorphism
\[\mathcal{N}_{\mathrm{nil}}\times_{\mathfrak{M}^{\mathrm{red}}}\widehat{ \mathfrak{M}}^{\mathrm{red}}_{y}\overset{\cong}{\to}\mathrm{Crit}(\mathrm{Tr} \,W)\cap(\widehat{\mathfrak{gl}}(d)_{p}^{\oplus 2g}\times\mathfrak{gl}(d)_{ \mathrm{nil}}).\]
Therefore the theorem follows from Lemma 3.9.
Recall that a pre-triangulated category \(\mathcal{D}\) over \(\mathbb{C}\) is called _proper_ if for any \(\mathcal{E}_{1},\mathcal{E}_{2}\in\mathcal{D}\), the vector space \(\bigoplus_{i\in\mathbb{Z}}\mathrm{Hom}(\mathcal{E}_{1},\mathcal{E}_{2}[i])\) is finite dimensional. We also have the following global analogue of Proposition 3.10:
**Theorem 6.7**.: _If \((d,w)\in\mathbb{N}\times\mathbb{Z}\) are coprime and \(g\geqslant 2\), the category \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) is proper._
Proof.: We regard \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) as a subcategory of \(D^{b}(\mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}})\) for a generic \(\sigma\in\mathrm{Stab}(S)\) via \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}=\mathbb{T}^{\sigma}_{S}(v)_{w}^{\mathrm{ red}}\). For \(\mathcal{E}_{1},\mathcal{E}_{2}\in\mathbb{T}^{\sigma}_{S}(v)_{w}^{\mathrm{red}}\), let
\[\mathcal{H}om(\mathcal{E}_{1},\mathcal{E}_{2})\in D_{\mathrm{qc}}(\mathfrak{M }^{\sigma}_{S}(v)^{\mathrm{red}})\]
be the internal homomorphism, see Subsection 2.6. Recall that \(\mathfrak{M}^{\sigma}_{S}(v)^{\mathrm{red}}=\mathcal{M}^{\sigma}_{S}(v)\) by Lemma 4.4. Let \(\pi\) be the good moduli space morphism from (4.3). Then we have
\[\pi_{*}\mathcal{H}om(\mathcal{E}_{1},\mathcal{E}_{2})\in D^{b}(M^{\sigma}_{S} (v)). \tag{6.9}\]
Indeed, the statement (6.9) is local on \(M^{\sigma}_{S}(v)\), hence it follows from Proposition 3.10 and Lemma 4.3. Then the theorem holds as
\[\mathrm{Hom}^{*}(\mathcal{E}_{1},\mathcal{E}_{2})=R^{*}\Gamma(\pi_{*} \mathcal{H}om(\mathcal{E}_{1},\mathcal{E}_{2}))\]
and \(M^{\sigma}_{S}(v)\) is a proper algebraic space.
**Corollary 6.8**.: _If \((d,w)\in\mathbb{N}\times\mathbb{Z}\) are coprime and \(g\geqslant 2\), then \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) is proper and smooth._
Proof.: By Theorem 6.7 and Theorem 6.5, the category \(\mathbb{T}_{S}(v)_{w}^{\mathrm{red}}\) is proper and regular if \(\gcd(d,w)=1\). Then it is also proper and smooth by [16, Theorem 3.18].
### Strong generation of singular support quotients
In this subsection, we prove Theorem 6.11 on strong generation of singular support quotients, which was used in the proof of Theorem 6.4.
Let \(\mathfrak{M}\) be a quasi-smooth derived stack of finite type over \(\mathbb{C}\) such that its classical truncation \(\mathcal{M}=\mathfrak{M}^{\mathrm{cl}}\) admits a good moduli space \(\mathcal{M}\to M\) which is quasi-separated. Note that \(M\) is quasi-compact by the assumption on \(\mathfrak{M}\).
We denote by \(\mathrm{Et}/M\) the category whose objects are pairs \((U,\rho)\), where \(U\) is a \(\mathbb{C}\)-scheme and \(\rho\colon U\to M\) is an etale morphism. The set of morphisms \((U^{\prime},\rho^{\prime})\to(U,\rho)\) consists of etale morphisms \(U^{\prime}\to U\) commuting with \(\rho\) and \(\rho^{\prime}\).
For a closed subscheme \(Z\subset U\), an etale morphism \(f\colon U^{\prime}\to U\) is called an _etale neighborhood_ of \(Z\) if \(f^{-1}(Z)\to Z\) is an isomorphism.
We will use the following result in the proof of Theorem 6.11:
**Theorem 6.9**.: ([12, Theorem D]) _Let \(\mathbf{D}\subset\mathrm{Et}/M\) be the subcategory satisfying the following conditions:_
1. _If_ \((U\to M)\in\mathbf{D}\) _and_ \((U^{\prime}\to U)\) _is a morphism in_ \(\mathrm{Et}/M\)_, then_ \((U^{\prime}\to M)\in\mathbf{D}\)_._
2. _If_ \((U^{\prime}\to M)\in\mathbf{D}\) _and_ \((U^{\prime}\to U)\) _is a morphism in_ \(\mathrm{Et}/M\) _which is finite and surjective, then_ \((U\to M)\in\mathbf{D}\)_._
3. _If_ \((j\colon U^{\circ}\to U)\) _and_ \((f\colon W\to U)\) _are morphisms in_ \(\mathrm{Et}/M\) _such that_ \(j\) _is an open immersion and_ \(f\) _is an etale neighborhood of_ \(U\setminus U^{\circ}\)_, and_ \((U^{\circ}\to M)\in\mathbf{D}\) _and_ \((W\to M)\in\mathbf{D}\)_, then_ \((U\to M)\in\mathbf{D}\)_._
_If there is \((g\colon M^{\prime}\to M)\in\mathbf{D}\) such that \(g\) is surjective, then \((\mathrm{id}\colon M\to M)\in\mathbf{D}\)._
For each object \((U\to M)\in\mathrm{Et}/M\), let \(\mathcal{M}_{U}\to U\) be the pull-back of \(\mathcal{M}\to M\) by \(U\to M\). There is a derived stack \(\mathfrak{M}_{U}\), unique up to equivalence, such that for each morphism \(\rho\colon U^{\prime}\to U\) in \(\mathrm{Et}/M\) there is an induced diagram, see Subsection 2.6
(6.10)
For each \(y\in M\), there is \(\rho\colon U\to M\) in \(\mathrm{Et}/M\) whose image contains \(y\) such that \(\mathfrak{M}_{U}\) is equivalent to a Koszul stack
\[\mathfrak{M}_{U}\simeq s^{-1}(0)/G \tag{6.11}\]
for some \((Y,V,s,G)\), where \(Y\) is a smooth scheme with an action of a reductive algebraic group \(G\), \(V\to Y\) is a \(G\)-equivariant vector bundle with a \(G\)-invariant section \(s\) and \(s^{-1}(0)\) is the derived zero locus of \(s\), see Subsection 2.6.
For \(\ell\in\mathrm{Pic}(\mathfrak{M})_{\mathbb{R}}\) and \((U\to M)\in\mathrm{Et}/M\), consider the \(\ell\)-semistable locus
\[\Omega_{\mathfrak{M}_{U}}^{\ell\text{-ss}}[-1]^{\mathrm{cl}}\subset\Omega_{ \mathfrak{M}_{U}}[-1]^{\mathrm{cl}}. \tag{6.12}\]
We denote by \(\mathcal{Z}_{U}\) the complement of the open immersion (6.12), which is a conical closed substack. Let \(\mathcal{C}_{\mathcal{Z}_{U}}\subset D^{b}(\mathfrak{M}_{U})\) be the subcategory of objects with singular supports contained in \(\mathcal{Z}_{U}\).
**Lemma 6.10**.: _Suppose that the open substack (6.12) is an algebraic space. Then for a Koszul stack as in (6.11), the category \(D^{b}(\mathfrak{M}_{U})/\mathcal{C}_{\mathcal{Z}_{U}}\) is regular. In particular, there is a compact generator \(\mathcal{E}_{U}\in D^{b}(\mathfrak{M}_{U})/\mathcal{C}_{\mathcal{Z}_{U}}\)._
Proof.: By the Koszul duality equivalence in Theorem 2.1, we have the equivalence
\[D^{b}(\mathfrak{M}_{U})\stackrel{{\sim}}{{\to}}\operatorname{MF}^{ \operatorname{gr}}(V^{\vee}/G,f). \tag{6.13}\]
The above equivalence descends to an equivalence, see [11, Proposition 2.3.9]
\[D^{b}(\mathfrak{M}_{U})/\mathcal{C}_{\mathcal{Z}_{U}}\stackrel{{ \sim}}{{\to}}\operatorname{MF}^{\operatorname{gr}}((V^{\vee}/G) \setminus\mathcal{Z}_{U},f).\]
Note that we have
\[(\Omega_{\mathfrak{M}}[-1]^{\operatorname{cl}}\setminus\mathcal{Z})\times_{ \mathcal{M}}U=(\operatorname{Crit}(f)/G)\setminus\mathcal{Z}_{U}, \tag{6.14}\]
hence the right hand side is an algebraic space by the assumption. Let \((V^{\vee})^{\operatorname{free}}\subset(V^{\vee})^{\ell\text{-ss}}\) be the \((G\times\mathbb{C}^{*})\)-invariant open subspace of \(\ell\)-semistable points with free closed \(G\)-orbits. Then (6.14) is a closed substack of \(Y:=(V^{\vee})^{\operatorname{free}}/G\). Since the category of matrix factorizations depends only on an open neighborhood of the critical locus, there is an equivalence
\[\operatorname{MF}^{\operatorname{gr}}((V^{\vee}/G)\setminus\mathcal{Z}_{U},f) \stackrel{{\sim}}{{\to}}\operatorname{MF}^{\operatorname{gr}}(Y,f).\]
Note that \(Y\) is quasi-projective since it is an open subset of the quasi-projective good moduli space \((V^{\vee})^{\ell\text{-ss}}/\!\!/G\). The category \(\operatorname{MF}^{\operatorname{gr}}(Y,f)\) is proven to be smooth in [10, Lemma 2.11, Remark 2.12], hence it is regular.
The main result of this subsection is the following strong generation of singular support quotient:
**Theorem 6.11**.: _Let \(\mathfrak{M}\) be a quasi-smooth derived stack of finite type over \(\mathbb{C}\) with a good moduli space_
\[\mathfrak{M}^{\operatorname{cl}}\to M,\]
_where \(M\) is a quasi-separated algebraic space. For \(\ell\in\operatorname{Pic}(\mathfrak{M})_{\mathbb{R}}\), suppose that \(\Omega_{\mathfrak{M}}^{\ell\text{-ss}}[-1]^{\operatorname{cl}}\) is an algebraic space. Let \(\mathcal{Z}=\Omega_{\mathfrak{M}}[-1]^{\operatorname{cl}}\setminus\Omega_{ \mathfrak{M}}^{\ell\text{-ss}}[-1]^{\operatorname{cl}}\). Then the quotient category \(D^{b}(\mathfrak{M})/\mathcal{C}_{\mathcal{Z}}\) is regular._
Proof.: For \((U\to M)\in\operatorname{Et}/M\), we define
\[\mathcal{T}_{U}=D^{b}(\mathfrak{M}_{U})/\mathcal{C}_{\mathcal{Z}_{U}},\ \operatorname{Ind} \mathcal{T}_{U}=\operatorname{Ind}D^{b}(\mathfrak{M}_{U})/\operatorname{Ind} \mathcal{C}_{\mathcal{Z}_{U}}. \tag{6.15}\]
By the diagram (6.10), there is an adjoint pair:
\[\rho^{*}\colon\operatorname{Ind}\mathcal{T}_{U}\rightleftarrows\operatorname{Ind} \mathcal{T}_{U^{\prime}}\colon\rho_{*},\quad\rho^{*}\dashv\rho_{*}.\]
Then \(U\mapsto\operatorname{Ind}\mathcal{T}_{U}\) is an \(\operatorname{Et}/M\)-pre-triangulated category with adjoints, see [11, Section 5].
Let \(\mathbf{D}^{\operatorname{st}}\subset\operatorname{Et}/M\) be the full subcategory of \((U\to M)\) such that \(\mathcal{T}_{U}\) is regular. The condition \((U\to M)\in\mathbf{D}^{\operatorname{st}}\) is equivalent to \(\mathcal{T}_{U}=\langle\mathcal{E}_{U}\rangle^{\star n}\) for some \(\mathcal{E}_{U}\in\mathcal{T}_{U}\) and \(n\geqslant 1\). On the other hand, it is proved in [11, Proposition 3.2.7, Section 7.2] that the category \(\operatorname{Ind}\mathcal{T}_{U}\) defined in (6.15) is the ind-completion of \(\mathcal{T}_{U}\), with compact objects the idempotent closure of \(\mathcal{T}_{U}\). Therefore by [14, Proposition 1.9], the condition \(\mathcal{T}_{U}=\langle\mathcal{E}_{U}\rangle^{\star n}\) is equivalent to \(\operatorname{Ind}\mathcal{T}_{U}=\langle\!\langle\mathcal{E}_{U}\rangle\!\rangle^{\star n}\) for some \(n\geqslant 1\). By Lemma 6.10, there exists \((M^{\prime}\to M)\in\mathbf{D}^{\operatorname{st}}\) which is surjective. By Theorem 6.9, it is enough to check the conditions (i), (ii) and (iii) for the subcategory \(\mathbf{D}^{\operatorname{st}}\subset\operatorname{Et}/M\).
To show condition (i), consider a morphism \((\rho\colon U^{\prime}\to U)\) in \(\operatorname{Et}/M\). Suppose that \(\operatorname{Ind}\mathcal{T}_{U}=\langle\!\langle\mathcal{E}_{U}\rangle\! \rangle^{\star n}\). For each \(A\in\operatorname{Ind}\mathcal{T}_{U^{\prime}}\), there is a natural morphism \(\rho^{*}\rho_{*}A\to A\)
and \(\rho_{*}A\in\langle\!\langle\mathcal{E}_{U}\rangle\!\rangle^{*n}\) by the assumption. Since \(U^{\prime}\times_{U}U^{\prime}\to U^{\prime}\) admits a section given by the diagonal, we have a decomposition into open and closed subsets
\[U^{\prime}\times_{U}U^{\prime}=U^{\prime}\sqcup U^{\prime\prime}.\]
Then, by the base change for \(U^{\prime}\to U\gets U^{\prime}\), the morphism \(\rho^{*}\rho_{*}A\to A\) splits, hence \(A\in\langle\!\langle\rho^{*}\mathcal{E}_{U}\rangle\!\rangle^{*n}\). Therefore \(\operatorname{Ind}\mathcal{T}_{U^{\prime}}=\langle\!\langle\mathcal{E}_{U^{ \prime}}\rangle\!\rangle^{*n}\) for \(\mathcal{E}_{U^{\prime}}=\rho^{*}\mathcal{E}_{U}\) and \((U^{\prime}\to M)\in\mathbf{D}^{\mathrm{st}}\) holds.
To show condition (ii), let \((\rho\colon U^{\prime}\to U)\) be a morphism in \(\operatorname{Et}/M\) such that \(\rho\) is finite surjective. Assume that \((U^{\prime}\to M)\in\mathbf{D}^{\mathrm{st}}\), so \(\operatorname{Ind}\mathcal{T}_{U^{\prime}}=\langle\!\langle\mathcal{E}_{U^{ \prime}}\rangle\!\rangle^{*n}\) for some \(\mathcal{E}_{U^{\prime}}\in\mathcal{T}_{U^{\prime}}\) and \(n\geqslant 1\). For \(A\in\operatorname{Ind}\mathcal{T}_{U}\), let \(A\to\rho_{*}\rho^{*}A=A\otimes\rho_{*}\mathcal{O}_{\mathfrak{M}_{U}}\) be the natural morphism. The induced map \(\rho\colon\mathfrak{M}_{U^{\prime}}\to\mathfrak{M}_{U}\) is also finite and surjective, and \(\mathcal{O}_{\mathfrak{M}_{U}}\to\rho_{*}\mathcal{O}_{\mathfrak{M}_{U^{\prime}}}\) splits. In fact, we have \(\rho_{*}=\rho_{!}\) as \(\rho\) is finite etale, and the natural map \(\rho_{!}\mathcal{O}_{\mathfrak{M}_{U^{\prime}}}\to\mathcal{O}_{\mathfrak{M}_{U}}\) gives a splitting. Therefore \(A\) is a direct summand of \(\rho_{*}\rho^{*}A\). As \(\rho^{*}A\in\langle\!\langle\mathcal{E}_{U^{\prime}}\rangle\!\rangle^{*n}\), we have \(A\in\langle\!\langle\rho_{*}\mathcal{E}_{U^{\prime}}\rangle\!\rangle^{*n}\). Since \(\rho\) is finite, we have \(\rho_{*}\mathcal{E}_{U^{\prime}}\in\mathcal{T}_{U}\). Then by setting \(\mathcal{E}_{U}=\rho_{*}\mathcal{E}_{U^{\prime}}\), we have \(A\in\langle\!\langle\mathcal{E}_{U}\rangle\!\rangle^{*n}\), hence \(\operatorname{Ind}\mathcal{T}_{U}=\langle\!\langle\mathcal{E}_{U}\rangle\! \rangle^{*n}\) and \((U\to M)\in\mathbf{D}^{\mathrm{st}}\) holds.
To show condition (iii), let \((j\colon U_{\circ}\to U)\) and \((f\colon W\to U)\) be morphisms in \(\operatorname{Et}/M\) such that \(j\) is an open immersion and \(f\) is an etale neighborhood of \(U\setminus U_{\circ}\). Suppose that \(\operatorname{Ind}\mathcal{T}_{U_{\circ}}=\langle\!\langle\mathcal{E}_{U_{\circ }}\rangle\!\rangle^{*n}\) and \(\operatorname{Ind}\mathcal{T}_{W}=\langle\!\langle\mathcal{E}_{W}\rangle\! \rangle^{*n}\) for some \(n\geqslant 1\) and \(\mathcal{E}_{W}\in\mathcal{T}_{W}\), \(\mathcal{E}_{U_{\circ}}\in\mathcal{T}_{U_{\circ}}\). For an object \(A\in\operatorname{Ind}\mathcal{T}_{U}\), there is a distinguished triangle in \(\operatorname{Ind}\mathcal{T}_{U}\), see [17, Lemma 5.9]:
\[A\to j_{*}j^{*}A\oplus f_{*}f^{*}A\to f_{*}f^{*}j_{*}j^{*}A\to A[1]. \tag{6.16}\]
We have \(j_{*}j^{*}A\in\langle\!\langle j_{*}\mathcal{E}_{U_{\circ}}\rangle\!\rangle^{*n}\), \(f_{*}f^{*}A\in\langle\!\langle f_{*}\mathcal{E}_{W}\rangle\!\rangle^{*n}\) and \(f_{*}f^{*}j_{*}j^{*}A\in\langle\!\langle f_{*}\mathcal{E}_{W}\rangle\!\rangle^{*n}\). By Lemma 6.12, there exists \(\mathcal{E}_{U}\in\mathcal{T}_{U}\) such that \(j_{*}\mathcal{E}_{U_{\circ}}\) and \(f_{*}\mathcal{E}_{W}\) are objects in \(\langle\!\langle\mathcal{E}_{U}\rangle\!\rangle^{*m}\) for some \(m\geqslant 1\). Then we have \(j_{*}j^{*}A\in\langle\!\langle\mathcal{E}_{U}\rangle\!\rangle^{*nm}\), \(f_{*}f^{*}A\in\langle\!\langle\mathcal{E}_{U}\rangle\!\rangle^{*nm}\) and \(f_{*}f^{*}j_{*}j^{*}A\in\langle\!\langle\mathcal{E}_{U}\rangle\!\rangle^{*nm}\). From the triangle (6.16), we conclude that \(\operatorname{Ind}\mathcal{T}_{U}=\langle\!\langle\mathcal{E}_{U}\rangle\!\rangle^{*nm+1}\), therefore \((U\to M)\in\mathbf{D}^{\mathrm{st}}\).
We have used the following lemma:
**Lemma 6.12**.: _Let \(f\colon U^{\prime}\to U\) be a morphism in \(\operatorname{Et}/M\). Then for any object \(P\in D^{b}(\mathfrak{M}_{U^{\prime}})\), there is \(Q\in D^{b}(\mathfrak{M}_{U})\) and \(m\geqslant 1\) such that \(f_{*}P\in\langle\!\langle Q\rangle\!\rangle^{*m}\) in \(\operatorname{Ind}D^{b}(\mathfrak{M}_{U})\)._
Proof.: Since \(P\) is a finite extension of objects from the image of the pushforward functor \(D^{b}(\mathcal{M}_{U^{\prime}})\to D^{b}(\mathfrak{M}_{U^{\prime}})\), we may assume that \(P\in D^{b}(\mathcal{M}_{U^{\prime}})\). It suffices to find \(Q\in D^{b}(\mathcal{M}_{U})\) and \(m\geqslant 1\) such that \(f_{*}P\in\langle\!\langle Q\rangle\!\rangle^{*m}\) in \(\operatorname{Ind}D^{b}(\mathcal{M}_{U})\). By Nagata compactification, there is a factorization
\[f\colon U^{\prime}\overset{j}{\hookrightarrow}\overline{U}\overset{g}{\to}U,\]
where \(j\) is an open immersion and \(g\) is proper. There is an object \(\overline{P}\in D^{b}(\mathcal{M}_{\overline{U}})\) such that \(j^{*}\overline{P}\cong P\). Then \(j_{*}P\cong\overline{P}\otimes_{\mathcal{O}_{\overline{U}}}j_{*}\mathcal{O}_{U^ {\prime}}\), where \(j_{*}\mathcal{O}_{U^{\prime}}\in D_{\mathrm{qc}}(\overline{U})\) and the tensor product is given by the action of \(D_{\mathrm{qc}}(\overline{U})\) on \(\operatorname{Ind}D^{b}(\mathcal{M}_{\overline{U}})\). By [10, Theorem 6.2], there is \(B\in\operatorname{Perf}(\overline{U})\) such that \(j_{*}\mathcal{O}_{U^{\prime}}\in\langle\!\langle B\rangle\!\rangle^{*m}\) for some \(m\geqslant 1\) in \(D_{\mathrm{qc}}(\overline{U})\). Then \(j_{*}P\in\langle\!\langle\overline{P}\otimes_{\mathcal{O}_{\overline{U}}}B\rangle \!\rangle^{*m}\), hence \(f_{*}P\in\langle\!\langle Q\rangle\!\rangle^{*m}\) for \(Q=g_{*}(\overline{P}\otimes_{\mathcal{O}_{\overline{U}}}B)\in D^{b}(\mathcal{M}_{U})\).
## 7. Serre functor for reduced quasi-BPS categories
In this section, we show that the reduced quasi-BPS categories admit an etale locally trivial Serre functor, which gives further evidence towards Conjecture 4.13.
### Serre functor
Recall that we write \(v=dv_{0}\) for \(d\in\mathbb{Z}_{\geqslant 1}\) and a primitive Mukai vector \(v_{0}\) with \(\langle v_{0},v_{0}\rangle=2g-2\). We assume \(g\geqslant 2\). Consider a generic stability \(\sigma\in\operatorname{Stab}(S)\). Recall that the derived stack \(\mathfrak{M}_{\mathcal{S}}^{\sigma}(v)^{\operatorname{red}}\) is equivalent to its classical truncation \(\mathcal{M}=\mathcal{M}_{\mathcal{S}}^{\sigma}(v)\) by Lemma 4.4. Let \(w\in\mathbb{Z}\) such that \(\gcd(d,w)=1\), and consider the quasi-BPS category
\[\mathbb{T}=\mathbb{T}_{S}^{\sigma}(v)_{w}^{\operatorname{red}}\subset D^{b}( \mathcal{M}).\]
We recall some terminology from [10]. Let \(\mathcal{T}\) be a \(\mathbb{C}\)-linear pre-triangulated category. A contravariant functor \(F\colon\mathcal{T}\to\operatorname{Vect}(\mathbb{C})\) is called _of finite type_ if \(\oplus_{i\in\mathbb{Z}}F(A[i])\) is finite dimensional for all objects \(A\) of \(\mathcal{T}\). The category \(\mathcal{T}\) is called _saturated_ if every contravariant functor \(H\colon\mathcal{T}\to\operatorname{Vect}(\mathbb{C})\) of finite type is representable.
By Corollary 6.8 and [10, Theorem 1.3], the category \(\mathbb{T}\) is saturated, and thus it admits a Serre functor
\[S_{\mathbb{T}}\colon\mathbb{T}\to\mathbb{T}\]
i.e. a functor such that there are functorial isomorphisms for \(\mathcal{E}_{1},\mathcal{E}_{2}\in\mathbb{T}\):
\[\operatorname{Hom}(\mathcal{E}_{1},\mathcal{E}_{2})\cong\operatorname{Hom}( \mathcal{E}_{2},S_{\mathbb{T}}(\mathcal{E}_{1}))^{\vee}.\]
There is also a version of the Serre functor relative to the good moduli space \(\pi\colon\mathcal{M}\to M\). For \(\mathcal{E}_{1},\mathcal{E}_{2}\in\mathbb{T}\), let \(\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2})\in D_{\operatorname {qc}}(\mathcal{M})\) be its internal homomorphism. Then a functor \(S_{\mathbb{T}/M}\colon\mathbb{T}\to\mathbb{T}\) is called a _relative Serre functor_ if there are functorial isomorphisms in \(D^{b}(M)\):
\[\mathcal{H}om_{M}(\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{ E}_{2}),\mathcal{O}_{M})\cong\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{2},S _{\mathbb{T}/M}(\mathcal{E}_{1})). \tag{7.1}\]
**Remark 7.1**.: We note that \(M\) has at worst Gorenstein singularities. The result is most probably well-known, but we did not find a reference. The statement follows from Lemma 4.3 and [21, Lemma 5.7]. Thus \(\mathcal{H}om(-,\mathcal{O}_{M})\) is an equivalence
\[\mathcal{H}om(-,\mathcal{O}_{M})\colon D^{b}(M)\stackrel{{ \sim}}{{\to}}D^{b}(M)^{\operatorname{op}}.\]
Moreover, the dualizing complex is \(\omega_{M}=\mathcal{O}_{M}[\dim M]\), since the singular locus of \(M\) has codimension at least two and there is a holomorphic symplectic form on the smooth part.
**Remark 7.2**.: The category \(\mathbb{T}\) is proper over \(M\), i.e. \(\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1},\mathcal{E}_{2})\in D^{b}(M)\), and it is strongly generated. Thus the relative Serre functor also exists, and is constructed as follows. Let \(\mathcal{E}\in\mathbb{T}\) be a strong generator and consider the sheaf of dg-algebras on \(M\):
\[\mathcal{A}=\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E},\mathcal{E}).\]
Then \(\mathbb{T}\) is equivalent to the derived category of coherent right dg-\(\mathcal{A}\)-modules. Under the above equivalence, the relative Serre functor is given by the \(\mathcal{A}^{\operatorname{op}}\otimes_{\mathcal{O}_{M}}\mathcal{A}\)-module \(\mathcal{H}om_{M}(\mathcal{A},\mathcal{O}_{M})\).
The absolute and the relative Serre functors are related as follows:
**Lemma 7.3**.: _We have \(S_{\mathbb{T}}=S_{\mathbb{T}/M}[\dim M]\)._
Proof.: By taking the global sections of (7.1), we obtain
\[\operatorname{Hom}_{M}(\pi_{*}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_{1}, \mathcal{E}_{2}),\mathcal{O}_{M})\cong\operatorname{Hom}(\mathcal{E}_{2},S_{ \mathbb{T}/M}(\mathcal{E}_{1})).\]
By the Serre duality for \(M\) and using that \(\omega_{M}=\mathcal{O}_{M}[\dim M]\) from Remark 7.1, the left hand side is isomorphic to
\[\operatorname{Hom}_{M}(\mathcal{O}_{M},\pi_{*}\mathcal{H}om_{\mathbb{T}}( \mathcal{E}_{1},\mathcal{E}_{2})[\dim M])^{\vee}=\operatorname{Hom}(\mathcal{ E}_{1},\mathcal{E}_{2}[\dim M])^{\vee}.\]
Combining the two displayed isomorphisms, we obtain \(\operatorname{Hom}(\mathcal{E}_{2},S_{\mathbb{T}/M}(\mathcal{E}_{1}))\cong\operatorname{Hom}(\mathcal{E}_{1},\mathcal{E}_{2}[\dim M])^{\vee}\), so \(S_{\mathbb{T}/M}[\dim M]\) satisfies the defining property of the Serre functor, and the lemma follows from the uniqueness of \(S_{\mathbb{T}}\).
We believe that \(S_{\mathbb{T}}\) is isomorphic to the shift functor \([\dim M]\) (see the discussion in Subsection 1.2); this would reinforce the analogy between reduced quasi-BPS categories and hyperkahler varieties, see Conjecture 4.13. The main result in this section is the following weaker form of this expectation, which we prove in Subsection 7.4:
**Theorem 7.4**.: _The Serre functor \(S_{\mathbb{T}}\) is isomorphic to the shift functor \([\dim M]\) etale locally on \(M\), i.e. there is an etale cover \(U\to M\) such that for each \(\mathcal{E}\in\mathbb{T}\) we have \(S_{\mathbb{T}}(\mathcal{E})|_{U}\cong\mathcal{E}|_{U}[\dim M]\)._
### Construction of the trace map
In this subsection, we construct a trace map for objects with nilpotent singular supports in a general setting. The construction here is used in the proof of Theorem 7.4.
Let \(G\) be a reductive algebraic group which acts on a smooth affine variety \(Y\). We assume that there is a one-dimensional subtorus \(\mathbb{C}^{*}\subset G\) which acts on \(Y\) trivially, so the \(G\)-action on \(Y\) factors through the action of \(\mathbb{P}(G):=G/\mathbb{C}^{*}\). We say that \(Y\) is _unimodular_ if \(\det\Omega_{Y}\) is trivial as a \(G\)-equivariant line bundle. We also say that the action of \(\mathbb{P}(G)\) on \(Y\) is _generic_ if the subset \(Y^{s}\subset Y\) of points with closed \(\mathbb{P}(G)\)-orbit and trivial stabilizer is non-empty and \(\operatorname{codim}(Y\setminus Y^{s})\geqslant 2\).
**Lemma 7.5**.: ([11, Corollary 2]) _If \(Y\) is unimodular and generic, then \(Y/\!\!/G\) has only Gorenstein singularities and its canonical module is trivial._
Let \(Y\) be unimodular and generic. By Lemma 7.5, the quotient \(Y/\!\!/G\) is Gorenstein and its dualizing complex is
\[\omega_{Y/\!\!/G}=\mathcal{O}_{Y/\!\!/G}[\dim Y/\!\!/G]. \tag{7.2}\]
Let \(V\to Y\) be a \(G\)-equivariant vector bundle with a \(G\)-invariant regular section \(s\) such that \(V\) is also unimodular and generic. We refer to such a choice of \(G\), \(Y\), \(V\), and \(s\) as _a good data_ \((G,Y,V,s)\).
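A basic instance of this notion, which appears later in the proof of Theorem 7.4, is
\[(G,Y,V,s)=\big(GL(d),\,\mathfrak{gl}(d)^{\oplus 2g},\,\mathfrak{gl}(d)_{0},\,\mu_{0}\big),\]
where \(GL(d)\) acts by conjugation on each factor and \(\mu_{0}\) is the moment map (3.19).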
Let \(\mathcal{U}:=s^{-1}(0)\) be the zero locus of \(s\), which is equivalent to the derived zero locus as we assumed that \(s\) is regular. We have the following diagram
[Diagram (7.3): the stacks \(\mathcal{U}/G\), \(Y/G\) and \(V^{\vee}/G\) together with their good moduli spaces; the horizontal maps are the inclusion \(j\), the zero section \(0\) and the projection \(\eta\), the vertical maps are the good moduli space morphisms, and the bottom row consists of the induced maps \(\overline{j},\overline{0},\overline{\eta}\).]
Here \(0\colon Y/G\to V^{\vee}/G\) is the zero section, \(\eta\) is the projection, and the bottom horizontal arrows are induced maps on good moduli spaces.
Recall the Koszul duality equivalence in Theorem 2.1
\[\Theta\colon D^{b}(\mathcal{U}/G)\stackrel{{\sim}}{{\to}} \operatorname{MF}^{\operatorname{gr}}(V^{\vee}/G,f).\]
For \(\mathcal{E}\in D^{b}(\mathcal{U}/G)\), let \(\mathcal{P}=\Theta(\mathcal{E})\). Then we have the following isomorphism in \(D_{\operatorname{qc}}(Y/G)\), see [21, Lemma 2.7]:
\[j_{*}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E},\mathcal{E})\stackrel{{ \cong}}{{\to}}\eta_{*}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P}).\]
Here \(\mathcal{H}om_{V^{\vee}/G}(\mathcal{P},\mathcal{P})\) is the internal homomorphism of matrix factorizations, which is an object in \(\mathrm{MF}^{\mathrm{gr}}(V^{\vee}/G,0)\). As \(V^{\vee}/G\) is smooth, by taking a resolution of \(\mathcal{P}\) by vector bundles, we obtain the natural trace map in \(\mathrm{MF}^{\mathrm{gr}}(V^{\vee}/G,0)\):
\[\mathrm{tr}\colon\mathcal{H}om_{V^{\vee}/G}(\mathcal{P},\mathcal{P})\to \mathcal{O}_{V^{\vee}/G}.\]
By taking \(\pi_{V^{\vee}*}\), we obtain the morphism in \(D^{\mathrm{gr}}(V^{\vee}/\!\!/G)\):
\[\pi_{V^{\vee}*}\mathrm{tr}\colon\pi_{V^{\vee}*}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P},\mathcal{P})\to\mathcal{O}_{V^{\vee}/\!\!/G} \tag{7.4}\]
Here the grading on \(V^{\vee}/\!\!/G\) is induced by the fiberwise weight two \(\mathbb{C}^{*}\)-action on \(V^{\vee}/\!\!/G\to Y/\!\!/G\), see Subsection 2.3 for the graded category \(D^{\mathrm{gr}}(V^{\vee}/\!\!/G)\).
We say that \(\mathcal{P}\) has _nilpotent support_ if:
\[\mathrm{Supp}(\mathcal{P})\subset\pi_{V^{\vee}}^{-1}(\mathrm{Im}(\overline{0} )).\]
We say \(\mathcal{E}\) has _nilpotent singular support_ with respect to \((G,Y,V,s)\) if \(\mathcal{P}\) has nilpotent support.
Assume that \(\mathcal{P}\) has nilpotent support. Then the object \(\pi_{V^{\vee}*}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P},\mathcal{P})\) in \(D^{\mathrm{gr}}(V^{\vee}/\!\!/G)\) has proper support over \(Y/\!\!/G\). Moreover, we have
\[\omega_{V^{\vee}/\!\!/G}=\mathcal{O}_{V^{\vee}/\!\!/G}[\dim V^{\vee}/\!\!/G]( -2\operatorname{rank}V)=\mathcal{O}_{V^{\vee}/\!\!/G}[\dim Y/\!\!/G- \operatorname{rank}V] \tag{7.5}\]
in \(D^{\mathrm{gr}}(V^{\vee}/\!\!/G)\), where (1) is the grade shift functor of \(D^{\mathrm{gr}}(V^{\vee}/\!\!/G)\) which is isomorphic to the cohomological shift functor [1]. Then by Lemma 7.6 below, the morphism (7.4) induces the morphism in \(D^{b}(Y/\!\!/G)\):
\[a_{\mathcal{P}}\colon\overline{\eta}_{*}\pi_{V^{\vee}*}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P},\mathcal{P})\to\mathcal{O}_{Y/\!\!/G}[\operatorname{rank}V]. \tag{7.6}\]
Suppose that \(\mathcal{U}/\!\!/G\) is Gorenstein with trivial canonical module and has dimension \(\dim Y/\!\!/G-\operatorname{rank}V\). Then \(\overline{j}^{!}\mathcal{O}_{Y/\!\!/G}=\mathcal{O}_{\mathcal{U}/\!\!/G}[- \operatorname{rank}V]\). Since there are isomorphisms:
\[\overline{\eta}_{*}\pi_{V^{\vee}*}\mathcal{H}om_{V^{\vee}/G}( \mathcal{P},\mathcal{P}) =\pi_{Y*}\eta_{*}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P},\mathcal{ P})\] \[\stackrel{{\cong}}{{\to}}\pi_{Y*}j_{*}\mathcal{H}om_{ \mathcal{U}/G}(\mathcal{E},\mathcal{E})\] \[=\overline{j}_{*}\pi_{\mathcal{U}*}\mathcal{H}om_{\mathcal{U}/G}( \mathcal{E},\mathcal{E}),\]
the morphism (7.6) induces the trace morphism in \(D^{b}(\mathcal{U}/\!\!/G)\):
\[\mathrm{tr}_{\mathcal{E}}\colon\pi_{\mathcal{U}*}\mathcal{H}om_{\mathcal{U}/G} (\mathcal{E},\mathcal{E})\to\overline{j}^{!}\mathcal{O}_{Y/\!\!/G}[ \operatorname{rank}V]=\mathcal{O}_{\mathcal{U}/\!\!/G}. \tag{7.7}\]
We have used the following lemma in the above construction:
**Lemma 7.6**.: _Let \(X,Y\) be Noetherian \(\mathbb{C}\)-schemes with \(\mathbb{C}^{*}\)-actions, and let \(f\colon X\to Y\) be a \(\mathbb{C}^{*}\)-equivariant morphism. Let \(\omega_{X}\) and \(\omega_{Y}\) be dualizing complexes for \(X\) and \(Y\). If \(\mathcal{E}\in D^{\mathrm{gr}}(X)\) has proper support over \(Y\), then there is a natural isomorphism_
\[\phi_{f}\colon\operatorname{Hom}_{X}(\mathcal{E},\omega_{X})\stackrel{{ \cong}}{{\to}}\operatorname{Hom}_{Y}(f_{*}\mathcal{E},\omega_{Y}).\]
_Moreover, let \(g\colon Y\to Z\) be another \(\mathbb{C}^{*}\)-equivariant morphism and assume the support of \(\mathcal{E}\) is proper over \(Z\). Let \(h=g\circ f\colon X\to Z\). Then we have_
\[\phi_{h}=\phi_{g}\circ\phi_{f}\colon\operatorname{Hom}_{X}(\mathcal{E},\omega_ {X})\stackrel{{\phi_{f}}}{{\to}}\operatorname{Hom}_{Y}(f_{*} \mathcal{E},\omega_{Y})\stackrel{{\phi_{g}}}{{\to}}\operatorname{ Hom}_{Z}(h_{*}\mathcal{E},\omega_{Z}).\]
Proof.: The lemma is obvious if \(f\) and \(g\) are proper, since \(\omega_{X}=f^{!}\omega_{Y}\), \(\omega_{Y}=g^{!}\omega_{Z}\) and \(f^{!}\), \(g^{!}\) are right adjoints to \(f_{*}\), \(g_{*}\). In general, let \(i\colon T\hookrightarrow X\) be a closed subscheme containing the support of \(\mathcal{E}\) such that \(f|_{T}\) and \(g|_{f(T)}\) are proper. By a standard devissage argument, it suffices to check the statement for \(\mathcal{E}=i_{*}\mathcal{F}\) for some \(\mathcal{F}\in D^{b}(T)\). Then \(\operatorname{Hom}_{X}(\mathcal{E},\omega_{X})=\operatorname{Hom}_{T}(\mathcal{F},\omega_{T})\) as \(\omega_{T}=i^{!}\omega_{X}\), and the lemma follows from the case of proper \(f\) and \(g\).
**Definition 7.7**.: Let \((G,Y,V,s)\) be a good data. Suppose that \(\mathcal{U}/\!\!/G\) is Gorenstein with trivial canonical module and of dimension \(\dim Y/\!\!/G-\!\mathrm{rank}\,V\). For \(\mathcal{E}\in D^{b}(\mathcal{U}/G)\) with nilpotent singular support with respect to this data, the morphism
\[\mathrm{tr}_{\mathcal{E}}:\pi_{\mathcal{U}*}\mathcal{H}om_{\mathcal{U}/G}( \mathcal{E},\mathcal{E})\to\mathcal{O}_{\mathcal{U}/\!\!/G}\]
constructed in (7.7) is called the _trace map determined by \((G,Y,V,s)\)_.
The following lemma is immediate from the construction of the trace map:
**Lemma 7.8**.: _For another good data \((G^{\prime},Y^{\prime},V^{\prime},s^{\prime})\), suppose that there is a commutative diagram of stacks_
_where the horizontal arrows are isomorphisms. Let \(\mathcal{U}^{\prime}=(s^{\prime})^{-1}(0)\) and consider the induced equivalence \(\phi\colon\mathcal{U}/G\overset{\cong}{\to}\mathcal{U}^{\prime}/G^{\prime}\). For \(\mathcal{E}\in D^{b}(\mathcal{U}/G)\) with nilpotent singular support for \((G,Y,V,s)\), the object \(\phi_{*}\mathcal{E}\) has nilpotent singular support with respect to \((G^{\prime},Y^{\prime},V^{\prime},s^{\prime})\). Further, the trace map \(\mathrm{tr}_{\mathcal{E}}\) determined by \((G,Y,V,s)\) is identified with that of \(\mathrm{tr}_{\phi_{*}\mathcal{E}}\) determined by \((G^{\prime},Y^{\prime},V^{\prime},s^{\prime})\), i.e. the following diagram commutes_
_where the vertical arrows are natural isomorphisms induced by \(\phi\)._
Suppose that \(\mathcal{E}\in D^{b}(\mathcal{U}/G)\) is a perfect complex. In this case, there is a canonical trace map \(\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E},\mathcal{E})\to\mathcal{O}_{ \mathcal{U}/G}\). By taking the push-forward to \(\mathcal{U}/\!\!/G\), we obtain the map
\[\pi_{\mathcal{U}*}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E},\mathcal{E})\to \mathcal{O}_{\mathcal{U}/\!\!/G}. \tag{7.8}\]
Note that the above construction is independent of a choice of \((G,Y,V,s)\). The following lemma is straightforward to check, and we omit the details.
**Lemma 7.9**.: _If \(\mathcal{E}\) is a perfect complex, then \(\mathrm{tr}_{\mathcal{E}}\) is the same as the map (7.8)._
### Comparison of the trace maps
In this subsection, we compare the trace maps constructed in the previous subsection under a change of presentation of a quasi-smooth affine derived scheme.
Suppose that \((G,Y,V,s)\) is a good data and let \(W\) be a \(G\)-representation such that \(\det W\) is a trivial \(G\)-character. Let \(i\colon Y/G\hookrightarrow(Y\oplus W)/G\) be the embedding given by \(y\mapsto(y,0)\). We have the section \(s^{\prime}\) of the vector bundle \(V\oplus W\to Y\oplus W\) given by \((y,w)\mapsto(s(y),w)\), whose zero locus is \(\mathcal{U}\subset Y\subset Y\oplus W\); note that \(s^{\prime}\) is again a regular section. Then \((G,Y\oplus W,V\oplus W,s^{\prime})\) is a good data. Let \(\mathcal{E}\in D^{b}(\mathcal{U}/G)\) be a complex with nilpotent singular support with respect to \((G,Y,V,s)\). Then \(\mathcal{E}\) also has nilpotent singular support with respect to \((G,Y\oplus W,V\oplus W,s^{\prime})\) and we can consider the trace map determined by the good data \((G,Y\oplus W,V\oplus W,s^{\prime})\):
\[\mathrm{tr}^{\prime}_{\mathcal{E}}\colon\pi_{\mathcal{U}*}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E},\mathcal{E})\to\mathcal{O}_{\mathcal{U}/\!\!/G}.\]
**Lemma 7.10**.: _Let \(\mathcal{E}\in D^{b}(\mathcal{U}/G)\) have nilpotent singular support with respect to the good data \((G,Y,V,s)\). Then \(\mathcal{E}\) also has nilpotent singular support with respect to the good data \((G,Y\oplus W,V\oplus W,s^{\prime})\). Further, we have that \(\mathrm{tr}_{\mathcal{E}}=\mathrm{tr}^{\prime}_{\mathcal{E}}\)._
Proof.: We have the following diagram
[Diagram (7.9): a commutative diagram relating the stacks and good moduli spaces attached to the good data \((G,Y,V,s)\) and \((G,Y\oplus W,V\oplus W,s^{\prime})\); it is used below together with Lemma 7.6.]
Let \(q\colon W\oplus W^{\vee}\to\mathbb{C}\) be the natural non-degenerate pairing. From the construction of the Koszul equivalences, there is a commutative diagram:
\[\begin{CD}D^{b}(\mathcal{U}/G)@>{\Theta}>>\mathrm{MF}^{\mathrm{gr}}(V^{\vee}/G,f)\\ @| @VV{\Phi}V\\ D^{b}(\mathcal{U}/G)@>{\Theta^{\prime}}>>\mathrm{MF}^{\mathrm{gr}}((V^{\vee}\oplus W\oplus W^{\vee})/G,f+q)\end{CD}\]
Here, the horizontal arrows are the Koszul equivalences from Theorem 2.1, and \(\Phi\) is the Knorrer periodicity equivalence, given by \(\Phi(-)=(-)\otimes_{\mathbb{C}}\mathcal{K}\). The Koszul factorization \(\mathcal{K}\) of \(q\) has the form
\[\mathcal{K}=\left(\bigwedge^{\mathrm{even}}W\otimes\mathcal{O}_{W\oplus W^{ \vee}}\rightleftarrows\bigwedge^{\mathrm{odd}}W\otimes\mathcal{O}_{W\oplus W^ {\vee}}\right)\in\mathrm{MF}^{\mathrm{gr}}((W\oplus W^{\vee})/G,q)\]
and is isomorphic to \(\mathcal{O}_{(W\oplus\{0\})/G}\), see [1, Proposition 3.20]. In the above, the grading is given by the \(\mathbb{C}^{*}\)-action on \(W\oplus W^{\vee}\) of weight \((0,2)\). By a diagram chase, we see that
\[\mathcal{Q}:=\Theta^{\prime}(\mathcal{E})=\Phi(\mathcal{P})\]
has support in the preimage of \(\mathrm{Im}(\overline{0})\), where \(\overline{0}\colon(Y\oplus W)/\!\!/G\to(V^{\vee}\oplus W\oplus W^{\vee})/\!\!/G\) is induced by the zero section. Then \(\mathcal{E}\) has nilpotent singular support with respect to \((G,Y\oplus W,V\oplus W,s^{\prime})\).
Let \(i_{0}\colon BG\hookrightarrow(W\oplus W^{\vee})/G\) be the inclusion of the origin. We have the Koszul equivalence
\[D^{b}(BG)\stackrel{{\sim}}{{\to}}\mathrm{MF}^{\mathrm{gr}}((W \oplus W^{\vee})/G,q)\]
which sends \(\mathcal{O}_{BG}\) to \(\mathcal{K}\). Then \(\mathcal{H}om(\mathcal{K},\mathcal{K})=i_{0*}\mathcal{O}_{BG}\), hence we have the isomorphism \(i_{V^{\vee}*}\mathcal{H}om(\mathcal{P},\mathcal{P})\stackrel{{ \cong}}{{\to}}\mathcal{H}om(\mathcal{Q},\mathcal{Q})\). We have the commutative diagram:
[Diagram (7.10): a commutative square comparing the trace maps of \(\mathcal{P}\) and \(\mathcal{Q}\) through the isomorphism \(i_{V^{\vee}*}\mathcal{H}om(\mathcal{P},\mathcal{P})\stackrel{{\cong}}{{\to}}\mathcal{H}om(\mathcal{Q},\mathcal{Q})\).]
where the bottom arrow is the morphism obtained by adjunction and using the isomorphism in \(D^{b}(V^{\vee}/G)\):
\[i_{V^{\vee}}^{\dagger}\mathcal{O}_{(V^{\vee}\oplus W\oplus W^{\vee})/G}\cong \det W\otimes\det(W^{\vee}(2))[-2\dim W]=\mathcal{O}_{V^{\vee}/G}.\]
Applying \(\pi_{V^{\vee}\oplus W\oplus W^{\vee}*}\) to the sheaves in the diagram (7.10), we obtain the commutative diagram:
Then by Lemma 7.6 applied to the map \(p\), together with the commutative diagram (7.9), we obtain the following commutative diagram in \(D^{b}((Y\oplus W)/\!\!/G)\):
The bottom arrow is the natural morphism induced by adjunction for \(\overline{i}_{Y}\). Comparing the above diagrams, we conclude that \(\operatorname{tr}_{\mathcal{E}}=\operatorname{tr}^{\prime}_{\mathcal{E}}\).
See [Davc, Theorem 5.11] for a proof of the second diagram. To also obtain the first diagram, one can prove a stronger statement accounting for the derived structure of \(\mathfrak{M}\) and \(\mathcal{P}(\boldsymbol{d})\) as in [HLa, Theorem 4.2.3], because \(A\) (\(R\) in loc.cit.) can be chosen etale over \(\operatorname{Ext}_{S}^{1}(F,F)=R_{Q^{\circ,d}}(\boldsymbol{d})\), see the proof of loc.cit. Then (7.11) and the right square of (7.12) commute, and the left square of (7.12) commutes by [HLa, Theorem 4.2.3]. For such \(e\colon Z\to M\) and for \(\mathcal{E}\in D^{b}(\mathcal{M})\), we denote by \(\mathcal{E}|_{Z}=e^{*}(\mathcal{E})\in D^{b}(\mathcal{Z})\).
The upshot of the discussion above is that \(y\in M\) is in the image of \(e\colon Z\to M\) for a good data \((G,A,V,s)\).
**Proposition 7.11**.: _Let \(\mathcal{E}\in\mathbb{T}\). Then \(\mathcal{E}|_{Z}\in D^{b}(\mathcal{Z})\) has nilpotent singular support with respect to \((G,A,V,s)\)._
Proof.: The object \(\mathcal{E}|_{Z}\) is in the subcategory of \(D^{b}(\mathcal{Z})\) classically generated by the image of \(e^{\prime\prime}\colon D^{b}(\mathcal{P}(\boldsymbol{d}))\to D^{b}(\mathcal{ Z})\), see [PTf, Subsection 2.11, Subsection 9.2]. Then the claim follows from [PTd, Lemma 5.4, Corollary 5.5].
Proof of Theorem 7.4.: By Proposition 7.11, the object \(\mathcal{E}|_{Z}\in D^{b}(\mathcal{Z})\) admits a trace map determined by \((G,A,V,s)\), see the construction of Subsection 7.2 and Definition 7.7:
\[\operatorname{tr}_{Z}\colon\pi_{*}\mathcal{H}om(\mathcal{E}|_{Z},\mathcal{E}| _{Z})\to\mathcal{O}_{Z}. \tag{7.13}\]
By the definition of the relative Serre functor, it corresponds to a morphism
\[\phi_{Z}\colon\mathcal{E}|_{Z}\to S_{\mathbb{T}/M}(\mathcal{E})|_{Z}. \tag{7.14}\]
By Lemma 7.3, it is enough to show that the above morphism is an isomorphism.
Set \(\mathcal{A}=A/G\) and \(\mathcal{V}=V/G\). For each point \(u\in Z\hookrightarrow A/\!\!/G\), let \(\widehat{\mathcal{A}}_{u}\) be the formal fiber of \(\mathcal{A}\to A/\!\!/G\) at \(u\), and (by abuse of notation) denote by \(u\in\mathcal{A}\) the unique closed point in the fiber of \(\mathcal{A}\to A/\!\!/G\) at \(u\). Let \(G_{u}=\operatorname{Aut}(u)\subset G\). By the etale slice theorem, there is an isomorphism
\[\widehat{\mathcal{A}}_{u}\cong\widehat{\mathcal{H}}^{0}(\mathbb{T}_{\mathcal{ A}}|_{u})/G_{u}. \tag{7.15}\]
From the triangle \(\mathbb{T}_{\mathcal{Z}}\to\mathbb{T}_{\mathcal{A}}|_{\mathcal{Z}}\to \mathcal{V}|_{\mathcal{Z}}\to\mathbb{T}_{\mathcal{Z}}[1]\), there is an exact sequence of \(G_{u}\)-representations
\[0\to\mathcal{H}^{0}(\mathbb{T}_{\mathcal{Z}}|_{u})\to\mathcal{H}^{0}(\mathbb{T}_{\mathcal{A}}|_{u})\stackrel{{ ds|_{u}}}{{\to}}\mathcal{V}|_{u}\to\mathcal{H}^{1}(\mathbb{T}_{\mathcal{Z}}|_{u})\to 0.\]
Hence there exist isomorphisms of \(G_{u}\)-representations
\[\mathcal{H}^{0}(\mathbb{T}_{\mathcal{A}}|_{u})\cong\mathcal{H}^{0}(\mathbb{T} _{\mathcal{Z}}|_{u})\oplus W,\ \mathcal{V}|_{u}\cong\mathcal{H}^{1}(\mathbb{T}_{\mathcal{Z}}|_{u})\oplus W \tag{7.16}\]
for some \(G_{u}\)-representation \(W\) such that \(ds|_{u}=(0,\mathrm{id}_{W})\).
First assume that \(u\) corresponds to a point in the deepest stratum, so that
\[\mathcal{H}^{0}(\mathbb{T}_{\mathcal{Z}}|_{u})=\mathfrak{gl}(d)^{\oplus 2g},\ \mathcal{H}^{1}(\mathbb{T}_{\mathcal{Z}}|_{u})=\mathfrak{gl}(d)_{0},\text{ and }G_{u}=GL(d). \tag{7.17}\]
Let \(\mu_{0}\colon\mathfrak{gl}(d)^{\oplus 2g}\to\mathfrak{gl}(d)_{0}\) be the moment map (3.19). Note that the zero locus of \(s|_{\widehat{\mathcal{A}}_{u}}\) is isomorphic to the formal fiber of \(\mu_{0}^{-1}(0)/GL(d)\to\mu_{0}^{-1}(0)/\!\!/GL(d)\) at the origin, see Lemma 4.3. As both \(s\) and \(\mu_{0}\) are regular sections, by a formal coordinate change we may replace the isomorphism (7.15) and assume that \(s|_{\widehat{\mathcal{A}}_{u}}\) corresponds to the map
\[(\mu_{0},\mathrm{id}_{W})\colon\mathfrak{gl}(d)^{\oplus 2g}\oplus W\to \mathfrak{gl}(d)_{0}\oplus W\]
under the decompositions (7.16) and isomorphisms (7.17). By Lemmas 7.8 and 7.10, the trace map (7.13) pulled back via \(\widehat{Z}_{u}:=\operatorname{Spec}\widehat{\mathcal{O}}_{Z,u}\to Z\) coincides with the
trace map determined by the good data \((GL(d),\mathfrak{gl}(d)^{\oplus 2g},\mathfrak{gl}(d)_{0},\mu_{0})\). Then from Theorem 3.11, the map (7.14) is an isomorphism at \(\widehat{Z}_{u}\).
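For orientation only: in the standard convention for the doubled \(g\)-loop quiver (we do not restate (3.19) here, so the following formula is merely indicative), the moment map is
\[\mu_{0}(A_{1},B_{1},\ldots,A_{g},B_{g})=\sum_{i=1}^{g}[A_{i},B_{i}],\]
which takes values in the traceless matrices, since each commutator has trace zero.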
In general, let \(p\in\mathfrak{gl}(d)^{\oplus 2g}/\!\!/GL(d)\) be a point corresponding to \(u\) as in Lemma 4.3, i.e. there is an equivalence
\[\widehat{\mathcal{Z}}_{u}\simeq\widehat{\mathcal{H}}(d)_{p} \tag{7.18}\]
for the \(g\)-loop quiver \(Q^{\circ}\). Let \(\mathcal{Y}(d)=\mathfrak{gl}(d)^{\oplus 2g}/GL(d)\) be the moduli stack of representations of the doubled quiver of \(Q^{\circ}\). We also denote by \(p\in\mathcal{Y}(d)\) the unique closed point in the fiber of \(\mathcal{Y}(d)\to\mathfrak{gl}(d)^{\oplus 2g}/\!\!/GL(d)\) at \(p\). Then we have decompositions
\[\mathcal{H}^{0}(\mathbb{T}_{\mathcal{Y}(d)}|_{p})=\mathcal{H}^{0}(\mathbb{T} _{\mathcal{Z}}|_{u})\oplus W^{\prime},\ \mathfrak{gl}(d)_{0}=\mathcal{H}^{1}(\mathbb{T}_{\mathcal{Z}}|_{u})\oplus W^{\prime} \tag{7.19}\]
for some \(G_{u}\)-representation \(W^{\prime}\).
By Lemma 7.8 and the isomorphism (7.15), the trace map (7.13) at \(\widehat{Z}_{u}\) equals the trace map determined by \((G_{u},\mathcal{H}^{0}(\mathbb{T}_{\mathcal{A}}|_{u}),\mathcal{V}|_{u},s|_{ \widehat{A}_{u}})\). Then by the decomposition (7.16) and Lemma 7.10, it also equals the trace map determined by the good data \((G_{u},\mathcal{H}^{0}(\mathbb{T}_{\mathcal{Z}}|_{u}),\mathcal{H}^{1}(\mathbb{ T}_{\mathcal{Z}}|_{u}),\kappa)\). Then by (7.19) and Lemma 7.10, under the equivalence (7.18) the trace map (7.13) at \(\widehat{Z}_{u}\) also equals the trace map determined by \((G_{p},\mathcal{H}^{0}(\mathbb{T}_{\mathcal{Y}(d)}|_{p}),\mathfrak{gl}(d)_{0},\mu_{0})\), which in turn equals the trace map determined by \((GL(d),\mathfrak{gl}(d)^{\oplus 2g},\mathfrak{gl}(d)_{0},\mu_{0})\) at \(p\) by Lemma 7.8. Again by Theorem 3.11, the map (7.14) is an isomorphism on \(\widehat{Z}_{u}\). Therefore (7.14) is an isomorphism at any point \(u\in Z\), hence it is an isomorphism.
The proof of Theorem 7.4 also implies the following:
**Corollary 7.12**.: _In the situation of Theorem 7.4, for each \(\mathcal{E}\in\mathbb{T}\) there are isomorphisms:_
\[\mathcal{H}^{i}(S_{\mathbb{T}}(\mathcal{E}))\cong\mathcal{H}^{i}(\mathcal{E} [\dim M])\]
_for all \(i\in\mathbb{Z}\). In particular, if there exists \(k\in\mathbb{Z}\) such that \(\mathcal{E}\) is an object in \(\mathbb{T}\cap\operatorname{Coh}(\mathcal{M})[k]\), then \(S_{\mathbb{T}}(\mathcal{E})\cong\mathcal{E}[\dim M]\)._
Proof.: Let \(\operatorname{tr}_{Z}\) be the morphism in (7.13). For another etale morphism \(Z^{\prime}\to M\), the proof of Theorem 3.11 shows that the morphism
\[\operatorname{tr}_{Z}|_{Z\times_{M}Z^{\prime}}-\operatorname{tr}_{Z^{\prime}} |_{Z\times_{M}Z^{\prime}}\colon\pi_{*}\mathcal{H}om(\mathcal{E}|_{Z\times_{M} Z^{\prime}},\mathcal{E}|_{Z\times_{M}Z^{\prime}})\to\mathcal{O}_{Z\times_{M}Z^{ \prime}}\]
is a zero map formally locally at any point in \(Z\times_{M}Z^{\prime}\). Thus for the morphism \(\phi_{Z}\) in (7.14), the morphism
\[\phi_{Z}|_{Z\times_{M}Z^{\prime}}-\phi_{Z^{\prime}}|_{Z\times_{M}Z^{\prime}} \colon\mathcal{E}|_{Z\times_{M}Z^{\prime}}\to S_{\mathbb{T}/M}(\mathcal{E})| _{Z\times_{M}Z^{\prime}}\]
is a zero map formally locally at each point in \(Z\times_{M}Z^{\prime}\). Therefore for each \(i\in\mathbb{Z}\), the isomorphism
\[\mathcal{H}^{i}(\phi_{Z})\colon\mathcal{H}^{i}(\mathcal{E}|_{Z})\stackrel{{ \cong}}{{\to}}\mathcal{H}^{i}(S_{\mathbb{T}/M}(\mathcal{E})|_{Z})\]
glues to give an isomorphism \(\mathcal{H}^{i}(\mathcal{E})\cong\mathcal{H}^{i}(S_{\mathbb{T}/M}(\mathcal{E}))\). Then the corollary follows from Lemma 7.3.
We also have the following:
**Corollary 7.13**.: _In the situation of Theorem 7.4, for \(\mathcal{E}\in\mathbb{T}\cap\operatorname{Perf}(\mathcal{M})\), we have \(S_{\mathbb{T}}(\mathcal{E})\cong\mathcal{E}[\dim M]\)._
Proof.: As \(\mathcal{E}\) is perfect, there is a trace map \(\mathcal{H}om_{\mathcal{M}}(\mathcal{E},\mathcal{E})\to\mathcal{O}_{\mathcal{M}}\), thus its push-forward \(\pi_{*}\) gives a morphism
\[\pi_{*}\mathcal{H}om_{\mathcal{M}}(\mathcal{E},\mathcal{E})\to\mathcal{O}_{M}.\]
The above morphism corresponds to \(\phi\colon\mathcal{E}\to S_{\mathbb{T}/M}(\mathcal{E})\). By Lemma 7.9, the above morphism coincides with (7.14) on each etale map \(Z\to M\), thus \(\phi\) is an isomorphism. Then the corollary follows from Lemma 7.3.
## 8. Topological K-theory of quasi-BPS categories for K3 surfaces
### Statement of the main result
In this section, we prove Theorem 1.3 using the computation of topological K-theory of quasi-BPS categories of preprojective algebras from [10]. We actually compute the topological K-theory of quasi-BPS categories for all weights \(w\in\mathbb{Z}\), not only in the case of \(w\) coprime with \(v=dv_{0}\), see Theorem 8.1.
For a stack \(\mathcal{X}\), we denote by \(D_{\mathrm{con}}(\mathcal{X})\) the bounded derived category of constructible sheaves on \(\mathcal{X}\) and by \(\mathrm{Perv}(\mathcal{X})\subset D_{\mathrm{con}}(\mathcal{X})\) the subcategory of perverse sheaves [10]. We denote by \(D_{\mathrm{con}}^{+}(\mathcal{X})\) the category of locally bounded below complexes of constructible sheaves on \(\mathcal{X}\): if \(\mathcal{X}\) is connected, then \(D_{\mathrm{con}}^{+}(\mathcal{X})\) is the limit of the diagram of categories \(D_{n}:=D_{\mathrm{con}}^{b}(\mathcal{X})\) for \(n\in\mathbb{N}\) with the functors \({}^{p}\tau^{\leqslant n^{\prime}}\colon D_{n}\to D_{n^{\prime}}\); for general \(\mathcal{X}\), we have \(D_{\mathrm{con}}^{+}(\mathcal{X})=\prod_{\mathcal{X}^{\prime}\in\pi_{0}(\mathcal{X})}D_{\mathrm{con}}^{+}(\mathcal{X}^{\prime})\).
In this section, we assume that \(d\geqslant 2\), \(g\geqslant 2\), and that \(\sigma\in\mathrm{Stab}(S)\) corresponds to a Gieseker stability condition for an ample divisor \(H\), see Proposition 4.1 and Corollary 4.15. The reason we restrict to Gieseker stability conditions is that, in this case, \(\mathcal{M}\) is a global quotient stack and one can construct a cycle map as in [10]. We fix \(v=dv_{0}\) and \(w\in\mathbb{Z}\).
In Subsection 8.2.1, we recall the definition of the BPS sheaf
\[\mathcal{BPS}_{v}=\mathcal{BPS}_{S}^{\sigma}(v)\in\mathrm{Perv}\left(M_{S}^{ \sigma}(v)\right).\]
For a partition \(A=(d_{i})_{i=1}^{k}\) of \(d\), define the perverse sheaf \(\mathcal{BPS}_{A}\) on \(M_{S}^{\sigma}(v)\) to be
\[\mathcal{BPS}_{A}:=\left(\oplus_{*}\boxtimes_{i=1}^{k}\mathcal{BPS}_{d_{i}v_{ 0}}\right)^{\mathfrak{S}_{A}},\]
where \(\mathfrak{S}_{A}\subset\mathfrak{S}_{k}\) is the subgroup of permutations \(\sigma\in\mathfrak{S}_{k}\) such that \(d_{i}=d_{\sigma(i)}\), and \(\oplus\) is the addition map
\[\oplus\colon M_{S}^{\sigma}(d_{1}v_{0})\times\cdots\times M_{S}^{\sigma}(d_{k }v_{0})\to M_{S}^{\sigma}(dv_{0}).\]
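Unwinding the definition in the two extreme cases: for the one-part partition \(A=(d)\), the group \(\mathfrak{S}_{A}\) is trivial and \(\oplus\) is the identity, so \(\mathcal{BPS}_{A}=\mathcal{BPS}_{dv_{0}}\); for \(A=(1,\ldots,1)\), one has \(\mathfrak{S}_{A}=\mathfrak{S}_{d}\) and \(\mathcal{BPS}_{A}=\big(\oplus_{*}\mathcal{BPS}_{v_{0}}^{\boxtimes d}\big)^{\mathfrak{S}_{d}}\).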
For \(w\in\mathbb{Z}\), let \(S_{w}^{d}\) be the set of partitions of \(d\) from [10, Subsection 6.1.2]. From [10, Proposition 8.8], it consists of partitions \(A=(d_{i})_{i=1}^{k}\) such that, for all \(1\leqslant i\leqslant k\), \(w_{i}:=d_{i}w/d\) is an integer, thus \(S_{w}^{d}\) is in bijection with the set of partitions of \(\gcd(d,w)\). We set
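For example (with purely illustrative values): take \(d=6\) and \(w=4\); then \(d_{i}w/d=2d_{i}/3\) is an integer exactly when \(3\mid d_{i}\), so
\[S^{6}_{4}=\{(6),\,(3,3)\},\]
in bijection with the partitions \((2)\) and \((1,1)\) of \(\gcd(6,4)=2\), obtained by dividing each part by \(3\).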
\[\mathcal{BPS}_{v,w}:=\bigoplus_{A\in S_{w}^{d}}\mathcal{BPS}_{A}.\]
For a dg-category \(\mathcal{D}\), we denote by \(K^{\mathrm{top}}(\mathcal{D})\) the topological K-theory spectrum as defined by Blanc [1]. We consider its (rational) homotopy groups:
\[K_{i}^{\mathrm{top}}(\mathcal{D}):=\left(\pi_{i}K^{\mathrm{top}}(\mathcal{D}) \right)\otimes_{\mathbb{Z}}\mathbb{Q}.\]
For a review of (and references on) topological K-theory, see [10, Subsection 2.4]. If \(\mathcal{M}\) is a quotient stack, we denote by \(G^{\mathrm{top}}(\mathcal{M})\) the (rational) K-homology of \(\mathcal{M}\). Then, by [1], we have that \(G^{\mathrm{top}}(\mathcal{M})=K^{\mathrm{top}}(D^{b}(\mathcal{M}))\).
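As a standard special case of Blanc's comparison (recorded here only for orientation): for a smooth complex variety \(X\), one has \(K_{i}^{\mathrm{top}}(\operatorname{Perf}(X))\cong KU^{-i}(X^{\mathrm{an}})\otimes_{\mathbb{Z}}\mathbb{Q}\); in particular,
\[K_{0}^{\mathrm{top}}(D^{b}(\mathrm{pt}))=\mathbb{Q},\qquad K_{1}^{\mathrm{top}}(D^{b}(\mathrm{pt}))=0.\]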
For a \(\mathbb{Z}\)-graded vector space \(H^{*}\) and \(i\in\mathbb{Z}\), let \(\widetilde{H}^{i}:=\prod_{j\in\mathbb{Z}}H^{i+2j}\). In this section, we prove the following result, which implies Theorem 1.3 as a special case. Note that the second isomorphism is not canonical, see Theorem 8.9 for a statement involving canonical isomorphisms:
**Theorem 8.1**.: _For \(i\in\mathbb{Z}\), there exist isomorphisms of \(\mathbb{Q}\)-vector spaces_
\[K_{i}^{\mathrm{top}}(\mathbb{T}_{S}^{\sigma}(v)_{w}^{\mathrm{red}})\stackrel{{ \cong}}{{\to}}K_{i}^{\mathrm{top}}(\mathbb{T}_{S}^{\sigma}(v)_{w}) \cong\widetilde{H}^{i}\left(M_{S}^{\sigma}(v),\mathcal{BPS}_{v,w}\right). \tag{8.1}\]
### BPS sheaves for K3 surfaces
As in the case of symmetric quivers with potential or preprojective algebras, the BPS cohomology for K3 surfaces is the "primitive" part of the Hall algebra of \(S\) for the chosen stability condition, and is computed as the cohomology of the BPS sheaf.
In this section, we recall the definition of BPS sheaves for K3 surfaces due to Davison-Hennecart-Schlegel Mejia [DHSMb] and we compare these sheaves with BPS sheaves for preprojective algebras.
#### 8.2.1. BPS sheaves via intersection complexes
Let \(\mathbb{D}\) be the Verdier duality functor on \(D^{b}_{\mathrm{con}}(\mathcal{M}_{S}^{\sigma}(v))\) and let \(D_{d}:=\mathbb{D}\mathbb{Q}\in D^{b}_{\mathrm{con}}(\mathcal{M}_{S}^{\sigma}( v))\) be the dualizing complex on \(\mathcal{M}_{S}^{\sigma}(v)=\mathcal{M}_{S}^{\sigma}(dv_{0})\). Recall the good moduli space map
\[\pi_{d}:=\pi\colon\mathcal{M}\to M.\]
The BBDG decomposition theorem holds for \(\pi_{d*}D_{d}\in D^{+}_{\mathrm{con}}(M_{S}^{\sigma}(v))\), see [Davb, Theorem C]. The BPS sheaf of \(M_{S}^{\sigma}(v)\) is a certain direct summand of the zeroth perverse truncation (which itself is a perverse sheaf on \(M_{S}^{\sigma}(v)\), see loc. cit.):
\[{}^{p}\tau^{\leqslant 0}\pi_{d*}D_{d}\in\mathrm{Perv}(M_{S}^{\sigma}(v)). \tag{8.2}\]
We now explain the definition of the BPS sheaf. The cohomological Hall product \(m=p_{*}q^{*}\) for the maps \(p,q\) in (5.1) induces an algebra structure on the \(\mathbb{N}\)-graded complex
\[\mathcal{A}:=\bigoplus_{d\in\mathbb{N}}{}^{p}\tau^{\leqslant 0}\pi_{d*}D_{d} \in\bigoplus_{d\in\mathbb{N}}D^{+}_{\mathrm{con}}(M_{S}^{\sigma}(dv_{0})).\]
There is a natural map
\[\mathrm{IC}_{M_{S}^{\sigma}(v)}\to{}^{p}\tau^{\leqslant 0}\pi_{d*}D_{d}.\]
The main theorem of Davison-Hennecart-Schlegel Mejia [DHSMb, Theorem C] says that the induced map from the free algebra generated by the intersection complexes to \(\mathcal{A}\) is an isomorphism:
\[\mathrm{Free}\left(\bigoplus_{d\in\mathbb{N}}\mathrm{IC}_{M_{S}^{\sigma}(dv_{0 })}\right)\stackrel{{\sim}}{{\to}}\mathcal{A}.\]
The BPS sheaves
\[\mathcal{BPS}_{S}^{\sigma}(v)=\mathcal{BPS}_{S}^{\sigma}(dv_{0})\in\mathrm{Perv }\left(M_{S}^{\sigma}(v)\right)\]
are defined via the free Lie algebra on the intersection complexes
\[\mathrm{Free}_{\mathrm{Lie}}\left(\bigoplus_{d\in\mathbb{N}}\mathrm{IC}_{M_{S}^{\sigma}(dv_{0})}\right)=:\bigoplus_{d\in\mathbb{N}}\mathcal{BPS}_{S}^{\sigma}(dv_{0}). \tag{8.3}\]
We obtain that:
\[\mathrm{Sym}\left(\bigoplus_{d\in\mathbb{N}}\mathcal{BPS}_{S}^{\sigma}(dv_{0}) \right)\stackrel{{\sim}}{{\to}}\mathcal{A}. \tag{8.4}\]
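The passage from (8.3) to (8.4) is the usual PBW mechanism: the free associative algebra on a collection of generators is the enveloping algebra of the free Lie algebra on the same generators, and the PBW theorem identifies the underlying graded object of an enveloping algebra with the symmetric algebra of the Lie algebra, i.e.
\[\mathrm{Free}(V)\cong\mathrm{Sym}\big(\mathrm{Free}_{\mathrm{Lie}}(V)\big)\]
(we state this only as a heuristic; the precise statement in the present sheaf-theoretic context is the one of [DHSMb]).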
A precise formulation for the heuristics that the BPS cohomology is the "primitive" part of the Hall algebra is the following: the relative Hall algebra of \(S\) for the multiples of the Mukai vector \(v_{0}\) and stability condition \(\sigma\) has a PBW decomposition in terms of BPS sheaves:
\[\operatorname{Sym}\left(\bigoplus_{d\in\mathbb{N}}\mathcal{BPS}_{S}^{\sigma}( dv_{0})\otimes H^{\cdot}(B\mathbb{C}^{*})\right)\xrightarrow{\sim}\mathcal{H}:= \bigoplus_{d\in\mathbb{N}}\pi_{d*}D_{d}, \tag{8.5}\]
see [5, Theorem 1.5], and note that the above isomorphism is of constructible sheaves, not of relative algebras. There is also a version for the absolute Hall algebra [5, Corollary 1.6]. The above PBW theorem is the analogue for K3 surfaces of the Davison-Meinhardt PBW theorem for cohomological Hall algebras of quivers with potential [5]. The results in [5] cited above hold by a computation of all the simple summands of \(\pi_{d*}D_{d}\), which satisfies a version of the BBDG/ Saito decomposition theorem due to Davison [10].
#### 8.2.2. The moduli stack of semistable sheaves on a K3 surface via preprojective algebras
One can describe the map \(\pi_{d}\colon\mathcal{M}_{S}^{\sigma}(v)\to M_{S}^{\sigma}(v)\) etale, formally, or analytically locally on the target via preprojective algebras [10], [11], [12, Theorems 4.3.2], [13], Subsections 2.6.2 and 7.4.
We will use the setting of Subsection 7.4, see diagrams (7.11) and (7.12). We will continue with the notation from Subsection 7.4.
The quiver \(Q_{y}^{\circ}\) is totally negative in the sense of [5, Section 1.2.3], see [11]. Thus the results in [5] about construction of BPS sheaves via intersection complexes apply, so the BPS sheaves \(\mathcal{BPS}^{p}(\boldsymbol{d})\) of the preprojective algebras of the quiver \(Q_{y}^{\circ}\) have a similar description via intersection complexes (8.3). Then the maps \(e\) and \(e^{\prime}\) induce isomorphisms:
\[e^{*}\left(\mathcal{BPS}_{S}^{\sigma}(v)\right)=e^{\prime*}\left(\mathcal{BPS }^{p}(\boldsymbol{d})\right). \tag{8.6}\]
**Remark 8.2**.: If we are interested in a local analytic description of \(\mathcal{M}_{S}^{\sigma}(v)\), then one may choose the local chart to be an analytic open subset of both \(P(\boldsymbol{d})\) and \(M_{S}^{\sigma}(v)\); that is, we may assume that \(e\) and \(e^{\prime}\) are open inclusions of analytic sets. Thus, locally analytically near \(p\), the preimage of the map \(\pi_{d}\) is isomorphic to the preimage of the map \(\pi_{P}\).
### Topological K-theory and etale covers
We use the shorthand notations \(M=M_{S}^{\sigma}(v)\), \(\mathcal{M}:=\mathcal{M}_{S}^{\sigma}(v)\), \(\mathfrak{M}=\mathfrak{M}_{S}^{\sigma}(v)\) and
\[\mathbb{T}(M)^{\operatorname{red}}:=\mathbb{T}_{S}^{\sigma}(v)_{w}^{ \operatorname{red}},\ \mathbb{T}(M):=\mathbb{T}_{S}^{\sigma}(v)_{w}.\]
We write the semiorthogonal decomposition for \(\mathcal{M}\) as:
\[D^{b}(\mathcal{M})=\langle\mathbb{A}(M)^{\operatorname{red}},\mathbb{T}(M)^{ \operatorname{red}}\rangle. \tag{8.7}\]
By the following lemma, it suffices to prove Theorem 8.1 for \(\mathbb{T}(M)^{\operatorname{red}}\). The argument for \(\mathbb{T}(M)\) is the same, but we prefer working with the stack \(\mathcal{M}\) because the good moduli space map is defined from \(\mathcal{M}\).
**Lemma 8.3**.: _The closed immersion \(\iota\colon\mathcal{M}\hookrightarrow\mathfrak{M}\) induces the isomorphism_
\[\iota_{*}\colon G_{\bullet}^{\operatorname{top}}(\mathbb{T}(M)^{\operatorname {red}})\xrightarrow{\sim}G_{\bullet}^{\operatorname{top}}(\mathbb{T}(M)).\]
Proof.: We have the isomorphism
\[\iota_{*}\colon G_{\bullet}^{\operatorname{top}}(\mathcal{M})\xrightarrow{ \sim}G_{\bullet}^{\operatorname{top}}(\mathfrak{M})\]
since both spaces have the same underlying topological space. Then the lemma holds since \(\iota_{*}\) sends \(\mathbb{T}(M)^{\operatorname{red}}\) to \(\mathbb{T}(M)\).
The semiorthogonal decomposition in Theorem 5.2 holds etale locally over \(M\) by [PTf, Section 9] and the diagram (7.12). Indeed, let \(R\to M\) be an etale map which factors through \(R\xrightarrow{h}Z\to M\) as in (7.12). Let \(\mathcal{R}:=\mathcal{M}_{S}^{\sigma}(v)\times_{M_{S}^{\sigma}(v)}R\). By [PTf, Section 9], there is a semiorthogonal decomposition:
\[D^{b}(\mathcal{R})=\langle\mathbb{A}(R)^{\mathrm{red}},\mathbb{T}(R)^{ \mathrm{red}}\rangle \tag{8.8}\]
such that for an etale map \(b\colon R^{\prime}\to R\), the pull-back \(b^{*}\) induces functors
\[b^{*}\colon\mathbb{A}(R)^{\mathrm{red}}\to\mathbb{A}(R^{\prime})^{\mathrm{red }},\ b^{*}\colon\mathbb{T}(R)^{\mathrm{red}}\to\mathbb{T}(R^{\prime})^{\mathrm{ red}}.\]
Consider etale covers
\[U=(Z\xrightarrow{e}M),\,\mathcal{U}=(\mathcal{Z}\xrightarrow{e}\mathcal{M})\]
generated by the etale covers \(Z\to M\) as in (7.12).
Consider the presheaves of spectra \(\mathcal{K}\), \(\mathcal{A}\) and \(\mathcal{T}\) on \(U\) defined as follows: for \((R\xrightarrow{e}M)\in U\) (and dropping \(e\) from the notation), we have:
\[\mathcal{K}(R)=G^{\mathrm{top}}(\mathcal{R}),\,\mathcal{A}(R)=K^{\mathrm{top }}(\mathbb{A}(R)^{\mathrm{red}}),\,\mathcal{T}(R)=K^{\mathrm{top}}(\mathbb{T}( R)^{\mathrm{red}}).\]
By [PTf, Theorem 9.2], there is a direct sum of presheaves of spectra on \(U\):
\[\mathcal{K}=\mathcal{A}\oplus\mathcal{T}. \tag{8.9}\]
Let \(\mathcal{F}\) be a presheaf of spectra and consider a cover \((Z_{i}\xrightarrow{e}M)_{i\in I}\) as in diagram (7.12) for a set \(I\). Consider the cosimplicial diagram of spectra:
\[\prod_{i\in I}\mathcal{F}(Z_{i})\rightrightarrows\prod_{i,j\in I}\mathcal{F}(Z_{i}\times_{M}Z_{j})\rightrightarrows\cdots,\]
which can be used to compute the cohomology of the sheafification of \(\mathcal{F}\), and which can also be related to Cech cohomology \(\check{\mathrm{H}}(U,\mathcal{F})\), see [Tho85, Definition 1.33, Remark 1.38]. There is a natural map
\[\eta_{\mathcal{F}}\colon\mathcal{F}(M)\to\check{\mathrm{H}}(U,\mathcal{F}). \tag{8.10}\]
For a presheaf of spectra \(\mathcal{F}\) and for \(i\in\mathbb{Z}\), denote by \(\mathcal{F}_{i}=\pi_{i}\mathcal{F}\) the corresponding presheaf of abelian groups and by \(\mathcal{F}_{i}^{s}\) the sheafification of \(\mathcal{F}_{i}\).
**Proposition 8.4**.: _The map (8.10) induces a weak equivalence of spectra:_
\[G^{\mathrm{top}}(\mathcal{M})=\mathcal{K}(M)\xrightarrow{\sim}\check{ \mathrm{H}}(U,\mathcal{K}).\]
_Thus there is a spectral sequence_
\[E_{p,q}:=\check{\mathrm{H}}^{p}(U,\mathcal{K}_{q}^{s})\implies G^{\mathrm{ top}}_{q-p}(\mathcal{M}). \tag{8.11}\]
Proof.: The above statement is proved for (rational) algebraic K-theory by Thomason in [Tho85, theorem 2.15, Corollary 2.16, Corollary 2.17]. The proof in loc. cit. also applies to the easier case of (rational) topological K-theory. Indeed, pushforward maps along etale maps exist on topological K-theory, so topological K-theory satisfies the weak transfer property [Tho85, Definition 2.12], thus topological K-theory has etale cohomological descent [Tho85, Proposition 2.14], and then the statement of [Tho85, Theorem 2.15] also holds for topological K-theory.
Alternatively, the analogous statement holds for singular cohomology [Mil80, Chapter III, Theorem 2.17], then by a standard devissage argument also for Borel-Moore homology, and then the statement for topological K-theory can be obtained using [PTf, Proposition 3.1].
**Remark 8.5**.: Even more, the presheaf \(\mathcal{K}\) is a sheaf of spectra. Indeed, let \(\mathcal{K}^{s}\) be the sheafification of \(\mathcal{K}\). For any \((E\to M)\in\operatorname{Et}(M)\), we can compute the sections \(\mathcal{K}^{s}(E)\) using Cech cohomology for a cover \(U_{E}\) of \(E\):
\[\mathcal{K}^{s}(E)\xrightarrow{\sim}\check{\operatorname{H}}(U_{E},\mathcal{K }).\]
By the same argument as in Proposition 8.4, we also have that \(\mathcal{K}(E)\xrightarrow{\sim}\check{\operatorname{H}}(U_{E},\mathcal{K})\), thus \(\mathcal{K}\) is indeed a sheaf.
**Corollary 8.6**.: _The map (8.10) induces a weak equivalence:_
\[K^{\operatorname{top}}(\mathbb{T}(M)^{\operatorname{red}})\xrightarrow{\sim }\check{\operatorname{H}}(U,\mathcal{T}).\]
_Thus there is a spectral sequence_
\[\check{\operatorname{H}}^{p}(U,\mathcal{T}^{s}_{q})\implies K^{\operatorname{ top}}_{q-p}(\mathbb{T}(M)^{\operatorname{red}}).\]
Proof.: The map \(\eta_{\mathcal{K}}=\eta_{\mathcal{A}}\oplus\eta_{\mathcal{T}}\) is an isomorphism by Proposition 8.4, so \(\eta_{\mathcal{T}}\) is also an isomorphism.
Let \(\mathcal{H}_{q}\) be the presheaf of \(\mathbb{Q}\)-vector spaces such that, for \((Z\xrightarrow{e}M)\in U\), we have
\[\mathcal{H}_{q}(Z)=H^{\operatorname{BM}}_{q}(\mathcal{Z}).\]
Then \(\mathcal{H}_{q}=\pi_{q}\mathcal{H}\), where \(\mathcal{H}\) is the presheaf of Eilenberg-MacLane spectra. As for \(\mathcal{K}\), the presheaf \(\mathcal{H}\) is actually a sheaf. There is an spectral sequence analogous to (8.11):
\[E^{\prime}_{p,q}:=\check{\operatorname{H}}^{p}(U,\mathcal{H}^{s}_{q})\implies H ^{\operatorname{BM}}_{q-p}(\mathfrak{M})=H^{\operatorname{BM}}_{q-p}( \mathcal{M}), \tag{8.12}\]
see the proof of Proposition 8.4.
**Proposition 8.7**.: _We have \(\mathcal{K}_{1}=\widetilde{\mathcal{H}}^{s}_{1}=0\). Thus the terms \(E_{p,q}\) from (8.11) and \(E^{\prime}_{p,q}\) from (8.12) vanish for \(q\) odd._
Proof.: By [PTf, Proposition 3.1], it suffices to check that \(\widetilde{\mathcal{H}}^{s}_{1}=0\). It suffices to check that the stalks of \(\widetilde{\mathcal{H}}^{s}_{1}\) over \(y\in M\) are zero. We can define spectra \(\mathcal{H}^{\operatorname{an}}\) in the analytic topology, and \(\mathcal{H}^{\operatorname{an}}_{y}\cong\mathcal{H}_{y}\) for any \(y\in M\), which follows as in [Mil80, Chapter III, Theorem 3.12]. It thus suffices to check that \(H^{\operatorname{BM}}_{\operatorname{odd}}(V)=0\) for a system of open sets \(V\subset M\). By the local description from Subsection 8.2.2, we may assume that \(V\subset P(\boldsymbol{d})\) is an open neighborhood of the origin, where \(P(\boldsymbol{d})\) is the coarse space of dimension \(\boldsymbol{d}\) representations of the preprojective algebra of the Ext-quiver \(Q^{\circ}_{y}\).
Consider the action of \(\mathbb{C}^{*}\) on spaces of representations of the double quiver of \(Q^{\circ}_{y}\), which acts with weight one. It induces a scaling action on \(P(\boldsymbol{d})\) which contracts it onto \(0\). We can choose a system of open sets \(0\in V\subset P(\boldsymbol{d})\) such that \(V\) is homeomorphic to \(P(\boldsymbol{d})\) and \(\pi_{P}^{-1}(V)\) is homeomorphic to \(\mathcal{P}(\boldsymbol{d})\). It then suffices to check that \(H^{\operatorname{BM}}_{\operatorname{odd}}(\mathcal{P}(\boldsymbol{d}))=0\), which was proved by Davison in [106, Theorem A].
Let \(i\in\mathbb{Z}\). Consider the Chern character for the quotient stack \(\mathcal{M}\):
\[\operatorname{ch}\colon G^{\operatorname{top}}_{i}(\mathcal{M})\to\widetilde{ H}^{\operatorname{BM}}_{i}(\mathcal{M}),\]
see [PTf, Subsection 3.1]. There are analogous Chern characters for \(\mathcal{Z}\) with \((e\colon\mathcal{Z}\to\mathcal{M})\in\mathcal{U}\). By Proposition 8.7, there are compatible spectral sequences with terms in the bidegrees displayed below:
\[\begin{CD}\check{\mathrm{H}}^{2q-i}(U,\mathcal{T}^{s}_{2q})@>{}>>K^{\mathrm{top}}_{i}(\mathbb{T}(M))\\ @VVV@VVV\\ \check{\mathrm{H}}^{2q-i}(U,\mathcal{K}^{s}_{2q})@>{}>>G^{\mathrm{top}}_{i}(\mathcal{M})\\ @V{\mathrm{ch}}VV@V{\mathrm{ch}}VV\\ \check{\mathrm{H}}^{2q-i}(U,\widetilde{\mathcal{H}}^{s}_{2q})@>{}>>\widetilde{H}^{\mathrm{BM}}_{i}(\mathcal{M}).\end{CD} \tag{8.13}\]
Let \(F_{\bullet}\mathcal{K}^{s}_{2q}\subset\mathcal{K}^{s}_{2q}\) and \(F_{\bullet}\mathcal{T}^{s}_{2q}\subset\mathcal{T}^{s}_{2q}\) be the increasing filtrations defined by
\[F_{j}\mathcal{K}^{s}_{2q}=\mathrm{ch}^{-1}(\mathcal{H}^{s}_{\leqslant 2q+2j}),\;F_{j}\mathcal{T}^{s}_{2q}=\mathcal{T}^{s}_{2q}\cap F_{j}\mathcal{K}^{s}_{2q}.\]
We denote by \(\mathrm{gr}_{\bullet}\) the associated graded with respect to the above filtrations. We obtain compatible spectral sequences:
\[\begin{CD}\check{\mathrm{H}}^{2q-i}(U,\mathrm{gr}_{j}\mathcal{T}^{s}_{2q})@>{}>>\mathrm{gr}_{j}K^{\mathrm{top}}_{i}(\mathbb{T}(M))\\ @V{\alpha}VV@VVV\\ \check{\mathrm{H}}^{2q-i}(U,\mathrm{gr}_{j}\mathcal{K}^{s}_{2q})@>{}>>\mathrm{gr}_{j}G^{\mathrm{top}}_{i}(\mathcal{M})\\ @V{\mathrm{c}}VV@V{\mathrm{c}}VV\\ \check{\mathrm{H}}^{2q-i}(U,\mathcal{H}^{s}_{2q+2j})@>{d}>>H^{\mathrm{BM}}_{i+2j}(\mathcal{M}),\end{CD} \tag{8.14}\]
where the cycle maps \(\mathrm{c}\) are isomorphisms by [PTf, Proposition 3.1].
**Proposition 8.8**.: _The image of the map \(d\mathrm{c}\alpha\) is \(H^{-i-2j}(M,\mathcal{BPS}_{v,w})\)._
Proof.: By [PTf, Theorem 9.2], the image of \(\mathrm{c}\alpha\) is the bi-graded complex with terms \(E^{\circ}_{p,q}:=\check{\mathrm{H}}^{2q-i}(U,\mathcal{H}^{-2q-2j}(\mathcal{BPS}_{v,w}))\). The restriction of \(d\) to \(E^{\circ}_{p,q}\) corresponds to the Cech spectral sequence for \(\mathcal{BPS}_{v,w}\):
\[d\colon E^{\circ}_{p,q}\implies H^{-i-2j}(M,\mathcal{BPS}_{v,w}).\]
The conclusion then follows.
We obtain the following:
**Theorem 8.9**.: _For any \(i\in\mathbb{Z}\), there is an isomorphism_
\[\mathrm{c}\colon\mathrm{gr}_{j}K^{\mathrm{top}}_{i}(\mathbb{T}^{\sigma}_{S}(v )^{\mathrm{red}}_{w})\xrightarrow{\sim}H^{-i-2j}\left(M^{\sigma}_{S}(v), \mathcal{BPS}_{v,w}\right).\]
Proof.: The conclusion follows from the diagram (8.14) and Proposition 8.8.
Proof of Theorem 8.1.: By Theorem 8.9 and Lemma 8.3, it suffices to check that there is a non-canonical isomorphism \(K^{\mathrm{top}}_{i}(\mathbb{T}^{\sigma}_{S}(v)^{\mathrm{red}}_{w})\cong \bigoplus_{j\in\mathbb{Z}}\mathrm{gr}_{j}K^{\mathrm{top}}_{i}(\mathbb{T}^{ \sigma}_{S}(v)^{\mathrm{red}}_{w})\). It suffices to check that the Chern character
\[\mathrm{ch}\colon K^{\mathrm{top}}_{i}(\mathbb{T}^{\sigma}_{S}(v)^{\mathrm{ red}}_{w})\hookrightarrow G^{\mathrm{top}}_{i}(\mathcal{M}^{\sigma}_{S}(v))\to \widetilde{H}^{\mathrm{BM}}_{i}(\mathcal{M}^{\sigma}_{S}(v))\]
is injective. By the diagram (8.13), it suffices to check that the following Chern character is injective
\[\mathrm{ch}\colon K^{\mathrm{top}}_{i}(\mathbb{T}(R)^{\mathrm{red}}) \hookrightarrow G^{\mathrm{top}}_{i}(\mathcal{R})\to\widetilde{H}^{\mathrm{BM }}_{i}(\mathcal{R}),\]
where \((R\xrightarrow{e}M)\in U\). This was proved in [PTf, Proposition 9.9]. |
2309.13771 | Matching powers of monomial ideals and edge ideals of weighted oriented
graphs | We introduce the concept of matching powers of monomial ideals. Let $I$ be a
monomial ideal of $S=K[x_1,\dots,x_n]$, with $K$ a field. The $k$th matching
power of $I$ is the monomial ideal $I^{[k]}$ generated by the products
$u_1\cdots u_k$ where $u_1,\dots,u_k$ is a monomial regular sequence contained
in $I$. This concept naturally generalizes that of squarefree powers of
squarefree monomial ideals. We study depth and regularity functions of matching
powers of monomial ideals and edge ideals of weighted oriented graphs. We show
that the last nonvanishing power of a quadratic monomial ideal is always
polymatroidal and thus has a linear resolution. When $I$ is a non-quadratic
edge ideal of a weighted oriented forest, we characterize when $I^{[k]}$ has a
linear resolution. | Nursel Erey, Antonino Ficarra | 2023-09-24T22:53:46Z | http://arxiv.org/abs/2309.13771v2 | # Matching powers of monomial ideals and edge ideals of weighted oriented graphs
###### Abstract.
We introduce the concept of matching powers of monomial ideals. Let \(I\) be a monomial ideal of \(S=K[x_{1},\ldots,x_{n}]\), with \(K\) a field. The \(k\)th matching power of \(I\) is the monomial ideal \(I^{[k]}\) generated by the products \(u_{1}\cdots u_{k}\) where \(u_{1},\ldots,u_{k}\) is a monomial regular sequence contained in \(I\). This concept naturally generalizes that of squarefree powers of squarefree monomial ideals. We study depth and regularity functions of matching powers of monomial ideals and edge ideals of weighted oriented graphs. We show that the last nonvanishing power of a quadratic monomial ideal is always polymatroidal and thus has a linear resolution. When \(I\) is a non-quadratic edge ideal of a weighted oriented forest, we characterize when \(I^{[k]}\) has a linear resolution.
Key words and phrases: Edge Ideals, Linear Resolutions, Matching Powers, Polymatroids, Weighted Graphs. 2020 Mathematics Subject Classification: Primary 13F20; Secondary 05E40
## Introduction
Let \(S=K[x_{1},\ldots,x_{n}]\) be the polynomial ring over a field \(K\). Recall that the edge ideal of a finite simple graph \(G\) with vertices \(x_{1},\ldots,x_{n}\) is generated by all the monomials \(x_{i}x_{j}\) such that \(\{x_{i},x_{j}\}\) is an edge of \(G\). The study of minimal free resolutions of edge ideals and their powers produced a great deal of interaction between combinatorics and commutative algebra. One of the most natural problems in this regard is to understand when those ideals, or more generally monomial ideals, have linear resolutions. Although edge ideals with linear resolutions are combinatorially characterized by a famous result of Froberg [16], it is unknown in general when powers of edge ideals have linear resolutions. Herzog, Hibi and Zheng [21] showed that if an edge ideal has a linear resolution, then so does every power of it. It is their result that served as a starting point for the close examination of linear resolutions of powers of edge ideals by many researchers, resulting in several interesting results and conjectures.
For any squarefree monomial ideal \(I\) of \(S\), the \(k\)th squarefree power of \(I\), denoted by \(I^{[k]}\) is the monomial ideal generated by all squarefree monomials in \(I^{k}\). Recently, squarefree powers of edge ideals were studied in [4, 7, 8, 9, 10, 14, 27, 28]. Determining linearity of minimal free resolutions of squarefree powers or finding their invariants is as challenging as those of ordinary powers although squarefree and ordinary powers have quite different behavior. In the case that \(I\) is considered as edge ideal of a hypergraph \(\mathcal{H}\), the minimal monomial generators of \(I^{[k]}\) correspond
to matchings of \(\mathcal{H}\) of size \(k\), which makes the combinatorial aspect of squarefree powers interesting as well.
This paper aims at presenting a wider framework for the study of squarefree powers by introducing a more general concept which we call matching powers. If \(I\) is a monomial ideal of \(S\), then the \(k^{\text{th}}\)_matching power_\(I^{[k]}\) of \(I\) is generated by the products \(u_{1}\cdots u_{k}\) where \(u_{1},\ldots,u_{k}\) is a monomial regular sequence contained in \(I\), or equivalently, \(u_{1},\ldots,u_{k}\) is a sequence of monomials with pairwise disjoint support. Indeed, if \(I\) is a squarefree monomial ideal, then the \(k\)th squarefree power of \(I\) is the same as the \(k\)th matching power of \(I\). With this new concept, since we are no longer restricted to squarefree monomial ideals, we can consider not only edge ideals of simple graphs but also edge ideals of weighted oriented graphs.
We now discuss how the paper is organized. In Section 1, we summarize basic facts of the theory of matching powers. We define the normalized depth function \(g_{I}\) of a monomial ideal \(I\) in Definition 1.5. This function generalizes the normalized depth function introduced in [10] for squarefree monomial ideals. It was conjectured in [10] that \(g_{I}\) is a non-increasing function for any squarefree monomial ideal. We show in Proposition 1.8 that this conjecture can be equivalently stated for monomial ideals. Hence, the normalized depth functions of squarefree monomial ideals comprise all normalized depth functions. In Theorem 1.10 we show that if \(I\) is a quadratic monomial ideal, then the highest nonvanishing matching power of \(I\) (namely \(I^{[\nu(I)]}\), where \(\nu(I)\) is the monomial grade of \(I\)) is polymatroidal. Since polymatroidal ideals have linear quotients, Theorem 1.10 provides a stronger result than [4, Theorem 4.1].
In Section 2, we turn our attention to edge ideals of weighted oriented graphs. We make comparisons between homological invariants of matching powers \(I(\mathcal{D})^{[k]}\) and \(I(G)^{[k]}\), where \(G\) is the underlying graph of a weighted oriented graph \(\mathcal{D}\). We provide a lower bound for the regularity of \(I(\mathcal{D})^{[k]}\) when \(k\) does not exceed the induced matching number of the underlying graph of \(\mathcal{D}\).
In Section 3, we are interested in linearly related matching powers. The main result of this section is Theorem 3.6 which characterizes when \(I(\mathcal{D})^{[k]}\) has a linear resolution or is linearly related provided that the underlying graph \(G\) of \(\mathcal{D}\) has the property that every subgraph of \(G\) has at most one perfect matching and \(I(\mathcal{D})\neq I(G)\). In particular, this result combined with [8, Theorem 41] gives a complete classification of weighted oriented forests \(\mathcal{D}\) such that \(I(\mathcal{D})^{[k]}\) has a linear resolution. The last section is devoted to demonstrate how one can recursively construct those weighted oriented forests described in Theorem 3.6.
## 1. Matching Powers
Let \(S=K[x_{1},\ldots,x_{n}]\) be the standard graded polynomial ring with coefficients in a field \(K\). Recall that \(f_{1},\ldots,f_{m}\) is a _regular sequence_ (on \(S\)) if \(f_{i}\) is a non zero-divisor on \(S/(f_{1},\ldots,f_{i-1})\) for \(i=1,\ldots,m\).
Let \(I\subset S\) be a monomial ideal. We denote by \(G(I)\) its unique minimal monomial generating set, and by \(M(I)\) the set of monomials belonging to \(I\).
The \(k\)_th matching power_ of \(I\subset S\) is the monomial ideal defined as
\[I^{[k]}\ =\ (f_{1}\cdots f_{k}\ :\ f_{i}\in M(I),f_{1},\ldots,f_{k}\text{ is a regular sequence}).\]
If \(u\) is a monomial, we set \(\operatorname{supp}(u)=\{i:x_{i}\ \text{divides}\ u\}\). It is easy to check when a sequence of monomials is a (monomial) regular sequence. Indeed,
**Lemma 1.1**.: _Let \(v_{1},\ldots,v_{r}\) be monomials of \(S\). Then \(v_{1},\ldots,v_{r}\) is a regular sequence \((\)in any order\()\) if and only if \(\operatorname{supp}(v_{i})\cap\operatorname{supp}(v_{j})=\emptyset\) for all \(1\leq i<j\leq r\)._
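For instance, \(x_{1}x_{2},\,x_{3}^{2},\,x_{4}x_{5}\) is a monomial regular sequence, whereas \(x_{1}x_{2},\,x_{2}x_{3}\) is not, since both supports contain \(2\).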
Let \(I\subset S\) be a monomial ideal. The set \(\operatorname{supp}(I)=\bigcup_{u\in G(I)}\operatorname{supp}(u)\) is called the _support_ of \(I\). We say that \(I\) is _fully supported_ if \(\operatorname{supp}(I)=\{1,2,\ldots,n\}\). From now on, we tacitly assume that all monomial ideals we consider are fully supported.
We denote by \(\nu(I)\) the _monomial grade_ of \(I\), that is, the maximal length of a monomial regular sequence contained in \(I\). In the next proposition, we collect some basic facts about matching powers.
**Proposition 1.2**.: _Let \(I\subset S\) be a monomial ideal. Then, the following hold._
1. \(I^{[k]}=(u_{1}\cdots u_{k}\ :\ u_{i}\in G(I),\operatorname{supp}(u_{i})\cap \operatorname{supp}(u_{j})=\emptyset,1\leq i<j\leq k)\)_._
2. \(I^{[k]}\neq 0\) _if and only if_ \(1\leq k\leq\nu(I)\)_._
3. \(I^{[k]}\) _is a monomial ideal._
Proof.: Statements (b) and (c) follow from statement (a). To prove (a), we set \(J=(u_{1}\cdots u_{k}:u_{i}\in G(I),\operatorname{supp}(u_{i})\cap\operatorname{supp}(u_{j})=\emptyset,1\leq i<j\leq k)\) and show that \(J=I^{[k]}\). By Lemma 1.1, it is clear that \(J\subseteq I^{[k]}\).
Conversely, let \(v_{1},\ldots,v_{k}\) be a monomial regular sequence contained in \(I\). Then \(v_{1}\cdots v_{k}\in I^{[k]}\) and by Lemma 1.1, we have \(\operatorname{supp}(v_{i})\cap\operatorname{supp}(v_{j})=\emptyset\) for \(i\neq j\). Since \(I\) is a monomial ideal, \(v_{i}=f_{i}u_{i}\) where \(f_{i}\) is a monomial of \(S\) and \(u_{i}\in G(I)\), for all \(i\). Hence, \(\operatorname{supp}(u_{i})\cap\operatorname{supp}(u_{j})=\emptyset\) for all \(i\neq j\), as well. Thus \(u_{1}\cdots u_{k}\in J\) divides \(v_{1}\cdots v_{k}\), and this implies that \(I^{[k]}\subseteq J\).
**Example 1.3**.:
1. Suppose that \(I\) is a squarefree monomial ideal. Then, a product \(u_{1}\cdots u_{k}\) with \(u_{i}\in G(I)\) is in \(I^{[k]}\) if and only if \(u_{1}\cdots u_{k}\) is squarefree. Thus, in this case \(I^{[k]}\) is just the usual \(k\)th _squarefree power_ of \(I\) introduced in [4].
2. Let \(I\) be a complete intersection monomial ideal generated by \(u_{1},\ldots,u_{m}\). Then \(I^{[k]}=(u_{i_{1}}\cdots u_{i_{k}}:1\leq i_{1}<\cdots<i_{k}\leq m)\) and \(\nu(I)=m\).
3. Let \(I=(x_{1}^{2},\,x_{2}^{2},\,x_{3}^{2},\,x_{3}x_{4},\,x_{5}^{5})\). Then \(\nu(I)=4\) and \[I^{[2]} = (x_{1}^{2}x_{2}^{2},\,x_{1}^{2}x_{3}^{2},\,x_{1}^{2}x_{3}x_{4},\,x_{1}^{2}x_{5}^{5},\,x_{2}^{2}x_{3}^{2},\,x_{2}^{2}x_{3}x_{4},\,x_{2}^{2}x_{5}^{5},\,x_{3}^{2}x_{5}^{5},\,x_{3}x_{4}x_{5}^{5})\] \[I^{[3]} = (x_{1}^{2}x_{2}^{2}x_{3}^{2},\,x_{1}^{2}x_{2}^{2}x_{3}x_{4},\,x_{1}^{2}x_{2}^{2}x_{5}^{5},\,x_{1}^{2}x_{3}^{2}x_{5}^{5},\,x_{1}^{2}x_{3}x_{4}x_{5}^{5},\,x_{2}^{2}x_{3}^{2}x_{5}^{5},\,x_{2}^{2}x_{3}x_{4}x_{5}^{5}),\] \[I^{[4]} = (x_{1}^{2}x_{2}^{2}x_{3}^{2}x_{5}^{5},\,x_{1}^{2}x_{2}^{2}x_{3}x_{4}x_{5}^{5}).\]
### Normalized depth function
For a monomial \(u\in S\), \(u\neq 1\), the \(x_{i}\)_-degree_ of \(u\) is defined as the integer
\[\deg_{x_{i}}(u)\ =\ \max\{j\geq 0:x_{i}^{j}\ \text{divides}\ u\}.\]
Let \(I\subset S\) be a monomial ideal. The _initial degree_ of \(I\), denoted by \(\operatorname{indeg}(I)\), is the smallest degree of a monomial belonging to \(I\). Following [11], we define the _bounding multidegree_ of \(I\) to be the vector
\[{\bf deg}(I)\ =\ (\deg_{x_{1}}(I),\ldots,\deg_{x_{n}}(I)),\]
with
\[\deg_{x_{i}}(I)\ =\ \max_{u\in G(I)}\deg_{x_{i}}(u),\ \ \mbox{for all}\ \ \ 1\leq i\leq n.\]
We provide a lower bound for the depth of \(S/I^{[k]}\) in terms of the initial degree of \(I^{[k]}\) and the bounding multidegree of \(I\) as follows:
**Theorem 1.4**.: _Let \(I\subset S\) be a monomial ideal. Then, for all \(1\leq k\leq\nu(I)\), we have_
\[\operatorname{depth}(S/I^{[k]})\ \geq\ \operatorname{indeg}(I^{[k]})-1+(n-|{ \bf deg}(I)|).\]
Proof.: We divide the proof in three steps.
**(Step 1).** Let \(J\subset S\) be a monomial ideal. We claim that
\[\operatorname{pd}(J)\leq|{\bf deg}(J)|-\operatorname{indeg}(J).\]
To prove the assertion, we use the Taylor resolution. Let \(\beta_{i,j}(J)\) be a non-zero graded Betti number with \(i=\operatorname{pd}(J)\). Then \(j\geq\operatorname{indeg}(J)+\operatorname{pd}(J)\). It follows from the Taylor resolution that the highest shift in the minimal resolution of \(J\) is at most \(|{\bf deg}(J)|\), see [11, Theorem 1.3]. Thus, \(|{\bf deg}(J)|\geq j\). Altogether, we obtain \(|{\bf deg}(J)|\geq j\geq\operatorname{indeg}(J)+\operatorname{pd}(J)\) and the assertion follows.
**(Step 2).** We claim that \(|{\bf deg}(I^{[k]})|\leq|{\bf deg}(I)|\) for all \(1\leq k\leq\nu(I)\). Indeed, we even show that \(\deg_{x_{\ell}}(I^{[k]})\leq\deg_{x_{\ell}}(I)\) for all \(\ell\). A set of generators of \(I^{[k]}\) is
\[\Omega\ =\ \{u_{1}\cdots u_{k}\ :\ u_{i}\in G(I),\operatorname{supp}(u_{i}) \cap\operatorname{supp}(u_{j})=\emptyset,1\leq i<j\leq k\}.\]
Thus, \(G(I^{[k]})\) is a subset of \(\Omega\). Hence, if \(v\in G(I^{[k]})\), then \(v=u_{1}\cdots u_{k}\in\Omega\). Let \(x_{\ell}\) be a variable dividing \(v\). Since the supports of \(u_{1},\ldots,u_{k}\) are pairwise disjoint, \(x_{\ell}\) divides exactly one of the monomials \(u_{i}\), say \(u_{i_{\ell}}\). Therefore, \(\deg_{x_{\ell}}(v)\leq\deg_{x_{\ell}}(u_{i_{\ell}})\leq\deg_{x_{\ell}}(I)\) and the assertion follows.
**(Step 3).** By Steps 1 and 2 we have
\[\operatorname{pd}(S/I^{[k]})\ \leq\ |{\bf deg}(I^{[k]})|-\operatorname{ indeg}(I^{[k]})+1\ \leq\ |{\bf deg}(I)|-\operatorname{ indeg}(I^{[k]})+1.\]
The asserted inequality follows from the Auslander-Buchsbaum formula.
As a consequence of Theorem 1.4, we can give the next definition:
**Definition 1.5**.: Let \(I\subset S\) be a monomial ideal. For all \(1\leq k\leq\nu(I)\), we set
\[g_{I}(k)\ =\ \operatorname{depth}(S/I^{[k]})+|{\bf deg}(I)|-n-(\operatorname{ indeg}(I^{[k]})-1),\]
and call \(g_{I}\) the _normalized depth function_ of \(I\).
By Theorem 1.4 we have \(g_{I}(k)\geq 0\) for all \(1\leq k\leq\nu(I)\).
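As a quick illustration of Theorem 1.4 and Definition 1.5, take the complete intersection \(I=(x_{1},x_{2})\subset S=K[x_{1},x_{2},x_{3}]\). Then \(\nu(I)=2\), \({\bf deg}(I)=(1,1,0)\), \(I^{[1]}=I\) and \(I^{[2]}=(x_{1}x_{2})\). Since \(\operatorname{depth}(S/I^{[1]})=1\) and \(\operatorname{depth}(S/I^{[2]})=2\), the lower bound of Theorem 1.4 is attained for both matching powers, and
\[g_{I}(1)=1+2-3-0=0,\qquad g_{I}(2)=2+2-3-1=0,\]
so that \(g_{I}\) is constant, hence nonincreasing.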
If \(I\subset S\) is a squarefree monomial ideal, then \({\bf deg}(I)={\bf 1}=(1,\ldots,1)\) and so
\[g_{I}(k)=\operatorname{depth}(S/I^{[k]})-(\operatorname{indeg}(I^{[k]})-1)\]
is the normalized depth function of \(I\) introduced in [10]. It is expected that the following is true.
**Conjecture 1.6**.: (Erey-Herzog-Hibi-Madani [10]). _Let \(I\subset S\) be a squarefree monomial ideal. Then \(g_{I}\) is a nonincreasing function._
Since the concept of the normalized depth function is extended from squarefree monomial ideals to all monomial ideals, it is natural to expect that the following more general statement is true.
**Conjecture 1.7**.: _Let \(I\subset S\) be a monomial ideal. Then \(g_{I}\) is nonincreasing._
It is clear that Conjecture 1.7 implies Conjecture 1.6. Surprisingly, we show that the converse also holds.
**Proposition 1.8**.: _Conjectures 1.6 and 1.7 are equivalent._
To prove this result, we use the _polarization_ technique. Let \(u=x_{1}^{b_{1}}\cdots x_{n}^{b_{n}}\in S\) be a monomial. Then, the _polarization_ of \(u\) is the monomial
\[u^{\wp}=\prod_{i=1}^{n}(\prod_{j=1}^{b_{i}}x_{i,j})=\prod_{\begin{subarray}{c }1\leq i\leq n\\ b_{i}>0\end{subarray}}x_{i,1}x_{i,2}\cdots x_{i,b_{i}}\]
in the polynomial ring \(K[x_{i,j}:1\leq i\leq n,1\leq j\leq b_{i}]\).
Let \(I\subset S\) be a monomial ideal. Then, the _polarization_ of \(I\) is defined to be the squarefree monomial ideal \(I^{\wp}\) of \(S^{\wp}=K[x_{i,j}:1\leq i\leq n,1\leq j\leq\deg_{x_{i}}(I)]\) with minimal generating set \(G(I^{\wp})=\{u^{\wp}:u\in G(I)\}\).
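For instance, if \(I=(x_{1}^{2},\,x_{1}x_{2})\subset K[x_{1},x_{2}]\), then \(\deg_{x_{1}}(I)=2\), \(\deg_{x_{2}}(I)=1\) and
\[I^{\wp}\ =\ (x_{1,1}x_{1,2},\;x_{1,1}x_{2,1})\ \subset\ S^{\wp}=K[x_{1,1},x_{1,2},x_{2,1}].\]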
Proof of Proposition 1.8.: Suppose that Conjecture 1.6 holds, and let \(I\subset S\) be a monomial ideal. We claim that
\[(I^{[k]})^{\wp}\ =\ (I^{\wp})^{[k]},\ \ \text{for all}\ \ 1\leq k\leq\nu(I). \tag{1}\]
Indeed, let \(v_{1},\ldots,v_{k}\in G(I^{\wp})\) with \(\operatorname{supp}(v_{i})\cap\operatorname{supp}(v_{j})=\emptyset\) for all \(1\leq i<j\leq k\). Then \(v_{1}\cdots v_{k}\in(I^{\wp})^{[k]}\). Since \(G(I^{\wp})=\{u^{\wp}:u\in G(I)\}\), we see that \(v_{i}=u_{i}^{\wp}\) with \(u_{i}\in G(I)\) for all \(i\). It is clear that the condition \(\operatorname{supp}(u_{i}^{\wp})\cap\operatorname{supp}(u_{j}^{\wp})=\emptyset\) is verified if and only if \(\operatorname{supp}(u_{i})\cap\operatorname{supp}(u_{j})=\emptyset\). By this discussion, we have
\[(I^{\wp})^{[k]} =\ (u_{1}^{\wp}\cdots u_{k}^{\wp}:u_{i}^{\wp}\in G(I^{\wp}), \operatorname{supp}(u_{i}^{\wp})\cap\operatorname{supp}(u_{j}^{\wp})= \emptyset,1\leq i<j\leq k)\] \[=\ (u_{1}^{\wp}\cdots u_{k}^{\wp}:u_{i}\in G(I),\operatorname{ supp}(u_{i})\cap\operatorname{supp}(u_{j})=\emptyset,1\leq i<j\leq k)\] \[=\ ((u_{1}\cdots u_{k})^{\wp}:u_{i}\in G(I),\operatorname{supp}(u_{i })\cap\operatorname{supp}(u_{j})=\emptyset,1\leq i<j\leq k)\] \[=\ (I^{[k]})^{\wp}.\]
In the third equality we used the equation \(u_{1}^{\wp}\cdots u_{k}^{\wp}=(u_{1}\cdots u_{k})^{\wp}\), which holds because the monomials \(u_{1},\ldots,u_{k}\) are in pairwise disjoint sets of variables. Hence, equation (1) follows.
By [19, Corollary 1.6.3(d)] and equation (1) it follows that
\[\operatorname{pd}(S/I^{[k]})\ =\ \operatorname{pd}(S^{\wp}/(I^{[k]})^{\wp})\ =\ \operatorname{pd}(S^{\wp}/(I^{\wp})^{[k]}).\]
Taking into account that \(S^{\wp}\) is a polynomial ring in \(|\mathbf{deg}(I)|\) variables, applying the Auslander-Buchsbaum formula we get
\[\operatorname{depth}(S/I^{[k]})+|\mathbf{deg}(I)|-n=\operatorname{depth}(S^{ \wp}/(I^{\wp})^{[k]}).\]
Since \(\operatorname{indeg}(I^{[k]})=\operatorname{indeg}((I^{\wp})^{[k]})\), subtracting \(\operatorname{indeg}(I^{[k]})-1\) from both sides of the above equation, we obtain
\[g_{I}(k)\ =\ g_{I^{\wp}}(k),\ \text{ for all }\ 1\leq k\leq\nu(I).\]
By our assumption, \(g_{I^{\wp}}\) is nonincreasing, because \(I^{\wp}\) is a squarefree monomial ideal. Hence, \(g_{I}\) is nonincreasing, as well.
In the course of the proof, we have shown:
**Corollary 1.9**.: _Let \(I\subset S\) be a monomial ideal. Then, the following hold._
(a) \(g_{I}=g_{I^{\wp}}\) _and_ \(\nu(I)=\nu(I^{\wp})\)_._
(b) \((I^{[k]})^{\wp}=(I^{\wp})^{[k]}\) _for all_ \(1\leq k\leq\nu(I)\)_._
(c) \(\operatorname{depth}(S/I^{[k]})=\operatorname{depth}(S^{\wp}/(I^{\wp})^{[k]})-|\mathbf{deg}(I)|+n\)_, for all_ \(1\leq k\leq\nu(I)\)_._
### Highest nonvanishing matching power of a quadratic monomial ideal
A monomial ideal \(I\subset S\) generated in a single degree is called _polymatroidal_ if the _exchange property_ holds: for all \(u,v\in G(I)\) and all \(i\) with \(\deg_{x_{i}}(u)>\deg_{x_{i}}(v)\) there exists \(j\) such that \(\deg_{x_{j}}(u)<\deg_{x_{j}}(v)\) and \(x_{j}(u/x_{i})\in G(I)\). A squarefree polymatroidal ideal is called _matroidal_.
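For instance, the ideal \(I=(x_{1}x_{2},\,x_{1}x_{3},\,x_{2}x_{3})\) is matroidal: taking, say, \(u=x_{1}x_{2}\), \(v=x_{2}x_{3}\) and \(i=1\), we may choose \(j=3\), since \(\deg_{x_{3}}(u)=0<1=\deg_{x_{3}}(v)\) and \(x_{3}(u/x_{1})=x_{2}x_{3}\in G(I)\); the remaining pairs are checked in the same way.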
A polymatroidal ideal has linear quotients with respect to the lexicographic order induced by any ordering of the variables. Indeed, a polymatroidal ideal is weakly polymatroidal and the above claim follows from [19, Proof of Theorem 12.7.2].
Our next main result states that the highest nonvanishing matching power of a quadratic monomial ideal is polymatroidal and thus it has linear quotients.
**Theorem 1.10**.: _Let \(I\subset S\) be a monomial ideal generated in degree two. Then \(I^{[\nu(I)]}\) is a polymatroidal ideal._
We postpone the proof of Theorem 1.10 until the end of this section because it is based upon the squarefree version of the theorem which we will prove first. We will use the technique of polarization to pass from the squarefree case to the non-squarefree case. If \(I\) is a polymatroidal ideal, then \(I^{\wp}\) is not necessarily polymatroidal. For instance, the ideal \(I=(x_{1}^{2},x_{1}x_{2},x_{2}^{2})\) is polymatroidal but \(I^{\wp}\) is not. On the other hand, we have
**Lemma 1.11**.: _Let \(I\subset S\) be a monomial ideal. If \(I^{\wp}\) is polymatroidal, then so is \(I\)._
Proof.: Let \(u,v\in G(I)\) with \(p=\deg_{x_{i}}(u)>\deg_{x_{i}}(v)\). Then \(x_{i,p}\) divides \(u^{\wp}\) but not \(v^{\wp}\). In fact,
\[\deg_{x_{i,p}}(u^{\wp})=1>0=\deg_{x_{i,p}}(v^{\wp}).\]
Since \(I^{\wp}\) is polymatroidal, there exists \(x_{j,k}\) with \(j\neq i\) such that
\[\deg_{x_{j,k}}(v^{\wp})=1>0=\deg_{x_{j,k}}(u^{\wp})\]
and \(x_{j,k}(u^{\wp}/x_{i,p})\in G(I^{\wp})\). This implies \(\deg_{x_{j}}(u)=k-1\) and \(\deg_{x_{j}}(v)\geq k\). Then
\[(x_{j}u/x_{i})^{\wp}=x_{j,k}(u^{\wp}/x_{i,p})\in G(I^{\wp})\]
and thus \(x_{j}u/x_{i}\in G(I)\).
Now, let us recall some definitions and fix some notation. Hereafter, for an integer \(n\geq 1\), we set \([n]=\{1,2,\ldots,n\}\). If \(F\subseteq[n]\) is nonempty, we set \(\mathbf{x}_{F}=\prod_{i\in F}x_{i}\).
Let \(G\) be a finite simple graph on vertex set \(V(G)=[n]\) and with edge set \(E(G)\). The _edge ideal_ of \(G\) is the ideal \(I(G)=(x_{i}x_{j}:\{i,j\}\in E(G))\) of \(S=K[x_{1},\ldots,x_{n}]\). A _matching_ of \(G\) is a set of edges of \(G\) which are pairwise disjoint. If \(M\) is a matching, then we denote by \(V(M)\) the set of vertices \(\bigcup_{e\in M}e\). We denote by \(\nu(G)\) the _matching number_ of \(G\) which is the maximum size of a matching of \(G\). Then one can verify that \(\nu(I(G))=\nu(G)\).
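For instance, if \(G\) is the path with edges \(\{1,2\},\{2,3\},\{3,4\}\), then \(I(G)=(x_{1}x_{2},\,x_{2}x_{3},\,x_{3}x_{4})\), the only matching of size \(2\) is \(\{\{1,2\},\{3,4\}\}\), and therefore \(\nu(G)=2\) and \(I(G)^{[2]}=(x_{1}x_{2}x_{3}x_{4})\).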
Bigdeli et al. showed in [4, Theorem 4.1] that \(I(G)^{[\nu(G)]}\) has linear quotients for any finite simple graph \(G\). We strengthen their result as follows:
**Theorem 1.12**.: _Let \(G\) be a finite simple graph. Then \(I(G)^{[\nu(G)]}\) is polymatroidal._
Proof.: Set \(k=\nu(G)\), and let \(u,v\in G(I(G)^{[k]})\) and \(i\) such that \(\deg_{x_{i}}(u)>\deg_{x_{i}}(v)\). Our job is to find \(j\) such that \(\deg_{x_{j}}(u)<\deg_{x_{j}}(v)\) and \(x_{j}(u/x_{i})\in G(I(G)^{[k]})\).
Since \(\nu(G)=\nu(I(G))\), we have
\[u=\mathbf{x}_{e_{1}}\cdots\mathbf{x}_{e_{k}}\quad\text{and}\quad v=\mathbf{x}_ {f_{1}}\cdots\mathbf{x}_{f_{k}},\]
where \(M_{u}=\{e_{1},\ldots,e_{k}\}\) and \(M_{v}=\{f_{1},\ldots,f_{k}\}\) are \(k\)-matchings of \(G\). Up to relabelling, we have \(e_{1}=\{i,h\}\) for some \(h\in[n]\). Since \(\deg_{x_{i}}(u)>\deg_{x_{i}}(v)\) and \(u\) and \(v\) are squarefree, it follows that \(i\notin V(M_{v})\). Thus \(h\in V(M_{v})\), otherwise \(\{e_{1},f_{1},\ldots,f_{k}\}\) would be a \((k+1)\)-matching of \(G\), against the fact that \(k=\nu(G)\). Thus, we may assume that \(f_{1}=\{h,i_{1}\}\) for some vertex \(i_{1}\neq h\).
Suppose that \(i_{1}\notin V(M_{u})\). Then we have \(\deg_{x_{i_{1}}}(u)<\deg_{x_{i_{1}}}(v)\) and
\[x_{i_{1}}(u/x_{i})=(x_{h}x_{i_{1}})\mathbf{x}_{e_{2}}\cdots\mathbf{x}_{e_{k}} \in G(I(G)^{[k]}).\]
The exchange property holds in this case.
Otherwise, if \(i_{1}\in V(M_{u})\), then we may assume that \(e_{2}=\{i_{1},j_{1}\}\) for some vertex \(j_{1}\notin\{i,h\}\). Then, \(j_{1}\) must be in \(V(M_{v})\), otherwise \(\{\{i,h\},\{i_{1},j_{1}\},f_{2},\ldots,f_{k}\}\) would be a \((k+1)\)-matching of \(G\), which is absurd. Hence, we may assume that \(f_{2}=\{j_{1},i_{2}\}\) for some \(i_{2}\notin\{i,h,i_{1},j_{1}\}\). Now, we distinguish two more cases.
Suppose that \(i_{2}\notin V(M_{u})\). Then we have \(\deg_{x_{i_{2}}}(u)<\deg_{x_{i_{2}}}(v)\) and
\[x_{i_{2}}(u/x_{i})=(x_{h}x_{i_{1}})(x_{j_{1}}x_{i_{2}})\mathbf{x}_{e_{3}} \cdots\mathbf{x}_{e_{k}}\in G(I(G)^{[k]}).\]
Thus, we are finished in this case.
Otherwise, if \(i_{2}\in V(M_{u})\), then we may assume that \(e_{3}=\{i_{2},j_{2}\}\) for some vertex \(j_{2}\notin\{i,h,i_{1},j_{1},i_{2}\}\). Arguing as before, we obtain that \(j_{2}\in V(M_{v})\), and we can assume that \(f_{3}=\{j_{2},i_{3}\}\) for some vertex \(i_{3}\notin\{i,h,i_{1},j_{1},i_{2}\}\).
Iterating this argument, we obtain at the \(p\)th step that
(i) \(e_{1}=\{i,h\}\), \(e_{2}=\{i_{1},j_{1}\}\),..., \(e_{p}=\{i_{p-1},j_{p-1}\}\) and
(ii) \(f_{1}=\{h,i_{1}\}\), \(f_{2}=\{j_{1},i_{2}\}\),..., \(f_{p}=\{j_{p-1},i_{p}\}\).
Thus, if \(i_{p}\notin V(M_{u})\), then \(\deg_{x_{i_{p}}}(u)<\deg_{x_{i_{p}}}(v)\) and
\[x_{i_{p}}(u/x_{i})=\mathbf{x}_{f_{1}}\cdots\mathbf{x}_{f_{p}}\mathbf{x}_{e_{p+1 }}\cdots\mathbf{x}_{e_{k}}\in G(I(G)^{[k]}).\]
The exchange property holds in such a case.
Otherwise, if \(i_{p}\in V(M_{u})\), then \(e_{p+1}=\{i_{p},j_{p}\}\) for some vertex \(j_{p}\) different from all vertices \(i,h,i_{1},j_{1},\ldots,i_{p-1},j_{p-1},i_{p}\), and \(f_{p+1}=\{j_{p},i_{p+1}\}\) for some vertex \(i_{p+1}\).
It is clear that the process described in (i)-(ii) terminates after at most \(k\) steps. If we reach the \(k\)th step, then \(\deg_{x_{i_{k}}}(u)<\deg_{x_{i_{k}}}(v)\) and
\[x_{i_{k}}(u/x_{i})=\mathbf{x}_{f_{1}}\cdots\mathbf{x}_{f_{k}}=v\in G(I(G)^{[k ]}).\]
Thus, the exchange property holds in any case and \(I(G)^{[k]}\) is polymatroidal.
We are now ready for the proof of Theorem 1.10.
Proof of Theorem 1.10.: Let \(k=\nu(I)\). By Corollary 1.9(b), \((I^{[k]})^{\wp}=(I^{\wp})^{[k]}\). Moreover, \(I^{\wp}\) is an edge ideal and \(\nu(I)=\nu(I^{\wp})\) by Corollary 1.9(a). Then Theorem 1.12 implies that \((I^{[k]})^{\wp}\) is polymatroidal. Finally, Lemma 1.11 implies that \(I^{[k]}\) is polymatroidal as well.
In [10, Corollary 3.5] it was proved that \(g_{I(G)}(\nu(G))=0\) for any fully supported edge ideal \(I(G)\). As an interesting consequence of Theorem 1.12 we extend this result to quadratic monomial ideals.
**Corollary 1.13**.: _Let \(I\subset S\) be a monomial ideal generated in degree two. Then \(g_{I}(\nu(I))=0\) and \(\operatorname{reg}(I^{[\nu(I)]})=2\nu(I)\)._
Proof.: By Theorem 1.12, \((I^{\wp})^{[\nu(I)]}\) is matroidal. Hence [10, Theorem 1.6] yields that \(\operatorname{depth}(S^{\wp}/(I^{\wp})^{[\nu(I^{\wp})]})=\operatorname{indeg}((I^{\wp})^{[\nu(I^{\wp})]})-1\) and \(g_{I^{\wp}}(\nu(I^{\wp}))=0\). Corollary 1.9 implies that \(g_{I}(\nu(I))=g_{I^{\wp}}(\nu(I^{\wp}))=0\). Since \(I^{[\nu(I)]}\) is a polymatroidal ideal generated in degree \(2\nu(I)\), \(I^{[\nu(I)]}\) has a linear resolution. Hence \(\operatorname{reg}(I^{[\nu(I)]})=2\nu(I)\).
The above result is no longer valid for monomial ideals generated in a single degree bigger than two. For instance, for the ideal \(I=(x_{1}x_{2}^{2},x_{2}x_{3}^{2},x_{3}x_{4}^{2},x_{4}x_{1}^{2})\) of \(S=K[x_{1},\ldots,x_{4}]\) we have \(\nu(I)=2\) but \(I^{[2]}\) does not have a linear resolution and \(g_{I}(2)=1\neq 0\).
## 2. Edge ideals of weighted oriented graphs
In this section, we focus our attention on matching powers of edge ideals of weighted oriented graphs. The interest in these ideals stemmed from their relevance in coding theory, in particular in the study of Reed-Muller type codes [24]. Recently, these ideals have been the subject of many research papers in combinatorial commutative algebra, e.g. [2, 3, 5, 18, 23, 26]. Hereafter, by a graph \(G\) we mean a finite simple undirected graph without isolated vertices.
A \((\)_vertex\()\)-weighted oriented graph_\(\mathcal{D}=(V(\mathcal{D}),E(\mathcal{D}),w)\) consists of an underlying graph \(G\), with \(V(\mathcal{D})=V(G)\), on which each edge is given an orientation and it is equipped with a _weight function_\(w:V(G)\to\mathbb{Z}_{\geq 1}\). The _weight_ of a vertex \(i\in V(G)\), denoted by \(w_{i}\), is the value \(w(i)\) of the weight function at \(i\). The directed edges of \(\mathcal{D}\) are denoted by pairs \((i,j)\in E(\mathcal{D})\) to reflect the orientation, hence \((i,j)\) represents an edge directed from \(i\) to \(j\). The _edge ideal_ of \(\mathcal{D}\) is defined as the ideal
\[I(\mathcal{D})\ =\ (x_{i}x_{j}^{w_{j}}\ :\ (i,j)\in E(\mathcal{D}))\]
of the polynomial ring \(S=K[x_{i}:i\in V(G)]\). If \(w_{i}=1\) for all \(i\in V(G)\), then \(I(\mathcal{D})=I(G)\) is the usual edge ideal of \(G\).
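As a small illustration, let \(\mathcal{D}\) be the weighted oriented path with edges \((1,2)\) and \((2,3)\) and weights \(w_{1}=1\), \(w_{2}=2\), \(w_{3}=3\). Then
\[I(\mathcal{D})\ =\ (x_{1}x_{2}^{2},\;x_{2}x_{3}^{3})\ \subset\ K[x_{1},x_{2},x_{3}].\]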
**Remark 2.1**.: If \(i\in V(G)\) is a _source_, that is a vertex such that \((j,i)\notin E(\mathcal{D})\) for all \(j\), then \(\deg_{x_{i}}(I(\mathcal{D}))=1\). Therefore, hereafter we assume that \(w_{i}=1\) for all sources \(i\in V(G)\).
By Proposition 1.2(a), \(I(\mathcal{D})^{[k]}\) is generated by the products \(u=u_{1}\cdots u_{k}\) where \(u_{p}=x_{i_{p}}x_{j_{p}}^{w_{j_{p}}}\in G(I(\mathcal{D}))\) and \(\operatorname{supp}(u_{p})\cap\operatorname{supp}(u_{q})=\emptyset\) for all \(p\neq q\). Thus \(u\in I(\mathcal{D})^{[k]}\) if and only if \(M=\{\{i_{1},j_{1}\},\ldots,\{i_{k},j_{k}\}\}\) is a \(k\)-matching of \(G\). This observation justifies the choice to name \(I^{[k]}\) the \(k\)th matching power of \(I\).
Firstly, we establish the homological comparison between the matching powers \(I(\mathcal{D})^{[k]}\) and \(I(G)^{[k]}\), where \(G\) is the underlying graph of \(\mathcal{D}\). The assumption in Remark 2.1 is crucial for the statement (e) of Theorem 2.2.
**Theorem 2.2**.: _Let \(\mathcal{D}\) be a weighted oriented graph with underlying graph \(G\). Then, the following statements hold._
(a) \(\nu(I(\mathcal{D}))=\nu(I(G))=\nu(G)\)_._
(b) \(\operatorname{pd}(I(G)^{[k]})\leq\operatorname{pd}(I(\mathcal{D})^{[k]})\)_, for all_ \(1\leq k\leq\nu(G)\)_._
(c) \(\operatorname{reg}(I(G)^{[k]})\leq\operatorname{reg}(I(\mathcal{D})^{[k]})\)_, for all_ \(1\leq k\leq\nu(G)\)_._
(d) \(\beta_{i}(I(G)^{[k]})\leq\beta_{i}(I(\mathcal{D})^{[k]})\)_, for all_ \(1\leq k\leq\nu(G)\) _and_ \(i\)_._
(e) \(g_{I(\mathcal{D})}(k)\leq g_{I(G)}(k)+\sum\limits_{i\in V(G)}w_{i}-|V(G)|\)_, for all_ \(1\leq k\leq\nu(G)\)_._
For the proof we recall a few basic facts. Let \(I\subset S\) be a monomial ideal.
(i) We have \(\beta_{i,j}(I)=\beta_{i,j}(I^{\wp})\) for all \(i\) and \(j\) [19, Corollary 1.6.3].
(ii) For a monomial \(u\in S\), we set \(\sqrt{u}=\prod_{i\in\operatorname{supp}(u)}x_{i}\). If \(G(I)=\{u_{1},\ldots,u_{m}\}\), then [19, Proposition 1.2.4] gives \[\sqrt{I}=(\sqrt{u_{1}},\ldots,\sqrt{u_{m}}).\]
(iii) Let \(P\) be a monomial prime ideal of \(S\). Let \(S(P)\) be the polynomial ring in the variables which generate \(P\). The _monomial localization_ of \(I\) at \(P\) is the monomial ideal \(I(P)\) of \(S(P)\) which is obtained from \(I\) by the substitution \(x_{i}\mapsto 1\) for all \(x_{i}\notin P\). The monomial localization can also be described as the saturation \(I:(\prod_{x_{i}\notin P}x_{i})^{\infty}\). If \(\mathbb{F}\) is the minimal (multi)graded free \(S\)-resolution of \(I\), one can construct, starting from \(\mathbb{F}\), a possibly non-minimal (multi)graded free \(S\)-resolution of \(I(P)\) [20, Lemma 1.12]. It follows from this construction that \(\beta_{i}(I(P))\leq\beta_{i}(I)\) for all \(i\). Moreover, \(\operatorname{pd}(I(P))\leq\operatorname{pd}(I)\) and \(\operatorname{reg}(I(P))\leq\operatorname{reg}(I)\).
Proof.: Statement (a) is clear. To prove (b), (c) and (d), set \(J=I(\mathcal{D})^{[k]}\). Assume that \(I(\mathcal{D})\) is a fully supported ideal of \(S=K[x_{1},\ldots,x_{n}]\). Let \(P=(x_{1,1},\ldots,x_{n,1})\). Identifying \(x_{i,1}\) with \(x_{i}\) for all \(i\), by applying (ii), \(J^{\wp}(P)\) can be identified with \(\sqrt{J}\). Then by (i) and (iii) we obtain
\[\beta_{i}(\sqrt{J})=\beta_{i}(J^{\wp}(P))\leq\beta_{i}(J^{\wp})=\beta_{i}(J)\]
for all \(i\). To complete the proof, we will show that \(\sqrt{J}=I(G)^{[k]}\). For this aim, let \(v\in G(J)\). Then \(v=(x_{i_{1}}x_{j_{1}}^{w_{j_{1}}})\cdots(x_{i_{k}}x_{j_{k}}^{w_{j_{k}}})\) with \((i_{1},j_{1}),\ldots,(i_{k},j_{k})\in E(\mathcal{D})\) and the corresponding undirected edges form a \(k\)-matching of \(G\). Thus \(\sqrt{v}=(x_{i_{1}}x_{j_{1}})\cdots(x_{i_{k}}x_{j_{k}})\in I(G)^{[k]}\) and consequently \(\sqrt{J}\subseteq I(G)^{[k]}\). Conversely, let \(u=(x_{i_{1}}x_{j_{1}})\cdots(x_{i_{k}}x_{j_{k}})\in G(I(G)^{[k]})\) with \(\{\{i_{1},j_{1}\},\ldots,\{i_{k},j_{k}\}\}\) a \(k\)-matching of \(G\). Then \((i_{1},j_{1}),\ldots,(i_{k},j_{k})\in E(\mathcal{D})\) up to relabelling. So \(v=(x_{i_{1}}x_{j_{1}}^{w_{j_{1}}})\cdots(x_{i_{k}}x_{j_{k}}^{w_{j_{k}}})\in J\) and \(\sqrt{v}=u\in\sqrt{J}\). This shows that \(I(G)^{[k]}\subseteq\sqrt{J}\). Equality follows.
It remains to prove (e). Let \(L\) be a monomial ideal of \(S\). By the Auslander-Buchsbaum formula we have \(\mathrm{depth}(S/L)=n-1-\mathrm{pd}(L)\). Hence, for all \(1\leq k\leq\nu(L)\) we can rewrite \(g_{L}(k)\) as
\[g_{L}(k)=|\mathbf{deg}(L)|-\mathrm{pd}(L^{[k]})-\mathrm{indeg}(L^{[k]}).\]
By (b) we have \(\mathrm{pd}(I(G)^{[k]})\leq\mathrm{pd}(I(\mathcal{D})^{[k]})\) for all \(k\). It is clear that \(|\mathbf{deg}(I(G))|=n\) and \(\mathrm{indeg}(I(G)^{[k]})=2k\leq\mathrm{indeg}(I(\mathcal{D})^{[k]})\) for all \(1\leq k\leq\nu(G)\). Therefore,
\[g_{I(\mathcal{D})}(k) = |\mathbf{deg}(I(\mathcal{D}))|-\mathrm{pd}(I(\mathcal{D})^{[k]})- \mathrm{indeg}(I(\mathcal{D})^{[k]})\] \[\leq |\mathbf{deg}(I(\mathcal{D}))|-\mathrm{pd}(I(G)^{[k]})-\mathrm{ indeg}(I(G)^{[k]})\] \[= n-\mathrm{pd}(I(G)^{[k]})-\mathrm{indeg}(I(G)^{[k]})+|\mathbf{ deg}(I(\mathcal{D}))|-n\] \[= g_{I(G)}(k)+|\mathbf{deg}(I(\mathcal{D}))|-n.\]
Since \(\mathrm{deg}_{x_{i}}(I(\mathcal{D}))=w_{i}\) for all \(i\), we have \(|\mathbf{deg}(I(\mathcal{D}))|=\sum_{i=1}^{n}w_{i}\), as wanted.
The inequalities in (b), (c), (d) and (e) need not be equalities.
**Example 2.3**.: Let \(\mathcal{D}\) be the oriented \(4\)-cycle with all vertices having weight \(2\) and with edge set \(E(\mathcal{D})=\{(a,b),(b,c),(c,d),(d,a)\}\). Then \(I(G)^{[2]}=(abcd)\), while \(I(\mathcal{D})^{[2]}=(ab^{2}cd^{2},a^{2}bc^{2}d)\). By using _Macaulay2_ [17] and the package [13], we checked that \(\operatorname{pd}(I(G)^{[2]})=1<2=\operatorname{pd}(I(\mathcal{D})^{[2]})\), \(\operatorname{reg}(I(G)^{[2]})=4<7=\operatorname{reg}(I(\mathcal{D})^{[2]})\), \(\beta_{1}(I(G)^{[2]})=0<1=\beta_{1}(I(\mathcal{D})^{[2]})\), and \(g_{I(\mathcal{D})}(2)=1<4=g_{I(G)}(2)+\sum_{i=1}^{4}w_{i}-4\), since \(g_{I(G)}(2)=0\).
Hereafter, we concentrate our attention on edge ideals of vertex-weighted oriented graphs. Let \(\mathcal{D}^{\prime}\) and \(\mathcal{D}\) be weighted oriented graphs with underlying graphs \(G^{\prime}\) and \(G\) respectively. We say \(\mathcal{D}^{\prime}\) is a _weighted oriented subgraph_ of \(\mathcal{D}\) if the vertex and edge sets of \(\mathcal{D}^{\prime}\) are contained in those of \(\mathcal{D}\), respectively, and the weight functions coincide on \(V(\mathcal{D}^{\prime})\). A weighted oriented subgraph \(\mathcal{D}^{\prime}\) of \(\mathcal{D}\) is called an _induced weighted oriented subgraph_ of \(\mathcal{D}\) if \(G^{\prime}\) is an induced subgraph of \(G\).
Firstly, we turn to the problem of bounding the regularity of matching powers of edge ideals. We begin with the so-called restriction lemma.
**Lemma 2.4**.: _Let \(\mathcal{D}^{\prime}\) be an induced weighted oriented subgraph of \(\mathcal{D}\). Then_
1. \(\beta_{i,\mathbf{a}}(I(\mathcal{D}^{\prime})^{[k]})\leq\beta_{i,\mathbf{a}}(I (\mathcal{D})^{[k]})\) _for all_ \(i\) _and_ \(\mathbf{a}\in\mathbb{Z}^{n}\)_._
2. \(\mathrm{reg}(I(\mathcal{D}^{\prime})^{[k]})\leq\mathrm{reg}(I(\mathcal{D})^{[k ]})\)_._
Proof.: It follows from [9, Lemma 1.2].
Let \(\mathrm{im}(G)\) denote the _induced matching number_ of \(G\). For any weighted oriented graph \(\mathcal{D}\) with underlying graph \(G\), let \(\mathrm{wim}(\mathcal{D})\) denote the _weighted induced matching
number_ of \(\mathcal{D}\). That is,
\[\operatorname{wim}(\mathcal{D})=\max\big{\{}\sum_{i=1}^{m}w(y_{i}) : \{\{x_{1},y_{1}\},\ldots,\{x_{m},y_{m}\}\}\text{ is an}\] \[\text{induced matching of }G,\text{ and }(x_{i},y_{i})\in E( \mathcal{D})\big{\}}.\]
Notice that if \(w_{i}=1\) for every \(i\in V(\mathcal{D})\), then \(\operatorname{wim}(\mathcal{D})=\operatorname{im}(G)\). Otherwise, we have the inequality \(\operatorname{wim}(\mathcal{D})\geq\operatorname{im}(G)\). We extend the regularity lower bound given in [3, Theorem 3.8] as follows.
**Proposition 2.5**.: _Let \(\mathcal{D}\) be a weighted oriented graph with underlying graph \(G\). Then_
\[\operatorname{reg}(I(\mathcal{D})^{[k]})\geq\operatorname{wim}(\mathcal{D})+k\]
_for all \(1\leq k\leq\operatorname{im}(G)\)._
Proof.: The proof is similar to [9, Theorem 2.1]. We include the details for the sake of completeness. Let \(\{\{x_{1},y_{1}\},\ldots,\{x_{r},y_{r}\}\}\) be an induced matching. Suppose that \((x_{i},y_{i})\in E(\mathcal{D})\) with \(w(y_{i})=t_{i}\) and \(\sum_{i=1}^{r}t_{i}=\operatorname{wim}(\mathcal{D})\). Let \(\mathcal{D}^{\prime}\) be the induced weighted oriented subgraph of \(\mathcal{D}\) on the vertices \(x_{1},\ldots,x_{r},y_{1},\ldots,y_{r}\). Then by Lemma 2.4 it suffices to show that
\[\operatorname{reg}(I(\mathcal{D}^{\prime})^{[k]})\geq\operatorname{wim}( \mathcal{D})+k.\]
To this end, we set \(I=I(\mathcal{D}^{\prime})\) and we claim that
\[\beta_{r-k,\operatorname{wim}(\mathcal{D})+r}(I^{[k]})\neq 0.\]
Let \(J=(z_{1},\ldots,z_{r})\), where \(z_{1},\ldots,z_{r}\) are new variables. Then \(J^{[k]}\) is a squarefree strongly stable ideal in the polynomial ring \(R=K[z_{1},\ldots,z_{r}]\). It was proved in [9, Theorem 2.1] that \(\beta_{r-k,r}(J^{[k]})\neq 0\).
Define the map \(\phi:R\to S=K[x_{1},\ldots,x_{r},y_{1},\ldots,y_{r}]\) by \(z_{i}\mapsto x_{i}y_{i}^{t_{i}}\) for \(i=1,\ldots,r\). Since \(x_{1}y_{1}^{t_{1}},\ldots,x_{r}y_{r}^{t_{r}}\) is a regular sequence on \(S\), the \(K\)-algebra homomorphism \(\phi\) is flat. If \(\mathbb{F}\) is the minimal free resolution of \(J^{[k]}\) over \(R\), then \(\mathbb{G}=\mathbb{F}\otimes_{R}S\) is the minimal free resolution of \(I^{[k]}\) over \(S\). It follows that
\[\beta_{i,(a_{1},\ldots,a_{r})}(J^{[k]})=\beta_{i,(a_{1},\ldots,a_{r},t_{1}a_{ 1},\ldots,t_{r}a_{r})}(I^{[k]})\]
for any \(i\) and \((a_{1},\ldots,a_{r})\in\mathbb{Z}^{r}\). Then,
\[0\neq\beta_{r-k,r}(J^{[k]})=\beta_{r-k,(1,\ldots,1)}(J^{[k]})=\beta_{r-k,(1, \ldots,1,t_{1},\ldots,t_{r})}(I^{[k]})\]
and \(\beta_{r-k,\operatorname{wim}(\mathcal{D})+r}(I^{[k]})\neq 0\). Hence \(\operatorname{reg}(I^{[k]})\geq(\operatorname{wim}(\mathcal{D})+r)-(r-k)=\operatorname{wim}(\mathcal{D})+k\), as desired.
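As an illustration of the bound, suppose that \(\mathcal{D}\) consists of the two disjoint directed edges \((1,2)\) and \((3,4)\), with \(w_{2}=3\) and all other weights equal to \(1\). Then \(\operatorname{im}(G)=2\), \(\operatorname{wim}(\mathcal{D})=3+1=4\), and \(I(\mathcal{D})^{[2]}=(x_{1}x_{2}^{3}x_{3}x_{4})\) is a principal ideal generated in degree \(6\). Hence \(\operatorname{reg}(I(\mathcal{D})^{[2]})=6=\operatorname{wim}(\mathcal{D})+2\), so the inequality of Proposition 2.5 is attained.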
We close this section by providing a lower bound for the projective dimension of matching powers of edge ideals. Let \(P_{n}\) be the _path of length_\(n\). That is, \(V(P_{n})=[n]\) and \(E(P_{n})=\{\{1,2\},\{2,3\},\ldots,\{n-1,n\}\}\). We denote by \(\mathcal{P}_{n}\) a weighted oriented path of length \(n\), that is, a weighted oriented graph whose underlying graph is \(P_{n}\). It is well-known that \(\nu(P_{n})=\lfloor\frac{n}{2}\rfloor\).
For a weighted oriented graph \(\mathcal{D}\) with underlying graph \(G\), we denote by \(\ell(\mathcal{D})\) the maximal length of an induced path of \(G\).
**Proposition 2.6**.: _Let \(\mathcal{D}\) be a weighted oriented graph. Then \(\nu(I(\mathcal{D}))\geq\lfloor\frac{\ell(\mathcal{D})}{2}\rfloor\) and_
\[\operatorname{pd}(I(\mathcal{D})^{[k]})\ \geq\ \begin{cases}\ell(\mathcal{D})-\lceil\frac{\ell( \mathcal{D})}{3}\rceil-k&\text{if }1\leq k\leq\lceil\frac{\ell(\mathcal{D})}{3}\rceil, \\ \ell(\mathcal{D})-2k&\text{if }\lceil\frac{\ell(\mathcal{D})}{3}\rceil+1\leq k \leq\lfloor\frac{\ell(\mathcal{D})}{2}\rfloor.\end{cases}\]
Proof.: Let \(\ell=\ell(\mathcal{D})\). There exists a subset \(W\) of \(V(\mathcal{D})\) such that the induced subgraph of \(\mathcal{D}\) on \(W\) is a weighted oriented path \(\mathcal{P}_{\ell}\). Theorem 2.2(b) combined with Lemma 2.4 implies that \(\operatorname{pd}(I(P_{\ell})^{[k]})\leq\operatorname{pd}(I(\mathcal{P}_{ \ell})^{[k]})\leq\operatorname{pd}(I(\mathcal{D})^{[k]})\). It was shown in [7, Theorem 3.1] that
\[g_{I(P_{\ell})}(k)\ =\ \begin{cases}\lceil\frac{\ell}{3}\rceil-k&\text{if }1 \leq k\leq\lceil\frac{\ell}{3}\rceil,\\ \ \ \ \ \ \ 0&\text{if }\lceil\frac{\ell}{3}\rceil+1\leq k\leq\lfloor\frac{\ell}{2}\rfloor. \end{cases}\]
For a squarefree monomial ideal \(I\subset S\), we have \(g_{I}(k)=n-\operatorname{pd}(I^{[k]})-\operatorname{indeg}(I^{[k]})\). Hence, the assertion follows from the above formula.
Although we only considered weighted oriented graphs in this section, our methods can be useful to prove analogous results for matching powers of edge ideals of edge-weighted graphs. An _edge-weighted graph_\(G_{w}=(V(G_{w}),E(G_{w}),w)\) consists of an underlying graph \(G\), with \(V(G_{w})=V(G)\) and \(E(G_{w})=E(G)\), equipped with a _weight function_\(w:E(G)\to\mathbb{Z}_{\geq 1}:\{i,j\}\in E(G)\mapsto w(\{i,j\})=w_{i,j}\). The _edge ideal_ of \(G_{w}\) is defined as the ideal
\[I(G_{w})\ =\ ((x_{i}x_{j})^{w_{i,j}}\ :\ \{i,j\}\in E(G))\]
of \(S=K[x_{i}:i\in V(G)]\), see [25]. Notice that if the weight of every edge is \(1\), then the edge ideal of \(G_{w}\) coincides with that of \(G\).
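For instance, if \(G_{w}\) is the path with edges \(\{1,2\}\) and \(\{2,3\}\) and edge weights \(w_{1,2}=2\) and \(w_{2,3}=1\), then
\[I(G_{w})\ =\ (x_{1}^{2}x_{2}^{2},\;x_{2}x_{3})\ \subset\ K[x_{1},x_{2},x_{3}],\]
and its matching powers are formed exactly as for any monomial ideal.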
## 3. Linearly related matching powers
Let \(I\subset S\) be a graded ideal generated in a single degree. We say \(I\) is _linearly related_, if the first syzygy module of \(I\) is generated by linear relations. In this section, we want to discuss which matching powers of the edge ideal \(I(\mathcal{D})\) of a vertex-weighted oriented graph \(\mathcal{D}\) are linearly related.
Let \(I\) be a monomial ideal of \(S\) generated in degree \(d\). Let \(G_{I}\) denote the graph with vertex set \(G(I)\) and edge set
\[E(G_{I})=\{\{u,v\}:u,v\in G(I)\text{ with }\deg(\operatorname{lcm}(u,v))=d+1\}.\]
For all \(u,v\in G(I)\) let \(G_{I}^{(u,v)}\) be the induced subgraph of \(G_{I}\) whose vertex set is
\[V(G_{I}^{(u,v)})=\{w\in G(I)\colon w\text{ divides }\operatorname{lcm}(u,v)\}.\]
The following theorem provides a criterion through the graphs defined above to determine if a monomial ideal is linearly related.
**Theorem 3.1**.: _[_4_, Corollary 2.2]_ _Let \(I\) be a monomial ideal generated in degree \(d\). Then \(I\) is linearly related if and only if for all \(u,v\in G(I)\) there is a path in \(G_{I}^{(u,v)}\) connecting \(u\) and \(v\)._
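To illustrate the criterion, let \(I=I(P_{4})=(x_{1}x_{2},\,x_{2}x_{3},\,x_{3}x_{4})\), which is generated in degree \(d=2\). The graph \(G_{I}\) has edges \(\{x_{1}x_{2},x_{2}x_{3}\}\) and \(\{x_{2}x_{3},x_{3}x_{4}\}\), while \(x_{1}x_{2}\) and \(x_{3}x_{4}\) are not adjacent since their least common multiple has degree \(4\). For \(u=x_{1}x_{2}\) and \(v=x_{3}x_{4}\), the generator \(x_{2}x_{3}\) divides \(\operatorname{lcm}(u,v)=x_{1}x_{2}x_{3}x_{4}\), so \(u,\,x_{2}x_{3},\,v\) is a path in \(G_{I}^{(u,v)}\); since adjacent generators are trivially connected, Theorem 3.1 shows that \(I\) is linearly related.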
**Lemma 3.2**.: _Let \(I\) be a monomial ideal and let \(1\leq k<\nu(I)\). Suppose that \(I^{[k]}\) is generated in single degree. Then, there is an integer \(d\) such that_
* \(I^{[k]}\) _is generated in degree_ \(dk\)_._
* \(I^{[k+1]}\) _is generated in degree_ \(d(k+1)\)_. Moreover, if_ \(u=u_{1}\ldots u_{k+1}\in G(I^{[k+1]})\)_, with each_ \(u_{i}\in G(I)\) _and_ \(\operatorname{supp}(u_{i})\cap\operatorname{supp}(u_{j})=\emptyset\) _for_ \(i\neq j\)_, then_ \(\deg(u_{i})=d\) _for each_ \(i\)_._
Proof.: Let \(u=u_{1}\cdots u_{k+1}\in G(I^{[k+1]})\) with each \(u_{i}\in G(I)\) and \(\operatorname{supp}(u_{i})\cap\operatorname{supp}(u_{j})=\emptyset\) for \(i\neq j\). Observe that \(u/u_{\ell}\in G(I^{[k]})\) for any \(\ell=1,\ldots,k+1\). First, we show that \(\deg(u_{i})=\deg(u_{j})\) for each \(i\neq j\). Without loss of generality, assume for a contradiction that \(\deg(u_{1})\neq\deg(u_{2})\). Then \(u_{2}u_{3}\cdots u_{k+1}\) and \(u_{1}u_{3}\cdots u_{k+1}\) are minimal monomial generators of \(I^{[k]}\) of different degrees, which is a contradiction. It follows that \(u_{1},\ldots,u_{k}\) are all of degree \(d\) for some \(d\). Now, suppose that \(v=v_{1}\cdots v_{k+1}\in G(I^{[k+1]})\) with each \(v_{i}\in G(I)\) and \(\operatorname{supp}(v_{i})\cap\operatorname{supp}(v_{j})=\emptyset\) for \(i\neq j\). By the above argument, each \(v_{i}\) is of the same degree, say \(d^{\prime}\). Then \(u_{1}\cdots u_{k}\) is a minimal monomial generator of \(I^{[k]}\) of degree \(dk\) whereas \(v_{1}\cdots v_{k}\) is a minimal monomial generator of \(I^{[k]}\) of degree \(d^{\prime}k\). Therefore \(d=d^{\prime}\) and \(u\) and \(v\) have the same degree.
In [4, Theorem 3.1] it was proved that \(I(G)^{s}\) is linearly related for some \(s\geq 1\) if and only if \(I(G)^{k}\) is linearly related for all \(k\geq 1\). Unlike the case of ordinary powers of edge ideals, it may happen that some squarefree power of \(I(G)\) is linearly related while other squarefree powers are not. On the other hand, it was proved in [9, Theorem 3.1] that if \(I(G)^{[k]}\) is linearly related for some \(k\geq 1\), then \(I(G)^{[k+1]}\) is linearly related as well. We extend [9, Theorem 3.1] to monomial ideals, under some additional assumptions.
**Theorem 3.3**.: _Let \(I\) be a monomial ideal such that \(|\operatorname{supp}(w)|=2\) for every \(w\in G(I)\). Suppose that \(I^{[k]}\) is linearly related for some \(1\leq k<\nu(I)\). If \(\operatorname{supp}(u)\neq\operatorname{supp}(v)\) for every \(u,v\in G(I^{[k+1]})\) with \(u\neq v\), then \(I^{[k+1]}\) is linearly related._
Proof.: Suppose that \(\operatorname{supp}(u)\neq\operatorname{supp}(v)\) for every \(u,v\in G(I^{[k+1]})\) with \(u\neq v\). By the previous lemma, \(I^{[k]}\) is generated in degree \(dk\), and \(I^{[k+1]}\) is generated in degree \(d(k+1)\). Let \(u,v\in G(I^{[k+1]})\) with \(u\neq v\). By Theorem 3.1 and Lemma 3.2, it suffices to find a path in \(G^{(u,v)}_{I^{[k+1]}}\) connecting \(u\) to \(v\). Let \(u=u_{1}\cdots u_{k+1}\) and let \(v=v_{1}\cdots v_{k+1}\) where \(u_{i},v_{i}\in G(I)\) for each \(i=1,\ldots,k+1\) and
\[\operatorname{supp}(u_{p})\cap\operatorname{supp}(u_{q})=\emptyset= \operatorname{supp}(v_{p})\cap\operatorname{supp}(v_{q})\]
for every distinct \(p,q\in\{1,\ldots,k+1\}\). By Lemma 3.2, we have that \(\deg(u_{i})=\deg(v_{i})=d\) for every \(i=1,\ldots,k+1\). By the initial assumption, we may assume that there exists \(\ell\in\operatorname{supp}(u)\setminus\operatorname{supp}(v)\). Without loss of generality, we may assume that \(x_{\ell}\) divides \(u_{1}\). Let \(\operatorname{supp}(u_{1})=\{\ell,m\}\). By definition of matching power, there exists at most one \(j\) such that \(x_{m}\) divides \(v_{j}\). Again, without loss of generality, we may assume that \(x_{m}\) does not divide \(v_{i}\) for \(i=2,\ldots,k+1\). Now, we have
\[\operatorname{supp}(u_{1})\cap\operatorname{supp}(v_{p})=\emptyset\quad\text{ for all}\quad p=2,3,\ldots,k+1.\]
Let \(u^{\prime}=u_{2}\ldots u_{k+1}\) and \(v^{\prime}=v_{2}\ldots v_{k+1}\). Since \(u^{\prime},v^{\prime}\in G(I^{[k]})\) there exists a path \(u^{\prime}=z_{0},z_{1},z_{2},\ldots,z_{t},v^{\prime}=z_{t+1}\) in \(G^{(u^{\prime},v^{\prime})}_{I^{[k]}}\) connecting \(u^{\prime}\) to \(v^{\prime}\). We claim that
\[P:u,u_{1}z_{1},u_{1}z_{2},\ldots,u_{1}z_{t},u_{1}v^{\prime}\]
is a path in \(G^{(u,u_{1}v^{\prime})}_{I^{[k+1]}}\). To prove the claim, we must show that
(i) \(u_{1}z_{i}\in G(I^{[k+1]})\) for all \(i=1,\ldots,t+1\),
(ii) \(u_{1}z_{i}\) divides \(\operatorname{lcm}(u,u_{1}v^{\prime})\) for all \(i=1,\ldots,t\), and
(iii) \(\deg(\operatorname{lcm}(u_{1}z_{i},u_{1}z_{i+1}))=d(k+1)+1\) for all \(i=0,\ldots,t\).
Since \(\operatorname{supp}(u_{1})\cap\operatorname{supp}(\operatorname{lcm}(u^{ \prime},v^{\prime}))=\emptyset\), the monomial \(u_{1}z_{i}\) belongs to \(I^{[k+1]}\) for all \(i=1,\ldots,t+1\). Moreover, since \(u_{1}z_{i}\) is of degree \(d(k+1)\), it follows that \(u_{1}z_{i}\in G(I^{[k+1]})\), which proves (i). To see (ii) holds, observe that
\[\operatorname{lcm}(u,u_{1}v^{\prime})=\operatorname{lcm}(u_{1}z_{0},u_{1}z_{t +1})=u_{1}\operatorname{lcm}(z_{0},z_{t+1}).\]
Lastly, (iii) holds because for all \(i=0,\ldots,t\) we have
\[\deg(\operatorname{lcm}(u_{1}z_{i},u_{1}z_{i+1}))=\deg(u_{1})+\deg( \operatorname{lcm}(z_{i},z_{i+1}))=d+(dk+1).\]
Now, let \(w=u_{1}v_{2}\ldots v_{k}\) and \(w^{\prime}=v_{1}v_{2}\ldots v_{k}\). Since \(w,w^{\prime}\in G(I^{[k]})\) there exists a path \(w,y_{1},y_{2},\ldots,y_{s},w^{\prime}\) in \(G^{(w,w^{\prime})}_{I^{[k]}}\) connecting \(w\) to \(w^{\prime}\). As before, we can then form a path \(P^{\prime}\)
\[P^{\prime}:wv_{k+1},y_{1}v_{k+1},y_{2}v_{k+1},\ldots,y_{s}v_{k+1},w^{\prime}v_ {k+1}=v\]
in \(G^{(u_{1}v^{\prime},v)}_{I^{[k+1]}}\). Connecting \(P\) and \(P^{\prime}\) we get the required path, as \(u_{1}v^{\prime}=wv_{k+1}\).
We will now observe that the assumption of the previous theorem is satisfied for edge ideals of some weighted oriented graphs including those whose underlying graphs are forests. Hereafter, to simplify the notation, we identify each vertex \(i\in V(\mathcal{D})\) with the variable \(x_{i}\). Hence, we will often write \(x_{i}\) to denote \(i\).
**Lemma 3.4**.: _Let \(\mathcal{D}\) be a weighted oriented graph whose underlying graph is \(G\). Suppose that every subgraph of \(G\) has at most one perfect matching. Let \(1\leq k\leq\nu(I(\mathcal{D}))\) and \(u,v\in G(I(\mathcal{D})^{[k]})\). If \(\operatorname{supp}(u)=\operatorname{supp}(v)\), then \(u=v\)._
Proof.: Let \(u=x_{1}y_{1}^{w(y_{1})}\ldots x_{k}y_{k}^{w(y_{k})}\) where \((x_{i},y_{i})\in E(\mathcal{D})\) for each \(i\) and \(M_{1}=\{\{x_{i},y_{i}\}:i=1,\ldots,k\}\) is a matching in \(G\). Let \(v=z_{1}t_{1}^{w(t_{1})}\ldots z_{k}t_{k}^{w(t_{k})}\) where \((z_{i},t_{i})\in E(\mathcal{D})\) for each \(i\) and \(M_{2}=\{\{z_{i},t_{i}\}:i=1,\ldots,k\}\) is a matching in \(G\). Suppose that \(\operatorname{supp}(u)=\operatorname{supp}(v)\). Then we can set
\[W:=\{x_{1},\ldots,x_{k},y_{1},\ldots,y_{k}\}=\{z_{1},\ldots,z_{k},t_{1},\ldots,t_{k}\}.\]
Since the induced subgraph of \(G\) on \(W\) has at most one perfect matching, it follows that \(M_{1}=M_{2}\) and therefore \(u=v\).
Combining Theorem 3.3 and Lemma 3.4, we get the following immediate corollary.
**Corollary 3.5**.: _Let \(\mathcal{D}\) be a weighted oriented graph such that every subgraph of its underlying graph has at most one perfect matching \((\)e.g., a forest\()\). If \(I(\mathcal{D})^{[k]}\) is linearly related for some \(1\leq k<\nu(I(\mathcal{D}))\), then \(I(\mathcal{D})^{[k+1]}\) is linearly related as well._
Let \(G\) be the underlying graph of \(\mathcal{D}\). If every subgraph of \(G\) has at most one perfect matching (e.g., \(G\) is a forest, or an odd cycle), and \(I(\mathcal{D})\neq I(G)\), then even more is true.
**Theorem 3.6**.: _Let \(\mathcal{D}\) be a weighted oriented graph with underlying graph \(G\). Suppose that every subgraph of \(G\) has at most one perfect matching, and that \(I(\mathcal{D})\neq I(G)\). Let \(1\leq k\leq\nu(G)\). If \(I(\mathcal{D})^{[k]}\) is linearly related, then \(k=\nu(I(\mathcal{D}))\)._
The next example shows that we can not drop the hypothesis that every subgraph of \(G\) has at most one perfect matching.
**Example 3.7**.: Let \(\mathcal{D}\) be the oriented graph on vertex set [6], with weights \(w(1)=2\) and \(w(i)=1\) for \(i\in[6]\setminus\{1\}\), and with edge set
\[E(\mathcal{D})=\{(2,1),(1,3),(1,4),(1,5),(1,6)\}\cup\{(i,j):2\leq i<j\leq 6\}.\]
Then, \(G\) has several perfect matchings, and
\[I(\mathcal{D})=(x_{1}^{2}x_{2},x_{1}x_{3},x_{1}x_{4},x_{1}x_{5},x_{1}x_{6},x_ {2}x_{3},x_{2}x_{4},x_{2}x_{5},x_{2}x_{6},\ldots,x_{4}x_{5},x_{4}x_{6},x_{5}x_ {6}).\]
We have \(\nu(I(\mathcal{D}))=3\). However \(I(\mathcal{D})^{[2]}=I(G)^{[2]}\) and \(I(\mathcal{D})^{[3]}=I(G)^{[3]}\) are linearly related, indeed they even have a linear resolution.
Before we can prove Theorem 3.6, we need some preliminary lemmas. Hereafter, with abuse of notation, for a monomial \(u\), we denote by \(\operatorname{supp}(u)\) also the set of variables dividing \(u\).
**Lemma 3.8**.: _Let \(\mathcal{D}\) be a weighted oriented graph and let \(1\leq k\leq\nu(I(\mathcal{D}))\)._
(a) _Suppose that every subgraph of the underlying graph_ \(G\) _of_ \(\mathcal{D}\) _has at most one perfect matching. Then,_ \(u\in G(I(\mathcal{D})^{[k]})\) _if and only if_ \(u=x_{1}y_{1}^{w(y_{1})}\ldots x_{k}y_{k}^{w(y_{k})}\) _for some_ \((x_{i},y_{i})\in E(\mathcal{D})\) _with_ \(\{\{x_{i},y_{i}\}:i=1,\ldots,k\}\) _a matching in_ \(G\)_._
(b) _Let_ \(u,v\in G(I(\mathcal{D})^{[k]})\) _such that_ \(\operatorname{supp}(u)\neq\operatorname{supp}(v)\) _and_ \[\deg(\operatorname{lcm}(u,v))=\deg(u)+1=\deg(v)+1.\] _Then there exist variables_ \(z_{1}\notin\operatorname{supp}(u)\)_,_ \(z_{2}\notin\operatorname{supp}(v)\) _such that_ \(v=uz_{1}/z_{2}\)_,_ \(\deg_{z_{1}}(v)=1\) _and_ \(\deg_{z_{2}}(u)=1\)_._
Proof.: (a) The "only if" side of the statement is by definition of matching power. The "if" side of the statement follows from Lemma 3.4 and the fact that every minimal monomial generator of \(I(\mathcal{D})^{[k]}\) has a support of size \(2k\).
(b) Since both \(u\) and \(v\) have support of size \(2k\) and \(\operatorname{supp}(u)\neq\operatorname{supp}(v)\), there exists a variable \(z_{1}\in\operatorname{supp}(v)\setminus\operatorname{supp}(u)\) and \(z_{2}\in\operatorname{supp}(u)\setminus\operatorname{supp}(v)\). Since \(\deg(\operatorname{lcm}(u,v))=\deg(u)+1\), we get \(\operatorname{supp}(v)\setminus\operatorname{supp}(u)=\{z_{1}\}\) and \(\deg_{z_{1}}(v)=1\). Similarly, since \(\deg(\operatorname{lcm}(u,v))=\deg(v)+1\), we get \(\operatorname{supp}(u)\setminus\operatorname{supp}(v)=\{z_{2}\}\) and \(\deg_{z_{2}}(u)=1\). Then for every \(t\in\operatorname{supp}(u)\cap\operatorname{supp}(v)\), we get \(\deg_{t}(u)=\deg_{t}(v)\) and the result follows.
**Lemma 3.9**.: _Let \(\mathcal{D}\) be a weighted oriented graph with underlying graph \(G\). Suppose that every subgraph of \(G\) has at most one perfect matching. Suppose that \(I(\mathcal{D})^{[k]}\) is linearly related. Let \(u\in G(I(\mathcal{D})^{[k]})\) and let \(x\) be a variable such that \(\deg_{x}(u)=r>1\). Then \(\deg_{x}(v)=r\) for every \(v\in G(I(\mathcal{D})^{[k]})\)._
Proof.: Let \(u\neq v\). By Theorem 3.1 there is a path \(u_{0}=u,u_{1},u_{2},\ldots,u_{s}=v\) in the graph \(H:=G_{I(\mathcal{D})^{[k]}}^{(u,v)}\). Since \(\{u_{0},u_{1}\}\in E(H)\), by Lemma 3.4 and Lemma 3.8(b) it follows that \(\deg_{x}(u_{1})=r\). Similarly, since \(\{u_{1},u_{2}\}\in E(H)\) it follows that \(\deg_{x}(u_{2})=r\). Continuing this way, we obtain \(\deg_{x}(u_{s})=r\).
Proof of Theorem 3.6.: We assume for a contradiction that \(I(\mathcal{D})^{[k]}\) is linearly related but \(k<\nu(I(\mathcal{D}))\). Let \(M=\{\{a_{i},b_{i}\}:i=1,\ldots,k+1\}\) be a matching with \((a_{i},b_{i})\in E(\mathcal{D})\). We claim that all the \(b_{i}\)s have the same weight, say \(q\). To see this, we let \(z=(a_{1}b_{1}^{w(b_{1})})\cdots(a_{k}b_{k}^{w(b_{k})})(a_{k+1}b_{k+1}^{w(b_{k +1})})\). Then by Lemma 3.8(a) we see that \(z/(a_{i}b_{i}^{w(b_{i})})\in G(I(\mathcal{D})^{[k]})\) for each \(i=1,\ldots,k+1\). Since \(I(\mathcal{D})^{[k]}\) is generated in single degree, it follows that \(w(b_{i})=w(b_{j})\) for all \(i,j\).
Since \(I(\mathcal{D})\neq I(G)\) there is an edge \((c,d)\in E(\mathcal{D})\) with \(w(d)=r>1\). We will show that \(r=q\). Without loss of generality, we may assume that
\[\{c,d\}\cap\{a_{3},a_{4},\ldots,a_{k+1},b_{3},b_{4},\ldots,b_{k+1}\}=\emptyset.\]
Then \(\{\{c,d\},\{a_{3},b_{3}\},\ldots,\{a_{k+1},b_{k+1}\}\}\) is a matching.
On the other hand, by Lemma 3.8(a)
\[(cd^{r})(a_{3}b_{3}^{q})\cdots(a_{k+1}b_{k+1}^{q})\in G(I(\mathcal{D})^{[k]}) \text{ and }(a_{2}b_{2}^{q})(a_{3}b_{3}^{q})\cdots(a_{k+1}b_{k+1}^{q})\in G(I( \mathcal{D})^{[k]}).\]
Since \(I(\mathcal{D})^{[k]}\) is generated in single degree, it follows that \(r=q>1\).
Let \(u=(a_{1}b_{1}^{r})(a_{2}b_{2}^{r})\cdots(a_{k}b_{k}^{r})\) and \(v=(a_{2}b_{2}^{r})(a_{3}b_{3}^{r})\cdots(a_{k+1}b_{k+1}^{r})\). Since \(\deg_{b_{1}}(u)=r>1\), Lemma 3.9 implies \(\deg_{b_{1}}(v)=r\), which is a contradiction.
We can now characterize when \(I(\mathcal{D})^{[k]}\) has a linear resolution or is linearly related provided that every subgraph of \(G\) has at most one perfect matching.
**Theorem 3.10**.: _Let \(\mathcal{D}\) be a weighted oriented graph with underlying graph \(G\). Suppose that every subgraph of \(G\) has at most one perfect matching. Suppose that \(I(\mathcal{D})\neq I(G)\) and \(1\leq k\leq\nu(G)\). Then the following statements are equivalent._
(a) \(I(\mathcal{D})^{[k]}\) _is linearly related._
(b) \(I(\mathcal{D})^{[k]}\) _is polymatroidal._
(c) \(I(\mathcal{D})^{[k]}\) _has a linear resolution._
Proof.: A polymatroidal ideal has linear quotients [19, Theorem 12.6.2] and therefore it has a linear resolution [19, Proposition 8.2.1]. We will only show that (a) \(\Rightarrow\) (b) because (b) \(\Rightarrow\) (c) \(\Rightarrow\) (a) is known.
Suppose that \(I(\mathcal{D})^{[k]}\) is linearly related. For the rest of the proof, keep in mind that by Lemma 3.9 for any \(m_{1},m_{2}\in G(I(\mathcal{D})^{[k]})\)
\[\deg_{t}(m_{1})=\deg_{t}(m_{2})\text{ for every }t\in\operatorname{supp}(m_{1}) \cap\operatorname{supp}(m_{2}). \tag{2}\]
Let \(u,v\in G(I(\mathcal{D})^{[k]})\). Let \(\{e_{1},\ldots,e_{k}\}\) be the underlying matching (of undirected edges) for \(u\), that is, \(\bigcup_{i=1}^{k}e_{i}=\operatorname{supp}(u)\). Similarly, let \(\{f_{1},\ldots,f_{k}\}\) be the underlying matching (of undirected edges) for \(v\), that is, \(\bigcup_{i=1}^{k}f_{i}=\operatorname{supp}(v)\). Let \(M_{e_{i}}\) be the monomial factor of \(u\) corresponding to \(e_{i}\). More precisely, we define
\[M_{e_{i}}=\prod_{t\in e_{i}}t^{\deg_{t}(u)}\quad\text{and}\quad M_{f_{i}}= \prod_{t\in f_{i}}t^{\deg_{t}(v)}\]
for every \(i=1,\ldots,k\) so that \(u=M_{e_{1}}\ldots M_{e_{k}}\) and \(v=M_{f_{1}}\ldots M_{f_{k}}\).
We know from Theorem 3.6 that \(k=\nu(G)\). Therefore, \(\operatorname{supp}(v)\cap e_{i}\neq\emptyset\) for every \(i=1,\ldots,k\). Suppose that \(z_{0}\) is a variable which divides \(u\) but not \(v\). Then by Lemma 3.9 we must have \(\deg_{z_{0}}(u)=1\). We may assume that \(z_{0}\in e_{1}=\{z_{0},y_{1}\}\). Then \(y_{1}\) divides \(v\) because \(\operatorname{supp}(v)\cap e_{1}\neq\emptyset\). We may assume that \(y_{1}\in f_{1}\).
**(Step 1).** Let \(f_{1}=\{y_{1},z_{1}\}\). Assume for a moment that \(z_{1}\) does not divide \(u\). Then by Lemma 3.9 we must have \(\deg_{z_{1}}(v)=1\). If \((y_{1},z_{1})\in E(\mathcal{D})\), then \(w:=(y_{1}z_{1})M_{e_{2}}\ldots M_{e_{k}}\in G(I(\mathcal{D})^{[k]})\) by Lemma 3.8(a) and \(\deg_{y_{1}}(u)=1\) by (2). In that case, the exchange property is satisfied because \(w=z_{1}u/z_{0}\). On the other hand, if \((z_{1},y_{1})\in E(\mathcal{D})\), then similarly the exchange property is satisfied because \(w:=(z_{1}y_{1}^{w(y_{1})})M_{e_{2}}\ldots M_{e_{k}}\in G(I(\mathcal{D})^{[k]})\).
We may assume that \(z_{1}\) divides \(u\) and \(z_{1}\in e_{2}\). Let \(e_{2}=\{y_{2},z_{1}\}\). Then \(y_{2}\) divides \(v\) since otherwise \(\nu(G)>k\).
**(Step 2).** Let \(f_{2}=\{y_{2},z_{2}\}\). Assume for a moment that \(z_{2}\) does not divide \(u\). Then by Lemma 3.9 we must have \(\deg_{z_{2}}(v)=1\). If \((y_{2},z_{2})\in E(\mathcal{D})\), then \(w:=(y_{2}z_{2})M_{f_{1}}M_{e_{3}}\ldots M_{e_{k}}\in G(I(\mathcal{D})^{[k]})\) by Lemma 3.8 and \(\deg_{y_{2}}(u)=1\) by (2). In that case, the exchange property is satisfied because \(w=z_{2}u/z_{0}\) by (2). On the other hand, if \((z_{2},y_{2})\in E(\mathcal{D})\), then similarly the exchange property is satisfied because \(w:=(z_{2}y_{2}^{w(y_{2})})M_{f_{1}}M_{e_{3}}\ldots M_{e_{k}}\in G(I(\mathcal{ D})^{[k]})\).
We may assume that \(z_{2}\) divides \(u\) and \(z_{2}\in e_{3}\). Let \(e_{3}=\{y_{3},z_{2}\}\). Then \(y_{3}\) divides \(v\) since otherwise \(\nu(G)>k\). If this process stops at some point, then we are done. Suppose that it continues until the last step:
**(Step \(\mathbf{k-1}\)).** At this point, we have \(e_{i}=\{y_{i},z_{i-1}\}\) for all \(1\leq i\leq k\) and \(f_{j}=\{y_{j},z_{j}\}\) for all \(1\leq j\leq k-1\). First, observe that \(y_{k}\in f_{k}\) since otherwise \(\{e_{1},e_{2},\ldots,e_{k}\}\cup\{f_{k}\}\) is a matching in \(G\), which is not possible because \(\nu(G)=k\). Now, let \(f_{k}=\{y_{k},z_{k}\}\). Then by Lemma 3.9 we must have \(\deg_{z_{k}}(v)=1\). If \((y_{k},z_{k})\in E(\mathcal{D})\), then \(w:=(y_{k}z_{k})M_{f_{1}}M_{f_{2}}\ldots M_{f_{k-1}}\in G(I(\mathcal{D})^{[k]})\) by Lemma 3.8. By (2) we get \(w=z_{k}u/z_{0}\). On the other hand, if \((z_{k},y_{k})\in E(\mathcal{D})\), then by a similar argument \(w:=(z_{k}y_{k}^{w(y_{k})})M_{f_{1}}M_{f_{2}}\ldots M_{f_{k-1}}\in G(I(\mathcal{ D})^{[k]})\) and \(w=z_{k}u/z_{0}\).
**Example 3.11**.: Let \(\mathcal{D}\) be a weighted oriented graph whose underlying graph \(G\) is an odd cycle, say \(C_{2k+1}\) with \(V(C_{2k+1})=[2k+1]\) and edge set
\[E(C_{2k+1})=\{\{1,2\},\{2,3\},\ldots,\{2k,2k+1\},\{2k+1,1\}\}.\]
It is well-known that \(\nu(G)=k\). We claim that \(I(\mathcal{D})^{[\nu(G)]}\) is linearly related if and only if \(I(\mathcal{D})=I(G)\). Indeed, suppose that this is not the case but that \(I(\mathcal{D})^{[\nu(G)]}\) is linearly related. Then there exists a vertex \(i\in V(G)\) which is not a source such that \(w(i)>1\). Up to relabeling, we may assume that \(i=1\) and \((2,1)\in E(\mathcal{D})\). Hence, there is a generator of \(I(\mathcal{D})^{[\nu(G)]}\) whose \(x_{1}\)-degree is \(w(1)>1\). Then, Lemma 3.9 would imply that all generators of \(I(\mathcal{D})^{[\nu(G)]}\) have \(x_{1}\)-degree bigger than \(1\). However, if we consider the \(k\)-matching \(M=\{\{2,3\},\{4,5\},\ldots,\{2k,2k+1\}\}\) of undirected edges of \(G\), then there is a unique generator \(v\) of \(I(\mathcal{D})^{[\nu(G)]}\) whose support is \(V(M)\) and so \(\deg_{x_{1}}(v)=0\), which is absurd. Thus, we must have \(I(\mathcal{D})=I(G)\) and by Theorem 1.12\(I(\mathcal{D})^{[\nu(G)]}\) is linearly related, indeed it even has a linear resolution.
**Example 3.12**.: In the above Theorem 3.10, the condition that every subgraph of \(G\) has at most one perfect matching is crucial. For example, let \(\mathcal{D}\) be a weighted oriented graph with \(I(\mathcal{D})=(x_{1}x_{2}^{2},x_{2}x_{3}^{2},x_{2}x_{4}^{2},x_{3}x_{1}^{2},x_{ 3}x_{4}^{2},x_{4}x_{1}^{2})\). Then \(I(\mathcal{D})^{[2]}\) has a linear resolution but it is not polymatroidal. On the other hand, we do not know the answer to the following question:
**Question 3.13**.: _Let \(\mathcal{D}\) be a weighted oriented graph with \(I(\mathcal{D})\neq I(G)\) where \(G\) is the underlying graph. Suppose that \(I(\mathcal{D})^{[k]}\) is linearly related. Then, does \(I(\mathcal{D})^{[k]}\) have a linear resolution?_
If \(\mathcal{D}\) is a connected weighted oriented graph with \(I(\mathcal{D})\neq I(G)\), then the above question has a positive answer for \(k=1\) by [3, Theorem 3.5].
## 4. Forests whose last matching power is polymatroidal
In this section, we combinatorially classify the weighted oriented forests \(\mathcal{D}\) whose last matching power \(I(\mathcal{D})^{[\nu(I(\mathcal{D}))]}\) is polymatroidal.
To state the classification, we recall some concepts. A _leaf_\(v\) of a graph \(G\) is a vertex incident to only one edge. Any tree with at least one edge possesses at least two leaves. Let \(a\in V(G)\) be a leaf and \(b\) be the unique neighbor of \(a\). Following [8], we say that \(a\) is a _distant leaf_ if at most one of the neighbors of \(b\) is not a leaf. In this case, we say that \(\{a,b\}\) is a _distant edge_. It is proved in [22, Proposition 9.1.1] (see, also, [8, Lemma 4.2] or [7, Proposition 2.2]) that any forest with at least one edge has a distant leaf.
We say that an edge \(\{a,b\}\) of a graph \(G\) is a _separated edge_ if \(a\) and \(b\) are leaves. In this case \(I(G)=I(G\setminus\{a,b\})+(ab)\).
Suppose that \(G\) is a forest not all of whose edges are separated. Then, the above result [22, Proposition 9.1.1] implies that we can find vertices \(a_{1},\ldots,a_{t},b,c\), with \(t\geq 1\), such that \(a_{1},\ldots,a_{t}\) are distant leaves and \(\{a_{1},b\},\ldots,\{a_{t},b\},\{b,c\}\in E(G)\). In this case we say that \((a_{1},\ldots,a_{t}\mid b,c)\) is a _distant configuration_ of the forest \(G\). Figure 1 displays this situation.
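For instance, in the tree with edges \(\{1,2\},\{2,3\},\{3,4\},\{3,5\}\), the vertices \(4\) and \(5\) are distant leaves: their common neighbour \(3\) has exactly one neighbour which is not a leaf, namely \(2\). Hence \((4,5\mid 3,2)\) is a distant configuration of this tree.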
Let \(\mathcal{D}\) be a weighted oriented graph with underlying graph \(G\). If \(W\subset V(\mathcal{D})\), we denote by \(\mathcal{D}\setminus W\) the induced weighted oriented subgraph of \(\mathcal{D}\) on the vertex set \(V(\mathcal{D})\setminus W\). For any edge \(\{a,b\}\in E(G)\), we set
\[\mathbf{x}_{\{a,b\}}^{(\mathcal{D})}\ =\ \begin{cases}x_{a}x_{b}^{w(b)}&\text{if }(a,b)\in E( \mathcal{D}),\\ x_{b}x_{a}^{w(a)}&\text{if }(b,a)\in E(\mathcal{D}).\end{cases}\]
Figure 1. A forest \(G\) with distant configuration \((a_{1},\ldots,a_{t}\mid b,c)\).
We say that \(\{a,b\}\in E(G)\) is a _strong edge_ if \(\{a,b\}\) belongs to all matchings of \(G\) having maximal size \(\nu(G)\). In such a case, \(I(\mathcal{D})^{[\nu(G)]}=\mathbf{x}_{\{a,b\}}^{(\mathcal{D})}I(\mathcal{D} \setminus\{a,b\})^{[\nu(G)-1]}\). It is clear that a separate edge is a strong edge.
**Lemma 4.1**.: _Let \(G\) be a forest with distant configuration \((a_{1},\ldots,a_{t}\mid b,c)\) and with \(\nu(G)\geq 2\). Then \(\{a_{i},b\}\) is a strong edge of \(G\), for some \(i\), if and only if, \(t=1\) and \(c\in V(M)\) for all \((\nu(G)-1)\)-matchings \(M\) of \(G\setminus\{b\}\)._
Proof.: Suppose that \(\{a_{i},b\}\) is a strong edge for some \(i\). Then \(t=1\). Indeed, let \(M\) be a matching of \(G\) of size \(\nu(G)\). Then \(\{a_{i},b\}\in M\). But, if \(t>1\) then for some \(j\neq i\), \((M\setminus\{\{a_{i},b\}\})\cup\{\{a_{j},b\}\}\) would also be a matching of \(G\) of maximal size not containing \(\{a_{i},b\}\), which is absurd. Thus \(t=1\). Now, suppose that there exists a \((\nu(G)-1)\)-matching \(M\) of \(G\setminus b\) with \(c\notin V(M)\). Then \(M\cup\{\{b,c\}\}\) would be a maximum matching of \(G\) not containing \(\{a_{i},b\}\), which is absurd.
Conversely, assume that \((a\mid b,c)\) is a distant configuration of \(G\) and that \(c\in V(M)\), for all \((\nu(G)-1)\)-matchings of \(G\setminus b\). Note that every matching \(N\) of \(G\) of size \(\nu(G)\) contains either \(\{b,c\}\) or \(\{a,b\}\). But if \(N\) contains \(\{b,c\}\), then \(N\setminus\{\{b,c\}\}\) would be a \((\nu(G)-1)\)-matching of \(G\setminus b\) whose vertex set does not contain \(c\), against our assumption. The conclusion follows.
**Theorem 4.2**.: _Let \(\mathcal{D}\) be a weighted oriented graph whose underlying graph \(G\) is a forest, with \(\nu(G)\geq 2\). Suppose that \(I(\mathcal{D})\neq I(G)\). Then, the following conditions are equivalent._
(a) \(I(\mathcal{D})^{[\nu(G)]}\) _is linearly related._
(b) \(I(\mathcal{D})^{[\nu(G)]}\) _is polymatroidal._
(c) \(I(\mathcal{D})^{[\nu(G)]}\) _has a linear resolution._
(d) _One of the following conditions holds:_
(d-1) \(G\) _has a separate edge_ \(\{a,b\}\) _such that_ \(I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}\) _is polymatroidal, and_ \[I(\mathcal{D})^{[\nu(G)]}=\mathbf{x}_{\{a,b\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}.\]
(d-2) \(G\) _has a distant configuration_ \((a\mid b,c)\) _with_ \(\{a,b\}\in E(G)\) _a strong edge of_ \(G\)_,_ \(I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}\) _is polymatroidal, and_ \[I(\mathcal{D})^{[\nu(G)]}=\mathbf{x}_{\{a,b\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}.\]
(d-3) \(G\) _has a distant configuration_ \((a_{1},\ldots,a_{t}\mid b,c)\)_,_ \(w(a_{1})=\cdots=w(a_{t})=1\)_, and_ \(I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\) _is polymatroidal. Moreover the following statements hold._
(i) _If_ \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}=0\)_, then_ \(\mathbf{x}_{\{a_{i},b\}}^{(\mathcal{D})}=x_{a_{i}}x_{b}^{\delta}\) _with_ \(\delta\in\{1,w(b)\}\) _for all_ \(i\)_, and_ \[I(\mathcal{D})^{[\nu(G)]}=x_{b}^{\delta}[(x_{a_{1}},\ldots,x_{a_{t}})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}].\] (3)
(ii) _Otherwise,_ \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\neq 0\) _is polymatroidal,_ \(\delta=w(b)\)_,_ \(w(c)=1\) _and_ \[I(\mathcal{D})^{[\nu(G)]}=x_{b}^{w(b)}[(x_{a_{1}},\ldots,x_{a_{t}})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}+x_{c}I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}].\] (4)
Proof.: From Theorem 3.10 and Theorem 3.6 it follows that (a) \(\Longleftrightarrow\) (b) \(\Longleftrightarrow\) (c). To conclude the proof, we show that (b) \(\Longleftrightarrow\) (d).
Firstly, we show that (b) \(\Rightarrow\) (d). Suppose that \(I(\mathcal{D})^{[\nu(G)]}\) is polymatroidal. If \(G\) has a separate edge, then the statement (d-1) holds. Let us assume that \(G\) has no separate edge. Then \(G\) contains a distant configuration \((a_{1},\ldots,a_{t}\mid b,c)\).
Suppose that \(\{a_{i},b\}\) is a strong edge for some \(i\). Then, Lemma 4.1 implies \(t=1\). Since \(I(\mathcal{D})^{[\nu(G)]}\) is polymatroidal if and only if \(I(\mathcal{D}\setminus\{a,b\})^{[\nu(G)-1]}\) is polymatroidal, (d-2) follows.
Suppose that \(\{a_{i},b\}\) is not a strong edge for all \(i\). Every matching of \(G\) of size \(\nu(G)\) contains either \(\{b,c\}\) or \(\{a_{i},b\}\) for some \(i=1,\ldots,t\). Therefore
\[I(\mathcal{D})^{[\nu(G)]}=\ \sum_{i=1}^{t}\mathbf{x}_{\{a_{i},b\}}^{( \mathcal{D})}I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}+\ \mathbf{x}_{\{b,c\}}^{(\mathcal{D})}I(\mathcal{D}\setminus\{b,c\})^{[\nu(G) -1]}. \tag{5}\]
We claim that
(i) \(w(a_{i})=1\) for all \(i=1,\ldots,t\),
(ii) there exists \(\delta\in\{1,w(b)\}\) such that \(\mathbf{x}_{\{a_{i},b\}}^{(\mathcal{D})}=x_{a_{i}}x_{b}^{\delta}\) for all \(i=1,\ldots,t\), and
(iii) if \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\neq 0\) then \(\mathbf{x}_{\{b,c\}}^{(\mathcal{D})}=x_{c}x_{b}^{w(b)}\) and \(\delta=w(b)\).
Once we have proved these facts, if \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\neq 0\), equation (5) combined with (i), (ii) and (iii) implies that
\[I(\mathcal{D})^{[\nu(G)]}=x_{b}^{w(b)}\Big{[}(x_{a_{1}},\ldots,x_{a_{t}})I( \mathcal{D}\setminus\{b\})^{[\nu(G)-1]}+x_{c}I(\mathcal{D}\setminus\{b,c\})^ {[\nu(G)-1]}\Big{]}.\]
Since \(I(\mathcal{D})^{[\nu(G)]}\) is polymatroidal by assumption, by Lemma 2.4 applied to the graph \(\mathcal{D}\setminus\{a_{1},\ldots,a_{t}\}\), it follows that \(x_{c}I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\) has a linear resolution. By applying [1, Theorem 1.1], we obtain that
\[(I(\mathcal{D})^{[\nu(G)]}:x_{a_{1}}\cdots x_{a_{t}}) =\ x_{b}^{w(b)}\Big{[}I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}+x _{c}I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\Big{]}\] \[=\ x_{b}^{w(b)}I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\]
has a linear resolution. Now, Theorem 3.10 implies that both \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\) and \(I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\) are polymatroidal, and so (d-3-ii) follows.
Otherwise, if \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}=0\), then equation (5) combined with (i) and (ii) implies that
\[I(\mathcal{D})^{[\nu(G)]}=x_{b}^{\delta}[(x_{a_{1}},\ldots,x_{a_{t}})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}].\]
By a similar argument as before, \(I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\) has a linear resolution. Then, Theorem 3.10 implies that it is polymatroidal and thus (d-3-i) follows.
Next, we prove (i), (ii) and (iii).
_Proof of_ (i): By Remark 2.1 if \(a_{i}\) is a source, then we assume \(w(a_{i})=1\). Assume for a contradiction that \((b,a_{i})\in E(\mathcal{D})\) but \(w(a_{i})>1\) for some \(i\). Since \(\{b,c\}\) is not a strong edge, equation (5) implies that we can find a generator \(u\) of \(I(\mathcal{D})^{[\nu(G)]}\) with \(\deg_{x_{a_{i}}}(u)=w(a_{i})>1\). Lemma 3.9 implies that all generators of \(I(\mathcal{D})^{[\nu(G)]}\) must
have \(x_{a_{i}}\)-degree equal to \(w(a_{i})\). Then this implies that \(\{b,a_{i}\}\) is a strong edge which is against our assumption. So, \(w(a_{i})=1\) for all \(i\).
_Proof of_ (ii): Lemma 3.9 and definition of \(I(\mathcal{D})\) implies that there exists a \(\delta\in\{1,w(b)\}\) such that \(\mathbf{x}_{\{a_{i},b\}}^{(\mathcal{D})}=x_{a_{i}}x_{b}^{\delta}\) for all \(i=1,\ldots,t\).
_Proof of_ (iii): Suppose that \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\) is non-zero. Let \(u\) be a minimal monomial generator of \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\). Then \(\mathbf{x}_{\{a_{i},b\}}^{(\mathcal{D})}u\) is a minimal monomial generator of \(I(\mathcal{D})^{[\nu(G)]}\) whose \(x_{c}\)-degree is zero. Lemma 3.9, the assumption that \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\) is non-zero, and equation (5) imply that \(\deg_{x_{c}}(\mathbf{x}_{\{b,c\}}^{(\mathcal{D})})=1\). Next, we claim that \(\delta=w(b)\). If \(w(b)=1\) (in particular, if \(b\) is a source), there is nothing to prove. Suppose that \(b\) is not a source and \(w(b)>1\). Then there is a vertex \(d\in\{a_{1},\ldots,a_{t},c\}\) with \((d,b)\in E(\mathcal{D})\) and \(\mathbf{x}_{\{b,d\}}^{(\mathcal{D})}=x_{d}x_{b}^{w(b)}\). Equation (5) then implies the existence of a generator of \(I(\mathcal{D})^{[\nu(G)]}\) whose \(x_{b}\)-degree is \(w(b)>1\). Lemma 3.9 implies that all generators of \(I(\mathcal{D})^{[\nu(G)]}\) have \(x_{b}\)-degree equal to \(w(b)\). Therefore \(\delta=w(b)\) and \(\mathbf{x}_{\{b,c\}}^{(\mathcal{D})}=x_{c}x_{b}^{w(b)}\).
We now prove that (d) \(\Rightarrow\) (b). If (d-1) or (d-2) holds then (b) follows from the following fact. If \(I\) is a polymatroidal ideal and \(u\in S\) is a monomial, then \(uI\) is again polymatroidal. Suppose that (d-3) holds. If \(I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}=0\), then by equation (3), the ideal \(I(\mathcal{D})^{[\nu(G)]}\) is a product of polymatroidal ideals. Therefore it is polymatroidal as well by [19, Theorem 12.6.3].
Now, suppose that (d-3-ii) holds. Then \(I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\) and \(x_{c}I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\) are polymatroidal ideals. Hence, \((x_{a_{1}},\ldots,x_{a_{t}})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\) has a linear resolution, as it is the product of monomial ideals with linear resolution in pairwise disjoint sets of variables. Therefore [15, Corollary 2.4] implies that (4) is a Betti splitting. Now, since
\[x_{c}I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\subset I(\mathcal{D}\setminus \{b,c\})^{[\nu(G)-1]}\subset I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\]
and \(x_{a_{i}}\) do not divide any generator of \(x_{c}I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\) and \(I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\), for all \(1\leq i\leq t\), we obtain that
\[(x_{a_{1}},\ldots,x_{a_{t}})I(\mathcal{D}\setminus\{b\})^{[\nu(G)-1]}\cap x_{c }I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}=\]
\[=x_{c}(x_{a_{1}},\ldots,x_{a_{t}})I(\mathcal{D}\setminus\{b,c\})^{[\nu(G)-1]}\]
and this ideal has a linear resolution. Thus [6, Proposition 1.8] implies that \(I(\mathcal{D})^{[\nu(G)]}\) has a linear resolution. By Theorem 3.10 it follows that \(I(\mathcal{D})^{[\nu(G)]}\) is polymatroidal and (b) follows.
Inspecting the proof of Theorem 4.2, we see how to construct, recursively, all weighted oriented forests \(\mathcal{D}\), with a given matching number, whose last matching power \(I(\mathcal{D})^{[\nu(I(\mathcal{D}))]}\) is polymatroidal. Indeed, suppose that we have constructed all weighted oriented forests \(\mathcal{D}\) with \(\nu(I(\mathcal{D}))=k\) and \(I(\mathcal{D})^{[k]}\) polymatroidal, then, according to the three possible cases (d-1), (d-2), (d-3), we can construct all weighted oriented forests \(\mathcal{H}\) with \(I(\mathcal{H})^{[\nu(I(\mathcal{H}))]}\) polymatroidal and with matching number \(k+1\), one bigger than the previous fixed matching number.
Let \(\mathcal{D}\) be a weighted oriented graph, whose underlying graph \(G\) is a forest, such that \(I(\mathcal{D})\neq I(G)\). We illustrate the above procedure.
If \(\nu(G)=1\), then \(G\) is a star graph, with, say, \(V(G)=[m]\) and \(E(G)=\{\{i,m\}:1\leq i\leq m-1\}\). If \(I(\mathcal{D})^{[1]}=I(\mathcal{D})\) is polymatroidal, then \(w_{1}=\cdots=w_{m-1}=1\) by Lemma 3.9. Since \(I(\mathcal{D})\neq I(G)\), we have \(w_{m}>1\) and \(E(\mathcal{D})=\{(i,m):1\leq i\leq m-1\}\). Thus, \(I(\mathcal{D})=(x_{1}x_{m}^{w_{m}},x_{2}x_{m}^{w_{m}},\ldots,x_{m-1}x_{m}^{w_{m}})=x_{m}^{w_{m}}(x_{1},\ldots,x_{m-1})\) is polymatroidal, for it is the product of polymatroidal ideals.
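As a quick sanity check (not taken from the paper), small cases such as the star graph above can be verified computationally. The following Python sketch encodes minimal generators as exponent vectors and tests the standard symmetric exchange property characterizing polymatroidal ideals; the number of vertices, the weights, and the helper names are chosen only for illustration.

```python
def edge_monomial(a, b, w, n):
    """Exponent vector of x_a * x_b^{w[b]}, the generator attached to the oriented edge (a, b)."""
    e = [0] * n
    e[a] += 1
    e[b] += w[b]
    return tuple(e)

def is_polymatroidal(gens):
    """Check the symmetric exchange property for exponent vectors generated in one degree."""
    gens = set(gens)
    if len({sum(g) for g in gens}) > 1:
        return False  # not generated in a single degree
    for u in gens:
        for v in gens:
            for i in range(len(u)):
                if u[i] > v[i]:
                    if not any(u[j] < v[j] and
                               tuple(u[k] - (k == i) + (k == j) for k in range(len(u))) in gens
                               for j in range(len(u))):
                        return False
    return True

# Star graph on vertices {0, ..., m-1} with centre m-1, all edges oriented towards the centre,
# weight 1 on the leaves and weight 3 on the centre (so I(D) != I(G)); here nu(G) = 1.
m = 5
w = [1, 1, 1, 1, 3]
gens = [edge_monomial(i, m - 1, w, m) for i in range(m - 1)]
print(is_polymatroidal(gens))  # True: I(D) = x_m^{w_m} * (x_1, ..., x_{m-1})
```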
Now, let \(\nu(G)=2\), and suppose that \(I(\mathcal{D})^{[2]}\) is polymatroidal. By Theorem 4.2, only one of the possibilities (d-1), (d-2), (d-3) occurs. Exploiting these three possibilities, one can see that the only weighted oriented forests \(\mathcal{D}\) such that \(I(\mathcal{D})^{[2]}\) is polymatroidal, are the following ones:
In the second graph displayed above, the edge connecting the two bottom vertices can have an arbitrary orientation.
|
2309.05113 | Personalized Search Via Neural Contextual Semantic Relevance Ranking | Existing neural relevance models do not give enough consideration for query
and item context information which diversifies the search results to adapt for
personal preference. To bridge this gap, this paper presents a neural learning
framework to personalize document ranking results by leveraging the signals to
capture how the document fits into users' context. In particular, it models the
relationships between document content and user query context using both
lexical representations and semantic embeddings such that the user's intent can
be better understood by data enrichment of personalized query context
information. Extensive experiments performed on the search dataset, demonstrate
the effectiveness of the proposed method. | Deguang Kong, Daniel Zhou, Zhiheng Huang, Steph Sigalas | 2023-09-10T19:01:12Z | http://arxiv.org/abs/2309.05113v1 | # Personalized Search Via Neural Contextual Semantic Relevance Ranking
###### Abstract.
Existing neural relevance models do not give enough consideration to query and item context information, which diversifies the search results to adapt to personal preference. To bridge this gap, this paper presents a neural learning framework to personalize document ranking results by leveraging signals that capture how the document fits into the user's context. In particular, it models the relationships between document content and user query context using both lexical representations and semantic embeddings, such that the user's intent can be better understood through data enrichment with personalized query context information. Extensive experiments performed on the search dataset demonstrate the effectiveness of the proposed method.
Contextual, Personalization, Search, Semantics
Our main contributions are summarized as follows:
* To the best of our knowledge, we provide the _first_ benchmark search dataset that leverages the document's contextual information to improve search quality, with human annotations, to facilitate work along this direction.
* The document context and user query context information are combined in a holistic way to improve ranking relevance, with demonstrated performance gains over baseline methods.
## 2. Neural ranking framework
In the rest of the paper, we denote by \(q\) a query submitted by a user with a specific search intent. Every query \(q\) is associated with a set of related documents \(D=\{D_{1},\cdots,D_{m}\}\) that are ranked by their relevance to the query, and \(Y=\{y_{1},\cdots,y_{m}\}\) is the set of relevance labels for the documents in \(D\). In a typical search engine, \(y_{i}\) is usually modeled by a categorical variable, i.e., {Perfect, Good, Fair, Bad}. A query \(q_{i}\) generally consists of a short sequence of words, \(q_{i}=\{q_{i}^{1},q_{i}^{2},\cdots,q_{i}^{n}\}\), and a document \(D_{j}\) consists of a title and a body sequence, \(D_{j}=\{D_{j}^{t},D_{j}^{b}\}\). The query context is denoted as a set of attributes \(C=\{C_{1},C_{2},\cdots,C_{K}\}\), e.g., geo, job family, etc.
**Problem Definition** The context relevance ranking task studied in this paper refers to ranking the search results by their relevance w.r.t. the given queries while taking user intent and query context into account. We not only have to consider the relevance between the document and the query, but also wish that higher-ranked documents are correlated with the context of the query, so that the search engine provides personalized ranking results based on the user query context. The key _challenge_ is to maintain semantic consistency between the surfaced document and the query context. In this paper we focus on explicit context that describes users' segmentation information (e.g., geo and job family) clearly at the user-cohort level (instead of introducing vagueness or ambiguity). Prior IR approaches ((Han et al., 2015), (Han et al., 2016)) do not give enough consideration to explicit context at the user-cohort level, although much research has been devoted to personalization of search results based on user interaction behaviors (Kong et al., 2017), such as click-stream and conversion channels. In contrast, this paper presents a method to adapt the ranking results based on how the document fits both the user's intent and the underlying context information.
### High-level Idea
Ranking the retrieved document for an input query and its context is the problem we wish to solve. More formally, let \(Pr(D|q,C)\) be the relevance score between the document and the input query and its associated context and this can be
Figure 1. Personalized search via neural contextual semantic relevance ranking in the LTR framework with a deep cross network, modeling the relevance score between (query, doc) pairs while considering (context, doc) relevance, trained with a triplet loss.
formulated as
\[Pr(D|q,C)\propto Pr(D|C)Pr(D|q)Pr(q,C), \tag{1}\]
where \(Pr(D|q)\) models the traditional ranking relevance (Kang et al., 2017) between document and query, \(Pr(D|C)\) models how the document fits the context, and \(Pr(q,C)\) gives the prior information about how the query is associated with the particular context (which is fixed given the specific query and context). The final ranking score should therefore combine the document-query relevance \(Pr(D|q)\) and the document-context relevance \(Pr(D|C)\), together with the prior distribution of query-context pairs \(Pr(q,C)\).
**System Workflow** Given a search query from the search session, e.g., "benefit", the system first captures the context of the query. The query interpretation automatically interprets the operators and filters in the user's query. In particular, the context is a set of named attributes for a specific search query. For example, if an engineer in Seattle enters the query "benefit", the context attributes of the query would be "engineer" and "Seattle". It is evident that current search ranking results do not necessarily reflect this context information in the learning-to-rank (LTR) stage. A straightforward idea is to capture the document context and check how relevant the document is to the user's query context. For example, we may check "an employee benefit document" and see whether it is relevant to the context "engineer", "Seattle". However, the document-context relevance score is missing for many documents. Therefore, a contextual-semantic matching component is needed to capture the document-context relevance score. After obtaining this score, we integrate it into a standard LTR framework to improve the search quality.
### Neural Contextual Semantic Ranking
The core idea of neural contextual semantic relevance ranking is to predict the relevance score between each query context and each document in the corpus, which we define as the _document-context relevance score_. More formally, for each context attribute category \(k\), we need to model the relevance \(\mathcal{S}^{k}(C_{j},D_{i})\) between a document \(D_{i}\) and a context value \(C_{j}\), i.e.,
\[Pr(D_{i}|C_{j})\propto\mathcal{S}^{k}(C_{j},D_{i}). \tag{2}\]
The signals can be extracted via lexical representations or semantic representations. In practice, we combine them together to take advantage of each individual strength at both lexical granularity and semantic granularity levels.
**Lexical representations** One straightforward way of computing Eq. (2) is to use lexical representations of both the context and the document to capture matching information at the token level. Such methods heuristically combine token-overlap statistics into a matching score for context-document pairs. Given its popularity in existing systems, we adopt BM25 (Kang et al., 2017) as a candidate. Given a context \(c\) and a document \(d\), it generates a score based on overlapping token statistics between the context-document pair, i.e.,
\[\mathcal{S}_{lex}(c,d)=\sum_{t\in c\cap d}r_{t}\frac{tf_{t,d}}{tf_{t,d}+k_{1}\big{[}(1-b)+b\frac{|d|}{\ell}\big{]}}, \tag{3}\]
Figure 2: An example of annotated personalized search dataset given (query, doc) pairs with extra user query context information (the doc websites are anonymized).
where \(t\) is a term, \(tf_{t,d}\) is \(t\)'s frequency in document \(d\), \(r_{t}\) is \(t\)'s Robertson-Sparck Jones weight (Rong et al., 2017), \(|d|\) is the length of \(d\), \(\ell\) is the average document length, and \(k_{1}\) and \(b\) are parameters.
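For concreteness, a minimal Python sketch of the lexical score in Eq. (3) follows. The tokenization, the idf-style estimate of the Robertson-Sparck Jones weight \(r_{t}\), and the values \(k_{1}=1.2\), \(b=0.75\) are assumptions made for illustration; the paper does not specify them.

```python
import math
from collections import Counter

def bm25_score(context_tokens, doc_tokens, doc_freq, n_docs, avg_len, k1=1.2, b=0.75):
    """Lexical context-document score following Eq. (3)."""
    tf = Counter(doc_tokens)
    score = 0.0
    for t in set(context_tokens) & set(doc_tokens):
        # idf-style approximation of the Robertson-Sparck Jones weight r_t
        df = doc_freq.get(t, 0)
        r_t = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        denom = tf[t] + k1 * ((1 - b) + b * len(doc_tokens) / avg_len)
        score += r_t * tf[t] / denom
    return score

# Toy usage: the context tokens come from the user's query context (e.g. job family, geo).
context = ["engineer", "seattle"]
doc = ["benefit", "overview", "for", "seattle", "engineer", "employees"]
print(bm25_score(context, doc, doc_freq={"engineer": 40, "seattle": 25}, n_docs=1000, avg_len=120))
```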
**Contextual Semantic embedding** The semantic embedding model encodes both the context \(c\) and the document \(d\) into dense embedding vectors (i.e., \(v_{c}\in\Re^{d}\), \(v_{d}\in\Re^{d}\)) before computing their similarity in the embedding space. Instead of using CNN or LSTM (Hochreiter and Schmidhuber, 2015) architectures, we leverage the pre-trained SentenceBERT (Rong et al., 2017) model to generate the embeddings by average pooling the representations from the encoder's last layer, i.e.,
\[\mathbf{v}_{c}=avgPooling(Bert_{\theta}(context)),\ \mathbf{v}_{d}=avgPooling( Bert_{\theta}(document))\]
The context-document matching score \(\mathcal{S}_{sem}(c,d)\) is defined as the cosine similarity between the embedding vectors \(\mathbf{v}_{c}\) and \(\mathbf{v}_{d}\), as it allows acceleration using vector quantization (Dong et al., 2017) for efficient feature computation, i.e.,
\[\mathcal{S}_{sem}(c,d)=\frac{\mathbf{v}_{c}^{\top}\mathbf{v}_{d}}{\|\mathbf{ v}_{c}\|\|\mathbf{v}_{d}\|}. \tag{4}\]
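A minimal sketch of the semantic score in Eq. (4), assuming the `sentence-transformers` package; the checkpoint name below is only a stand-in, since the paper does not specify which SentenceBERT variant it uses.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# The checkpoint below is a stand-in; the paper does not name its SentenceBERT variant.
model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_score(context_text, doc_text):
    """Cosine similarity between mean-pooled sentence embeddings, as in Eq. (4)."""
    v_c, v_d = model.encode([context_text, doc_text])
    return float(np.dot(v_c, v_d) / (np.linalg.norm(v_c) * np.linalg.norm(v_d)))

print(semantic_score("software engineer in Seattle",
                     "Employee benefits guide for Seattle engineering staff"))
```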
### End to End optimization
Fig. 1 gives an overview of the LTR framework using a deep cross network, which consists of a feature extraction part and a modeling part. In the feature extraction stage, we stack the existing features extracted from the query \(q\) and the documents \(D=\{d_{i}\}\), along with the document-context matching features for context \(c\) (illustrated in Section 2.2), into a dense feature representation, i.e.,
\[\mathbf{x}(q,d,c)=[\mathbf{v}_{query},\mathbf{v}_{doc},\mathbf{v}_{qMd},\mathcal{S}_{lex}(c,d),\mathcal{S}_{sem}(c,d)], \tag{5}\]
where \(\mathbf{v}_{query}\), \(\mathbf{v}_{doc}\), and \(\mathbf{v}_{qMd}\) denote the query features, document features, and document-query matching features typically used in a search ranking system, and \(\mathcal{S}_{lex}(c,d)\) and \(\mathcal{S}_{sem}(c,d)\) are the contextual features extracted from Eq. (3) and Eq. (4), respectively.
Since the deep cross network (DCN) (Rong et al., 2017) can learn feature interactions automatically, we adopt a DCN model and feed \(\mathbf{x}(q,d,c)\) to it to generate feature embeddings that emphasize the interactions between the document-context matching scores and the other features; the network maps the input \(\mathbf{x}(q,d,c)\) to the embedding of its last hidden layer \(t\) (\(F(q,d,c)\triangleq\mathbf{h}_{t}\)) (please refer to Appendix A.2 for details).
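Since Appendix A.2 is not reproduced here, the following PyTorch sketch shows only a generic stack of DCN-style cross layers (the standard recursion \(\mathbf{x}_{l+1}=\mathbf{x}_{0}(\mathbf{x}_{l}^{\top}\mathbf{w}_{l})+\mathbf{b}_{l}+\mathbf{x}_{l}\)) with a linear head standing in for the scalar score \(F(q,d,c)\); the layer count and the scoring head are assumptions, and the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class CrossLayer(nn.Module):
    """One cross layer: x_{l+1} = x_0 * (x_l^T w) + b + x_l."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x0, xl):
        # (batch, dim): per-example scalar interaction (x_l^T w) rescales x_0, plus residual
        return x0 * (xl @ self.w).unsqueeze(-1) + self.b + xl

class CrossScorer(nn.Module):
    """Stack of cross layers followed by a linear head producing a scalar relevance score."""
    def __init__(self, dim, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList([CrossLayer(dim) for _ in range(n_layers)])
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        x0, xl = x, x
        for layer in self.layers:
            xl = layer(x0, xl)
        return self.head(xl).squeeze(-1)

# Toy usage on a batch of 8 stacked feature vectors x(q, d, c) of dimension 32.
scores = CrossScorer(dim=32)(torch.randn(8, 32))
print(scores.shape)  # torch.Size([8])
```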
**E2E optimization** For E2E optimization, given the set of queries, documents, and human-labeled task-specific data \(\{q,D=\{d_{i}\},Y=\{y_{i}\in\{0,1,2,3\}\}\}\), we adopt a _triplet loss_ as the objective to minimize:
\[\mathcal{L}^{\text{hinge}}(q,D,Y)=\sum_{q}\sum_{i,j}\mathrm{I}(y_{i}>y_{j}) \max\left[0,\zeta-(F(q,d_{i})-F(q,d_{j}))\right]\]
where \(\mathrm{I}(y_{i}>y_{j})\) is an indicator that equals one if the relevance label \(y_{i}\) is larger than \(y_{j}\) for query \(q\) and zero otherwise, \(\zeta\) is the margin enforced between positive and negative pairs in the hinge loss (typically set to 1.0), and \(F(q,d_{i})\) is the semantic score learned by the DCN from Eq. (6). The model is trained end-to-end using mini-batch SGD with Adam (Kingma and Ba, 2014).
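A compact PyTorch sketch of the pairwise hinge objective above, applied to the candidate documents of a single query; the tensor shapes and the margin value are illustrative only.

```python
import torch

def pairwise_hinge_loss(scores, labels, margin=1.0):
    """Sum over pairs (i, j) with y_i > y_j of max(0, margin - (F(q, d_i) - F(q, d_j)))."""
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)              # diff[i, j] = s_i - s_j
    prefer = (labels.unsqueeze(1) > labels.unsqueeze(0)).float()  # I(y_i > y_j)
    return (prefer * torch.clamp(margin - diff, min=0)).sum()

# Toy usage: four candidate documents for one query with graded labels in {0, 1, 2, 3}.
scores = torch.tensor([2.1, 0.3, 1.4, -0.5])
labels = torch.tensor([3, 0, 2, 1])
print(pairwise_hinge_loss(scores, labels))
```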
## 3. Experiment results
We conducted experiments on the collected search dataset using an intelligent enterprise search service that allows users to search across different content repositories via built-in connectors.
### Dataset benchmarking
Since there is no readily available personalized dataset that incorporates user query context and document context, we build benchmark datasets for personalized search. In particular, we collected datasets from two industry search applications, where domain 1 is from a big tech company1 and domain 2 is from an insurance company, as summarized in Table 1. Each domain consists of two datasets, one with contextual signals and the other without.
Footnote 1: Due to privacy concerns, we are restrained from revealing more details of the datasets.
For the dataset w/o contextual signals, we have features (refer to Eq. (5)) generated from (query, doc) pairs and obtain relevance labels in {perfect, good, fair, bad}. For the dataset w/ contextual signals, we generate (context, doc) features in addition to the (query, doc) features. The relevance labels are annotated by annotators as {perfect, good, fair, bad} to indicate how relevant the document is to the query when the user's contextual signals are also considered (Fig. 2 gives an example). The average length of the queries used in the experiment is around 5.6, and the maximum allowable number of retrieved documents is set to 500.
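For clarity, one annotated record might look as follows; the field names are purely illustrative and are not taken from the released data.

```python
# Hypothetical layout of one annotated example; the field names are illustrative only.
record = {
    "query": "benefit",
    "query_context": {"job_family": "engineer", "geo": "Seattle"},
    "doc": {"title": "Employee benefits overview", "body": "..."},
    "label": "perfect",  # one of {perfect, good, fair, bad}
}
```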
### Experiment settings and results
We train the model using the D1-A and D1-B datasets, respectively. For each dataset, we divide the data into training and test sets with an 80%/20% split. Since the D1-B dataset does not contain any contextual signals, we also perform _mixed training_ by combining the D1-A and D1-B datasets, where the contextual signals are set to zero for the D1-B
\begin{table}
\begin{tabular}{l|l l l l} \hline \hline
**Dataset** & **domain** & **\# query** & **\# docs** & **contextual** \\ & & & & **signals** \\ \hline D1-A & 1 & 266 & 89k & w/ \\ \hline D1-B & 1 & 288 & 399k & w/o \\ \hline D2-A & 2 & 5193 & 3213k & w/ \\ \hline D2-B & 2 & 5193 & 3213k & w/o \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset description from two domains (D1 and D2)
\begin{table}
\begin{tabular}{l|l|c c c c} \hline \hline Training data & context features & ndcg@10 & MAP & p@10 & recall@10 \\ \hline D1-A & w/ & 0.5882 & 0.3945 & 0.4414 & 0.442 \\ D1-A & w/o & 0.0550 & 0.0480 & 0.0602 & 0.2101 \\ mixed training & w/ & 0.5791 & 0.3873 & 0.4375 & 0.4390 \\ mixed training & w/o & 0.0483 & 0.0432 & 0.0602 & 0.2056 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Model performance on D1-A testing dataset
\begin{table}
\begin{tabular}{l|l|c c c c} \hline \hline Training data & context features & ndcg@10 & MAP & p@10 & recall@10 \\ \hline D1-B & w/ & 0.5003 & 0.4587 & 0.3950 & 0.7737 \\ D1-B & w/o & 0.5003 & 0.4587 & 0.3950 & 0.7737 \\ mixed training & w/ & 0.5071 & 0.4610 & 0.3963 & 0.7728 \\ mixed training & w/o & 0.5071 & 0.4610 & 0.3963 & 0.7728 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Model performance on D1-B testing dataset
dataset. We test the model performance on the D1-A and D1-B test sets; the results are presented in Table 2 and Table 3, respectively.
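Tables 2 and 3 report ndcg@10, MAP, p@10, and recall@10. As a reference, one common way of computing ndcg@10 from graded labels is sketched below; the paper does not state which gain formulation it uses, so the exponential-gain variant here is an assumption.

```python
import numpy as np

def dcg_at_k(relevances, k=10):
    rel = np.asarray(relevances, dtype=float)[:k]
    return float(np.sum((2 ** rel - 1) / np.log2(np.arange(2, rel.size + 2))))

def ndcg_at_k(relevances, k=10):
    """relevances: graded labels of the returned documents, in ranked order."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([3, 2, 0, 1, 0, 0, 2, 0, 0, 0]))
```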
**(1)** After adding the contextual signals, the ranking performance improves significantly on the D1-A dataset (shown in Table 2), both for in-domain training using only D1-A data and for mixed training with both the D1-A and D1-B datasets. This demonstrates the effectiveness of adding contextual signals and implies a strong correlation between the relevance score and the contextual signals.
**(2)** The relevance ranking performance is neutral when comparing mixed training (using both the D1-A and D1-B datasets) against single-dataset training on the D1-A and D1-B datasets (shown in Table 2 and Table 3). This indicates that we are able to serve the model from mixed training for traffic both with and without contextual signals, without introducing any performance loss.
**Generalization capability** To show how the model transfers to out-of-domain data, we collect two further datasets, D2-A and D2-B, from domain 2, which has no overlap of queries or documents with domain 1. Similarly, the D2-A dataset provides contextual signals, whereas D2-B lacks such signals. We use the model trained on domain 1 (with mixed training) to test performance on domain 2. Tables 4 and 5 present the performance comparisons. We observe that the model generalizes well from domain 1 to domain 2 (with a slight performance loss).
### Ablation study
**Impact of lexical features vs. semantic features** In the model training, we incorporate both the lexical feature of Eq. (3) and the semantic feature of Eq. (4), since semantic matching features are complementary to lexical features, which perform exact token matching but cannot handle vocabulary mismatch very well. Table 6 shows the results of training the model with only lexical features or only semantic features under mixed training on the D1 dataset. We observe performance gains from combining lexical- and semantic-granularity features on other datasets as well.
**Impact of loss functions and semantic embeddings** We investigated the role of loss functions and pre-trained sentence-BERT embeddings. We changed the pairwise hinge loss to the pairwise logistic loss of Eq. (8), but only found subtle performance changes (i.e., ndcg@10 changed from 0.4351 to 0.4346 on D2-A using mixed training). We found slight performance differences when using different versions of sentence-BERT embeddings (i.e., \(\sim\)0.005 absolute change in ndcg@10). However, we found a significant performance drop (i.e., \(\sim\)0.15 absolute change in ndcg@10) when we do not use any pre-trained sentence-BERT embeddings.
\begin{table}
\begin{tabular}{l l|c c c c} \hline \hline Training data & context features & ndcg@10 & MAP & p@10 & recall@10 \\ \hline D2-A & w/ & 0.4414 & 0.4042 & 0.0332 & 0.8067 \\ D2-B & w/o & 0.2600 & 0.2233 & 0.0241 & 0.6238 \\ D1-A + D1-B & w/ & 0.4351 & 0.3972 & 0.0330 & 0.8044 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Generalization Capability: model performance on D2-A testing dataset
\begin{table}
\begin{tabular}{l l|c c c c} \hline \hline Training data & context features & ndcg@10 & MAP & p@10 & recall@10 \\ \hline D2-A & w/o & 0.3243 & 0.2765 & 0.0306 & 0.7884 \\ D2-B & w/o & 0.3243 & 0.2765 & 0.0306 & 0.7884 \\ D1-A + D1-B & w/o & 0.3146 & 0.2631 & 0.0298 & 0.7693 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Generalization Capability: model performance on D2-B testing dataset
## 4. Related Work
**Document Ranking and Ad-hoc Retrieval** Traditional lexical-based methods perform exact matching of query and document words with different normalization and weighting mechanisms; examples include BM25 (Kumar et al., 2017), query likelihood (Kumar et al., 2018), etc. Deep neural network based document ranking methods first embed the queries and documents into a dense representation space, and the ranking is calculated from the query and document embeddings and other relevant features, as in DRMM (Deng et al., 2017), DSSM (Deng et al., 2018), etc. In addition, the interactions between the query embedding and the document embedding are considered in (Kumar et al., 2019). Recently, pre-trained language models (PLMs) (Deng et al., 2018) have shown state-of-the-art performance (Kumar et al., 2019) (Kumar et al., 2019) for ranking documents. Reconciling the efficiency and effectiveness of PLM-based ranking is a critical problem in real-world deployment, since the computation cost generally scales quadratically with the input text length. For example, ColBERT (Kumar et al., 2019) introduces a late interaction layer to model the fine-grained query-document similarity using BERT, and the Pyramid-ERNIE (Kumar et al., 2019) architecture exploits noisy and biased post-click behavioral data for relevance-oriented pre-training with BERT. However, none of these works gives sufficient consideration to query context and document context, which are thoroughly studied in this work based on PLM models.
**Contextual Search** Contextual search (Kumar et al., 2019) is a type of web-based search that optimizes the search results based on the context provided by the user. For example, in enterprise-level search engines (e.g., (Kumar et al., 2019)), the query context can be derived from certain job-related user properties (e.g., job title, function, department) or from attributes already managed in IT systems such as directory services. Further sources of context include the physical conditions under which the user entered the query, time-related factors (e.g., season/trend), the user's previous search queries and experience, knowledge built from previous interactions that allows queries to be automatically augmented for similar contexts (within a session or across sessions), and user profiles/interests derived from particular user queries. It is recognized that search history (Deng et al., 2018) and contextual relations (Kumar et al., 2019) play important roles in enterprise search. In consumer search engines, many strategies have been applied to personalize search results by mining rich query logs, including historical clicks (Beng et al., 2018), user interest (Kumar et al., 2019), query-session information (Kumar et al., 2019), friend networks (Kumar et al., 2019), etc. Compared with these existing works, this paper provides a new angle: incorporating query context information (in the form of user attributes) by modeling the document-context relevance, which provides additional signals for optimizing the ranking results.
## 5. Conclusion
In this work, we propose a personalized search ranking framework with data enrichment of contextual signals, and show that incorporating the contextual signals can benefit document ranking tasks. This paper also builds benchmark datasets (with human annotations) to show the effectiveness of personalized search with incorporated contextual signals. As future work, we would like to leverage the personalized contextual signals to benefit Q&A tasks.
\begin{table}
\begin{tabular}{l|l l} \hline \hline
**Dataset** & **features** & **NDCG@10** \\ \hline mixed training & lexical only & 0.5478 \\ \hline mixed training & semantic only & 0.5691 \\ \hline mixed training & combination & 0.5882 \\ \hline \hline \end{tabular}
\end{table}
Table 6. NDCG@10 at D1-A datasets |
2303.00030 | Another comment on claims of a transition to the ultimate regime of heat
transfer | Claims made by Zhu et al., PRL 120, 144502, (2018), that they had found
evidence of a transition to the so called "ultimate regime" in 2D simulations
of Rayleigh-B\'enard convection, have recently been repeated by Lohse &
Shishkina, Rev. Mod. Phys. 96, 03501 (2024). The author questions the validity
of these claims. | Erik Lindborg | 2023-02-28T19:12:39Z | http://arxiv.org/abs/2303.00030v5 | # Comment on evidence of a transition to the ultimate regime of heat transfer
###### Abstract
Zhu _et al._[1] carried out DNS of 2D Rayleigh-Benard convection (RBC) up to Rayleigh number \(Ra=10^{14}\) and reported evidence of a transition to the 'ultimate regime' of heat transfer predicted by [2] for 3D RBC, with Nusselt number dependence \(Nu\sim Ra^{\gamma}\), where \(\gamma>1/3\) for high \(Ra\). Doering _et al._[3] analysed the results of [1] and concluded that they should rather be interpreted as evidence of absence of a transition. Zhu _et al._[4] carried out two more simulations at \(Ra>10^{14}\) and claimed that they had now collected 'overwhelming evidence' of a transition.
The author of this comment would like to point out that none of the simulations at \(Ra>10^{10}\) presented in [1] reached a statistically stationary state. A sensitive indicator of stationarity is the development of the mean kinetic energy, \(E\). Upon requesting information from two of the authors of [1] (Detlef Lohse and Xiaojue Zhu), the author was informed that \(E\) was still growing in all simulations at \(Ra>10^{10}\) when they were ended. For \(Ra\leq 10^{13}\) the simulations were all ended at \(t=1000\), where time is measured in \(H/u_{f}\), \(H\) being the height of the domain and \(u_{f}\) the free fall velocity. Two simulations were carried out at \(10^{13}<Ra<10^{14}\), ending at \(t=500\), and one simulation at \(Ra=10^{14}\), ending at \(t=250\). No information was provided in [4] on how long the two simulations at \(Ra>10^{14}\) were run. Figure 1, communicated to the author by Zhu, shows the development of \(E\), normalised by \(u_{f}^{2}\), from four simulations at \(Ra\in[10^{10},10^{11}]\), which were continued by Zhu after publication of [1]. For \(Ra=10^{10}\) and \(Ra=10^{11}\), \(E\) reaches approximate stationarity at \(t_{s}\approx 1000\) and \(t_{s}\approx 3000\), with stationary values \(E\approx 0.25\) and \(E\approx 0.48\approx 0.5\), respectively. The simulation at \(Ra=10^{11}\) was thus far from stationarity when it was ended at \(t=1000\), and the higher \(Ra\) simulations were even farther away when they were ended. Assuming that \(E\) continues to double and \(t_{s}\) continues to triple when \(Ra\) is increased by a factor of ten, the simulation at \(Ra=10^{14}\) would reach stationarity only at \(t_{s}\approx 80000\) with \(E\approx 4\). Since this simulation was ended at \(t=250\) with \(E\approx 0.2\), the Nusselt number was evaluated in a state that was, indeed, very far from stationarity.
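The extrapolation above is simple to reproduce; the doubling/tripling assumption per decade of \(Ra\) is the one stated in the text, starting from the observed values at \(Ra=10^{11}\).

```python
# Back-of-the-envelope check of the extrapolation above: assume E doubles and t_s triples
# for every factor-of-ten increase in Ra, starting from E ~ 0.5, t_s ~ 3000 at Ra = 10^11.
E, t_s = 0.5, 3000
for _ in range(3):            # Ra = 10^12, 10^13, 10^14
    E, t_s = 2 * E, 3 * t_s
print(E, t_s)                 # 4.0 81000  (i.e. E ~ 4 and t_s ~ 80000 as stated)
```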
A cornerstone of scaling theories of RBC, for example the theory of [5], is the exact expression for the mean kinetic energy dissipation rate in a statistically stationary state,
\[\epsilon=\nu\kappa^{2}Ra(Nu-1)/H^{4}\,, \tag{1}\]
where \(\nu\) is the kinematic viscosity and \(\kappa\) the diffusivity. For \(Pr=\nu/\kappa\sim 1\), a condition for this relation to be satisfied is
\[|\frac{\mathrm{d}E}{\mathrm{d}t}|\ll Ra^{-1/2}Nu\,, \tag{2}\]
where the time derivative on the left hand side is nondimensionalized by \(u_{f}^{3}/H\). The high \(Ra\) simulations of [1] were far from satisfying this condition in the state where the Nusselt number was evaluated. As pointed out by [6]: 'One can only start to collect statistics when the flow is fully developed and has attained a statistically stationary state.' In conclusion, the issue regarding the scaling of \(Nu\) in high \(Ra\) 2D RBC is not settled yet.
## References
* [1] X. Zhu. V. Mathai, R.J.A.M. Stevens, R. Verzicco, and D. Lohse, Phys. Rev. Lett. **120**, 144502 (2018).
* [2] R.H. Kraichnan, Phys. Fluids, **5**, 1374 (1962).
* [3] C.H. Doering, S. Toppoladoddi, and J.S. Wettlaufer, Phys. Rev. Lett. **123**, 259401, (2019).
* [4] X. Zhu. V. Mathai, R.J.A.M. Stevens, R. Verzicco, and D. Lohse, Phys. Rev. Lett. **123**, 259402 (2019).
* [5] S. Grossmann, and D. Lohse, J. Fluid. Mech. **407**, 27 (2000)
* [6] Ahlers, G., Bodenschatz, E., Hartmann, R., He, X., Lohse, D., Reiter, P., Stevens, R., Verzicco, R., Wedi, M., Weiss, S., Zhang. X., Zwirner, L. & Shishkina, O. Phys. Rev. Lett. **128**, 084501, Supplementary material (2022).
Figure 1: Nondimensional mean kinetic energy versus non-dimensional time at \(Ra\in[10^{10},10^{11}]\), from four simulations that were continued after publication of [1]. The figure was communicated to the author by Zhu. |
2309.16621 | Stress Testing Chain-of-Thought Prompting for Large Language Models | This report examines the effectiveness of Chain-of-Thought (CoT) prompting in
improving the multi-step reasoning abilities of large language models (LLMs).
Inspired by previous studies \cite{Min2022RethinkingWork}, we analyze the
impact of three types of CoT prompt perturbations, namely CoT order, CoT
values, and CoT operators on the performance of GPT-3 on various tasks. Our
findings show that incorrect CoT prompting leads to poor performance on
accuracy metrics. Correct values in the CoT is crucial for predicting correct
answers. Moreover, incorrect demonstrations, where the CoT operators or the CoT
order are wrong, do not affect the performance as drastically when compared to
the value based perturbations. This research deepens our understanding of CoT
prompting and opens some new questions regarding the capability of LLMs to
learn reasoning in context. | Aayush Mishra, Karan Thakkar | 2023-09-28T17:21:33Z | http://arxiv.org/abs/2309.16621v1 | # Stress Testing Chain-of-Thought Prompting for Large Language Models
###### Abstract
This report examines the effectiveness of Chain-of-Thought (CoT) prompting in improving the multi-step reasoning abilities of large language models (LLMs). Inspired by previous studies [8], we analyze the impact of three types of CoT prompt perturbations, namely CoT order, CoT values, and CoT operators, on the performance of GPT-3 on various tasks. Our findings show that incorrect CoT prompting leads to poor performance on accuracy metrics. Correct values in the CoT are crucial for predicting correct answers. Moreover, incorrect demonstrations in which the CoT operators or the CoT order are wrong do not affect the performance as drastically as the value-based perturbations. This research deepens our understanding of CoT prompting and opens some new questions regarding the capability of LLMs to learn reasoning in context.
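As an illustration of the three perturbation types named above (order, values, operators), a toy sketch on an invented arithmetic chain-of-thought is shown below; this is not the paper's actual perturbation code, and the step format is made up for the example.

```python
import random

# Toy arithmetic chain-of-thought demonstration (invented for illustration).
cot_steps = ["5 + 3 = 8", "8 * 2 = 16", "16 - 4 = 12"]

def perturb_order(steps):
    shuffled = steps[:]
    random.shuffle(shuffled)  # scramble the order of the reasoning steps
    return shuffled

def perturb_values(steps):
    # replace every number with a random (most likely incorrect) value
    return [" ".join(str(random.randint(0, 99)) if tok.isdigit() else tok
                     for tok in s.split()) for s in steps]

def perturb_operators(steps):
    swap = {"+": "-", "-": "+", "*": "/", "/": "*"}
    return [" ".join(swap.get(tok, tok) for tok in s.split()) for s in steps]

print(perturb_order(cot_steps))
print(perturb_values(cot_steps))
print(perturb_operators(cot_steps))
```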
Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, 
Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning,
Section 3 describes our experimental setup in detail. The results of our experiments are then presented in section 4. Lastly, we discuss our conclusions and scope for future work in later sections.
## 2 Prompting LLMs
The three prompting techniques compared in our experiments are:
* **Few-Shot**: Provide a few questions along with their respective answer. This lets the model decide what reasoning to use for getting to the answer. Nothing other than the answer is provided in the prompt. This is the standard prompting technique used in extracting knowledge from LLMs.
* **CoT**: Provide a few questions along with their respective answer that follows from an explicitly described CoT. The CoT helps the model understand what reasoning to deduce from the prompt [11].
* **Perturbed CoT**: Provide the same prompts as in the case of CoT but with some perturbations that make the CoT logically incorrect while keeping the correct answer untouched. We define three types of perturbations (see Fig. 1):
* **CoT values**: Changing the objective values in the CoT.
* **CoT order**: Changing the order of sentences in the CoT.
* **CoT operators**: Changing the operators (+, -, /, *) in CoT sentences (only applicable on numerical tasks).
Using the third novel category, we want to stress test the capability of CoT prompting. If the performance does not drop considerably with perturbed CoTs, it would be strong evidence in favor of our hypothesis. For this reason, we only perturb the CoT and keep the answers that follow them to be correct. With this experiment, we expect to gain a better understanding of how crucial the correctness of CoT is in performance gains.
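As a concrete illustration of these perturbation types, the following is a minimal Python sketch of how a single CoT exemplar could be perturbed programmatically while its final answer is kept correct; the example text and helper names are illustrative and are not the scripts actually used to build our prompts.

```python
import random
import re

OPERATORS = ["+", "-", "*", "/"]

def perturb_values(cot: str) -> str:
    """CoT values: replace every number in the CoT with a different one."""
    return re.sub(r"\d+", lambda m: str(int(m.group()) + random.randint(1, 9)), cot)

def perturb_order(cot: str) -> str:
    """CoT order: shuffle the order of the sentences in the CoT."""
    sentences = [s.strip() for s in cot.split(".") if s.strip()]
    random.shuffle(sentences)
    return ". ".join(sentences) + "."

def perturb_operators(cot: str) -> str:
    """CoT operators: swap each arithmetic operator for a different one."""
    return re.sub(r"[+\-*/]",
                  lambda m: random.choice([op for op in OPERATORS if op != m.group()]),
                  cot)

cot = "She cut a piece 3 centimeters wide with an area of 24 square centimeters. 24 / 3 = 8."
answer = "The answer is 8."  # the correct answer is always left untouched
for perturb in (perturb_values, perturb_order, perturb_operators):
    print(perturb(cot) + " " + answer)
```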
## 3 Experimental Setup
### Model
In this study, we only tested GPT-3. We used the default settings provided by the OpenAI API with model=text-davinci-002, temperature=0.6 and max_tokens=1024.
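For reference, a query with these settings could be issued roughly as in the sketch below, which uses the legacy (pre-v1) OpenAI Python client; the API key placeholder and the surrounding loop over dataset samples are omitted.

```python
import openai  # legacy (pre-v1) OpenAI Python client

openai.api_key = "YOUR_API_KEY"  # placeholder

def query_gpt3(prompt: str) -> str:
    """Send one prompt to GPT-3 with the settings used in this study."""
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=0.6,
        max_tokens=1024,
    )
    return response["choices"][0]["text"]
```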
### Tasks and Datasets
Two main reasoning tasks are considered in the study: numerical and non-numerical. The datasets used for them are described briefly below.
* Numerical Tasks
* **ASDiv**[7]: a diverse English math word problem (MWP) corpus with 2,305 MWPs covering various language usage patterns and problem types.
* **GSM8k**[3]: a dataset of 8.5K high-quality, linguistically diverse grade school math word problems.
* **SVAMP**[9]: modified versions of grade four or lower math problems, designed to make MWPs more challenging.
* **MAWPS**[5]: multiple difficulty levels (single operator, single equation, addition/subtraction only, and multiple arithmetic operations).
Figure 2: Number of samples per dataset used to evaluate the LLM
In total, we evaluate on 7 numerical tasks, all math word problems of different styles and levels of difficulty. An example question:
_Faye was cutting out some fabric for a friend. She cut a piece that was 3 centimeters wide and had an area of 24 square centimeters. How long was the piece?_
* Non-Numerical Tasks:
* **Date Understanding** (from BIG Bench [1]). Tests a mix of common sense reasoning and a bit of numerical computation.
* Symbolic reasoning [11]:
* **Coin flip**. e.g.: _A coin is heads up. Christie flips the coin. Jaymie flips the coin. Is the coin still heads up?_
* **Concat**. e.g.: _Take the last letters of the words in "Valentin Owen" and concatenate them._
* **Reverse list**. e.g.: _Reverse the sequence "umbrella, head, camera, battery, scissors"._
The number of samples used for evaluation per category (i.e., few-shot, CoT, and perturbed CoTs) is described in Fig 2. These numbers were decided based on budget constraints and the diversity of each individual task.
### Evaluation
After collecting responses for each category, accuracy on each task is calculated from the responses for performance evaluation. To get the final answer, we parse the response and check for syntax similar to the examples given in the prompting. If found, we take the final answer token or value as the prediction and match it with the ground truth label in the dataset. As we keep the answer format (i.e., "The answer is <prediction>") constant across the experiments, we expect the model to do the same when predicting. If the answer format is violated, we consider it a false prediction irrespective of the value or token it predicts.
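A minimal sketch of this extraction-and-scoring step is shown below; the regular expression and the normalization are illustrative, since the actual parsing only needs to match the fixed "The answer is <prediction>" template described above.

```python
import re

ANSWER_PATTERN = re.compile(r"The answer is\s+(.+?)\s*\.?\s*$",
                            re.IGNORECASE | re.MULTILINE)

def is_correct(response: str, gold: str) -> bool:
    """Score one response: format violations count as false predictions."""
    match = ANSWER_PATTERN.search(response)
    if match is None:
        return False
    prediction = match.group(1).strip().rstrip(".").lower()
    return prediction == gold.strip().lower()

def accuracy(responses, golds) -> float:
    return sum(is_correct(r, g) for r, g in zip(responses, golds)) / len(golds)
```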
## 4 Results
In this section, we discuss the results of the experiments performed along with our observations. The following subsections discuss the results and observations for each group: Non-numerical and Numerical separately followed by general observations.
### Non-Numerical Tasks
Fig 3 summarizes the performance of GPT-3 on the non-numerical reasoning tasks.
* **Reverse list** and **Concat**: We see that these two tasks are probably quite trivial for the model to understand as standard few-shot prompting itself works almost perfectly. However, we do see a drop in performance for _Order_ and _Value_ based perturbed CoT prompts. This suggests that _providing a wrong context can mislead the model_ even when provided with the correct answer.
* **Date Understanding**: We see that all types of prompts yield similar performance, with CoT prompts working best. We also see a drop in performance when the CoT is perturbed.
* **Coin flip**: This case provides strong support for the effectiveness of correct CoT-based prompting and how it can effectively turn the model from dense to expert. However, we don't understand why the performance varies so drastically for this particular task.
### Numerical Tasks
Fig 4 summarizes the performance of GPT-3 on various numerical reasoning tasks.
* **MAWPS**: From Fig 4 (a), we see a general trend that perturbing numerical values have a strong negative impact on performance, which goes even below the standard few-shot prompting level. _This strongly suggests that the model can be misled or fooled using some value based adversarial examples._
* **ASDiv, SVAMP**: We see a similar trend that CoT value perturbations are most severely hit in performance while correct CoT stays on top of the others (Fig. 4 (b)).
* **GSM**: This is more difficult compared to other tasks and we see that CoT helps a lot in comparison to standard few-shot prompting. We also see that even value perturbed CoT based prompts perform better than few-shot prompting in this case. (Fig. 4 (b))
### General Observations
We noticed some peculiarities in the responses generated by GPT-3 and also some patterns in the accuracy plots.
* In the Date Understanding task, we saw several variations in the answer format generated by the model. For example, _format variation_: The model generated various date formats like Aug. 25th, 2021, 7/8/1972 vs 07/08/1972, etc. when it was explicitly asked to generate answers in the MM/DD/YYYY format. The model also generated _answer structure variation_ like "... so tomorrow is 11/23/2001" vs "... answer is 11/23/2001" when all the prompts had the same answer structure. These variations did not show up in other tasks and datasets. _This suggests that the model probably has poor understanding of date format templates._
* _The larger the gap in performance between few-shot and CoT prompting, the better the performance retention after CoT perturbation._ We see a general trend in the numerical reasoning tasks that in cases where CoT prompting doesn't drastically improve performance over few-shot prompting, perturbing the CoT has a severe impact on performance. However, in cases like GSM and MAWPS (Multi-Arith), the performance retention is higher and mostly better than few-shot. Interestingly, these are also the toughest tasks in terms of the number of steps needed to get to the answer. _This suggests that, on tougher tasks, the model might not depend as much on the correctness of the CoT as it does on the contextual awareness provided by even an incorrect CoT._
* _Retention of CoT Perturbation_: Although we only quantitatively test on the correctness of the answer generated by the model, we also noticed many cases where the model retains the incorrect CoT structure while coming up with the correct answer. This is particularly surprising because to do this systematically, the model must understand: 1. What is asked in the question and come up with correct reasoning to derive it. 2. What is the pattern in the CoT prompts and generate that pattern independent of how the answer is generated. An assortment of such responses generated by the model is provided below. **Wrong CoT order** _Q: Today is the last day of the first quarter of 2008. What is the date one week ago from today in MM/DD/YYYY? A: So one week ago from today was 03/24/2008. Today is 03/31/2008, which is the last day of the first quarter. So the answer is 03/24/2008._ **Wrong CoT value/operator** _Q: Isabel uploaded 2 pictures from her phone and 4 from her camera to Facebook. If she sorted the pics into 3 different albums with the same amount of pics in each album, how many pictures were in each of the albums? A: Isabel had 18 pictures in total. 2 + 4 = 8. 18 / 8 is 2. So each album would have 2 pictures._
## 5 Conclusions
It is straightforward to see from our results that the correctness of Chain-of-Thought prompts is essential to performance gains when compared to standard few-shot prompting. This is unlike [8], which suggested that the correctness of few-shot samples might not matter as much as we might think. In general, we found that the correctness of the _values_ in the CoT is most critical to performance. Perturbing them can mislead the model, and performance may drop even below standard few-shot levels. Our experiments reveal that our hypothesis is wrong and we cannot skip creating correct CoTs for performance gains. Although the results of our experiments provide support for CoT prompting, there might still be better/easier methods to extract performance out of LLMs that do not require manual labeling efforts [4, 12, 6].

Figure 3: Performance of the LLM for Non-Numerical Tasks for different modes. ’#’ represents the number of samples per mode

Figure 4: (a) top: Performance of the LLM for Numerical Tasks for different modes on the MAWPS dataset. (b) bottom: Performance of the LLM for Numerical Tasks for different modes on ASDIV, GSM, and SVAMP datasets.
## 6 Future Directions
Our experiments reveal several interesting phenomena which spawn potential future works.
* **Adversarial Examples**: We saw that value-based perturbations severely hit performance, which even falls below standard few-shot prompting in most cases. Although we changed each quantitative value in the CoT for this type of perturbation, there might be scope for changing just a few or even a single value in a prompt which could result in a severe performance hit. If true, this would mean that LLMs suffer from the same adversarial sample problems as standard neural networks.
* **Retention of CoT Perturbation** is an interesting by-product that we noticed in many model responses. This motivates a study to systematically identify what is causing the models to retain the perturbed CoT structure while coming up with correct answers.
* **Difficulty of task vs correctness of CoT**: We observed that for more difficult problems, even perturbed CoTs provide some benefits over standard prompting. Is the structural context provided by CoTs all that is being used in those cases? This motivates future studies on what exactly is the recipe for the perfect prompt.
## 7 Contribution
* **Aayush Mishra**: Experiment design, coding perturbations.
* **Karan Thakkar**: Experiment design, collecting results.
* **Shared**: Brainstorming ideas, presentations, and report.
## Acknowledgements
We would like to thank Professor Daniel Khashabi ([email protected]) for the constant support and necessary resources required for the completion of this project. We would also like to thank our classmates for their valuable feedback during project presentations.
|
2309.06432 | Can the Parker Solar Probe Detect a CME-flare Current Sheet? | A current sheet (CS) is the central structure in the disrupting magnetic
configuration during solar eruptions. More than 90\% of the free magnetic
energy (the difference between the energy in the non-potential magnetic field
and that in the potential one) stored in the coronal magnetic field beforehand
is converted into heating and kinetic energy of the plasma, as well as
accelerating charged particles, by magnetic reconnection occurring in the CS.
However, the detailed physical properties and fine structures of the CS are
still unknown since there is no relevant information obtained via in situ
detections. The Parker Solar Probe (PSP) may provide us such information should
it traverse a CS in the eruption. The perihelion of PSP's final orbit is
located at about 10 solar radii from the center of the Sun, so it can observe
the CS at a very close distance, or even traverses the CS, which provides us a
unique opportunity to look into fine properties and structures of the CS,
helping reveal the detailed physics of large-scale reconnection that was
impossible before. We evaluate the probability that PSP can traverse a CS, and
examine the orbit of a PSP-like spacecraft that has the highest probability to
traverse a CS. | Yuhao Chen, Zhong Liu, Pengfei Chen, David F. Webb, Qi Hao, Jialiang Hu, Guanchong Cheng, Zhixing Mei, Jing Ye, Qian Wang, Jun Lin | 2023-09-12T17:53:41Z | http://arxiv.org/abs/2309.06432v1 | # Can the Parker Solar Probe Detect a CME-flare Current Sheet?
###### Abstract
A current sheet (CS) is the central structure in the disrupting magnetic configuration during solar eruptions. More than 90% of the free magnetic energy (the difference between the energy in the non-potential magnetic field and that in the potential one) stored in the coronal magnetic field beforehand is converted into heating and kinetic energy of the plasma, as well as accelerating charged particles, by magnetic reconnection occurring in the CS. However, the detailed physical properties and fine structures of the CS are still unknown since there is no relevant information obtained via in situ detections. The Parker Solar Probe (PSP) may provide us such information should it traverse a CS in the eruption. The perihelion of PSP's final orbit is located at about 10 solar radii from the center of the Sun, so it can observe the CS at a very close distance, or even traverses the CS, which provides us a unique opportunity to look into fine properties and structures of the CS, helping reveal the detailed physics of large-scale reconnection that was impossible before. We evaluate the probability that PSP can traverse a CS, and examine the orbit of a PSP-like spacecraft that has the highest probability to traverse a CS.
Solar coronal mass ejections (310); Solar magnetic reconnection (1504); Current sheet; Methods: in situ measurements; Heliocentric orbit (706)
taking place in the CME-flare CS, and enrich the existing classical theory of magnetic reconnection and plasma physics processes throughout the universe.
Theoretically, Lin & Forbes (2000) and Lin (2002) pointed out that a large-scale CS forms in the eruption as the catastrophe occurs in the coronal magnetic field, the closed magnetic field is severely stretched, and two magnetic fields of opposite polarity are pushed toward each other. Magnetic reconnection then takes place inside the CS, diffuses the magnetic field, and helps the ejected magnetic structure escape from the Sun smoothly, constituting a CME. Meanwhile, magnetic reconnection also continuously produces closed magnetic field below the CS, constituting flare loops. Figure 1 describes this process in an explicit fashion. Because the catastrophe occurs on the Alfven time scale, \(\tau_{A}\), and reconnection on the diffusive time scale, \(\tau_{d}\), with \(\tau_{A}\ll\tau_{d}\), the CS cannot be dissipated fast enough by reconnection, and a long CS between the CME and the associated flare would be expected (e.g., see also discussions of Priest & Forbes, 2000).
As shown in Figure 1, the CS is confined to a region that is small compared to the whole disrupting configuration. This is expected since the electric conductivity is high. In the framework of the traditional theory of magnetic reconnection, the thickness of the CS must be as small as the proton Larmor radius, otherwise the fast reconnection process that is needed to drive the major solar eruptions cannot progress (e.g., Litvinenko, 1996; Wood & Neukirch, 2005; and references therein). The proton Larmor radius is around tens of meters, not exceeding a hundred meters, in the coronal environment. After analyzing a set of unique data for several eruptions observed by Ultraviolet Coronagraph Spectrometer (UVCS; Kohl et al., 1995) and Large Angle and Spectrometric Coronagraph Experiment (LASCO; Brueckner et al., 1995) on the Solar and Heliospheric Observatory (SOHO), on the other hand, Lin et al. (2007, 2009) found that, in some circumstances, the CSs are observable, and their thickness in real events could be as large as a few \(10^{4}\) km or even \(10^{5}\) km. Many follow-ups on this topic by different authors for different events observed in different wavelengths by different instruments both in space and on the ground gave the similar results such that the apparent thickness of the CME-flare CS ranges from \(10^{3}\) to \(10^{5}\) km (see Ciaravella et al., 2013; Lin et al., 2015; and Lin & Ni, 2018 for more details). Ciaravella & Raymond (2008) noticed that observational data in different wavelengths for the same event gave the same value of the CS thickness. Significant difference apparently exists between the value of the CS thickness expected according to the classical theory of magnetic reconnection and that deduced from observations. Although the values of the CS thickness given by observations span two orders of magnitudes, the difference among these values is still small compared to that between several tens of meters and a few \(10^{4}\) km.
Usually, it is believed that the difference between the apparent thickness, \(d^{\prime}\), of the CS and the true thickness, \(d\), results from three issues: the projection effects, the complex structure of the CS, and the thermal halo around the CS. The projection effects exist for all the images we obtained via the remote sensing approach, since any image we obtained is the projection of the emission from the optically thin three-dimensional object onto the two-dimensional plane of the sky. The intensity of the emission reaching the detector is the sum of all the emission from the object in the line-of-sight (LOS), and the level of intensity is governed by both the density and the column depth in LOS. Thus, a bright object manifests differently when being seen at different angle (see detailed discussions of Forbes & Acton, 1996). This yields that \(d^{\prime}\geq d\), and that the emission measure of the CS reaches maximum and \(d^{\prime}=d\) as the CS is observed edge-on as shown in Figure 1. In principle, \(d^{\prime}\) could be very large when the CS is observed face-on as shown by Bemporad et al. (2006). The terms "edge-on" and "face-on" here refer to two distinct observational angles: "edge-on" implies that the LOS is parallel to the surface of the CS, namely along the \(z\)-direction (see Figure 1), while "face-on" means that the LOS is perpendicular to the surface of the CS, namely along the \(x\)-direction (see Figure 1). For more information, interested readers refer to Bemporad et al. (2006), Lin et al. (2007, 2009), as well as Shen et al. (2022).
On the other hand, Lin et al. (2009) pointed out that the emission measure of the CS is usually small compared to the nearby structures like CMEs, helmet streamers, and so on. If the viewpoint toward the sheet deviates largely from that edge-on, the CS would become too faint to be seen. They found that the emission measure of the CS is roughly related to \(d/d^{\prime}\) linearly, which suggests that the projection effects on measuring \(d\) are limited. Furthermore, the limited signal-to-noise ratio of the instrument also enhances the difficulty in identifying the CS in reality. Therefore, Lin et al. (2009) concluded that the CS would become invisible if \(d/d^{\prime}<0.1\), and Ciaravella & Raymond (2008) realized that \(d/d^{\prime}\) ranged from 0.2 to 0.4 for the CS developed in a specific event.
The fact that the CS may possess complex structure could also increase the value of \(d^{\prime}/d\). Vrsnak et al. (2009) studied three events occurring on 26 June 2005, 8 January 2002, and 10 November 2003, respectively, and found that the values of \(d^{\prime}\) vary from \(7\times 10^{4}\) km to \(2.1\times 10^{5}\) km. They showed that a CS forms as the associated closed coronal
magnetic field is severely stretched by the eruption, the CS is thus typically highly planar, and no obvious warping occurs in the eruption although various small scale structures exist inside the sheet (see also discussions of Lin & Ni, 2018). The results of Vrsnak et al. (2009) suggested that the impact of the complex structure of the CS on measuring \(d\) may yield that \(d^{\prime}\) differs from \(d\) by only a factor of single digit.
The thermal halo also plays a role in broadening the CS observed in spectral lines forming at high temperature like [Fe XVIII] and [Ca XIV]. Yokoyama & Shibata (1998, 2001) noticed the occurrence of the thermal halo for the first time such that the plasma heated by reconnection inside the CS may leak to the inflow region, constituting a thermal halo around the CS. Numerical simulations by Seaton & Forbes (2009), Reeves et al. (2010), and Ni et al. (2012) confirmed this result, and the CS is in fact embedded in the thermal halo. This implies that \(d^{\prime}\) is actually the scale of the halo, not that of the CS itself (see the detailed discussions by Lin et al., 2015 and Lin & Ni, 2018).

Figure 1: A sketch of disrupted magnetic field that forms during solar eruptive process. Colors roughly indicate the plasma layers in different temperatures (from Lin et al., 2004). The diagram combines the two-ribbon flare configuration proposed by Forbes & Acton (1996), as well as the CME configuration of Lin & Forbes (2000).
However, both observations and theories indicated that the thermal halo does not always occur in reality. The CS developed in an event studied by Ciaravella & Raymond (2008) was observed in both white-light and high temperature spectral lines obtained by the UVCS, and Ciaravella & Raymond (2008) found that the values of \(d^{\prime}\) deduced from both the white-light and the high temperature spectral data are the same. Since the white-light emission of the observed target results from the scattering of the photospheric emission by the free electrons, the thermal property of the target does not affect its manifestation in white-light. This implies very limited impact of the thermal halo on measuring \(d\). Seaton et al. (2017) pointed out that the region of the thermal halo must be tenuous compared to that of the CS if the thermal halo does occur but cannot be recognized. However, Raymond et al. (2017) argued that the hot plasma inside the CS is prohibited from leaking outside by the electric field in the slow mode shock should the Petschek reconnection take place through the CS, therefore the role of the thermal halo is often overestimated. Numerical calculations by Ni et al. (2016) also indicated the limited impact of the thermal halo on measuring \(d\) (see also detailed discussions of Lin & Ni, 2018).
The above discussions and results indicate that the reconnecting CS occurring in the solar eruption may indeed possess huge thickness, and that the projection effects, the complex structure, and the thermal halo are not able to account for difference in the CS thickness between the expectation of the classical theory of reconnection and the observational results. Lin et al. (2015) and Lin & Ni (2018) concluded that the three issues below may account for the huge thickness of the CME-flare CS. First of all, the CME-flare CS develops in a highly dynamic fashion in the solar eruption, instead of staying static. Both theories (Forbes & Lin, 2000; Lin & Forbes, 2000; Lin, 2002; Linker et al., 2003) and observations (Webb et al., 2003; Webb & Vourlidas, 2016) showed that the length of the CS increases at a speed up to a few hundred km s\({}^{-1}\) and at an acceleration of a few m s\({}^{-2}\). Such a highly dynamic process in the large-scale magnetic configuration could impossibly be governed by individual particles.
Second, large-scale CSs suffer from several MHD instabilities, such as the tearing mode, giving rise to turbulence and a large number of small structures in the CS (Shen et al., 2011; Mei et al., 2012; Ni et al., 2012; Ye et al., 2019). These small structures enhance the diffusion of the magnetic field through the CS equivalent to adding an extra resistivity in the reconnection region, which is also known as the "hyper-resistivity" (e.g., see also Strauss, 1988 and Bhattacharjee & Yuan, 1995 for more details). In addition to the small scale structure, the large-scale CS has enough space to allow different types of reconnection to occur simultaneously, which never happens in the scenario of the traditional reconnection theory (Mei et al., 2012, 2017; Ye et al., 2021; Zhang et al., 2022). This reminds us of the parallel computation usually used in modern numerical calculations, through which a large and complicated computing mission is divided into many small and simple ones that could be solved simultaneously in a shorter time.
Third, coalescence or merging of small-scale structures frequently occurs inside the CS as well (Shen et al., 2011; Mei et al., 2012; Takahashi et al., 2017; Ye et al., 2019), which is not a simple merging process but yields secondary reconnection or diffusion among these structures. The coalescence can be considered as the inverse cascading in which small scale structures merge into larger ones. In reality, the coalescence and the cascading processes take place simultaneously in the CS, and eventually reach a dynamic balance (see discussions of Barta et al., 2011, 2012). Shen et al. (2013) studied the two processes in the large-scale CS, and found that the kinetic energy of the plasma flow manifests similar cascading behavior, implying the dissipation of the kinetic energy of the fluid motion.
The above discussions draw a scenario of the reconnection process taking place in the solar eruption such that the large-scale CS is an assembly of many diffusive structures that allow reconnection to occur at many places in several ways simultaneously. We realize the analogy of this process to the parallel computing that is frequently used in modern numerical calculations for complicated mathematical problems. In the parallel computing process, large and difficult calculations are divided into many small and simple ones that could be solved easily and quickly. In principle, we have so far reached the theoretical explanation why magnetic reconnection in a large-scale CS could progress fast enough to drive the major eruption. But there is no existing in situ information about the physical property and the internal structures of the CS, and the explanation cannot be finalized. It was impossible to close this logical loop before PSP. However, difficulty still exists if PSP does not traverse a CS. So it is worth investigating whether PSP could traverse a CS in the solar eruption, and what the probability of traverse in a certain fashion would be. In fact, there are already many crossings of the heliospheric CS (e.g., Kasper et al., 2021; Liewer et al., 2023) and CS in the magnetosphere of Earth (e.g., Stephens et al., 2023) and Jupiter (e.g., Xu et al., 2023). However, the confirmed crossing of a CME-flare CS that undergoes rapid and complex magnetic reconnection has not been reported yet. A possible crossing was reported
recently by Romeo et al. (2023) and Patel et al. (2023) who investigated a fast CME event on September 5, 2022 that swept PSP.
Because the orbit of PSP is fixed, we cannot guarantee that PSP could possess the probability as high as expected to traverse the CS. In this work, according to the distribution of the filament and the filament channel, the orientation of the filament axis relative to the latitude line on the solar surface, the speed increase of the CS length, and the lifetime of the CS, we figure out the probability for a PSP-like spacecraft to traverse a CS (Lin et al., 2021) with a given orbit, and look for the orbit that would yield highest probability of traversing. In Section 2, we describe the model, with reasonable assumptions, describing the spacecraft orbit and the orientation of CSs behind CMEs. In Section 3, the fashion in which the orbit of spacecraft would intersect the CS will be discussed for various types of CSs. In Section 4, the probability of traversing a given CS for the spacecraft on a given orbit will be evaluated, and the orbit of the spacecraft that leads to the highest traversing probability will be further suggested. Finally, we shall discuss our results and draw our conclusions in Section 5.
## 2 Methods
In this section, we describe the mathematical approach to calculating the probability of a spacecraft (including PSP) traversing the CME-flare CS. First of all, the calculation employs two sets of coordinate systems on the basis of the ecliptic plane and the plane where the spacecraft orbit is located, respectively. Second, according to the Lin-Forbes model and the related observations, we constructed a simplified model for the CS geometry. Then, we relate the parameters of the spacecraft orbit to those of the CS; and finally, we are able to estimate the probability that a spacecraft on a given orbit traverses a CME-flare CS.
### Descriptions of Coordinate Systems
According to the purpose of the mission, the heliocentric orbit of the existing spacecraft for solar observations and/or detections falls into two categories: one like those of Ulysses (Wenzel et al., 1992) and Solar Orbiter (Muller et al., 2020), obviously deviating from the ecliptic plane, and another one like those of STEREO (Kaiser et al., 2008) and PSP, slightly deviating from the ecliptic plane. PSP is the first spacecraft to fly into the solar corona in human history and is, therefore, very likely to traverse CME-flare CSs. With more and more spacecraft being launched for solar exploration, it is necessary to analyze the impact of orbital parameters on realizing the spacecraft traversing and then detecting CSs, so that more scientific goals for the PSP-like missions in the future could be figured out. Therefore, this work is to evaluate the probability of PSP traversing a CME-flare CS, and to look into how a PSP-like spacecraft could traverse the CS with a reasonably high probability.
We set up two heliocentric coordinate systems, denoted as the "solar coordinate system", \(X^{\prime}Y^{\prime}Z^{\prime}\), and the "orbital coordinate system", \(XYZ\), respectively. In both systems, the Sun is located at the origin and at one of the foci of the elliptical orbit (see Figure 2), while the \(X^{\prime}Y^{\prime}\)-plane and \(XY\)-plane are coincident with the ecliptic and the orbital planes, respectively. In Figure 2, the gray ellipse is the ecliptic plane, \(Z^{\prime}\)-axis points to north and \(X^{\prime}\)- and \(Y^{\prime}\)-axes point toward longitudes of \(\phi_{s}=0^{\circ}\) and \(\phi_{s}=90^{\circ}\), respectively. On the other hand, the light yellow ellipse is the orbital plane, the \(Z\)-axis is perpendicular to this plane, while the \(X\)-axis is along the major axis of the orbit. We assume that the spacecraft moves in a counterclockwise direction around the \(Z\)-axis, i.e., the angular momentum of the spacecraft is parallel to the \(Z\)-axis. PSP moves in the plane of the Venus orbit, which slightly deviates from the ecliptic plane. We will show later that the PSP orbit is not the one that allows it to traverse the CS with a reasonably high probability. To look for the orbit with the highest probability of crossing a CME-flare CS, we introduce a parameter, \(\alpha\), the angle between the orbital and ecliptic planes (see Figure 2).
Figure 2 describes two ways by which two planes intersect with an angle of \(\alpha\). The first one, which is depicted in Figure 2a, involves rotating the orbital plane counterclockwise around the \(X^{\prime}\)-axis by an angle \(\alpha\), while the perihelion, aphelion, and the major axis of the orbit keep staying in the ecliptic plane and are all located on the \(X^{\prime}\)-axis. The perihelion and aphelion are co-located in space with the descending and the ascending nodes of the orbit at longitudes of \(180^{\circ}\) and \(0^{\circ}\), respectively. The second approach, illustrated in Figure 2b, rotates the orbital plane around the \(Y^{\prime}\)-axis by an angle \(\alpha\), while the \(Y^{\prime}\)-axis is parallel to the minor axis of the orbit ellipse. Here, the perihelion and aphelion deviate from the ecliptic plane and are located at the northernmost and southernmost points of the orbit, with the descending and the ascending nodes at longitudes of \(90^{\circ}\) and \(270^{\circ}\), respectively. In both cases, the angle \(\alpha\) represents the orbital inclination.
To provide a quantitative description of the two orbits shown in Figure 2, we describe them mathematically in the \(XYZ\) system first such that the orbit ellipse with one focus located at the origin of the coordinate system is given
below:
\[\frac{(x-c)^{2}}{a^{2}}+\frac{y^{2}}{b^{2}} =1, \tag{1}\] \[z =0, \tag{2}\]
where \(a\) and \(b\) are the lengths of the semi-major axis and the semi-minor axis, respectively, and \(c=\sqrt{a^{2}-b^{2}}\).
In principle, a more intuitive approach to the calculation would be to take the \(X^{\prime}Y^{\prime}\) plane as the referential plane (see Figure 2), to transform the orbital Equations (1) and (2) from the \(XYZ\) to the \(X^{\prime}Y^{\prime}Z^{\prime}\) system, and then to determine whether the orbit intersects the CS or not in the \(X^{\prime}Y^{\prime}Z^{\prime}\) system. However, this approach is obviously difficult due to the nonlinear property of Equation (1). But it should be much easier to perform transformation inversely by rotating the CS from the \(X^{\prime}Y^{\prime}Z^{\prime}\) system to the \(XYZ\) system since a linear function can be used to describe a CS.
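For illustration, this inverse transformation can be written with simple rotation matrices; the NumPy sketch below assumes the two rotation conventions of Figure 2 and is only schematic, since the sign of the rotation must ultimately match the formulae in Appendix A.

```python
import numpy as np

def rot_x(alpha):
    """Rotation by alpha about the X'-axis (the case of Figure 2a)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_y(alpha):
    """Rotation by alpha about the Y'-axis (the case of Figure 2b)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def solar_to_orbital(p_solar, alpha, axis="x"):
    """Express a point given in the solar frame X'Y'Z' in the orbital frame XYZ.

    The orbital frame is obtained from the solar frame by a rotation of +alpha
    about the chosen axis, so coordinates transform with the transpose.
    """
    R = rot_x(alpha) if axis == "x" else rot_y(alpha)
    return R.T @ np.asarray(p_solar, dtype=float)
```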
### Morphology of a CME-Flare CS
As mentioned earlier, the global behavior of the CS is relatively simple although the internal structure of the CS could be fairly complex. This is because the CS forms as a result of the severe stretch of the closed coronal magnetic field in the initial stage of the eruption. On one hand, it elongates apparently in the direction of the CME propagation. In another two orthogonal directions, on the other hand, its scale is either confined by the reconnection inflow to a local region, governing the CS thickness, or confined by the two legs of the associated CME to a fan-like region of the finite angular width (see the discussions of Lin et al., 2015). Chen (2011) suggested that the upper limit of the angular width of this fan-like region to be approximately \(70^{\circ}\). The shape of the CS will be simplified into a triangle-like sheet with a thickness of \(d\) and an extending angle less than \(70^{\circ}\). For simplicity, we consider the half angular width of a typical CS to be \(23^{\circ}\). Although the selection of this value is somewhat arbitrary, it remains reasonable. In addition, we further assume that: (1) the CME erupts radially (although a small part of CMEs are ejected non-radially), (2) the CS trails behind the CME, and (3) the morphological evolutions of the CME and the CS are self-similar expansion.
On the basis of these assumptions, we use the GCS model (see Figure 3a) developed by Thernisien et al. (2009) for reconstructing the CME/ICME to describe the CS morphology. As illustrated in Figure 3b, we co-locate one vertex of the triangle-shaped CS with the origin of the GCS model, which is the center of the Sun denoted as \(O\). The symmetry axis of the CS intersects the solar surface at the center of the eruption source region, denoted as \(S\), and two boundaries of the CS extend along \(OC_{1}\) and \(OC_{2}\); \(\delta\) is the half angular width and \(\gamma\) is the tilt angle between the local latitude
Figure 2: The schematic diagram of two ways that the orbital plane deviates from the ecliptic plane. The gray plane is the ecliptic plane, whereas the yellow plane depicts the orbital plane. (a) The orbital plane can be obtained by rotating the inclination angle \(\alpha\) around \(X^{\prime}\)-axis, which denotes the major axis of the orbit. (b) The corresponding rotation axis is the \(Y^{\prime}\)-axis, which represents the minor axis of the orbit.
and the intersecting line of the CS with the solar surface. Considering the fact that a CME usually originates from a filament or filament channel, we assume that \(\gamma\) is identified with the tilt angle of the filament prior to eruption (e.g., Cheng et al., 2011, Zhang et al., 2012, and references therein). We define \(\gamma>0\) if the spine of filament is along the northwest-southeast direction, and \(\gamma<0\) if the filament follows the northeast-southwest direction. It should be noted that CMEs sometimes rotate as they lift off (Zhou et al., 2023 for instance), so there is some uncertainty about the orientation of the CS plane in the morphology we described here. Figure 3(c) shows the relative positions of the CS and the Sun in the heliocentric coordinate system, where the longitude and latitude of point \(S\), are \(\phi_{s}\) and \(\theta_{s}\), respectively. Figure 3d shows the enlarged area marked by the dashed box in Figure 3c, in which the black dashed line is the intersecting line of the CS with the solar surface, also known as the polarity inversion line (PIL) on the photospheric surface, and "F" is the location of the filament before eruption. Hao et al. (2015) reported that most filaments in the north hemisphere are along the northeast-southwest direction, so we let \(\gamma<0\) for the filament in Figure 3d to match this rule. In addition to these parameters, those, such as the CS life-time, \(\tau_{0}\), extending velocity of the CS, \(v\), and the eruption initiation time, \(t\), related to the dynamical properties of the eruption need to be taken into account as well.
It is necessary to determine the coordinates of \(C_{1}\) and \(C_{2}\) in the \(XYZ\) system before the criterion is established for traversing of the spacecraft on a given orbit with a CME-flare CS. For a CS described by its spatial and morphological parameters, denoted as (\(l\), \(\delta\), \(\phi_{s}\), \(\theta_{s}\), \(\gamma\)), the coordinates of \(C_{1}\) and \(C_{2}\) are given in Equations (A7) and (A8) in Appendix A, to which interested readers may refer. Now, we are able to discuss the conditions required for a spacecraft traversing the CS.
### Three Criteria for Spacecraft Traversing a Given CS
Obviously, the premise for a spacecraft to traverse the CS is that its orbit should intersect the CS. Alternatively, to tell whether a CS could be traversed by a spacecraft needs to answer these two questions: (1) What kind of CS will intersect a given orbit, or conversely, what kind of orbit will intersect a CS whose position and morphology are known? (2) If the intersection of the orbit and the CS is realized, how could the spacecraft traverse the CS?
Figure 3: The figure depicts the simplified morphology of a CS. (a) The GCS model by Thernisien et al. (2009), where the red box highlights the location of CS. (b) The basic features of CS, including its length (\(l\)), half angular width (\(\delta\)), and tilt angle (\(\gamma\)). (c) The position, longitude (\(\phi_{s}\)), and latitude (\(\theta_{s}\)) of the eruption source labeled as “S”. (d) A magnified view of the region enclosed by the dotted box in (c), where ”F” denotes the filament along the northeast-southwest direction (\(\gamma<0\)), and the dashed line represents the polarity inversion line (PIL).

First of all, we need to establish the criterion for determining the intersection of the orbit and the CS. Figure 4 displays how the intersection occurs. The yellow ellipse is the area surrounded by the orbit and the red triangle is for the CS. For an infinitely long CS, namely \(l\gg R_{\odot}\), the intersection occurs as \(C_{1}(x_{1},y_{1},z_{1})\) and \(C_{2}(x_{2},y_{2},z_{2})\) are located on either side of the orbital plane so that:
\[Criterion\ 1:\ z_{1}z_{2}<0 \tag{3}\]
with \(z=0\) being the orbital plane that intersects the CS at line \(OC_{orb}\), which extends to point \(C_{0}\) located at line \(C_{1}C_{2}\). After simple algebraic calculations, we have
\[\mathbf{OC_{0}}=\left[x_{0},y_{0},z_{0}\right]^{\mathrm{T}}=\left[\frac{x_{2 }z_{1}-x_{1}z_{2}}{z_{1}-z_{2}},\frac{y_{2}z_{1}-y_{1}z_{2}}{z_{1}-z_{2}},0 \right]^{\mathrm{T}}. \tag{4}\]
Criterion 1 suggests that in order for a spacecraft to traverse the CS, we need to pay more attention to the case in which the eruption takes place near the orbital plane, the resultant CME propagates in the direction roughly parallel to the orbital plane, and the associated CS is nearly orthogonal to the orbital plane as shown in Figure 4. Otherwise, the probability of the intersection is fairly low.
Second, for a CS with a finite length, say \(l<100R_{\odot}\), Criterion 1 is not strong enough to finalize the condition for intersection, and the impact of the finite value of \(l\) on the criterion of intersection needs to be considered. Obviously, when Criterion 1 is satisfied, the CS intersects the orbit only if point \(C_{0}\) lies outside the elliptical orbit, which yields the second criterion:
\[Criterion\ 2:\ \frac{(x_{0}-c)^{2}}{a^{2}}+\frac{y_{0}^{2}}{b^{2}}>1. \tag{5}\]
It suggests that the CS needs to grow appreciably in length before it can intersect the orbit.
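In practice, once the endpoints \(C_{1}\) and \(C_{2}\) of the CS are expressed in the orbital frame (Equations A7 and A8, not reproduced here), Criteria 1 and 2 amount to a few lines of code; the following NumPy sketch is schematic.

```python
import numpy as np

def cs_intersects_orbit(C1, C2, a, b):
    """Check Criteria 1 and 2 for a CS with endpoints C1, C2 (orbital frame).

    a, b: semi-major and semi-minor axes of the orbit; the Sun sits at the
    focus (0, 0, 0) and the ellipse centre at (c, 0, 0), c = sqrt(a^2 - b^2).
    """
    x1, y1, z1 = C1
    x2, y2, z2 = C2
    c = np.sqrt(a**2 - b**2)

    # Criterion 1 (Eq. 3): the endpoints must lie on opposite sides of z = 0.
    if z1 * z2 >= 0:
        return False

    # Point C0 where the line C1C2 crosses the orbital plane (Eq. 4).
    x0 = (x2 * z1 - x1 * z2) / (z1 - z2)
    y0 = (y2 * z1 - y1 * z2) / (z1 - z2)

    # Criterion 2 (Eq. 5): C0 must lie outside the elliptical orbit.
    return (x0 - c)**2 / a**2 + y0**2 / b**2 > 1
```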
These two criteria clearly illustrate the conditions required for the intersection of the orbit and CS. We now discuss the condition necessary for a spacecraft to traverse the CS. In Figure 4, let \(t\) be the eruption time, and \(Q\) be the spacecraft position at time \(t\). The cyan arrows represent three critical time intervals: \(\tau_{orb}\), \(\tau_{0}\), and \(\tau_{fly}\), which demonstrate the time needed for the CS to propagate to point \(C_{orb}\), at which the orbit intersects the CS, from the solar surface, the lifetime of CS (see the discussions that will be given shortly), and the time that the spacecraft takes to fly from point \(Q\) to point \(C_{orb}\), respectively.
Figure 4: The figure depicts a conceptual representation of a CS that may be traversed by a spacecraft. The orbital plane intersects the CS at the line \(OC_{0}\), while the orbit itself crosses the CS at the point \(C_{orb}\). \(C_{1}\) and \(C_{2}\) mark two endpoints of the CS. \(Q\) is the location of the spacecraft at the eruption time \(t\). The three cyan arrows represent the three characteristic periods: \(\tau_{0}\), \(\tau_{orb}\) and \(\tau_{fly}\), which indicate the lifetime of the CS, the time of the CS propagation to the orbit, and the time of the spacecraft flight from \(Q\) to \(C_{orb}\), respectively.

On one hand, the CS is continually dissipated by magnetic reconnection, so the spacecraft must be located at a position not very far from the CS in order to cross the CS before the CS disappears. On the other hand, the spacecraft should not be very close to the plane where the CS is supposed to be, otherwise, the spacecraft may pass \(C_{orb}\) before the developing CS touches the orbit and miss the chance to traverse the CS. Combining these two considerations yields the third criterion required for the spacecraft to traverse the CS:
\[Criterion\ 3:\ \tau_{0}>\tau_{fly}>\tau_{orb}. \tag{6}\]
The first time interval, \(\tau_{0}\), could be obtained according to observations (Webb and Vourlidas, 2016), and we set
\[\tau_{0}=18\ \text{hrs} \tag{7}\]
throughout this work (see more discussions in Section 3.2). The second time interval, \(\tau_{fly}\), is given as:
\[\tau_{fly}=t_{orb}-t, \tag{8}\]
where \(t_{orb}\) and \(t\) are the times for the spacecraft to travel from the perihelion to points \(C_{orb}\) and \(Q\), respectively. Interested readers refer to Appendix B for more details. Here, we assume that the CS develops at a constant speed, \(v\), then the third time interval, \(\tau_{orb}\), is given by:
\[\tau_{orb}=\frac{OC_{orb}}{v}, \tag{9}\]
where \(OC_{orb}=a(1-e^{2})/(1-e\cos\phi)\), \(e=c/a\), and \(\phi=\arctan\left(y_{0}/x_{0}\right)\). Lamy et al. (2019) investigated the correlation between the velocity and the acceleration of CMEs. They found the correlation is poor and the average acceleration is almost 0, which suggests that assuming the CS increases in length at a constant speed is not a bad approximation.
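Assuming \(\tau_{fly}\) has already been obtained from the orbital motion (Appendix B), Criterion 3 reduces to a simple comparison of the three time scales; a schematic version, with all quantities in consistent units, is sketched below.

```python
import numpy as np

def criterion_3(x0, y0, a, b, v, tau_fly, tau_0=18 * 3600.0):
    """Check Criterion 3 (Eq. 6): tau_0 > tau_fly > tau_orb.

    (x0, y0): the point C0 of Eq. (4); v: extension speed of the CS;
    tau_fly: flight time of the spacecraft from Q to C_orb (Appendix B),
    taken here as a precomputed input; tau_0: lifetime of the CS (Eq. 7).
    """
    e = np.sqrt(a**2 - b**2) / a
    phi = np.arctan2(y0, x0)                          # direction of the line OC_0
    OC_orb = a * (1 - e**2) / (1 - e * np.cos(phi))   # heliocentric distance of C_orb
    tau_orb = OC_orb / v                              # Eq. (9)
    return tau_0 > tau_fly > tau_orb
```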
We realize that Criteria 1 and 2 impose constraints on the size and orientation of the orbit and the CS, as well as on the location of the source region of the eruption, and that Criterion 3 further constrains the spacecraft motion and the kinematic behavior of the CS. In this work, we name the CSs that satisfy Criteria 1 and 2 the "candidates of detectable CSs (CDCSs)". Obviously, satisfying Criterion 3 allows a CDCS to be a "detectable CS (DCS)". We are now ready to apply these 3 criteria to determine whether a given CS is detectable.
As mentioned earlier, we assume the half angular width of the CS \(\delta=23^{\circ}\) in this work. The lifetime, \(\tau_{0}\), is given as 18 hrs (see Equation 7 and relevant discussions in Section 3.2), and the CS length, \(l\), is related to its lifetime, \(\tau_{0}\), and extending velocity, \(v\), in the way \(l=v\tau_{0}\). Therefore, the CS is characterized by five parameters: \(\theta_{s}\), \(\phi_{s}\), \(\gamma\), \(l\), and \(t\). According to the discussions above, the parameters that govern Criterion 1 include the orbital parameters, the rotational axis \(rot\), and the inclination \(\alpha\), as well as the CS parameters, \(\theta_{s}\), \(\phi_{s}\), and \(\gamma\). The parameters that govern Criterion 2 include the orbital parameters, \(a\), \(b\), and \(rot\), as well as the CS parameters, \(\theta_{s}\), \(\phi_{s}\), \(\gamma\), and \(l\). Note that Criterion 1 only applies to infinitely long CS, thus parameters related to length do not affect the result, but the situation for Criterion 2 changes, and it is impacted by \(a\), \(b\), and \(l\). In addition to all the parameters governing Criterion 2, Criterion 3 takes \(t\) into account, which is related to the development of the CS and the motion of the spacecraft.
### Probability Model
This section provides an introduction to estimating the probability of traversing CSs by a spacecraft. In the case where the spacecraft orbit is given, the orbital parameters, \(a\), \(b\), \(c\), and \(\alpha\) are fixed, and the results given by Equations (3) through (6) are governed by the parameters related to CS only. In the framework of the probability, an event \(\{\theta_{s},\phi_{s},\gamma,l,t\}\) is said to occur when a CS with parameters \(\theta_{s},\phi_{s},\gamma,l\) and \(t\) is produced by a solar eruption that is considered happening randomly. Thus, the occurrence of this event is equivalent to obtaining the coordinates of a random point in a five-dimensional parameter space spanned over \((\theta_{s},\phi_{s},\gamma,l,t)\). Therefore, the event "the parameters of CS meeting three criteria" can be considered equivalent to another event, "a random point located in a sub-domain of the parameter space". In other words, the probability of spacecraft traversing the CS could be obtained via evaluating the probability of which a given point is found in a sub-domain of the space \((\theta_{s},\phi_{s},\gamma,l,t)\). Obviously, this sub-domain is subject to the restriction of the three criteria strictly.
As mentioned earlier, Criterion 1 defines a large domain in the space spanned by \((\theta_{s},\phi_{s},\gamma)\), and we denote this large domain as \(\Omega_{1}\). Referring to Figure 4, the points located in \(\Omega_{1}\) select the CSs that possess both the right location and the right orientation to possibly allow the traversing to occur. Similarly, Criterion 2 defines a smaller sub-domain \(\Omega_{2}\) in the four-dimensional parameter space spanned by \((\theta_{s},\phi_{s},\gamma,l)\), and helps select the CDCSs. Finally, Criterion 3 defines the smallest sub-domain \(\Omega_{3}\) in the five-dimensional parameter space spanned by \((\theta_{s},\phi_{s},\gamma,l,t)\), and determines whether the traversing could eventually occur.
For an infinitely long CS, if the parameters for its location and orientation, \((\theta_{s},\phi_{s},\gamma)\), are located in \(\Omega_{1}\), namely \((\theta_{s},\phi_{s},\gamma)\in\Omega_{1}\), it is a CDCS, independent of \(l\) and \(t\). Therefore, the corresponding probability of an infinitely long CS being a CDCS, \(P_{\infty}^{CD}\) is:
\[P_{\infty}^{CD}=P\left[(\theta_{s},\phi_{s},\gamma)\in\Omega_{1} \right]. \tag{10}\]
Here \(P_{\infty}^{CD}\) is calculated by integrating the joint probability density function (PDF) \(f(\theta_{s},\phi_{s},\gamma)\) over the domain \(\Omega_{1}\):
\[P_{\infty}^{CD}=\int_{\Omega_{1}}f(\theta_{s},\phi_{s},\gamma) \mathrm{d}\theta_{s}\mathrm{d}\phi_{s}\mathrm{d}\gamma. \tag{11}\]
Similarly, for a finitely long CS described by \((\theta_{s},\phi_{s},\gamma,l)\in\Omega_{1,2}\), where \(\Omega_{1,2}=\Omega_{1}\cap\Omega_{2}\), the parameters of the CS satisfy Criteria 1 and 2 simultaneously, and the CS is a CDCS with the corresponding probability, \(P^{CD}\), written as:
\[P^{CD}=P\left[(\theta_{s},\phi_{s},\gamma,l)\in\Omega_{1,2}\right], \tag{12}\]
and can be evaluated via integrating the joint PDF, \(f(\theta_{s},\phi_{s},\gamma,l)\) over domain \(\Omega_{1,2}\)
\[P^{CD}=\int_{\Omega_{1,2}}f(\theta_{s},\phi_{s},\gamma,l)\mathrm{d}\theta_{s} \mathrm{d}\phi_{s}\mathrm{d}\gamma\mathrm{d}l. \tag{13}\]
For the same reason, over the domain \(\Omega_{1,2,3}=\Omega_{1}\cap\Omega_{2}\cap\Omega_{3}\), the probability of a CS being traversed by a spacecraft, \(P^{tra}\), is given as:
\[P^{tra}=P\left[(\theta_{s},\phi_{s},\gamma,l,t)\in\Omega_{1,2,3 }\right], \tag{14}\]
and
\[P^{tra}=\int_{\Omega_{1,2,3}}f(\theta_{s},\phi_{s},\gamma,l,t) \mathrm{d}\theta_{s}\mathrm{d}\phi_{s}\mathrm{d}\gamma\mathrm{d}l\mathrm{d}t. \tag{15}\]
Now, the initial problem of evaluating the probability of a spacecraft traversing a CS has been transformed into evaluating the integrals in Equations (11), (13) and (15) over the domains determined by the three criteria (see Equations 3, 5, and 6). We employed the Monte Carlo method to numerically evaluate these integrals. For simplicity, we conducted the computation by sampling the points uniformly in the integration domain. We discuss how to evaluate these integrals in the next section.
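A minimal sketch of the uniform Monte Carlo estimator used for Equations (11), (13) and (15) is given below (Python, with assumed helper names; the indicator that defines \(\Omega\) depends on the orbit and on \(\delta\), so it is only a placeholder here).

```python
import numpy as np

rng = np.random.default_rng(0)

def in_domain(*params):
    """Placeholder for the indicator of Omega (Criteria 1-3, Eqs. 3, 5, 6).
    The actual test depends on the orbit (a, b, rot, alpha) and on delta."""
    raise NotImplementedError

def mc_probability(indicator, joint_pdf, bounds, n=1_000_000):
    """Uniform Monte Carlo estimate of P = int_Omega f dV.
    indicator : function returning True when the sampled point lies in Omega
    joint_pdf : joint PDF f evaluated at the sampled point
    bounds    : list of (low, high) for each of the sampled parameters"""
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    pts = rng.uniform(lows, highs, size=(n, len(bounds)))
    inside = np.array([indicator(*p) for p in pts], dtype=float)
    f_vals = np.array([joint_pdf(*p) for p in pts])
    box_volume = np.prod(highs - lows)
    return box_volume * np.mean(inside * f_vals)
```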
## 3 Probability for the orbits of probes to traverse a CS
In this section, we conduct a detailed analysis of the probability of various orbits intersecting CME-flare CSs. Investigations are performed for the cases of infinitely and finitely long CSs separately.
### Infinitely Long CS
In reality, a CS can never be infinitely long; as a mathematical idealization, however, we first investigate the probability of the spacecraft traversing an infinitely long CS. As mentioned before, an infinitely long CS is a CDCS as long as its parameters \((\theta_{s},\phi_{s},\gamma)\in\Omega_{1}\). Several issues related to \(\Omega_{1}\) are worth extra attention. First, how many sample points are included in \(\Omega_{1}\)? This reveals the attributes of the CSs that could be traversed by the spacecraft. Second, how large is the range of parameter variations covered by these samples? This determines whether a CS is a CDCS. Third, how does a given orbit affect \(\Omega_{1}\)? This reveals the impact of the orbital parameters on \(\Omega_{1}\).
The left panels (a1), (b1), and (c1) in Figure 5 depict the sub-domain \(\Omega_{1}\) in the space \((\theta_{s},\phi_{s},\gamma)\). They illustrate three distinct orbits: (1) \(\alpha=0\), the orbital plane coinciding with the ecliptic plane (Figure 5a1), (2) \(\alpha\neq 0\), the orbital plane tilted away from the ecliptic plane by rotating an angle of \(\alpha\) around the \(X^{\prime}\)-axis (Figure 5b1), and (3) \(\alpha\neq 0\), the orbital plane deviating away from the ecliptic plane by rotating the same angle around the \(Y^{\prime}\)-axis (Figure 5c1). Since \(\alpha\approx 0\) for PSP, Figure 5a1 exhibits the case that is roughly suitable for PSP, in which \(\Omega_{1}\) is bounded by two
surfaces defined by \(z_{1}=0\) and \(z_{2}=0\) [see Equation (3) and the related discussions]. The region of \(z_{1}<z_{2}<0\) is located to the right of \(\Omega_{1}\), the region of \(z_{2}>z_{1}>0\) is to the left, and that of \(z_{1}<0<z_{2}\) lies within \(\Omega_{1}\).
Figure 5: The conditions required for an infinitely long CS to intersect with different orbits. From the top to the bottom panels, three rows correspond to the orbits that: (1) \(\alpha=0^{\circ}\), (2) \(\alpha\neq 0\) and \(rot=X^{\prime}\), and (3) \(\alpha\neq 0\) and \(rot=Y^{\prime}\). Left panels depict the sub-domain \(\Omega_{1}\) in the three-dimensional parameter space with \(\theta_{s}\), \(\phi_{s}\), and \(\gamma\) as coordinates. The red arrow in panel (a1) represents the set of CSs that erupt at the source \((\theta_{s},\phi_{s})\), with \(\gamma\) being allowed to take on any value. The blue lines in panels (a1-c1) represent the sets of tangent points of the red arrow to \(\Omega_{1}\). In the middle panels, we show the locations that could generate CDCSs, which is the domain bounded by the two blue dashed lines. The red and black dashed lines indicate the equator and the projection of the orbit, respectively. The right panels display the \(\Delta\gamma\) of CDCSs that erupt from different sources. The lines of different colors in the right panels have the same meaning as in the middle panels.
The red arrow in Figure 5a1, normal to the \(\theta_{s}\phi_{s}\)-plane with its tip located at surface \(\gamma=0\), represents the CSs produced by an eruption from a fixed source \((\theta_{s},\phi_{s})\), while its orientation varies freely in a given range.
Figure 5a1 indicates that if \((\theta_{s},\phi_{s},\gamma)\in\Omega_{1}\), the associated arrow touches \(\Omega_{1}\), and the corresponding eruption would produce a CDCS. The figure also shows that the arrow and the surface \(z_{1}=0\) or \(z_{2}=0\) could have two, one, or no intersections. In the case of two intersections, the eruption produces the CDCS when the value of \(\gamma\) lies in the range between these two intersections. We denote this range as \(\Delta\gamma\).
As the arrow is tangential to either of the two surfaces, the locations of the arrow in the \(\theta_{s}\phi_{s}\)-plane set up the upper and lower boundaries of \(\theta_{s}\) and \(\phi_{s}\) (see two blue lines in Figure 5a1), and no CDCS could be created outside these boundaries. These two boundaries are determined by the equations:
\[z_{i} = 0,\ i=1,2, \tag{16}\] \[\frac{\partial\theta_{s}}{\partial\gamma} = 0, \tag{17}\]
from which we can eliminate \(\gamma\) and obtain the equation of the two boundaries described by \(\theta_{s}\) and \(\phi_{s}\):
\[\cos\alpha\sin\theta_{s}-\sin\alpha\cos\theta_{s}\sin\phi_{s}\!\equiv\!\sin( \pm\delta), \tag{18}\]
where \(\sin(+\delta)\) and \(\sin(-\delta)\) correspond to \(i=1\) and \(i=2\) in Equation (16), respectively.
If the arrow does not intersect the surfaces \(z_{1}=0\) and \(z_{2}=0\), or if the value of \(\gamma\) is outside the above range, no CDCS can be created either.
In Figure 5a2, the upper and lower boundaries of \(\theta_{s}\) at the solar surface are outlined by two blue dashed curves. Only eruptions from regions on the solar surface between the two boundaries could produce CDCS. As we mentioned earlier, the \(X^{\prime}Y^{\prime}\)-plane is the ecliptic plane that is co-located in space with the orbital plane, \(XY\)-plane, in the case of \(\alpha=0\). According to Equation (18), we obtain
\[\theta_{s}\equiv\pm\delta \tag{19}\]
for \(\alpha=0\), so the latitude of the blue dashed curves in Figure 5a2 is identified with the half angular width of the CS, \(\delta\). As expected, the larger the value of \(\delta\), the larger the source region on the solar surface that could produce a CME associated with a CDCS.
Furthermore, the region between the two dashed curves in Figure 5a2 corresponds to the straight blue belt in \(\theta_{s}\phi_{s}\)-plane in Figure 5a3. The size of the belt gives the range of \(\theta_{s}\) and \(\phi_{s}\) for CDCS, and the color shading describes that of \(\gamma\), namely \(\Delta\gamma\). At the center of this belt, \(\Delta\gamma\) attains its maximum, \(180^{\circ}\), indicating that the CS with any value of \(\gamma\) falls under the category of CDCS. At the boundary of this belt, on the other hand, \(\Delta\gamma\) vanishes, implying that any CS developing from this location and beyond is not detectable.
When \(\alpha\neq 0\), the structure of \(\Omega_{1}\) defined by Criterion 1 becomes complex, as depicted in Figures 5b and 5c. Specifically, Figures 5b1 and 5c1 illustrate the cases of which the orbital plane deviates from the ecliptic plane by rotating around the \(X^{\prime}\)- and the \(Y^{\prime}\)-axis, respectively. We repeat the analyses for the case of \(\alpha=0\), apply the approach to the cases of \(\alpha\neq 0\), and obtain the new detectable domains as shown in Figures 5b2 and 5c2.
To quantitatively describe the new domain in which the CS is detectable, we define \(\theta\) as the latitude in the \(XYZ\)-system, which is related to \(\theta_{s}\) and \(\phi_{s}\) according to Equations (A7) and (A8):
\[\cos\alpha\sin\theta_{s}-\sin\alpha\cos\theta_{s}\sin\phi_{s}\!=\!\sin\theta. \tag{20}\]
Comparing Equations (18) and (20), we find
\[\theta\equiv\pm\delta, \tag{21}\]
which means that the extent of the domain in which the CS is detectable depends solely on \(\delta\), and only eruptions from the region between the latitudes \(\theta=\delta\) and \(\theta=-\delta\) can develop CDCSs. The results suggest that the new detectable domain can be obtained by rotating the domain of the same extent in Figure 5a2 by an angle \(\alpha\) around the \(X^{\prime}\)- or the \(Y^{\prime}\)-axis, coinciding with the way the orbital plane is rotated.
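The belt test implied by Equations (20) and (21) can be written compactly; the sketch below (Python, for the \(rot=X^{\prime}\) case only; the function name and example values are ours) checks whether a source location can feed a CDCS for an infinitely long CS.

```python
import numpy as np

def in_detectable_belt(theta_s_deg, phi_s_deg, alpha_deg, delta_deg=23.0):
    """Criterion-1 belt for an infinitely long CS with rot = X' (Eqs. 20-21):
    a CDCS is possible iff |theta| <= delta, where theta is the source
    latitude measured in the orbital XYZ frame."""
    th, ph, al = np.radians([theta_s_deg, phi_s_deg, alpha_deg])
    sin_theta = np.cos(al) * np.sin(th) - np.sin(al) * np.cos(th) * np.sin(ph)
    return abs(np.degrees(np.arcsin(sin_theta))) <= delta_deg

# For alpha = 0 this reduces to |theta_s| <= delta, as in Equation (19).
print(in_detectable_belt(theta_s_deg=15.0, phi_s_deg=120.0, alpha_deg=30.0))
```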
Figures 5b3 and 5c3 present the same information as Figure 5a3 but for different cases of \(\alpha\neq 0\). Comparing with the detectable domain that appears as a straight belt for the case of \(\alpha=0\) (see Figure 5a3), the regions in the \(\theta_{s}\phi_{s}\)-plane
for \(\Omega_{1}\) displayed in Figures 5b3 and 5c3 fluctuate periodically, and the greater the value of \(\alpha\), the stronger the fluctuation. Furthermore, a phase difference of \(\pi/2\) exists between the cases in which the orbital plane deviates from the ecliptic plane in different fashions.
Figures 5b3 and 5c3 reveal that the probability of producing a CDCS by the eruption from the same source varies with the way that the ecliptic plane deviates from the orbital plane. Specifically, for \(rot=X^{\prime}\), CMEs from the north hemisphere are more likely to produce CDCS when erupting from longitudes of \(\phi_{s}\in(0^{\circ},180^{\circ})\), whereas in the case of \(rot=Y^{\prime}\), the corresponding range of longitudes moves to \(\phi_{s}\in(90^{\circ},270^{\circ})\). As the CME occurs in the south hemisphere, on the other hand, the corresponding values of \(\phi_{s}\) are just outside the above ranges. We also notice that positions with \(\Delta\gamma=180^{\circ}\) are mainly located around the intersection of the orbital plane and the solar surface (black dashed curve), instead of around the solar equator (red dashed curve). This implies that eruptions around the orbital plane are more likely to produce CDCS. Generally speaking, it is important to note that for an infinitely long CS, whether it is a CDCS depends not only on the location of the eruption source region on the solar surface but also on the parameters of the orbit. Consequently, the probability distributions of CDCS versus CS parameters will be considered in a more realistic manner to accurately calculate the probability of intersection between the CS and different orbits.
To utilize Equation (11) for evaluating the probability \(P_{\infty}^{CD}\) of an infinitely long CS being a CDCS, we first need to obtain the joint PDF \(f(\theta_{s},\phi_{s},\gamma)\). It is reasonable to assume that \(\phi_{s}\) of a CDCS is independent of \(\theta_{s}\) and \(\gamma\), namely the CS source region latitude and the CS tilt angle. Therefore, the joint PDF of the three parameters is:
\[f(\theta_{s},\phi_{s},\gamma)=f(\theta_{s},\gamma)f(\phi_{s}), \tag{22}\]
where \(f(\phi_{s})\) is the PDF of \(\phi_{s}\) and \(f(\theta_{s},\gamma)\) is the joint PDF of \(\theta_{s}\) and \(\gamma\). As a plausible approximation, the PDF of \(\phi_{s}\) can be assumed uniform. Therefore,
\[f(\phi_{s}) = \frac{1}{360},\phi_{s}\in(0,360). \tag{23}\]
As for the joint PDF \(f(\theta_{s},\gamma)\), the optimal calculation method is to examine the proportion of CDCSs for different values of \(\gamma\) and \(\theta_{s}\) according to observations. However, it is difficult to directly infer the tilt angle \(\gamma\) from the observed CSs because of the limit to observations.
Overall, we obtained the joint PDF \(f(\theta_{s},\gamma)\) of filaments through observational data, and analyzed the correlation of \(\gamma\) with \(\theta_{s}\). The key point is that the value of \(\gamma\) of most CSs is close to 0. The implications of this point are twofold. First, the CSs that can be observed usually develop in eruptions occurring on either the east or the west limb of the Sun, and most of them were observed edge-on (e.g., see discussions of Ko et al., 2003, 2010; Lin et al., 2005, 2007, 2009, 2015; and Lin & Ni, 2018). Second, decreasing the inclination angle \(\alpha\) increases the frequency of the spacecraft crossing the CS, since the eruption is more likely to occur in the middle- and low-latitude regions; on the other hand, a spacecraft on an orbit of large inclination angle has more opportunities to cross the CS at a large angle, even at a right angle, which will help us attain accurate information about the CS thickness (e.g., see Lin et al., 2015 and Lin & Ni, 2018 for more discussions on the importance of such information). To obtain an orbit with the highest probability of detecting the CS, we need to balance the above two aspects regarding the inclination angle.
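In practice, \(f(\theta_{s},\gamma)\) can be estimated by binning a filament catalogue such as that of Hao et al. (2015); a minimal sketch of such a binned estimate is shown below (Python; the array names, bin counts, and look-up scheme are illustrative assumptions, not values used in this paper).

```python
import numpy as np

def empirical_joint_pdf(theta_samples, gamma_samples, n_bins=(36, 36)):
    """Binned (histogram) estimate of f(theta_s, gamma) from a catalogue of
    filament latitudes and tilt angles; density=True normalizes the histogram
    so that it integrates to unity over the sampled ranges."""
    counts, theta_edges, gamma_edges = np.histogram2d(
        theta_samples, gamma_samples, bins=n_bins, density=True)

    def f(theta_s, gamma):
        # Locate the bin containing (theta_s, gamma) and return its density.
        i = np.clip(np.searchsorted(theta_edges, theta_s) - 1, 0, n_bins[0] - 1)
        j = np.clip(np.searchsorted(gamma_edges, gamma) - 1, 0, n_bins[1] - 1)
        return counts[i, j]

    return f
```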
We employ a more realistic \(f(\theta_{s},\gamma)\) to evaluate the probability of a random CS being a CDCS for various orbits. We introduce the function \(B(\theta_{s},\phi_{s})\), which describes the likelihood of generating a CDCS by an eruption from a unit area at location \((\theta_{s},\phi_{s})\) on the solar surface, and is given by:
\[B(\theta_{s},\phi_{s})=\int\limits_{\Omega_{1}}f(\theta_{s},\phi_{s},\gamma) \mathrm{d}\gamma. \tag{26}\]
Figure 6: Relationship between the tilt angle \(\gamma\) and the latitude \(\theta_{s}\). (a) The joint PDF of \(f(\theta_{s},\gamma)\) in the \((\theta_{s},\gamma)\) space. The color represents numbers of the filament of different tilt angle, \(\gamma\), at different latitudes, \(\theta_{s}\) (data from Hao et al., 2015). (b) Variations of the average \(\gamma\) evaluated from panel (a) versus \(\theta_{s}\). The marginal PDFs of \(\theta_{s}\) and \(\gamma\) are illustrated in panels (c) and (d).
The top two rows of Figure 7 illustrate \(B(\theta_{s},\phi_{s})\) for different orbits (refer to Figure 5 for further clarification). This probability clearly depends on the location \((\theta_{s},\phi_{s})\) where the eruption takes place. Figures 7a1 through 7d1 and 7a2 through 7d2 demonstrate the likelihood of an orbit intersecting the CS after deviating from the ecliptic plane at different angles of \(\alpha\) by rotating the coordinate system around the \(X^{\prime}\)- and \(Y^{\prime}\)-axes, respectively. Similar to the results presented in Figures 5c1 through 5c3, parameter \(rot\) determines the phase of the undulations of the belt region, while \(\alpha\) controls their amplitudes. However, in contrast to Figure 5, which only describes \(\Delta\gamma\) at various latitudes and longitudes as a qualitative description of the intersection probability, Figure 7 directly gives the probability. For the special cases, say \(\alpha=90^{\circ}\), the CDCS could develop in the eruption from two regions. For \(rot=X^{\prime}\) (Figure 7d1), eruptions occurring near \(\phi_{s}=0^{\circ}\) or \(\phi_{s}=180^{\circ}\) are more likely to generate a CDCS, while for \(rot=Y^{\prime}\) (Figure 7d2), eruptions occurring near \(\phi_{s}=90^{\circ}\) or \(\phi_{s}=270^{\circ}\) are more favorable for producing CDCS. This further highlights that eruptions near the orbital plane are more likely to generate CDCSs. In addition, we can also calculate \(P_{\infty}^{CD}\) by integrating \(B(\theta_{s},\phi_{s})\) to further investigate this phenomenon:
\[P_{\infty}^{CD}=\int B(\theta_{s},\phi_{s})\mathrm{d}\theta_{s}\mathrm{d}\phi_ {s}. \tag{27}\]
Figure 7e presents variations of the intersection probability of the orbit and an infinitely long CS versus the inclination angle \(\alpha\). It is apparent that \(\alpha\) affects this probability in the case of an infinitely long CS, whereas \(rot\) does not. We find that as \(\alpha\) increases, \(P_{\infty}^{CD}\) initially increases and then decreases, with a peak value of \(P_{\infty}^{CD}=0.29\) achieved at \(\alpha\approx 29^{\circ}\). For the PSP orbit, the probability of an infinitely long CS being a CDCS is \(P_{\infty}^{CD}=0.26\).
### Finitely Long CS
In reality, the length of CSs is finite. Webb & Vourlidas (2016) studied about 130 CME-flare CSs in the solar maximum and the minimum of the 23rd solar cycle. They found that the average lengths of CSs in the maximum and the minimum years were \(12.4R_{\odot}\) and \(11.8R_{\odot}\), respectively. The longest CSs found so far were \(18.5R_{\odot}\) and \(17R_{\odot}\) long in the maximum and the minimum. Moreover, the average velocities of the CS increase in length during the solar
Figure 7: The top two rows show the probability of different orbits intersecting an infinitely long CS erupted from a unit area on the solar surface, where \(rot=X^{\prime}\) in the first row and \(rot=Y^{\prime}\) in the second one. In the bottom panel, we display the intersection probability versus the inclination angle \(\alpha\) of the orbit. The results indicate that the intersection probability is independent of \(rot\) and the orbit with \(\alpha=29^{\circ}\) is the most likely to intersect an infinitely long CS.
maximum and the minimum years were 324 km s\({}^{-1}\) and 188 km s\({}^{-1}\), respectively, with corresponding accelerations of 6.3 m s\({}^{-2}\) and 8.3 m s\({}^{-2}\). The average lifetimes of CSs during the maximum and the minimum years were \(\tau_{0}=16\) hrs and \(\tau_{0}=18.2\) hrs, respectively. Assuming that the CS extends with a constant velocity, then according to the mean velocity and lifetime of CSs from observations, we are able to deduce the mean length of the CS in the maximum and the minimum years to be approximately \(27R_{\odot}\) and \(18R_{\odot}\), respectively. We note here that, to our knowledge, no report has yet been given about the true length of the CME-flare CS, and the longest CS reported so far that could be definitely identified is the one observed by LASCO/C3, which is between 20 R\({}_{\odot}\) and 30 R\({}_{\odot}\) (e.g., see Lin et al., 2005). We understand that, because observational techniques limit our capability of acquiring the complete physical scenario of the CS, the length of a CME-flare CS should be longer than what we have known. This is also true for the lifetime of the CS. Therefore, both the length and the lifetime of the CME-flare CS used in the present work might be just lower limits to the true values in reality. Hence, 27 R\({}_{\odot}\) and 18 R\({}_{\odot}\) for the CS length are used as references in the present work. The relevant parameters mentioned above are summarized in Table 1.
Webb & Vourlidas (2016) pointed out that since the CS is gradually dissipated, the estimated lifetime \(\tau_{0}\) is just a lower limit, because the fact that the CS disappears from observational data does not necessarily mean that it does not exist any longer, but only means that its emission measure in the given wavelength is below the sensitivity of the detector. Ciaravella et al. (2013) even identified a CS with a lifetime of approximately 38 hrs when analyzing the white-light data from LASCO. Recent observations of the Wide-Field Imager for Solar Probe (WISPR; Vourlidas et al., 2016) onboard the PSP showed that more complex CS structures were seen in the white light and their durations are longer than those observed near the Earth when the probe is very close to the Sun (Howard et al., 2022).
Using Equation (13), we can calculate the probability \(P^{CD}\) that a finitely long CS is a CDCS. For the joint PDF \(f(\theta_{s},\phi_{s},\gamma,l)\), as mentioned before, \(\phi_{s}\) is independent of the other variables, so that
\[f(\theta_{s},\phi_{s},\gamma,l)=f(\theta_{s},\gamma,l)f(\phi_{s}). \tag{28}\]
However, obtaining the joint PDF of the variables \((\theta_{s},\gamma,l)\) is still difficult due to the lack of sufficient statistical samples on latitude, inclination, and length of CSs. Therefore, we make a relatively strong assumption that the length of the CS, \(l\), is also independent of the other variables. Thus, we obtain:
\[f(\theta_{s},\phi_{s},\gamma,l)=f(\theta_{s},\gamma)f(\phi_{s})f(l), \tag{29}\]
where \(f(l)\) is the marginal density of \(l\) and describes the likelihood that a CS with length of \(l\) occurs. Combining Equations (13) and (29), \(P^{CD}\) can be expressed as:
\[P^{CD} = \int_{\Omega_{1,2}}f(l)f(\theta_{s},\gamma)f(\phi_{s})\mathrm{d} \theta_{s}\mathrm{d}\phi_{s}\mathrm{d}\gamma\mathrm{d}l, \tag{30}\] \[= \int f(l)\left[\int_{\Omega_{1,2}^{l}}f(\theta_{s},\gamma)f(\phi _{s})\mathrm{d}\theta_{s}\mathrm{d}\phi_{s}\mathrm{d}\gamma\right]\mathrm{d}l,\]
| | Maximum Year | Minimum Year |
|---|---|---|
| Average \(l\) (\(R_{\odot}\)) | 12.4 | 11.8 |
| Longest \(l\) (\(R_{\odot}\)) | 18.5 | 17 |
| Average \(v\) (km s\({}^{-1}\)) | 324 | 188 |
| Average acceleration (m s\({}^{-2}\)) | 6.3 | 8.3 |
| Average \(\tau_{0}\) (hrs) | 16 | 18.2 |
| Estimated \(l=v\tau_{0}\) (\(R_{\odot}\)) | \(\approx 27\) | \(\approx 18\) |

Table 1: Parameters of CSs according to Webb & Vourlidas (2016).
where \(\Omega_{1,2}^{l}\) is the sub-domain inside \(\Omega_{1,2}\) for a given \(l\). We define \(P_{l}^{CD}\) as the conditional probability, which quantifies the likelihood of a CS being CDCS as \(l\) is known. Consequently, according to the law of the total probability, we obtain:
\[P^{CD} = \int f(l)P_{l}^{CD}\mathrm{d}l, \tag{31}\]
combining Equations (30) and (31) leads to:
\[P_{l}^{CD} = \int_{\Omega_{1,2}^{l}}f(\theta_{s},\gamma)f(\phi_{s})\mathrm{d} \theta_{s}\mathrm{d}\phi_{s}\mathrm{d}\gamma. \tag{32}\]
Webb & Vourlidas (2016) studied behaviors of the CME-flare CS comprehensively and revealed important information on \(l\), but it is not enough to construct \(f(l)\) because their sample includes only 52 CSs. Instead, in this part of the work we look into the probability \(P_{l}^{CD}\) of a CS with a given length being a CDCS. We shall further investigate the influence of \(f(l)\) on the final results later.
We now demonstrate the probabilities of CSs being CDCSs when their lengths are \(12R_{\odot}\), \(27R_{\odot}\), and \(90R_{\odot}\), respectively. These lengths are the average CS length obtained from LASCO data (Webb & Vourlidas, 2016), the product of the average speed and the lifetime of CSs in the solar maximum (Webb & Vourlidas, 2016), and the length of a hypothetically ultra-long CS that we may imagine. To perform further studies about the probability of the spacecraft crossing the CS, we consider six types of orbits (see Table 2), among which Orb\({}_{1}\) is the PSP orbit, and Orb\({}_{2}\) is the orbit obtained by scaling down the PSP orbit. The left three columns in Figure 8 show the detection probability belts, which represent the probability of producing CDCSs by the eruption from a unit area at the location \((\theta_{s},\phi_{s})\) on the solar surface, \(B_{l}(\theta_{s},\phi_{s})\). \(B_{l}(\theta_{s},\phi_{s})\) is given as:
\[B_{l}(\theta_{s},\phi_{s})=\int\limits_{\Omega_{1,2}^{l}}f(\theta_{s},\gamma)f (\phi_{s})\mathrm{d}\gamma. \tag{33}\]
Different from the probability given by Equation (26), the probability discussed here depends not only on \(\theta_{s}\) and \(\phi_{s}\), but on the length of the CS, \(l\), as well.
We first investigate the case where the orbital plane nearly coincides with the ecliptic plane (\(\alpha\approx 0\), or the \(XYZ\)-system is identified with the \(X^{\prime}Y^{\prime}Z^{\prime}\)-system). Comparing Figure 7a1 with Figures 8a1 through 8a3 indicates that the size of the source region of the eruption that can produce CDCS significantly shrinks if the CS length is finite. Figure 8 shows that, for a CS of \(l=12R_{\odot}\), its intersection with the orbit is confined to a range of approximately \(\Delta\theta=108^{\circ}\) near the perihelion. Comparing panels in Figures 8a1 through 8a3 with those in Figures 8b1 through 8b3 suggests that, for a CS that is not very long, intersection is more likely to occur with a small orbit. For example, when \(l=12R_{\odot}\), the probability of Orb\({}_{2}\) crossing the CS (\(P_{12R_{\odot}}^{CD}=0.142\)) is two times that of the PSP orbit (\(P_{12R_{\odot}}^{CD}=0.064\)), and
| | Orb\({}_{1}\) | Orb\({}_{2}\) | Orb\({}_{3}\) | Orb\({}_{4}\) | Orb\({}_{5}\) | Orb\({}_{6}\) |
|---|---|---|---|---|---|---|
| \(a\) (\(R_{\odot}\)) | 82.9 | 65 | 65 | 65 | 65 | 65 |
| \(b\) (\(R_{\odot}\)) | 39.1 | 25 | 25 | 25 | 25 | 25 |
| \(c\) (\(R_{\odot}\)) | 73.1 | 60 | 60 | 60 | 60 | 60 |
| Perihelion (\(R_{\odot}\)) | 9.8 | 5 | 5 | 5 | 5 | 5 |
| Aphelion (\(R_{\odot}\)) | 156 | 125 | 125 | 125 | 125 | 125 |
| \(rot\) | – | – | \(X^{\prime}\) | \(Y^{\prime}\) | \(X^{\prime}\) | \(Y^{\prime}\) |
| \(\alpha\) (degree) | 3.4 | 3.4 | 30 | 30 | 90 | 90 |

Table 2: Parameters for different orbits.
the corresponding range of \(\theta\) in which Orb\({}_{2}\) could intersect the CS is twice that in which the PSP orbit intersects the CS (\(216^{\circ}\) versus \(108^{\circ}\); see Figures 8a4 and 8b4). When \(l=90R_{\odot}\) (see Figures 8a3 and 8b3), on the other hand, we find that \(P_{90R_{\odot}}^{CD}=0.218\) for the PSP orbit, and \(P_{90R_{\odot}}^{CD}=0.234\) for Orb\({}_{2}\). Therefore, for a CS that is not very long, the probability that a small orbit crosses it is higher than that for a large orbit. As the CS length increases, the difference in probabilities among different orbits crossing the CS decreases. As the CS becomes infinitely long, the difference vanishes.
We now look into the case of \(\alpha\neq 0\). For the orbits with \(rot=X^{\prime}\) (Orb\({}_{3}\) and Orb\({}_{5}\)), the longer the CS is, the wider the longitude range in which CDCSs occur. This is because the perihelion is located at \(\phi_{s}=180^{\circ}\); as \(l\) decreases, the region where CDCSs exist shrinks towards \(\phi_{s}=180^{\circ}\), consistent with the cases of Orb\({}_{1}\) and Orb\({}_{2}\). For the orbits with \(rot=Y^{\prime}\) (Orb\({}_{4}\) and Orb\({}_{6}\)), on the other hand, the latitude range of CDCS sources becomes wider as \(l\) increases. This is because the perihelion of the orbit is located above the northern hemisphere of the Sun, and it is difficult for a CS developing in the south to reach the orbit. Therefore, as \(l\) decreases, the region where CDCSs exist shrinks towards higher latitudes in the northern hemisphere. These results indicate that, for a CS of finite length, requiring the eruption to occur in a direction roughly parallel to the orbital plane is not by itself sufficient to increase the probability of traversing; the eruption must also take place in the region near the perihelion.
Figure 8: The left three columns show the probability of various orbits intersecting a finitely long CS erupted from a unit area on the solar surface, with CS lengths of \(l=12R_{\odot},27R_{\odot}\), and \(90R_{\odot}\), respectively. The rightmost column displays the intersection probability at different locations in the orbit. Panels (a) through (f) correspond to the orbits from Orb\({}_{1}\) to Orb\({}_{6}\) listed in Table 2.
We further analyze the case in which the orbital plane is orthogonal to the ecliptic plane (Orb\({}_{5}\) and Orb\({}_{6}\)). Our results, presented in Figures 8e1 and 8f1, reveal that, compared with the case of an infinitely long CS (Figures 7d1 and 7d2), the range of CDCS sources for Orb\({}_{5}\) and Orb\({}_{6}\) decreases by nearly half when \(l=12R_{\odot}\), because Orb\({}_{5}\) lacks contribution in the longitudinal direction from the region near the aphelion (ascending node), while Orb\({}_{6}\) lacks contribution from the region extending in the latitudinal direction around the ascending and descending nodes above the southern hemisphere. When \(l=27R_{\odot}\) (Figures 8e2 and 8f2), the range of the CDCS parameters for Orb\({}_{5}\) is still concentrated around \(\phi_{s}=180^{\circ}\), but Orb\({}_{6}\) could detect a large number of CSs produced by eruptions in the southern hemisphere, and the probability of Orb\({}_{6}\) intersecting CSs (\(P_{27R_{\odot}}^{CD}=0.218\)) is almost twice that of Orb\({}_{5}\) (\(P_{27R_{\odot}}^{CD}=0.131\)). This is because the ascending node of Orb\({}_{5}\) is very far from the Sun, making it difficult to detect the CS even if \(l\) increases. On the other hand, the ascending and the descending nodes of Orb\({}_{6}\) are not very far, and the intersection with the CS could occur around both locations. Figures 8e4 and 8f4 confirm this point quantitatively, and also indicate that the opportunities for Orb\({}_{5}\) concentrate around the perihelion. Finally, for \(l=90R_{\odot}\), the probabilities of the two orbits intersecting the CS are roughly the same.
In addition, we further investigate the fashion in which they intersect, namely probability that they intersect at different angles, which is defined as \(\sigma\). As shown in Figure 9a, \(\sigma\) is the angle between the direction of the spacecraft motion and the plane of the CS at the intersection point. Since traversing the CS from either side has the same effect, we will not distinguish the "front" or the "back" side of CS, and only consider the case of which \(0^{\circ}<\sigma<90^{\circ}\). We define the case of \(0^{\circ}<\sigma<30^{\circ}\) as the small angle traverse, that of \(30^{\circ}<\sigma<60^{\circ}\) as the medium angle traverse, and that of \(60^{\circ}<\sigma<90^{\circ}\) as the large angle traverse. Figures 9b through 9e show the probability of the CS with \(l=27\)\(R_{\odot}\) intersecting various orbits at different angles \(\sigma\). Blue solid line, orange solid line, and green dashed line respectively represent three orbits: 1. \(a=65R_{\odot}\), \(b=60R_{\odot}\), \(rot=Y^{\prime}\); 2. \(a=65R_{\odot}\), \(b=60R_{\odot}\), \(rot=X^{\prime}\); 3. \(a=82.9R_{\odot}\), \(b=73.1R_{\odot}\), \(rot=X^{\prime}\).
Figure 9b displays variations of probabilities versus \(\alpha\) for all \(\sigma\) between \(0^{\circ}\) and \(90^{\circ}\). We notice that, first of all, regardless of the value of \(\alpha\), smaller orbits are more likely to encounter the CS than larger ones; second, for \(rot=X^{\prime}\), the value of \(\alpha\) that leads to the highest probability of intersection is about \(30^{\circ}\); third, for the orbit of \(rot=Y^{\prime}\), as \(\alpha\) increases, the probability slightly increases and reaches its maximum at \(\alpha=90^{\circ}\); fourth, for most values of \(\alpha\)
Figure 9: (a) A sketch of the angle between the CS plane (red triangle) and the instantaneous velocity (cyan vector) of the spacecraft at the traverse moment, which is denoted as \(\sigma\). Panels (b-e) display the intersection probabilities versus the inclination angle of the orbit, and correspond to four situations: \(\sigma\) taking any value, small \(\sigma\), medium \(\sigma\), and large \(\sigma\). The blue and orange lines respectively correspond to the smaller orbits with \(rot=Y^{\prime}\) and \(X^{\prime}\), and the green dashed line matches the larger orbit with \(rot=X^{\prime}\), where the larger and smaller orbits mean \(a=82.9R_{\odot},b=73.1R_{\odot}\) and \(a=65R_{\odot},b=60R_{\odot}\). The numbers in panels (b-e) correspond to different orbits in Table 2.
the orbit of \(rot=Y^{\prime}\) is more likely to encounter the CS than the orbit of \(rot=X^{\prime}\); and finally, eruptions near the ascending and the descending nodes of the orbit are more likely to produce CDCSs. However, the ascending node of the \(rot=X^{\prime}\) orbit is the aphelion, while the ascending and the descending nodes of the \(rot=Y^{\prime}\) orbit are much closer to the Sun, resulting in a parameter space for eruptions producing the CDCS almost twice that of the former (comparing Figures 8e2 and 8f2).
When considering the impact of individual \(\sigma\), we notice that the probability of the spacecraft passing through the CS at a medium angle \(\sigma\) is relatively high. As \(\alpha\) increases, the probability profile of different orbits intersecting the CS exhibits different varying patterns. For the three orbits discussed above, probabilities of traversing the CS at small angles show an increasing-decreasing trend (see Figure 9c). The probability for the orbit of \(rot=X^{\prime}\) intersecting the CS at medium angles continues to decrease with \(\alpha\), while that for the orbit of \(rot=Y^{\prime}\) crossing the CS slightly increases (see Figure 9d). The probability for the \(rot=X^{\prime}\) orbit intersecting the CS at large angles slightly increases, while that for the \(rot=Y^{\prime}\) situation increases at an almost negligible rate (see Figure 9e).
The above results indicate that the overall probability of the PSP orbit intersecting the CS is low, and it is difficult for the spacecraft to pass through the CS at large angles. The probabilities of the PSP orbit crossing the CS at small and medium angles are 0.04 and 0.1, respectively, which seem fairly low but not negligible. The intersection probability of Orb\({}_{2}\) is higher than that of the PSP orbit, but it is mainly contributed by small-angle intersections. Orb\({}_{3}\) has the highest probability of encountering the CS at small angles among all the orbits. The intersection probability of Orb\({}_{4}\) is also high, with the contribution coming mainly from medium-angle intersections. Orb\({}_{5}\) is a small orbit, but its probability of intersecting the CS is similar to that of the PSP orbit; however, because its \(\alpha\) is large, the probability of Orb\({}_{5}\) passing through the CS at large angles is not low. Orb\({}_{6}\) has the highest intersection probability and is the orbit most likely to cross the CS at medium or large angles among all the orbits.
## 4 Probability for spacecraft to traverse CS
In previous sections, we calculated the probability of a heliocentric orbit intersecting a CS. In this section, we investigate the probability of the spacecraft itself crossing a CS. Apparently, the spacecraft can only traverse a CS if its orbit is capable of intersecting the CS. In addition to the parameters discussed earlier, the probability of the spacecraft crossing the CS is constrained by the moment \(t\) when the eruption starts, the CME velocity \(v_{c}\), and the spacecraft velocity \(v_{s}\). In general, we assume that the time \(t\) is independent of the other parameters. Therefore, on the basis of Equation (29), the joint PDF \(f(\theta_{s},\phi_{s},\gamma,l,t)\) can be expressed as:
\[f(\theta_{s},\phi_{s},\gamma,l,t)=f(\theta_{s},\gamma)f(\phi_{s})f(l)f(t). \tag{34}\]
It is also reasonable to assume a constant rate of the eruption within a given time interval:
\[f(t) = \frac{1}{T_{0}},t\in(0,T_{0}), \tag{35}\]
where \(T_{0}=1\) year. In fact, \(T_{0}\) could be any value; since we mainly study the number of times the spacecraft may traverse the CS during one year, we set \(T_{0}=1\) year.
Repeating the steps used to derive Equations (29) through (32), we rewrite Equation (14) for \(P^{tra}\) as:
\[P^{tra} = \int f(l)P^{tra}_{l}\mathrm{d}l, \tag{36}\]
where \(P^{tra}_{l}=P\left[(\theta_{s},\phi_{s},\gamma,l,t)\in\Omega_{1,2,3}\mid l\right]\) is the conditional probability of spacecraft traversing a CS with length \(l\). This probability is calculated in the way:
\[P^{tra}_{l} = \int_{\Omega^{t}_{1,2,3}}f(\theta_{s},\gamma)f(\phi_{s})f(t) \mathrm{d}\theta_{s}\mathrm{d}\phi_{s}\mathrm{d}\gamma\mathrm{d}t. \tag{37}\]
Because the samples that can be collected here are discrete individual events, the integral in Equation (36) can be simplified into a finite summation:
\[P^{tra} = \sum_{i=1}^{N}P(l_{i}-\frac{\Delta l}{2}<l<l_{i}+\frac{\Delta l}{ 2})P^{tra}_{l_{i}}, \tag{38}\]
where \(N\) is the total number of samples, \(l_{i}\) is the CS length of the \(i\)th sample, \(P(l)=f(l)\Delta l\), and \(P(l_{i}-\Delta l/2<l<l_{i}+\Delta l/2)\) is the total probability of the occurrence of the CS with a length in the range of \(l_{i}\pm\Delta l/2\).
According to Webb & Vourlidas (2016), the average ratio of the speed of the CMEs, \(v_{c}\), to the speed at which the associated CS increases in length, \(v\), is 2.2. Assuming a constant growth rate of the CS for simplicity, \(l\) is thus related to \(v_{c}\) and the lifetime of the CS, \(\tau_{0}\), as:
\[l(v_{c})\!=\!\frac{\tau_{0}}{2.2}v_{c}. \tag{39}\]
Then the probability \(P(l_{min}\leq l\leq l_{max})\) of the occurrence of a CS within a certain range of the length is related to the probability \(P(v_{min}\leq v_{c}\leq v_{max})\) within a certain growth rate range:
\[P(l_{i}-\frac{\Delta l}{2}<l<l_{i}+\frac{\Delta l}{2})\!=\!P(v_{ci}-\frac{ \Delta v_{c}}{2}<v_{c}<v_{ci}+\frac{\Delta v_{c}}{2}), \tag{40}\]
where \(l_{i}=\tau_{0}v_{ci}/2.2\), \(\Delta l=\tau_{0}\Delta v_{c}/2.2\). Furthermore, substituting Equation (40) into (38) gives:
\[P^{tra}\!=\!\sum_{i=1}^{N}P(v_{ci}-\frac{\Delta v_{c}}{2}<v_{c}<v_{ci}+\frac{ \Delta v_{c}}{2})P^{tra}_{l(v_{ci})}. \tag{41}\]
For extremely slow CMEs, the trailing CS is most likely to dissipate completely before it encounters the orbit. On the other hand, the probability of occurrence of extremely fast CMEs is too low for their CSs to contribute appreciably. Therefore, as an approximation, we only consider CME velocities ranging from 100 km s\({}^{-1}\) to 1100 km s\({}^{-1}\). For convenience, we divide the velocity range into 11 intervals. Therefore, \(N=11\), \(\Delta v_{c}=100\) km s\({}^{-1}\), and \(v_{ci}=100i\) km s\({}^{-1}\), \(i=1\),..., \(N\). Finally, Equation (41) becomes:
\[P^{tra}\!\approx\!P(v_{c}<150)P^{tra}_{l(v_{c1})}+\sum_{i=2}^{10}P(v_{ci}-50<v_{c}<v_{ci}+50)P^{tra}_{l(v_{ci})}+P(v_{c}>1050)P^{tra}_{l(v_{c11})}. \tag{42}\]
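A minimal numerical sketch of the weighted sum in Equation (42) is shown below (Python); both arrays are placeholder numbers chosen only to illustrate the bookkeeping, not the values used in this paper.

```python
import numpy as np

def traverse_probability(speed_bin_weights, p_tra_given_speed):
    """Discrete form of Eq. (42): P^tra = sum_i P(v_c in bin i) * P^tra_{l(v_ci)}.
    speed_bin_weights : occurrence probabilities of the 11 CME-speed bins
                        (100-1100 km/s, e.g. read off Lamy et al. 2019)
    p_tra_given_speed : conditional probabilities P^tra_{l(v_ci)} from the
                        Monte Carlo integration of Eq. (37)"""
    w = np.asarray(speed_bin_weights, dtype=float)
    p = np.asarray(p_tra_given_speed, dtype=float)
    return float(np.sum(w * p))

# Illustrative (made-up) numbers only:
weights = np.array([0.05, 0.15, 0.20, 0.18, 0.14, 0.10, 0.07, 0.05, 0.03, 0.02, 0.01])
p_cond  = np.array([0.0, 0.0, 1e-4, 3e-4, 6e-4, 1e-3, 1.4e-3, 1.8e-3, 2.1e-3, 2.4e-3, 2.6e-3])
print(traverse_probability(weights, p_cond))
```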
Combining Equations (37) and (39), we calculate the conditional probability \(P^{tra}_{l(v_{ci})}\) (see Equation 41) of detecting a CS trailing a CME of a given speed \(v_{ci}\). As mentioned before, we set \(\tau_{0}=18\) hrs as the lower limit of the CS lifetime. We compare three different orbits, namely Orb\({}_{1}\) (PSP), Orb\({}_{3}\), and Orb\({}_{6}\), as listed in Table 2, and present the results in Figure 10(a). The green, orange, and blue points give the results for PSP, Orb\({}_{3}\), and Orb\({}_{6}\), respectively. We notice that for \(v_{c}<300\) km s\({}^{-1}\), PSP cannot detect the CS behind the associated CME, as the CS produced by a slow CME cannot reach the PSP orbit within its lifetime, and thus does not meet Criterion 3 (see Equation 6). With increasing CME speed, the probability of detecting the associated CS also increases. Webb & Vourlidas (2016) statistically analyzed the speed of 40 CMEs and the associated CSs during the solar maximum, obtaining an average CME speed of 705 km s\({}^{-1}\). We also notice that even some slow CMEs, with speeds as low as 100 km s\({}^{-1}\), could produce a CS (e.g., see also Ciaravella et al., 2002).
To estimate \(P^{tra}\) in a more realistic scenario, we need to consider the weight contributed by the number of CMEs with different speeds. Therefore, we plot the probability distribution of the CME occurrence, \(P(v_{ci}-\Delta v_{c}/2<v_{c}<v_{ci}+\Delta v_{c}/2)\) (see Equation 41), versus CME speed in Figure 10(b) according to Lamy et al. (2019). Combining Figures 10(a) and 10(b), we obtain the probabilities for a CS behind a CME of various velocities to be a DCS (see Figure 10(c)). Results in Figure 10(c) are equivalent to those in Figure 10(a) with the difference in CME speeds included as a weight in the calculations, by taking the product of the corresponding values given in Figures 10(a) and 10(b). Although a fast CME is more likely to generate a DCS, fast CMEs are usually fewer than slow ones. Therefore, in reality, the CS produced by a relatively slow CME could be detected more easily. Specifically, the speeds of most CMEs are between 250 and 550 km s\({}^{-1}\), and thus they have the highest probability of producing DCSs.
Comparing the probabilities of spacecraft in different orbits traversing CSs, we find that the relatively high detection probability for a small orbit is mainly due to the advantage of detecting CSs produced by slow CMEs. Moreover, although the probability of Orb\({}_{3}\) intersecting the CS is almost the same as that of Orb\({}_{6}\), considering the motion of the spacecraft and the extension of the CS, a spacecraft in Orb\({}_{3}\) has a higher probability of traversing. Based on the collected data, we use Equation (42) to calculate the probabilities of a spacecraft in Orb\({}_{1}\), Orb\({}_{3}\), and Orb\({}_{6}\) crossing a CS produced by a random solar eruption, which is equivalent to summing up the terms shown in Figure 10(c), and obtain
\(P^{tra}=3.95\times 10^{-4}\), \(1.36\times 10^{-3}\) and \(1.16\times 10^{-3}\), respectively (see Figure 10). On the basis of these results, we are able to further estimate the expected number of the spacecraft traversing the CS in a given year.
Lamy et al. (2019) performed a statistical analysis of the rate of CME occurrence in solar cycles 23 and 24. The data were categorized into four groups based on different detection methods, namely ARTEMIS, SEEDS, CACTus, and CDAW. They found that the rate of CME occurrence in the first two categories is about 400/month, and that in the third and the fourth categories is about 200/month. As an approximation, we assume that 10 CMEs occur every day in the solar maximum, so we have 3650 CMEs per year. As we mentioned earlier, the occurrence of a CS is the occurrence of a random event \(\{\theta_{s},\phi_{s},\gamma,l,t\}\). Therefore, calculating the probability of detecting a random CS is similar to the process of randomly sampling points in a five-dimensional parameter space, in which the vector constituted by
Figure 10: (a) The conditional probability for a CS to be traversed by the spacecraft, given that the CS is generated by a CME with speed \(v_{c}\). Here, the green, orange and blue scatter plots are results of different orbits: PSP, Orb\({}_{3}\) and Orb\({}_{6}\), respectively. (b) The probability distributions of the speed of CMEs (CDAW data from Lamy et al., 2019). (c) The probability for a CS that is produced by a CME with various velocities to be traversed, which is the product of the corresponding terms in panels (a) and (b). Unlike panel (a), panel (c) includes the impact of CME velocities on estimating the probability. At the bottom of panel (c), we present the probability for the spacecraft in different orbits to traverse a CS, \(P^{tra}\) (see Equation 42), and the expected numbers for the spacecraft to traverse the CSs in one year in the solar maximum (see Equation 44).
these parameters determines an individual point, and the CS is judged detectable when this point is located within domain \(\Omega_{1,2,3}\).
Equation (14) gives \(P^{tra}\), the probability that the corresponding point in the five-dimensional space is located in \(\Omega_{1,2,3}\). The question of interest here is: how many DCSs could be produced by CMEs every year? In other words, how many of the 3650 randomly sampled points would fall within the region \(\Omega_{1,2,3}\)? Given that each CME event is independent of the other events, each sampling process is an independent experiment with only two outcomes: success (the spacecraft traverses the CS) and failure (the spacecraft does not traverse the CS). The probability of success is \(P^{tra}\) for each event, and the probability of failure is \(1-P^{tra}\). Denote by \(M\) the number of successful experiments among the total of \(n=3650\) experiments. Under the assumption of independence, \(M\) follows a binomial distribution, \(M\sim B(n,P^{tra})\):
\[P(M=k)=\mathrm{C}_{n}^{k}(P^{tra})^{k}(1-P^{tra})^{n-k},\ k=0,1,...,n. \tag{43}\]
The expected value of \(M\) is then:
\[E(M)=\sum_{k=0}^{n}kP(M=k)=\sum_{k=0}^{n}k\mathrm{C}_{n}^{k}(P^{tra})^{k}(1-P^ {tra})^{n-k}=nP^{tra}. \tag{44}\]
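For illustration, the expected yearly count of Equation (44), together with the probability of at least one traversal (an extra quantity we add here, implied by the same binomial model), can be evaluated as in the short sketch below (Python), using the \(P^{tra}\) values quoted above for the three orbits.

```python
def expected_traversals(p_tra, n_cme=3650):
    """E(M) = n * P^tra for M ~ B(n, P^tra) (Eq. 44), plus the probability
    of at least one traversal during the year."""
    expectation = n_cme * p_tra
    p_at_least_one = 1.0 - (1.0 - p_tra) ** n_cme
    return expectation, p_at_least_one

for label, p in [("PSP", 3.95e-4), ("Orb3", 1.36e-3), ("Orb6", 1.16e-3)]:
    e_m, p1 = expected_traversals(p)
    print(f"{label}: E(M) = {e_m:.1f}, P(M >= 1) = {p1:.2f}")
```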
Combining Equations (42) and (44) gives the expected number of times the spacecraft traverses the CS in different orbits per year in the solar maximum. Multiplying the \(P^{tra}\) values calculated earlier for the different orbits by \(n\), we obtain the expected numbers of traverses per year for the spacecraft in Orb\({}_{1}\), Orb\({}_{3}\), and Orb\({}_{6}\), \(E(M)=1.4\), \(4.9\), and \(4.2\), respectively (see Figure 10). Therefore, the probability of PSP traversing a CME-flare CS is not high because: 1. the inclination of the PSP orbit is small, whereas an orbit of large inclination would allow a higher probability of traversing; 2. the perihelion of the PSP orbit is still far away from the Sun.
An intriguing question is whether any spacecraft has ever detected a CME-flare CS yet. PSP, Solar Orbiter, and BepiColombo (Benkhoff et al., 2010) might be the three candidates in orbit. Nour E. Raouafi (private communication) mentioned that a traversal was very likely to have occurred on September 5, 2022, because a major eruption produced a fast CME that swept over PSP first, and then PSP traversed the CS behind the CME at a fairly large angle, \(\sigma\). Romeo et al. (2023) reported that a reversal of the radial component of the magnetic field was detected by PSP, and proposed that the reversal might result from the traverse of a CME-flare CS as described by Lin & Forbes (2000). As of today, the Solar Orbiter has not been reported to cross a candidate CME-flare CS; it is probably too far from the Sun to encounter anything like a CS (Angelos Vourlidas, private communication). As for BepiColombo, it is a mission orbiting Mercury with a perihelion of \(65.6\ R_{\odot}\) and an aphelion of \(99.8\ R_{\odot}\); comparing with the Solar Orbiter, whose perihelion is \(59.8\ R_{\odot}\), we might not expect BepiColombo to traverse a CME-flare CS either.
## 5 Conclusions
Although traversing the large-scale CME-flare CS was not initially among the scientific goals of the PSP mission, the probability exists that PSP traverses CSs and provides important and essential information on the large-scale CSs and the magnetic reconnection processes therein. Due to the randomness of solar eruptions in both space and time, not all orbits are expected to allow the spacecraft to traverse CSs; for example, our calculations indicate that the PSP orbit is not the optimal one for crossing CSs. Based on the Lin-Forbes model and existing observations, we utilized the GCS model for CME/ICME reconstruction developed by Thernisien et al. (2009) and employed a method to calculate the probability of PSP or a similar spacecraft traversing the CS generated by a solar eruption. We simplified the CS as a triangle-shaped plate, established a quantitative relationship between the relevant parameters of the DCSs and the orbits, and then estimated the probability of a PSP-like probe crossing the CS on given orbits.
Three criteria were established to check whether a CME-flare CS could be traversed by a spacecraft in a given orbit. The first criterion checks whether the orbit of the spacecraft could cross the CS, namely whether at least two points exist on the CS that are located on either side of the orbital plane. Criterion 2 requires that at least one point on the CS-orbit intersection is located outside the ellipse of the orbit, and Criterion 3 determines the condition under which the spacecraft itself crosses the CS. A spacecraft could traverse a CME-flare CS successfully only if these three criteria are satisfied simultaneously.
Our results show that the CS could be traversed by the spacecraft easily if the corresponding eruption propagates roughly along the plane of the spacecraft orbit, i.e., the symmetry axis of CS and CME almost lies in the orbital plane.
In addition, because of the finite length and lifetime of the CS, as well as the finite speed at which the spacecraft moves, the traverse is more likely to happen if the eruption that produces the CS occurs in the region near the perihelion.
On the basis of the existing cases of solar eruptions and the distribution of the source regions of these eruptions (Hao et al., 2015), we carefully investigated various possible relative positions of the CME-flare CSs produced in these events and a given spacecraft orbit intended to detect CSs. We found that an orbit inclined by \(\alpha>10^{\circ}\) to the ecliptic plane would help enhance the probability of the spacecraft traversing CSs. Considering the fact that traversing the CS orthogonally is very hard, if not impossible, we studied the probability for the spacecraft to traverse the CS at medium angles, say \(30^{\circ}<\sigma<60^{\circ}\), and obtained a probability of around 0.1% for \(\alpha>30^{\circ}\). In the solar maximum, the expected number of CS traverses by a spacecraft in such an orbit is about 4 per year. The probability for PSP to traverse a CS is around 0.04%, and the expected number of traverses is about 1.4 per year.
The authors thank the referee for the valuable comments and suggestions that helped improve this work. We gratefully acknowledge constructive comments and suggestions given by Weiqun Gan, Terry G. Forbes and John C. Raymond. This work was supported by National Key R&D Program of China No. 2022YFF0503804, the NSFC grants 11933009, 12273107 and U2031141, grants associated with the Yunling Scholar Project of the Yunnan Province, the Yunnan Province Scientist Workshop of Solar Physics, and the Applied Basic Research of Yunnan Province 2019FB005. The numerical computation in this paper was carried out on the computing facilities of the Computational Solar Physics Laboratory of Yunnan Observatories (CoSPLYO).
## Appendix A Formulations for deducing coordinates of \(C_{1}\) and \(C_{2}\)
For a CS described by the parameters of (\(l\), \(\delta\), \(\phi_{s}\), \(\theta_{s}\), \(\gamma\)), the calculation of the coordinates of \(C_{1}\) and \(C_{2}\) in the \(XYZ\)-system includes two steps. First, we consider a referential case with \(\theta_{s}=\gamma=0\). So the coordinates of \(C_{1a}\) and \(C_{2a}\), which are the counterparts of \(C_{1}\) and \(C_{2}\) in the referential case, can be easily obtained. Second, we transform the CS configuration from the referential case into the true case by performing three rotations, which eventually gives the coordinates of \(C_{1}\) and \(C_{2}\). The relevant transformations are illustrated in Figure 11 with detailed explanations given below.
Figure 11: A sketch of the rotation operations that correspond to the rotation matrices \(\mathcal{M}(\mathbf{OA},\theta_{\mathbf{s}})\) and \(\mathcal{M}(\mathbf{OS_{b}},\gamma)\), illustrating how to obtain the endpoints of a CS in the \(X^{\prime}Y^{\prime}Z^{\prime}\) coordinate system by rotating the reference CS, CS\({}_{a}\).
First, we consider CS\({}_{a}\) as the referential CS and assume that it lies initially in the ecliptic plane (see the yellow triangle in Figure 11a). The symmetry axis of CS\({}_{a}\) extends outward along the vector \(\mathbf{OS_{a}}\), and the locations of \(C_{1a}\) and \(C_{2a}\) in the \(X^{\prime}Y^{\prime}Z^{\prime}\) system can be easily obtained by writing down the two corresponding vectors: \(\mathbf{OC_{1a}}=[l\cos(\phi_{s}-\delta),l\sin(\phi_{s}-\delta),0]\) and \(\mathbf{OC_{2a}}=[l\cos(\phi_{s}+\delta),l\sin(\phi_{s}+\delta),0]\).
We then rotate the CS\({}_{a}\) counterclockwise an angle \(\theta_{s}\neq 0\) around \(\mathbf{OA}\), a vector perpendicular to \(\mathbf{OS_{a}}\) and lying in the ecliptic plane. This rotation transforms the CS\({}_{a}\) into CS\({}_{b}\) (see another yellow triangle in Figure 11a). The relevant parameters of CS change to \(S_{b}\), \(C_{1b}\), and \(C_{2b}\) accordingly. The coordinates of \(C_{1b}\) and \(C_{2b}\) in \(X^{\prime}Y^{\prime}Z^{\prime}\) can be expressed as follows:
\[\mathbf{OC_{1b}} = \mathscr{M}(\mathbf{OA},\theta_{\mathbf{s}})\mathbf{OC_{1a}}^{ \mathrm{T}},\] (A1) \[\mathbf{OC_{2b}} = \mathscr{M}(\mathbf{OA},\theta_{\mathbf{s}})\mathbf{OC_{2a}}^{ \mathrm{T}},\] (A2)
where \(\mathscr{M}(\mathbf{OA},\theta_{\mathbf{s}})\) is the matrix for the rotation by angle \(\theta_{s}\) around \(\mathbf{OA}\). We further rotate CS\({}_{b}\) counterclockwise by angle \(\gamma\neq 0\) around \(\mathbf{OS_{b}}=(\cos\phi_{\mathbf{s}}\cos\theta_{\mathbf{s}},\sin\phi_{\mathbf{s}}\cos\theta_{\mathbf{s}},\sin\theta_{\mathbf{s}})\), the symmetry axis of the CS, to recover the original configuration of the CS described by \((l,\delta,\phi_{s},\theta_{s},\gamma)\). Therefore, we can express the two endpoints of CS\({}_{c}\), \(C_{1c}\) and \(C_{2c}\), in the \(X^{\prime}Y^{\prime}Z^{\prime}\) coordinate system as below:
\[\mathbf{OC_{1c}} = \mathscr{M}(\mathbf{OS_{b}},\gamma)\mathbf{OC_{1b}}^{\mathrm{T}},\] (A3) \[\mathbf{OC_{2c}} = \mathscr{M}(\mathbf{OS_{b}},\gamma)\mathbf{OC_{2b}}^{\mathrm{T}}.\] (A4)
So far, we have finalized the description of any CS in the \(X^{\prime}Y^{\prime}Z^{\prime}\) system. The final step toward describing the CS morphology in the \(XYZ\) system is to transform the above descriptions of CS in the \(X^{\prime}Y^{\prime}Z^{\prime}\) system into the \(XYZ\) system. As shown in Figure 2, two ways exist that the orbital plane (\(XYZ\)-system) deviates from the ecliptic plane (\(X^{\prime}Y^{\prime}Z^{\prime}\)-system), which means that we have two choices for the transformation in this step, rotate the CS around either \(X^{\prime}\)- or \(Y^{\prime}\)-axis clockwise by an angle of \(\alpha\):
\[\mathbf{OC_{1}} = \mathscr{M}(rot,-\alpha)\mathbf{OC_{1c}}^{\mathrm{T}},\] (A5) \[\mathbf{OC_{2}} = \mathscr{M}(rot,-\alpha)\mathbf{OC_{2c}}^{\mathrm{T}},\] (A6)
where \(rot\) means either the \(X^{\prime}\)- or the \(Y^{\prime}\)-axis. Eventually, combining Equations (A1)-(A6) gives the coordinates of \(C_{1}\) and \(C_{2}\) in the \(XYZ\)-system:
\[\mathbf{OC_{1}} = \begin{bmatrix}x_{1}\\ y_{1}\\ z_{1}\end{bmatrix}=\mathscr{M}(rot,-\alpha)\mathscr{M}(\mathbf{OS_{b}},\gamma) \mathscr{M}(\mathbf{OA},\theta_{\mathbf{s}})\begin{bmatrix}l\cos\left(\phi_{s }-\delta\right)\\ l\sin\left(\phi_{s}-\delta\right)\\ 0\end{bmatrix},\] (A7) \[\mathbf{OC_{2}} = \begin{bmatrix}x_{2}\\ y_{2}\\ z_{2}\end{bmatrix}=\mathscr{M}(rot,-\alpha)\mathscr{M}(\mathbf{OS_{b}},\gamma) \mathscr{M}(\mathbf{OA},\theta_{\mathbf{s}})\begin{bmatrix}l\cos\left(\phi_{s }+\delta\right)\\ l\sin\left(\phi_{s}+\delta\right)\\ 0\end{bmatrix}.\] (A8)
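A compact numerical sketch of Equations (A1)-(A8) is given below (Python). The Rodrigues form of the rotation matrix and the explicit choice \(\mathbf{OA}=(\sin\phi_{s},-\cos\phi_{s},0)\) are our own assumptions (the sign of \(\mathbf{OA}\) is chosen so that a counterclockwise rotation by \(\theta_{s}\) carries \(\mathbf{OS_{a}}\) into the \(\mathbf{OS_{b}}\) quoted in the text); all angles are in radians.

```python
import numpy as np

def rot_matrix(axis, angle):
    """Rotation matrix M(axis, angle): counterclockwise rotation by `angle`
    about the unit vector `axis` (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def cs_endpoints(l, delta, phi_s, theta_s, gamma, alpha, rot_axis):
    """Coordinates of C1 and C2 in the XYZ system, following Eqs. (A7)-(A8)."""
    OC1a = l * np.array([np.cos(phi_s - delta), np.sin(phi_s - delta), 0.0])
    OC2a = l * np.array([np.cos(phi_s + delta), np.sin(phi_s + delta), 0.0])
    OA = np.array([np.sin(phi_s), -np.cos(phi_s), 0.0])   # in-plane axis perpendicular to OS_a (assumed sign)
    OSb = np.array([np.cos(phi_s) * np.cos(theta_s),
                    np.sin(phi_s) * np.cos(theta_s),
                    np.sin(theta_s)])
    M = rot_matrix(rot_axis, -alpha) @ rot_matrix(OSb, gamma) @ rot_matrix(OA, theta_s)
    return M @ OC1a, M @ OC2a

# Example: l = 12 R_sun, delta = 23 deg, source at (theta_s, phi_s) = (20, 100) deg,
# gamma = 10 deg, orbital plane tilted by alpha = 30 deg about the X'-axis.
d = np.radians
C1, C2 = cs_endpoints(12.0, d(23), d(100), d(20), d(10), d(30), rot_axis=[1.0, 0.0, 0.0])
print(C1, C2)
```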
## Appendix B Estimations of \(\tau_{Fly}\)
According to Equation (8), evaluating \(\tau_{fly}\) needs to know \(t\) and \(t_{orb}\), the times at which the spacecraft reaches the points \(Q\) and \(C_{orb}\), respectively. Therefore, a reference time is expected. We choose the time \(t_{0}\) when spacecraft passes the perihelion as a referential point for the following reasons. Calculations for \(t_{orb}\) involves evaluations of the eccentric anomaly, \(E\), measured in the same direction as measuring the true anomaly, \(\nu\). Figure 12 specifies the definition of \(E\) and \(\nu\): the Sun is located at one focus of the spacecraft orbit, \(O\); the spacecraft is at point \(Q\) on the orbit, which has projections \(Q^{\prime}\) and \(Q^{\prime\prime}\) on the reference circle of the orbit and on the major-axis of the orbit ellipse, respectively; similarly, the CS intersects the orbit at point \(C_{orb}\), with the corresponding projection points \(C^{\prime}_{orb}\) and \(C^{\prime\prime}_{orb}\) on the reference circle and the major axis of the orbit ellipse, respectively. So the eccentric and true anomalies for point \(Q\) are denoted as \(E_{Q}=\angle OOO^{\prime}Q^{\prime}\) and \(\nu_{Q}=\angle Q^{\prime\prime}OQ\), respectively. Similarly, for point \(C_{orb}\), the corresponding angles are \(E_{orb}=\angle OOO^{\prime}_{orb}\) and \(\nu_{orb}=\angle Q^{\prime\prime}OC_{orb}\). The perihelion corresponds to \(E_{0}=0\) and \(\nu_{0}=0\) simultaneously (see Figure 12), and we set the time, \(t_{0}=0\). This choice of reference time simplifies calculations (see Beutler 2005 for more details).
According to Beutler (2005), the flying time, \(t_{orb}\), is
\[t_{orb}=\sqrt{\frac{a^{3}}{GM_{\odot}}}(E_{orb}-e\sin E_{orb})\] (B9)
on the basis of the Kepler equation. \(E_{orb}\) is related to \(\nu_{orb}\) by
\[E_{orb}=2\arctan\left[\sqrt{\frac{1-e}{1+e}}\tan\left(\frac{\nu_{orb}}{2}\right)\right].\] (B10)
In the coordinate system \(XYZ\) used in this work (see also Figure 12), a phase difference, \(\pi\), exists between angle \(\nu_{orb}\) and the longitude \(\phi\) of point \(C_{orb}\):
\[\nu_{orb}=-\pi+\phi.\] (B11)
Given the parameters of a CS and an orbit, we can calculate \(\tau_{fly}\) and examine Criterion 3 by substituting \(\nu_{orb}\) in Equation (B11) into (B10) for \(E_{orb}\), substituting resultant \(E_{orb}\) into Equation (B9) for \(t_{orb}\), and finally substituting the resultant \(t_{orb}\) into Equation (8).
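A short numerical sketch of this procedure (Equations B9-B11) is given below (Python); the constants, function name, and example numbers are our own illustrative choices, and the arctangent branch is only handled for \(|\nu_{orb}|<\pi\).

```python
import numpy as np

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m

def flight_time(phi, a_rsun, e):
    """Time (hours) for the spacecraft to fly from perihelion to the orbital
    point of longitude phi (radians), via Eqs. (B11) -> (B10) -> (B9)."""
    nu = phi - np.pi                                                        # Eq. (B11)
    E = 2.0 * np.arctan(np.sqrt((1.0 - e) / (1.0 + e)) * np.tan(nu / 2.0))  # Eq. (B10)
    a = a_rsun * R_sun
    t = np.sqrt(a**3 / (G * M_sun)) * (E - e * np.sin(E))                   # Eq. (B9)
    return t / 3600.0

# Example with PSP-like elements: a = 82.9 R_sun, e = c/a = 73.1/82.9.
print(flight_time(phi=np.radians(200.0), a_rsun=82.9, e=73.1 / 82.9), "hr")
```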